• 1 Post
  • 19 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • The concern with LLMs as any sort of source of truth is that they have no concept of facts or truth. They simply read training material and then pattern-match to come up with a response to input. There is no concept of correct information. And unless you fact-check it, you will not know whether the output is correct or its reasoning is sound. Using this to teach is dangerous, IMO. Even using the word “reasoning” is anthropomorphising it; it’s just pattern matching.

    Could we develop some adversarial system that fact-checks it in the future? Possibly. But I don’t know of one that’s effective. Besides, good luck determining what is true when your training set is the internet, or getting it to account for advances in understanding.

    From the article you linked:

    The incredible capabilities of large language models like ChatGPT are centered on how they have been trained on a vast corpus of knowledge. They provide us with an unparalleled resource for information and guidance. As your virtual professor, LLMs can guide you through the intricacies of each subject for deeper understanding of complex concepts.

    That’s a very naive take on LLMs. It assumes that because the training material is valid, the output is valid. It is not!

    I worry about the future where LLMs become the basis of information exchange because outputs “look right”.

    Show me a system that can guarantee correct answers and I’m 100% on board.

  • I was recently diagnosed by a neuropsychologist, after a similar process of many hours of testing (~5h). My friend was also diagnosed recently, by a psychiatrist through question-and-answer, but with no formal cognitive evaluation measure. The clarity I got from the neuropsych in terms of cognitive function and my specific circumstances was significantly more helpful than what my friend got from the psychiatrist.

    After all the formal testing, I was given a thorough 17-page report, including a breakdown of each aspect of cognitive functioning, any applicable disorders (with a recommendation for therapy to investigate further and confirm), next steps, and treatment and coping-mechanism recommendations. My friend was given a broad diagnosis of unspecified ADHD with no additional information.

    If you are able to afford the neuropsych eval, it is well worth it.

  • The number of articles about the latest and greatest game updates from a few hours ago that are just rehashing patches released a week or more ago drives me nuts. How many times do I have to wade through multiple screens of preamble to find out the content is recycled from week-old news?

    But yes, the ratio of low-signal to high-signal content is crazy in general. I get that people have to make a living and want to do it by communicating on YouTube/articles/… but I feel we’ve really lost access to high-quality content. ChatGPT and other LLMs are going to make this wayyyyyyy worse.

    Content recommendation algorithms push for length and frequency, which inevitably means meeting the quantity bar matters more than quality. Meanwhile, really well-thought-out, high-quality content gets buried in a mountain of clickbait, and those creators get neither the monetization nor the exposure they deserve. It’s a sad system :(. I want to see more content at ErrantSignal’s quality bar and less clickbait, please.