I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or to wrap your head around, because the sentences are so convincing.

Any good examples of how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier especially is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • Hucklebee@lemmy.worldOP · 6 months ago

    I commented something similar on another post, but this is exactly why I find this phenomenon so hard to describe.

    A teenager in a new group still has understanding and a mind. They know the meanings of many of the words being said. Sure, some catchphrases might be new, but general topics shouldn’t be too hard to follow.

    This is nothing like genAI. GenAI doesn’t know anything at all. It has (simplified) a list of words that are somehow connected to each other. But the AI has no concept of a wheel, of what round is, what rolling is, what rubber is, what an axle is. NO understanding. Just words that happen to describe all of it. For us humans it is so difficult to grasp that something can use language without knowing ANY of the meaning.

    How can we describe this so that our brains accept that you can have language without understanding? The Chinese Room thought experiment comes close, but it is quite complicated to explain as well, I think.
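One way to make “just a list of words connected to each other” concrete is a toy next-word model. The sketch below (with a made-up two-sentence corpus, purely for illustration) only counts which word follows which and then samples from those counts, so it can emit plausible wheel-sentences while having no concept of a wheel:

```python
import random
from collections import defaultdict

# Toy next-word predictor: it learns only which word tends to follow
# which, with no concept of what any of the words mean.
# (Tiny made-up corpus, purely for illustration.)
corpus = (
    "the wheel is round and the wheel rolls on the axle "
    "the rubber wheel rolls down the road and the road is long"
).split()

# Record every observed word -> next-word transition.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Generate text by repeatedly sampling a plausible next word.
random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    if word not in transitions:  # dead end: no observed continuation
        break
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))
```

Real LLMs are vastly more sophisticated than this bigram chain, but the point of the analogy is the same: the program produces grammatical-looking text purely from observed word adjacencies, with nothing inside it that “knows” what a wheel is.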

    • Zos_Kia@lemmynsfw.com · 6 months ago

      I think a flaw in this line of reasoning is that it assigns a magical property to the concept of knowing. Do humans know anything? Or do they just infer meaning from identifying patterns in words? Ultimately this is a spiritual question and does not hold any water in a scientific conversation.

      • bcovertigo@lemmy.world · 6 months ago

        It’s valid to point out that we have difficulty defining knowledge, but the output from these machines is inconsistent at a conceptual level, and you can easily get them to contradict themselves in the spirit of being helpful.

        If someone told you that a wheel can be made entirely of gas, would you be confident that they have a firm grasp of a wheel’s purpose? Tool use is a pretty widely agreed-upon marker of intelligence, so not grasping the purpose of a thing that it can describe at great length and in exhaustive detail, while also making boldly incorrect claims on occasion, should raise an eyebrow.

    • NeoNachtwaechter@lemmy.world · 6 months ago

      How can we describe this so that our brains accept that you can have language without understanding?

      I think it is truly impossible to describe in just a few simple words.

    • JackbyDev@programming.dev · 6 months ago

      Joel Haver has a sketch in which one person in a group laughs at an inside joke from a trip they didn’t go on. When pressed, I think they say something like they laughed because everyone else was laughing. As someone who has been in that situation: it’s true. Even though I don’t understand the specific reference being made, it’s usually delivered in a funny manner, such that the storytelling is enjoyable and humorous. Or I’m able to use context clues to guess what they might be joking about, and it’s funny even if my understanding is off.

    • Turun@feddit.de · 6 months ago

      NO understanding. Just words that happen to describe all of it.

      If being able to describe it does not mean understanding, then what is understanding?