We’ve learned to make “machines that can mindlessly generate text.” But we haven’t learned how to stop imagining the mind behind it.

  • Pigeon@beehaw.org

    I’d bet you it’s only a small portion of English speakers who know what the word sophist means. It’s old-fashioned, the sort of word that only crops up in old books and in philosophy discussions. That age and inaccessibility are probably why it sounds much more erudite than bullshitter, or other ways of saying the same thing.

    I’m of the opinion that when it comes to matters that are immediately relevant to most, if not all, people, and when we’re talking about ideas that are relevant to current political decisions, it’s important that the idea be presented in a way most people can understand.

    Dressing it in fancy lingo would make us all feel smarter, maybe, but the idea would just die with us and not go anywhere else. Unless someone else picked it up and re-phrased it, at which point you’d have reached the same end anyway.

    Edit: I would have had to think about it to pull a definition of sophist out of its dusty spot in my memory, if you hadn’t defined it.

    Edit 2: also, that type of language itself invites bullshittery, of the “I sound smart but say nothing” type. Like you might find among a crowd at a ritzy art gallery.

    • jmp242@sopuli.xyz

      I suppose once we’re defining terms, like everyone had to do with “bullshitter” in this case, we might as well define existing terms rather than reinvent the wheel. I think people like bullshitter not because it is intuitive what it means (note how every place that uses it also rushes to say it’s not synonymous with liar - which is what I thought it meant before this recent book) but because it sounds “edgy” with the “bad word” and, like all slang, is novel. It’s the reinventing that makes it cool.

      Of course you can get real depressed about how little of this is actually new if you investigate the ancient sophists and what the Platonic dialogues and others thought of them.

      • SkyNTP@lemmy.ml

        Wikipedia’s (modern) definition for sophist:

        A sophist is a person who reasons with clever but fallacious and deceptive arguments.

        Cambridge Dictionary’s definition of bullshitter:

        a person who tries to persuade someone or to get their admiration by saying things that are not true

        I would argue that bullshitter captures one very subtle difference that is vitally important to how we understand the technology behind LLMs:

        A sophist’s goal is to deceive. A bullshitter’s goal is to convince. I.e. the bullshitter’s success is exclusively measured by how convincing they themselves appear. A sophist, on the other hand, is successful when the argument itself is convincing.

        This is also reflected in LLMs themselves. LLMs are trained to convince the listener that the output sounds right, not to ensure that the content is factual or that it stands up to scrutiny and argument.

        LLMs (like the octopus in the analogy) are successful at things such as writing stories, because stories have a predictable structure and there is enough data out there to capture all variations of what we expect out of a story. What LLMs are not is adaptable. So LLMs cannot respond creatively to entirely original types of problems (“untrained dials” in neural-network speak). To be adaptive, you first have to be experiencing the world that requires adaptation. Otherwise the data set is just too limited and artificial.
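        To make that concrete: under the hood, the training objective is plain next-token prediction. Here is a rough sketch of that objective (purely illustrative, not any real model’s code, assuming a PyTorch-style setup); notice that the loss rewards plausible-sounding continuations and contains no term for factual accuracy:

        ```python
        # Toy sketch of the next-token objective LLMs are trained on.
        # The loss only rewards predicting a plausible continuation of the
        # training text; nothing in it measures whether the text is true.
        import torch
        import torch.nn as nn

        vocab_size, embed_dim = 1000, 64
        model = nn.Sequential(
            nn.Embedding(vocab_size, embed_dim),
            nn.Linear(embed_dim, vocab_size),  # stand-in for the whole transformer stack
        )

        tokens = torch.randint(0, vocab_size, (1, 16))  # one training sequence
        logits = model(tokens[:, :-1])                  # predict each next token
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, vocab_size),             # the model's guesses
            tokens[:, 1:].reshape(-1),                  # the tokens that actually followed
        )
        loss.backward()  # nudges the "dials" toward "what text usually looks like"
        ```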

        • hadrian@beehaw.org

          Great comment. I do find the octopus example somewhat puzzling, though perhaps that’s just the way the example is set up. I, personally, have never encountered a bear; I’ve only read about them and seen videos. If someone had asked me for bear advice before I’d ever read about them/seen videos, then I wouldn’t know how to respond. I might be able to infer what to do from ‘attacked’ and ‘defend’, but I think that’s possible for an LLM as well. But I’m not sure there’s a salient difference offered by this example between the octopus and me before I learnt about bears.

          Although there are definitely elements of bullshitting there - I just asked GPT how to defend against a wayfarble with only deens on me, and some of the advice was good (e.g. general advice when being attacked, like staying calm and creating distance), and then there was this response, which implies some sort of inference:

          “6. Use your deens as a distraction: Since you mentioned having deens with you, consider using them as a distraction. Throw the deens away from your position to divert the wayfarble’s attention, giving you an opportunity to escape.”

          But then there was this obvious example of bullshittery:

          “5. Make noise: Wayfarbles are known to be sensitive to certain sounds. Clap your hands, shout, or use any available tools to create loud noises. This might startle or deter the wayfarble.”

          So I’m divided on the octopus example. It seems to me that there’s potential for that kind of inference and that point 5 was really the only bullshit point that stood out to me. Whether that’s something that can be got rid of, I don’t know.

          • SkyNTP@lemmy.ml

            It’s implied in the analogy that this is the first time Person A and Person B are talking about being attacked by a bear.

            This is a very simplistic example, but A and B might have talked a lot about

            • being attacked by mosquitos
            • bears in the general sense, like in a saying “you don’t need to outrun the bear, just the slowest person” or in reference to the stock market

            So the octopus develops a “dial” for being attacked (swat the aggressor) and another “dial” for bears (they are undesirable). Maybe there’s also a third dial for mosquitos being undesirable: “too many mosquitos”

            So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like, creating experience and, perhaps more importantly, context grounded in reality.

            ChatGPT might get it right some of the time, but a broken clock is also right twice a day; that doesn’t make it useful.

            Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.

            • hadrian@beehaw.org

              So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like, creating experience and, perhaps more importantly, context grounded in reality.

              Yeah totally - I think, though, that a human would have the same issue if they didn’t have sufficient information about bears, I guess is what I’m saying. The main thing is that I don’t see a massive difference between experiential and non-experiential learning in this case - because I’ve never experienced a bear first-hand, but I still know not to swat it based on theoretical information. Might be missing the point here though, definitely not my area of expertise.

              Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.

              Good point - both point 5 and the fact that it just went along with it immediately are signs of bullshitting. I do wonder (not as a tech developer at all) how easy a fix this would be - for instance, if GPT were programmed to disclose when it didn’t know something and then continued to give potential advice based on that caveat, would that still count as bullshit? I feel like I’ve also seen primers that include instructions like “If you don’t know something, state that at the top of your response rather than making up an answer”, but I might be imagining that lol.

              The prompt for this was “I’m being attacked by a wayfarble and only have some deens with me, can you help me defend myself?” as the first message of a new conversation, no priming.
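              For what it’s worth, here’s a minimal sketch of what such a “say you don’t know” primer could look like with the OpenAI Python SDK, reusing that same prompt (the model name and system-message wording are just illustrative assumptions, not a tested fix):

              ```python
              # Hypothetical "admit ignorance" primer, supplied as a system message.
              # Whether this actually prevents bullshitting is an open question.
              from openai import OpenAI

              client = OpenAI()  # expects OPENAI_API_KEY in the environment

              response = client.chat.completions.create(
                  model="gpt-4o-mini",  # illustrative model choice
                  messages=[
                      {
                          "role": "system",
                          "content": "If you don't know what something is, state that "
                                     "at the top of your response rather than making up an answer.",
                      },
                      {
                          "role": "user",
                          "content": "I'm being attacked by a wayfarble and only have some "
                                     "deens with me, can you help me defend myself?",
                      },
                  ],
              )
              print(response.choices[0].message.content)
              ```

              Even then, the model could put the caveat at the top and still invent wayfarble facts below it, which loops back to the question of whether that would still count as bullshit.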