• Lvxferre@mander.xyz
    1 month ago

    [Replying to myself to avoid editing the above]

    Here’s another example. This time without involving names of RL people, only logical reasoning.

    And here’s a situation showing that it’s bullshit:

    All A are B. Some B are C. But no A is C. So yes, they have awful logical reasoning.

    You could also have a situation where C is a subset of B, and it would obey the prompt to the letter. Like this:

    • all A are B; e.g. “all trees are living beings”
    • some B are C; e.g. “some living beings can bite you”
    • [INCORRECT] thus some A are C; e.g. “some trees can bite you”
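
    The invalidity of that pattern can be checked mechanically. Here’s a minimal Python sketch using sets, with made-up members standing in for the trees/living-beings example above (the concrete names are just illustrations, not anything from the thread):

    ```python
    # Counterexample: "all A are B" and "some B are C" together
    # do NOT entail "some A are C".
    A = {"oak", "pine"}             # trees
    B = {"oak", "pine", "dog"}      # living beings
    C = {"dog"}                     # things that can bite you

    all_a_are_b = A <= B            # subset check: all trees are living beings
    some_b_are_c = bool(B & C)      # non-empty intersection: some living beings can bite
    some_a_are_c = bool(A & C)      # the LLM's conclusion: some trees can bite

    print(all_a_are_b, some_b_are_c, some_a_are_c)  # True True False
    ```

    Both premises hold, yet the conclusion is false, so the inference is invalid.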
    • CileTheSane@lemmy.ca

      Yup, the AI models are currently pretty dumb. We knew that when it told people to put glue on pizza.

      If you think this is proof against consciousness, does that mean if a human gets that same question wrong they aren’t conscious?

      For the record I am not arguing that AI systems can be conscious. Just pointing out a deeply flawed argument.

      • Lvxferre@mander.xyz

        Yup, the AI models are currently pretty dumb. We knew that when it told people to put glue on pizza.

        That’s dumb, sure, but in a different way. It doesn’t show a lack of reasoning; it shows incorrect information being fed into the model.

        If you think this is proof against consciousness

        Not really. I phrased it poorly but I’m using this example to show that the other example is not just a case of “preventing lawsuits” - LLMs suck at basic logic, period.

        does that mean if a human gets that same question wrong they aren’t conscious?

        That is not what I’m saying. Even humans with learning impairments handle basic logic (like “A is B, thus B is A”) considerably better than those models do, provided it’s phrased in a suitable way. That one might be a bit more advanced, but if I told you “trees are living beings. Some living beings can bite. So some trees can bite.”, you would definitely feel like something is “off”.

        And when it comes to human beings, there’s another complicating factor: cooperativeness. Sometimes we get shit wrong simply because we can’t be arsed, which says nothing about our abilities. This factor doesn’t exist when dealing with LLMs, though.

        Just pointing out a deeply flawed argument.

        The argument itself is not flawed, just phrased poorly.