• racemaniac@lemmy.dbzer0.com
    1 month ago

    The problem I have with responses like yours is that you start from the principle that consciousness can only be consciousness if it works exactly like human consciousness. Chess engines initially had the same stigma: “they’ll never be better than humans, since they can just calculate; no creativity, no real analysis, no insight, …”.

    As the person you replied to said, we don’t even know what consciousness is. If, however, you define it as “whatever humans have”, then yeah, a conscious AI is a loooong way off. But even extremely simple systems, when executed at a large enough scale, can produce incredible emergent behaviors. Take Conway’s Game of Life: a very simple system in which black/white dots in a grid ‘reproduce and die’, governed by just four rules. By now people have built self-reproducing patterns in it and even implemented Turing machines (meaning anything a computer can calculate can be calculated inside the Game of Life), etc.
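    To make the “four simple rules” point concrete, here is a minimal sketch of the Game of Life in Python (my own illustration, not from anyone in this thread; the function and variable names are invented for the example):

```python
from collections import Counter

# Conway's four rules: a live cell with fewer than 2 live neighbours dies
# (underpopulation); with 2 or 3 it survives; with more than 3 it dies
# (overpopulation); a dead cell with exactly 3 live neighbours becomes alive.

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) tuples."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has 3 neighbours,
    # or 2 neighbours and was already alive.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates between a horizontal and a vertical bar of 3 cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # the vertical bar {(1, 0), (1, 1), (1, 2)}
```

    That’s the entire system; everything else (gliders, self-replicators, Turing machines) emerges from iterating it.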

    Am I saying that GPT is conscious? Nope, I wouldn’t even know how to assess that. But saying “it’s just a text predictor, it can’t be conscious” feels like you’re missing soooo much of how things work. Extremely simple systems at a large enough scale can produce insane emergent behaviors, so it being just a predictor doesn’t exclude consciousness.

    Even we as human beings, looking at our cells, our brains, … what else are we than tiny basic machines that somehow, at a large enough scale, form something incomprehensibly complex and conscious? Your argument almost sounds to me like “a human can’t be aware; their brain just consists of simple brain cells that work like this, so it’s just storing the data it experiences and then repeating it in some ways”.

    • TheOakTree@lemm.ee
      1 month ago

      Chess engines initially had the same stigma “they’ll never be better than humans since they can just calculate, no creativity, real analysis, insight…”

      I don’t know if this is a great example. Chess is an environment with an extremely defined end goal and very strict rules.

      The ability of a chess engine to defeat human players does not mean it became creative or gained insight. Rather, we advanced the complexity of the chess engine to encompass more possibilities, more strategies, etc. In addition, it was quite naive for people to suggest that a computer would be incapable of “real analysis” when its ability to do so entirely depends on the ability of humans to create a model complex enough to compute “real analyses” in a known system.

      I guess my argument is that in the scope of chess engines, humans underestimated the ability of a computer to determine solutions in a closed system, which is usually what computers do best.

      Consciousness, on the other hand, cannot be easily defined, nor does it adhere to strict rules. We cannot compare a computer’s ability to replicate consciousness to any other system (e.g. chess strategy) as we do not have a proper and comprehensive understanding of consciousness.

      • racemaniac@lemmy.dbzer0.com
        1 month ago

        I’m not saying that because chess engines became better than humans, LLMs will become conscious. I’m just using that example to show that humans always have this bias to frame anything that is not human as inherently lesser, while it might not be. Chess engines don’t think like a human does, yet they play better. So for an AI to become conscious, it doesn’t need to think like a human either; it just needs some mechanism that ends up with a similar enough result.