Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but can pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
pushed by rich capitalist a-holes
who already invested in LLM stocks
and are now looking for a profit.

  • Seudo@lemmy.world · 5 months ago

    We have to work out what intelligence is before we can develop AI. Sentient AI? Forget about it!

    • doctorcrimson@lemmy.world · 5 months ago

      I think sentience is generally considered a very low bar, while sapience better describes thinking on the level of a real person. I get the two confused sometimes.

      • evranch@lemmy.ca · 5 months ago

        In the case of an LLM-type AI though, the bars can be swapped in a sense. LLMs are strange, because they can talk but not feel.

        You can’t argue that a series of tensor calculations is sentient (def. able to perceive or feel), capable of experiencing life from the “inside”. A dog is sentient by most definitions; it could be argued to have a “soul”. When you look at a dog, the dog looks back at you. An LLM does not. It is not conscious, not “alive”.

        However, an LLM does put on a fair appearance of being sapient (def. intelligent; able to think): it contains large stores of knowledge and, aside from humans, is now the only thing on the planet that can talk. You can have a discussion with one; you can tell it that it was wrong, and it can debate or clarify using its internal knowledge. It can “reason”, and anyone who has used one to write code can attest to this, having seen its ability to work around restrictions.

        It doesn’t have to be sentient to be able to do this sort of thing, even though we used to think that was practically a prerequisite. Thus the philosophical confusion around them.

        Even if this is simply a clever trick of a glorified autocomplete algorithm, this is something the dog cannot do despite its sentience. Thus an LLM with a decent number of parameters is “smarter” than a dog, and arguably more sapient.

        • doctorcrimson@lemmy.world · 5 months ago

          No, not really. You’re misunderstanding the words, and also vastly overestimating LLMs. LLMs such as the OpenAI™ models cannot reproduce a dog’s barking well enough to fool humans or animals unless they’re trained on dog-barking data to the point of specialization. That’s because they lack any general thinking capability, period.

          Learning algorithms require massive amounts of sample data to function, and pretty much never function outside of specific purposes such as predicting what word will come next in a sentence. I personally think that disqualifies them from sentience and sapience, but they could certainly pass a written sentience test.
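          The "predict the next word" objective mentioned above can be sketched with a toy bigram frequency model. This is a deliberately simplified stand-in, not how real LLMs work internally (those use large neural networks over vastly more data), but the training goal is analogous: given what came before, predict the most likely next token. All names and the sample corpus here are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the sample data."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequently observed follower, or None if unseen."""
    if word not in followers:
        return None  # no general reasoning: unseen input, no answer
    return followers[word].most_common(1)[0][0]

model = train_bigrams("the dog barks the dog sleeps the cat sleeps")
print(predict_next(model, "the"))   # prints "dog" (seen twice after "the")
print(predict_next(model, "moon"))  # prints "None" (never in the sample data)
```

          Note how the model answers fluently for inputs resembling its training data and fails completely outside it, which is the specialization point being made here, just at a vastly smaller scale.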