Until AI is allowed to vote, perhaps they should sit the fuck down.

  • Voroxpete@sh.itjust.works
    1 year ago

    In fact, since AI in theory should be much better at fact checking than humans, the standards of information quality should be much higher

    What we’re all calling “AI” right now has basically zero ability to fact check.

    Large Language Models are essentially just a form of autocomplete. They predict plausible outputs based on statistical analysis of their training data. This makes them quite good at passing the Turing test (i.e., convincing the average user that they have something approximating intelligence), but what they completely lack is the ability to evaluate sources for reliability. That's why it's so easy to deliberately trick them into repeating false information.
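    To make the "autocomplete" point concrete, here's a toy sketch (my own illustration, not how real LLMs are implemented): a bigram model that picks the next word purely by how often it followed the previous word in its training text. Real LLMs use neural networks over far longer contexts, but the objective is the same kind of likelihood ranking, with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": predict the next word purely from
# co-occurrence counts in the training text. Note that it happily
# learns "flat" from flat-earth text; it ranks continuations by
# frequency, not by truth.
corpus = ("the earth is round . the earth is flat . "
          "the earth orbits the sun .")

bigrams = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word."""
    return bigrams[word].most_common(1)[0][0]

print(autocomplete("earth"))  # -> "is" (it followed "earth" most often)
```

    Feed it a corpus where "flat" follows "is" more often than "round" does and it will confidently complete "the earth is flat" — which is the whole problem.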

    Real fact checking is a lot more than just googling something and finding a source that agrees with you. I can find sources claiming that the Earth is flat, that aliens rule the world, and that Hillary Clinton is a baby-eating lizard person. But none of those sources are in any way credible. Explaining why they're not credible, however, is a much harder question. Media literacy is a complex skill, one that involves evaluating a huge number of criteria using many different metrics, and it often requires difficult judgement calls. Even people who are good at media literacy can be fooled, or just get it wrong. The entire study of history is basically about evaluating sources, and there are often serious disagreements over the veracity of a piece of information. Good journalists have to be very careful about exactly how they frame information to disambiguate the degree of confidence they have in it (i.e., I can say with absolute certainty that this person told me this thing, but I can't say with absolute certainty that what they told me is true)… And that's the good journalists. There are a LOT of bad journalists out there.

    It’s possible that some hypothetical future generation of AI will be better at fact checking than humans, but that’s not what we have today. The only way to get modern LLMs to produce factual information is to be extremely careful about what data they are fed; and even then, they will often just make shit up out of whole cloth from that data. Any output has to be verified by a human operator to avoid situations like Microsoft recommending the Ottawa Food Bank as a must-see tourist attraction.

    • Dearche@lemmy.ca
      1 year ago

      No, I know that modern AI has no real ability to fact check, but that’s because it has never been built that way, nor given the resources to do it properly. It has no way to know what counts as a reliable source, nor how to interpret data meaningfully when it needs to be used in an abstract way.

      But I do believe that modern AI technology should be able to do so if given the resources: create an AI that only draws on a curated list of credible sources, and that can compare them against what is said elsewhere.
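      A minimal sketch of what that restriction might look like (hypothetical: the domain list and function name are mine, and real fact checking would need far more than a domain allowlist): filter retrieved sources against a curated list before the model ever sees them.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; deciding which domains count as "credible"
# is exactly the hard editorial judgement discussed above.
CREDIBLE_DOMAINS = {"nature.com", "reuters.com", "who.int"}

def filter_sources(urls):
    """Keep only URLs on an allowlisted domain (or a subdomain of one)."""
    allowed = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in CREDIBLE_DOMAINS):
            allowed.append(url)
    return allowed

print(filter_sources([
    "https://www.nature.com/articles/d41586",
    "https://randomblog.example/earth-is-flat",
]))  # -> ['https://www.nature.com/articles/d41586']
```

      Even this toy version shows where the difficulty actually lives: the filter is trivial, but maintaining the allowlist is the media-literacy problem the comment above describes.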

      I’m no AI specialist or anything, so maybe I’m completely wrong and such a method wouldn’t work. But at the very least, I haven’t even heard of any real attempt at making a fact-checking AI yet. All the existing ones are shit; they just adapt ordinary large language models to cite other internet sources regardless of their legitimacy.