• FaceDeer@kbin.social
    1 year ago

    It’s only a “shortcoming” if you aren’t aware of how these LLMs function and are using it for something it’s not good at (in this case information retrieval). If instead you want it to be making stuff up, what was previously an undesirable hallucination becomes desirable creativity.

    This also helps illustrate the flaws in the “they’re just plagiarism machines” argument. LLMs come up with stuff that definitely wasn’t in their training data.

    • csfirecracker@lemmyf.uk
      1 year ago

      I didn’t mean to argue against the usefulness of LLMs entirely; they absolutely have their place. I was more so referring to how everyone and their dog are making AI assistants for tasks that need accurate data, without addressing how easy it is for them to present you with bad data in total confidence.