• Chewy@discuss.tchncs.de · 2 months ago

        I've noticed those language models don't work well on articles with dense information and complex sentence structure. Sometimes they leave out the most important point.

        They are useful as a TLDR but shouldn't be taken as fact, at least not yet, and probably not for the foreseeable future.

        A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That's why it's so important to mark AI/LLM-generated text (which the TLDR bots do).

        • statist43@feddit.de · 2 months ago

          I think the internet would benefit a lot if people marked their information with sources!

          • source: my brain
          • Chewy@discuss.tchncs.de · 2 months ago

            Yeah, that's right. Having to post sources rules out the use of LLMs for the most part, since most of them do a terrible job of providing them, even when the information happens to be correct.