• jkintree@slrpnk.net · 14 hours ago

    They succeeded in getting attention. Look at all the comments posted here. The issue needs attention, and it also needs fact checking. I was pleased with the fact checking I got from diffy.chat about the wildfires in LA County. Maybe fact-checking bots should be included in online discussion forums.

    • silence7@slrpnk.net (OP) · 14 hours ago

      The bots are mostly language models, not knowledge models. I don’t regard them as sufficiently reliable to do any kind of fact checking.

      • jkintree@slrpnk.net · 13 hours ago

        The language model for diffy.chat has been trained not to respond from its own learned parameters, but to use the Diffbot external knowledge base. Each sentence or paragraph in a Diffy response has a link to the source of the information.
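
        Diffy’s internals aren’t public, but the general pattern described here (retrieve from an external knowledge base, answer only from the retrieved text, attach a source link to each claim) can be sketched. Everything below is invented for illustration; Passage, kb.search, and llm.complete are hypothetical stand-ins, not Diffbot’s actual API:

        ```python
        from dataclasses import dataclass

        @dataclass
        class Passage:
            text: str
            url: str

        def answer_with_citations(question, kb, llm):
            """Answer from retrieved passages only, linking each claim to a source."""
            # 1. Retrieve from the external knowledge base instead of relying
            #    on the model's learned parameters.
            passages = kb.search(question, limit=5)

            # 2. Constrain the model to the retrieved text.
            context = "\n\n".join(f"[{i}] {p.text}" for i, p in enumerate(passages))
            draft = llm.complete(
                "Answer using ONLY the numbered passages below. End every "
                "sentence with the bracketed number of the passage it came from. "
                "If the passages do not answer the question, say so.\n\n"
                f"{context}\n\nQuestion: {question}"
            )

            # 3. Turn the bracket markers into links, so each sentence carries
            #    a source the way Diffy responses do.
            for i, p in enumerate(passages):
                draft = draft.replace(f"[{i}]", f"({p.url})")
            return draft
        ```

        Note that step 2 only asks the model to stay inside the passages; whether it actually does is exactly the reliability question raised above.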

        • silence7@slrpnk.net (OP) · 13 hours ago

          That’s still not in the realm where I trust it; the underlying model is a language model. What you’re describing is a recipe for paltering a significant fraction of the time.

          • jkintree@slrpnk.net · 11 hours ago

            Did you even try diffy.chat to test how factually correct it is and how well it cites its sources? How good does it have to be to be useful? How bad does it have to be to be useless?

            • silence7@slrpnk.net (OP) · 11 hours ago

              I tried it. It produces reasonably accurate results a meaningful fraction of the time. The problem is that when it’s wrong, it still uses authoritative language, and you can’t tell the difference without underlying knowledge.

              • jkintree@slrpnk.net · 9 hours ago

                There does need to be a human-in-the-loop mechanism so that people who have the underlying knowledge can correct the knowledge base. Perhaps a notification should be sent to people who previously viewed the incorrect information when a correction is made, as in the sketch below.
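
                A minimal sketch of that loop, with every name invented for illustration (this is not any existing forum’s or knowledge base’s API): record who views each entry, and fan a notification out to those viewers when someone with the underlying knowledge corrects it.

                ```python
                from collections import defaultdict

                class CorrectableKB:
                    """Stores entries, remembers viewers, alerts them on correction."""

                    def __init__(self, notify):
                        self.notify = notify              # callback: notify(user_id, message)
                        self.entries = {}                 # entry_id -> current text
                        self.viewers = defaultdict(set)   # entry_id -> users who saw it

                    def view(self, entry_id, user_id):
                        # Record who saw which entry so a correction can reach them later.
                        self.viewers[entry_id].add(user_id)
                        return self.entries.get(entry_id)

                    def correct(self, entry_id, new_text, editor):
                        # A human with the underlying knowledge fixes the entry...
                        old_text = self.entries.get(entry_id)
                        self.entries[entry_id] = new_text
                        # ...and everyone who previously viewed the old version is told.
                        for user_id in self.viewers[entry_id]:
                            self.notify(user_id, f"{editor} corrected an entry you viewed: "
                                                 f"was {old_text!r}, now {new_text!r}")
                ```

                The key design point is recording views at read time; you can’t reconstruct after the fact who saw the incorrect information.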