• AndromedusGalacticus@lemm.ee · 1 year ago

    Here’s what ChatGPT/Google Bard have to say:

    The answer is: not necessarily. Most of the bacteria on our skin are adapted to living in wet environments, so they will not suffocate. However, some bacteria may be washed away or killed by the chlorine in the pool.

    • CineMaddie@lemmy.film · 1 year ago

      Why are we relying on language models to answer questions? These things don’t really “know” anything, right?

      • NightFantom@slrpnk.net · 1 year ago

        They don’t, but they sound as convincing (and are probably about as correct) as a random blog you’d find by googling your question.

      • Catoblepas@lemmy.blahaj.zone · 1 year ago

        They don’t, and people are way too blasé about it: “oh, it’s actually the same as googling because it’s just taking from sources online anyway,” when in reality it keeps none of the knowledge it gets from those sites and is just stringing together words that often go together. It’s like thinking your phone’s predictive text can answer your questions, if your phone also invented quotes and sources (this has already been an issue with journalists and lawyers using ChatGPT to “research”).

        • CineMaddie@lemmy.film · 1 year ago

          Thank god, someone who understands. I hated how, towards the end, Reddit was so full of misinformation and people talking out of their ass with confidence. I hope Lemmy can steer away from those tendencies. It’s okay if we don’t have the answer sometimes.

      • Sethayy@sh.itjust.works · 1 year ago

        No one knows anything, get over yourself, buddy - it gave a correct answer way more politely than I ever could, so who’s gonna complain?

      • jscummy@sh.itjust.works · 1 year ago

        Don’t they pull from online sources? So it’s basically googling with extra steps and an unpredictable middleman.

        • CineMaddie@lemmy.film · 1 year ago

          That would be right if they understood or knew what they were talking about. It’s more akin to really advanced autocorrect that sounds/reads like something the AI was trained on. So it sounds correct but really has no basis in truth other than “the model predicts a human would say X next”. Truth is rarely the goal of any of these machine learning language models, afaik.
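
          A deliberately toy sketch of what “predicts what a human would say next” means, assuming nothing fancier than bigram word counts (real models are learned neural networks, but the point stands: it picks likely-sounding words, not true ones):

          ```python
          # Toy "predictive text": choose the next word purely from how often
          # words follow each other in a tiny corpus. No notion of truth,
          # just frequency. (Real LLMs use learned neural nets, not tables.)
          from collections import Counter, defaultdict

          corpus = (
              "bacteria on our skin are adapted to wet environments "
              "bacteria on our skin do not suffocate in the pool"
          ).split()

          follows = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev][nxt] += 1

          def predict_next(word):
              """Return the word most often seen after `word` (or a fallback)."""
              options = follows.get(word)
              return options.most_common(1)[0][0] if options else "<unknown>"

          word, text = "bacteria", ["bacteria"]
          for _ in range(6):
              word = predict_next(word)
              text.append(word)

          print(" ".join(text))  # fluent-sounding, but nothing here "knows" anything
          ```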

      • AndromedusGalacticus@lemm.ee · 1 year ago

        Yeah, I’m aware. There were like 10 comments with no replies, so I thought it’d be fun to see what the chatbot would say. I didn’t take its answer too seriously, but I knew people might be sensitive to it, and it would have been unfair of me not to say where it came from. By providing the “source,” people can at least decide whether or not to discard the information.