ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • adeoxymus@lemmy.world
    10 months ago

I’d say a measurement always trumps an argument. At least then you know how accurate they are; a statement like the following can’t be arrived at by reasoning alone:

    The JAMA study found that 12.5% of ChatGPT’s responses were “hallucinated,” and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.

    • zeppo@lemmy.world
      10 months ago

That’s useful. It’s also worth noting that the information the chatbot can relay depends heavily on the data used to train the model, so these figures could change over time.