Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • theluddite@lemmy.ml · 5 months ago

    I’m deeply concerned that as a society we’re becoming unable to distinguish between science, aka the search for knowledge, and corporate product development. More concerning still is the erosion of the distinction between a scientific paper, which exists to communicate experimental findings so that they can be reproduced, and what is functionally advertising for proprietary products masquerading as such. No one can reproduce the “paper” cited there, because the work was done in-house at a company. That’s antithetical to science.