Original title: The US Military Is Taking Generative AI Out for a Spin
Summary: The US military is testing five LLMs as part of an eight-week exercise run by the Pentagon’s digital and AI office. “It was highly successful. It was very fast,” a US Air Force colonel is quoted as saying. “We did it with secret-level data,” he adds, saying that it could be deployed by the military in the very near term.
So, there’s a high probability that people will die because an AI lied for the US military. Plausible deniability has a new toy.
What a strange game, the only winning move is not to play.
This is the next arms race.
(article from 5.07.2023)
Terrifying, but inevitable…
Honestly, I was freaked out when I read “secret-level data” in the article, but it’s probably a choice between (a) giving people an officially approved LLM, set up in a way where it’s safe to feed secret-level data into it, or (b) having people fire up ChatGPT and paste the secret-level data into that. There have already been incidents of employees pasting proprietary code into ChatGPT, and doubtless the same will happen with military data whether or not anyone tries to prevent it.
Inevitable, but still scary AF.