If it’s free, then you’re the product
Last July, Google made an eight-word change to its privacy policy that represented a significant step in its race to build the next generation of artificial intelligence.
Buried thousands of words into its document, Google tweaked the phrasing for how it used data for its products, adding that public information could be used to train its A.I. chatbot and other services.
“We use publicly available information to help train Google’s language AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”
The subtle change was not unique to Google. As companies look to train their A.I. models on data that is protected by privacy laws, they’re carefully rewriting their terms and conditions to include words like “artificial intelligence,” “machine learning” and “generative A.I.”
Those terms and conditions — which many people have long ignored — are now being contested by some users, including writers, illustrators and visual artists, who worry that their work is being used to train the very products that threaten to replace them.
Archive: https://archive.is/SOe5w