That’s the problem, though: any “AI” health bot is trained on existing data, and that data comes from a long history of biased, religious, prejudiced health professionals. And assuming the bots are trained on as much data as possible, they might even be worse, because older data will be that much more biased and prejudiced.
Also, in this case, Bill Gates is full of shit. AI will be nowhere near sophisticated enough in 10 years to do most jobs.
I would almost accept AI accelerating humanity’s downfall if it could get us neutral, non-religious, non-prejudiced doctors & therapists for a while.
Thanks. I hate it.
Yea… unfortunately this is not happening. Back in the days when “AI” didn’t only mean LLMs or generative algorithms, people tried to predict crime with algorithms. It has been shown that they exhibit the same biases humans do, because they learned from humans. One of many examples: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing but I think I read stories like this happening well before 2010.
But wait! There is more! This isn’t anything new. Before police used algorithms, even before computers existed, police forces trained dogs to help them in their missions (defending people, detecting drugs…). Guess what? The dogs also learned the biases of their trainers. https://daily.jstor.org/the-police-dog-as-weapon-of-racial-terror/ and https://www.npr.org/sections/thetwo-way/2011/01/07/132738250/report-drug-sniffing-dogs-are-wrong-more-often-than-right are examples for both categories.
So yeah. Unbiased AI, or dogs, or unicorns won’t happen as long as the humans training them are biased.
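The “models inherit bias from their training data” point above can be sketched in a few lines of Python. This is a toy illustration, not any real risk-assessment system: the group names, rates, and the trivial “model” are all made up for the sake of the demonstration.

```python
import random

random.seed(0)

# Two groups, A and B, with identical true rates of the behaviour
# being predicted (30%). The historical labels are biased: group B
# was flagged "high risk" far more often than its true rate warrants.
def historical_label(group):
    true_rate = 0.3
    recorded_rate = true_rate if group == "A" else 0.6  # biased records
    return 1 if random.random() < recorded_rate else 0

training = [(g, historical_label(g)) for g in ("A", "B") for _ in range(1000)]

# "Training": this toy model just learns the per-group label frequency.
def fit(data):
    rates = {}
    for group in ("A", "B"):
        labels = [y for g, y in data if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

model = fit(training)
print(model)  # group B comes out roughly twice as "risky",
              # even though the underlying behaviour is identical
```

The model never sees the true rates, only the biased records, so it faithfully reproduces the bias. Nothing in the training procedure is “prejudiced” by itself; the data is.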