After the Israeli assassination ad, digital rights group 7amleh tested the limits of Facebook and Meta’s machine-learning moderation.
Shall we play “Unreasonably evil” or “Criminally negligent”?
Let’s head to the article!
The verdict is in! Looks like we landed on “Possibly criminally negligent.”
“Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, it had no equivalent algorithm in place to detect ‘Hebrew hostile speech’ like racist rhetoric and violent incitement.”