Bias and stereotyping are still major problems for systems like DALL-E 2 and Stable Diffusion, despite companies’ attempts to fix them.

New tools built by researchers at the AI startup Hugging Face and Leipzig University, detailed in a non-peer-reviewed paper, let people examine biases in three popular AI image-generating models: DALL-E 2 and two recent versions of Stable Diffusion.

Here is the paper:

Fantastic, thank you for sharing this. None of this surprises me as I keep up to date on AI and ethical concerns, but I’m glad it’s receiving more attention.
