University of Washington researchers found significant racial, gender and intersectional bias in how three state-of-the-art large language models ranked resumes. The models favored white-associated...
Quote from abstract:

We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases, with a minority of cases showing no statistically significant differences. Further analyses show that Black males are disadvantaged in up to 100% of cases, replicating real-world patterns of bias in employment settings, and validate three hypotheses of intersectionality. We also find an impact of document length as well as the corpus frequency of names in the selection of resumes.
Pretty damning… And not surprising.