Instead of a temporary halt to training high-performance AI systems, as demanded by the 3,000 signatories of a recently published open letter, Urs Gasser urges lawmakers around the globe to ensure that such technologies are safe and comply with fundamental rights. An “AI technical inspection agency”, he argues, would make sense.

  • Gaywallet (they/it)@beehaw.org · 2 years ago

    I think the biggest issue is people not understanding the risks and biases these systems carry, and applying the models in places where they end up reinforcing the biases already present in the data they were trained on. One example of this sort of overenthusiastic adoption is the use of AI to manage population health: such a model underestimates the needs of non-white populations because systemic forces mean non-white individuals have lower total healthcare spend than their white counterparts.
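    To make that mechanism concrete, here is a minimal sketch with invented toy data (the numbers, the 0.6 spending gap, and the simulation are all hypothetical illustrations, not the actual study): a system that ranks patients by predicted *spend* as a proxy for *need* inherits the existing spending gap, even if both groups are equally sick.

    ```python
    # Toy sketch: ranking patients by a spend proxy reproduces a spending gap.
    # All data here is simulated and hypothetical.
    import random

    random.seed(0)

    def simulate_patient(group: str) -> dict:
        """Two groups with the same distribution of true health need, but
        group B historically spends less per unit of need (an assumed
        systemic gap, e.g. from access barriers)."""
        need = random.uniform(0, 10)                    # true, unobserved need
        spend_per_need = 1.0 if group == "A" else 0.6   # assumed gap
        spend = need * spend_per_need + random.gauss(0, 0.5)
        return {"group": group, "need": need, "spend": spend}

    patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

    # A "model" trained to predict spend can at best recover spend itself.
    # Rank patients by that proxy and admit the top 20% to a care program.
    patients.sort(key=lambda p: p["spend"], reverse=True)
    admitted = patients[: len(patients) // 5]

    for g in ("A", "B"):
        grp = [p for p in admitted if p["group"] == g]
        avg_need = sum(p["need"] for p in grp) / len(grp) if grp else 0.0
        print(f"group {g}: admitted={len(grp):4d}, avg true need={avg_need:.2f}")

    # Group B patients must be sicker than group A patients to clear the same
    # spend threshold, so equally needy B patients are systematically excluded.
    ```

    The specific numbers don't matter; the point is that any model optimized to predict the proxy inherits the proxy's gap, no matter how well it is trained.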

    The AIAAIC repository tracks incidents and controversies surrounding AI applications. While that data set casts a wider net than I'd like, it records many incidents like the one above, along with other deployments that fit the same pattern of not understanding the moral implications, or simply being too eager to use AI, such as the one at Oregon's Department of Human Services. Notably, none of the problems I'm surfacing here has anything to do with how these models are trained; they deal entirely with the human application of the models.