Recent Webinar Examines Data Discrimination & Algorithmic Bias in AI

In a recent webinar hosted by All Tech Is Human, experts discussed how artificial intelligence (AI) is commonly misconceived as unbiased, infallible, and omniscient. One panelist, Meredith Broussard, Associate Professor of Data Journalism at NYU, coined the term “techno-chauvinism,” which she defines as “the idea that technology is always the highest and best solution, and it is superior to the people-based solution.” She went on to argue that this is a misconception: AI is in fact no more or less biased than the people who built it. She described how data discrimination and algorithmic biases can be programmed into technology, whether unintentionally or intentionally, and noted that to create unbiased technology, one must first be aware of one’s own biases and actively work to challenge them.
 
Broussard spoke on the perceived infallibility of AI and stressed the importance of pushing back against the narrative that technology is “superior.” Education, Broussard offered, is an important way to challenge this narrative and mitigate data discrimination and algorithmic bias. She emphasized the need to introduce these subjects as early as kindergarten and to continue the conversation across all levels of education. Every K-12 student, she said, should take courses in computational and media literacy and be taught how to identify disinformation and bias in media. Introducing technology early educates students about how these systems work, an important first step in teaching them to identify both their own biases and the external biases they see at play.
 
The same webinar featured Dr. Safiya Noble, author of Algorithms of Oppression and Associate Professor in the Department of Information Studies at UCLA. Noble is also Co-Director of the UCLA Center for Critical Internet Inquiry (C2i2), an interdisciplinary research center committed to transparency, accountability, and critical inquiries of the internet and society. Noble briefly mentioned the need for legislation to establish limits in the AI space, particularly with regard to facial recognition software and surveillance, which she noted are disproportionately weaponized against people of color. Through C2i2, she hopes to provide a space for people interested in issues of the digital world and to educate her peers in the field about how systems can be embedded with biases.
 
In a similar vein, ASME’s Public Affairs and Outreach (PAO) Council recently heard from Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin. In her presentation, Bryson elaborated on how biases can be intentional or unintentional and refuted the common misconception that AI is unbiased. Rather, she said, AI is a human artifact deliberately built to carry out its creator’s intentions, and it therefore replicates its creator’s biases. Bryson defined three types of AI bias: implicit, accidental, and deliberate. The source of a bias matters because the ramifications differ: a soap dispenser that fails to detect black people’s hands while easily detecting white people’s hands could reflect accidental bias; facial recognition software that identifies a man’s face more reliably than a woman’s, or a white person’s face but not a black person’s, could reflect implicit or even deliberate bias.
 
These biases can be avoided, Bryson noted, by improving machines’ designs and by assembling diverse teams before development begins. Even deliberate bias can be challenged through audits, regulation, and accountability in development. To trust a technology that implements artificial intelligence, one must be able to trust its maker, and that trust is built through accountability, transparency, and due diligence.
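The kind of audit Bryson points to can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration, not a description of any system discussed above: the group labels, data, and function names are invented for the example. It disaggregates a model’s error rate by demographic group, which is the basic check that surfaced the facial recognition disparities mentioned earlier.

# A minimal, hypothetical bias audit: compare a model's error rates
# across demographic groups. Large gaps between groups are the kind
# of disparity an audit is meant to surface.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    # Fraction of misclassified examples, reported per group.
    return {group: errors[group] / totals[group] for group in totals}

# Invented audit data: (group, ground truth, model output).
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]

print(error_rates_by_group(results))
# {'group_a': 0.0, 'group_b': 0.75} -- a gap this wide would warrant
# investigating the training data and model before deployment.

In practice such audits run on much larger, carefully sampled datasets, but the principle is the same: report performance broken out by group rather than a single overall accuracy, so that a system that works well on average while failing for one population cannot hide behind the aggregate number.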
 
Accountability in the digital world has been in the spotlight in recent weeks as companies evaluate the many ways their technology may be used, or abused, by others. In June, Amazon said it would put a one-year pause on police use of its facial recognition tool. Although the company did not give a reason for the change, it noted that the pause “might give Congress enough time to put in place appropriate rules” for the ethical use of facial recognition. There is currently no national law regulating facial recognition, but the Justice in Policing Act of 2020 (H.R. 7120), introduced in the House on June 8, would ban the use of facial recognition technology with police recording equipment.
 
For more information about the UCLA Center for Critical Internet Inquiry, see here: https://www.c2i2.ucla.edu/.
 
To watch the All Tech Is Human webinar with Meredith Broussard and Dr. Noble, see here: https://www.youtube.com/watch?v=fNow1xifa48&feature=youtu.be.
