NIST Releases Draft Report on Managing Bias in AI

The National Institute of Standards and Technology (NIST) has released a draft special publication titled “A Proposal for Identifying and Managing Bias in Artificial Intelligence.” The publication is part of NIST’s initial steps in developing standards for building and using trustworthy artificial intelligence (AI) systems. The report’s authors proactively involved stakeholders from many groups to ensure that diverse perspectives informed the publication.


The report takes an in-depth look at the stages of developing an AI system to ensure that bias is considered and planned for at each step. From pre-design, through design and development, to deployment, the report’s authors work through problems with AI decision making and operations. The authors pay particular attention to ensuring that end users understand a system’s true capabilities and performance, including what it can and cannot do.


“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear,” said NIST’s Reva Schwartz, one of the report’s authors. “We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.”


The report suggests a three-stage approach for managing bias in AI that is meant to foster further discussion of future standards for addressing bias in AI systems. By identifying and managing bias in AI, NIST hopes to promote the development of more effective systems that are worthy of public trust and provide greater benefit to society. NIST plans to continue working collaboratively to develop additional guidance in this area and is accepting comments on the document until August 5, 2021.
