NIST Seeking Feedback on Draft Report on AI Explainability

Sep 21, 2020

by ASME.org

The National Institute of Standards and Technology (NIST) is seeking feedback on a recently released report titled Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312). The report proposes a set of principles for judging how “explainable” an AI system’s decisions are. It was released in draft form to encourage conversation about what we should expect and require from our decision-making devices. The report is part of a broader NIST effort to help develop trustworthy AI through a better understanding of these systems’ theoretical capabilities and limitations. To this end, NIST is also working to improve AI accuracy, reliability, security, robustness, and explainability, the last of which is the focus of this latest publication.

The report presents four principles of “explainable” AI: explanation, meaning, accuracy, and knowledge of limitations. The four principles are based on how the human recipient of the information will be affected by it. Each situation in which an AI system produces information may therefore require a different category of explanation, one that suits the recipient’s own capacity for synthesizing the information received. Understanding how different AI systems operate, and how different types of output information are received, will dictate how AI should operate to meet explainability goals and requirements.

To read the full report, visit: https://www.nist.gov/system/files/documents/2020/08/17/NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf.

NIST is accepting comments on the draft until October 15, 2020. For more details, visit NIST's webpage on AI explainability.
