James Yoon, Sam Jungyun Choi and Lee Tiedrich of Covington & Burling write:
The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems. NIST will be accepting comments until October 15, 2020.
[…]
NIST’s white paper focuses on explainability and identifies four principles underlying explainable AI.
- Explanation. AI systems must supply evidence, support, or reasoning for their outputs. Researchers have developed different approaches to explaining AI systems, including self-explainable models, in which the model itself serves as the explanation.
- Meaningful. The recipient must understand the AI system's explanation. This is a contextual requirement: different user groups may require different explanations, and a particular user's prior knowledge, experiences, and mental processes can affect what counts as meaningful. Tailoring is therefore necessary for effective communication.
- Explanation Accuracy. The explanation must correctly reflect the AI system's process for generating its output. In contrast to decision accuracy, explanation accuracy is not concerned with whether the system's judgment is correct; it concerns how faithfully the explanation describes the way the system reached its conclusion. This principle is also contextual: different groups and users may call for different explanation accuracy metrics.
- Knowledge Limits. The AI system must identify cases it was not designed or approved to operate in, or cases where its answers may not be reliable. This ensures that reliance on an AI system's decision processes occurs only where it is appropriate, as the sketch below illustrates.
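The white paper describes these as properties of explainable systems rather than prescribing any particular technique. Purely as an illustration (not drawn from NISTIR 8312), the sketch below shows how the Explanation and Knowledge Limits principles might look in code: a scikit-learn logistic regression acts as a self-explainable model whose signed coefficient contributions serve as the explanation, and a hypothetical confidence threshold stands in for a knowledge-limits check. The feature names and the 0.75 threshold are invented for the example.

```python
# Illustrative sketch only (not from the NIST draft): a linear classifier whose
# coefficients supply a simple per-feature explanation ("Explanation"), plus an
# assumed confidence threshold flagging inputs the model should not decide on
# ("Knowledge Limits"). Feature names and the threshold are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

FEATURE_NAMES = ["income", "debt_ratio", "age", "tenure"]  # hypothetical features
CONFIDENCE_THRESHOLD = 0.75                                # hypothetical knowledge limit

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def predict_with_explanation(x: np.ndarray) -> dict:
    """Return a decision, an explanation, and a knowledge-limits flag for one input."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    confidence = float(proba.max())

    # Knowledge Limits: decline to answer when the model is not confident enough.
    if confidence < CONFIDENCE_THRESHOLD:
        return {"decision": None,
                "confidence": round(confidence, 3),
                "explanation": "Input falls outside the model's reliable operating range."}

    # Explanation: for a linear model, each feature's signed contribution
    # (coefficient * feature value) is itself the evidence behind the output.
    contributions = model.coef_[0] * x
    ranked = sorted(zip(FEATURE_NAMES, contributions),
                    key=lambda t: abs(t[1]), reverse=True)
    return {"decision": int(proba.argmax()),
            "confidence": round(confidence, 3),
            "explanation": [(name, round(float(c), 3)) for name, c in ranked]}

print(predict_with_explanation(X[0]))
```

Running the sketch prints either a decision with ranked per-feature contributions or a refusal to decide when confidence falls below the assumed threshold; a production system would of course need explanation methods and operating-range checks appropriate to its model and users.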
Read more on InsidePrivacy.