Guideline 11: Make clear why the system did what it did


Enable the user to access an explanation of why the AI system behaved as it did.


Make an explanation of the AI system's actions and outputs available as appropriate.

Apply this guideline judiciously, keeping in mind that the mere presence of an explanation has been shown to increase user trust. This can cause over-reliance on the system and inflated expectations, which in turn can lead users to trust the AI even when it is wrong (automation bias). For setting expectations, see also Guideline 1 and Guideline 2.

An explanation can be global, describing the behavior of the system as a whole, or local, explaining an individual output. Mix and match explanation patterns as needed, keeping in mind that not all explanations are equally effective in every scenario: studies have shown that an explanation's content and design significantly affect whether it helps people achieve their goals or distracts them.

Use tools such as InterpretML to improve model explainability, as in the sketch below.
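InterpretML's API mirrors the global/local distinction above. A minimal sketch using its Explainable Boosting Machine, a glassbox model that is interpretable by design; the scikit-learn breast-cancer dataset is purely illustrative, so substitute your own data:

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative tabular dataset; replace with your own features and labels.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glassbox model: its structure is interpretable by design.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: how the model behaves across all inputs.
show(ebm.explain_global())

# Local explanation: why the model produced each of these specific outputs.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

The visualizations rendered by `show` can inform what an in-product explanation surfaces, but they are developer-facing; user-facing explanations should still follow the patterns below.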

Use Guideline 11 patterns (mix and match as appropriate) to explain the AI system’s behavior:

Examples

[Example cards for Guideline 11: four examples of Pattern 11A, one of Pattern 11B, one of Pattern 11F, and four of Pattern 11G.]