Guideline 2: Make clear how well the system can do what it can do

Initially


Help the user understand how often the AI system may make mistakes.


Set expectations about how well the AI system will perform. People often over- or under-estimate how many mistakes an AI system may make, even on tasks it is designed for. For example, a fitness tracker designed to count steps while walking or running may still miss some steps (e.g., it may not work as well on hills or stairs) or register spurious ones (e.g., counting the motion of sitting on a swing as steps). Unrealistic expectations of performance can lead to disappointment and product abandonment. A minimal sketch of one way to communicate this up front follows below.
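As a hedged illustration (not part of the guideline text itself), the sketch below shows one way a hypothetical step-tracker UI might state its expected accuracy and known limitations alongside the step count. The accuracy figure and limitation strings are invented placeholders, not measured values.

```python
# Hypothetical sketch: pairing a step count with a plain-language statement
# of expected performance so users form realistic expectations.
# EXPECTED_ACCURACY and KNOWN_LIMITATIONS are illustrative placeholders.

from dataclasses import dataclass

EXPECTED_ACCURACY = 0.95  # placeholder, not a measured value
KNOWN_LIMITATIONS = [
    "May undercount steps on stairs or steep hills",
    "May count repetitive motions (e.g., swinging) as steps",
]

@dataclass
class StepReading:
    steps: int

def render_step_summary(reading: StepReading) -> str:
    """Show the raw count together with expected accuracy and known limitations."""
    lines = [
        f"Steps today: {reading.steps:,}",
        f"Typically counts about {EXPECTED_ACCURACY:.0%} of steps taken.",
        "Known limitations:",
    ]
    lines += [f"  - {item}" for item in KNOWN_LIMITATIONS]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_step_summary(StepReading(steps=8342)))
```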

In some cases, over-trusting an AI when it may be wrong (automation bias) or under-trusting an AI when it may be right (algorithm aversion) can also lead to harm. For example, judges using AI systems to help make sentencing decisions may unknowingly hand down harmful sentences if they over-trust the AI’s recommendations even when those recommendations are wrong or biased (as in the well-known COMPAS case).
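One common way to mitigate both over- and under-trust, sketched below under assumed names, is to pair each recommendation with the model’s confidence and an explicit reminder that it can be wrong. The `confidence` value here is assumed to be a calibrated probability, which real systems may not provide.

```python
# Hypothetical sketch: presenting a recommendation together with its
# confidence and a caveat, so users neither blindly accept nor dismiss it.

def present_recommendation(label: str, confidence: float) -> str:
    """Format a model output with an (assumed calibrated) confidence and a caveat."""
    if confidence >= 0.9:
        qualifier = "high confidence"
    elif confidence >= 0.6:
        qualifier = "moderate confidence"
    else:
        qualifier = "low confidence"
    return (
        f"Suggested: {label} ({qualifier}, {confidence:.0%}). "
        "This is an estimate and may be wrong; please review before acting."
    )

print(present_recommendation("Category A", 0.72))
```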

Use Guideline 2 design patterns to set expectations about system performance: