Understanding the effect of accuracy on trust in machine learning models

M. Yin, J. Wortman Vaughan, H. Wallach — Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019 — dl.acm.org
We address a relatively under-explored aspect of human-computer interaction: people's ability to understand the relationship between a machine learning model's stated performance on held-out data and its expected performance after deployment. We conduct large-scale, randomized human-subject experiments to examine whether laypeople's trust in a model, measured in terms of both the frequency with which they revise their predictions to match those of the model and their self-reported levels of trust in the model, varies depending on the model's stated accuracy on held-out data and on its observed accuracy in practice. We find that people's trust in a model is affected by both its stated accuracy and its observed accuracy, and that the effect of stated accuracy can change depending on the observed accuracy. Our work relates to recent research on interpretable machine learning, but moves beyond the typical focus on model internals, exploring a different component of the machine learning pipeline.