- Research article, June 2024
Analyzing the Relationship Between Difference and Ratio-Based Fairness Metrics
FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, Pages 518–528. https://doi.org/10.1145/3630106.3658922
In research studying the fairness of machine learning algorithms and models, fairness often means that a metric is the same when computed for two different groups of people. For example, one might define fairness to mean that the false positive rate of a ...
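To make the distinction in the title concrete, here is a minimal sketch (not code from the paper) of a difference-based versus a ratio-based version of the same group-fairness check, using the false positive rate as in the abstract's example. The function names, thresholds, and toy data are illustrative assumptions.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over one group's labels and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return np.nan  # FPR is undefined when the group has no true negatives
    return (y_pred[negatives] == 1).mean()

def fairness_gaps(y_true, y_pred, group):
    """Compare the FPR of two groups in both styles.

    Difference-based: |FPR_a - FPR_b|, considered fair when close to 0.
    Ratio-based:      min(FPR_a, FPR_b) / max(FPR_a, FPR_b), fair when close to 1.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    a, b = np.unique(group)[:2]  # assumes a binary group attribute
    fpr_a = false_positive_rate(y_true[group == a], y_pred[group == a])
    fpr_b = false_positive_rate(y_true[group == b], y_pred[group == b])
    difference = abs(fpr_a - fpr_b)
    ratio = min(fpr_a, fpr_b) / max(fpr_a, fpr_b) if max(fpr_a, fpr_b) > 0 else np.nan
    return {"fpr_a": fpr_a, "fpr_b": fpr_b, "difference": difference, "ratio": ratio}

# Illustrative toy data: true labels, model predictions, binary group membership.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(y_true, y_pred, group))
# {'fpr_a': 0.333..., 'fpr_b': 1.0, 'difference': 0.666..., 'ratio': 0.333...}
```

The two formulations can rank the same pair of models differently, which is the kind of relationship the paper analyzes; this sketch only shows how each quantity is computed.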
- Research article, August 2021 (Distinguished Paper)
Bias in machine learning software: why? how? what to do?
ESEC/FSE 2021: Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Pages 429–440. https://doi.org/10.1145/3468264.3468537
Increasingly, software is making autonomous decisions in cases such as criminal sentencing, approving credit cards, hiring employees, and so on. Some of these decisions show bias and adversely affect certain social groups (e.g. those defined by sex, race, age,...
- Research article
Fairway: a way to build fair ML software
ESEC/FSE 2020: Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Pages 654–665. https://doi.org/10.1145/3368089.3409697
Machine learning software is increasingly being used to make decisions that affect people's lives. But sometimes the core part of this software (the learned model) behaves in a biased manner that gives undue advantages to a specific group of people (...