FAccT '23 research article (open access)
DOI: 10.1145/3593013.3594010

Rethinking Transparency as a Communicative Constellation

Published: 12 June 2023

Abstract

In this paper we make the case for an expanded understanding of transparency. Within the now extensive FAccT literature, transparency has largely been understood in terms of explainability. While this approach has proven helpful in many contexts, it falls short of addressing some of the more fundamental issues in the development and application of machine learning, such as the epistemic limitations of predictions and the political nature of the selection of fairness criteria. In order to render machine learning systems more democratic, we argue, a broader understanding of transparency is needed. We therefore propose to view transparency as a communicative constellation that is a precondition for meaningful democratic deliberation. We discuss four perspective expansions implied by this approach and present a case study illustrating the interplay of heterogeneous actors involved in producing this constellation. Drawing from our conceptualization of transparency, we sketch implications for actor groups in different sectors of society.


Cited By

  • (2024) "Transparency in the Wild: Navigating Transparency in a Deployed AI System to Broaden Need-Finding Approaches." Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1494–1514. DOI: 10.1145/3630106.3658985. Online publication date: 3 June 2024.
  • (2024) "Predictive Analytics." The Oxford Handbook of the Sociology of Machine Learning. DOI: 10.1093/oxfordhb/9780197653609.013.32. Online publication date: 21 March 2024.
  • (2024) "Modelle des Demos. Hybride Repräsentation und die Politik der Inferenzen" [Models of the Demos: Hybrid Representation and the Politics of Inferences]. Die Fabrikation von Demokratie, 123–150. DOI: 10.1007/978-3-658-42936-2_5. Online publication date: 15 February 2024.



Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023, 1929 pages
ISBN: 9798400701924
DOI: 10.1145/3593013
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. deliberation
  2. explainability
  3. prediction
  4. science communication
  5. transparency

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Article Metrics

  • Downloads (last 12 months): 363
  • Downloads (last 6 weeks): 36

Reflects downloads up to 06 Oct 2024

