Why Transparency Alone Can't Fix Algorithmic Systems

Too Long; Didn't Read

Efforts to make algorithmic systems transparent often fall short of delivering true accountability. Without contextual understanding and genuine stakeholder participation, these initiatives risk being performative, shifting power without addressing structural issues.


Abstract and 1. Introduction

2. Related Work

3. Theoretical Lenses

3.1. Handoff Model

3.2. Boundary objects

4. Applying the Theoretical Lenses and 4.1. Handoff Triggers: New tech, new threats, new hype

4.2. Handoff Components: Shifting experts, techniques, and data

4.3. Handoff Modes: Abstraction and constrained expertise

4.4. Handoff Function: Interrogating the how and 4.5. Transparency artifacts at the boundaries: Spaghetti at the wall

5. Uncovering the Stakes of the Handoff

5.1. Confidentiality is the tip of the iceberg

5.2. Data Utility

5.3. Formalism

5.4. Transparency

5.5. Participation

6. Beyond the Census: Lessons for Transparency and Participation and 6.1. Lesson 1: The handoff lens is a critical tool for surfacing values

6.2. Lesson 2: Beware objects without experts

6.3. Lesson 3: Transparency and participation should center values and policy

7. Conclusion

8. Research Ethics and Social Impact

8.1. Ethical concerns

8.2. Positionality

8.3. Adverse impact statement

Acknowledgments and References

Calls to make technical systems more trustworthy and accountable point to transparency and participation as key interventions. Such calls are common across academic [25, 28, 38, 68, 133], industry [34], and governmental and NGO-based initiatives [16, 27, 48, 98, 99]. Transparency efforts have called for visibility into choices about data, processes, or mathematical properties of algorithms [67], e.g., through documentation of the development process [39, 61, 102] or structured disclosures of data properties [51, 57, 88]. Other efforts have sought to release code and to make ‘inscrutable’ algorithms interpretable or their decisions explainable [25, 44, 82].
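To make the idea of a structured disclosure concrete, here is a minimal sketch of what a datasheet-style record might capture. The schema and field names are hypothetical illustrations, not the schemas defined by the frameworks cited above [51, 57, 88]:

```python
from dataclasses import dataclass

# Hypothetical sketch of a datasheet-style structured disclosure.
# Field names are illustrative only; the cited frameworks define
# their own, much richer schemas.
@dataclass
class DataDisclosure:
    dataset_name: str
    collection_method: str        # how the data were gathered
    intended_uses: list[str]      # uses the creators anticipated
    known_limitations: list[str]  # gaps, biases, or exclusions
    maintainers: list[str]        # who is accountable for updates

# Example record for an imaginary dataset.
disclosure = DataDisclosure(
    dataset_name="example-survey-2020",
    collection_method="stratified household survey",
    intended_uses=["aggregate statistics", "policy research"],
    known_limitations=["undercoverage of transient populations"],
    maintainers=["data-stewardship team"],
)
print(disclosure)
```

Even a lightweight schema like this forces developers to record decisions that would otherwise remain implicit; as the following paragraphs argue, however, such artifacts alone do not guarantee accountability.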


Yet revealing such inner workings can fail to live up to promises of participation, contestability, or trust. Researchers have consistently cautioned that algorithmic transparency efforts may not bring the benefits they promise [13, 21, 30, 76, 78]. Making inscrutable algorithms explainable or readily available is insufficient without accounting for the social, organizational, and political power structures that shape their outcomes [15, 44, 74, 76, 81]. Intermediate objects of transparency efforts, such as structured disclosure documents, prioritize and shift ownership, effort, expertise, and values [136]. Such efforts provide visibility without necessarily opening meaningful paths to accountability, whether for public administration [26, 67] or for governance more broadly [30, 31].


Accountability therefore relies on enabling substantive participation and contestation [28, 31, 67, 73, 129]. However, approaches to participation are complicated by the technical features of algorithmic systems and by the broader social, political, and organizational contexts in which they operate [66, 80, 104]. Participation efforts (whether participatory design activities, communication with a wider range of stakeholders, or structured opportunities for feedback) can then stand in for meaningful stakeholder empowerment. Indeed, critiques have shown how participation efforts can be exploitative and performative [31, 33, 100, 112], with incentive structures that encourage “participation-washing” [17, 29, 32, 100].


Contextual analysis is needed to support meaningful participation in (algorithmic) governance. Tools for articulating responsibility and decision-making can be disconnected from what practitioners need [83, 114, 131], yet articulating the decisions made when implementing a system can better reveal where decision-making happens in the first place [61, 62, 75, 84, 113]. Transparency is needed not just into design choices, but also into the policy questions those choices often seek to answer [89]. Past literature has shown how the introduction or substitution of technologies can reconfigure values and social and political arrangements, and how understanding these shifts is key to responsible technology development [11, 79, 90, 110]. This missing lens, on what happens when we substitute a new technology [83, 110] and how that substitution shifts organizational roles [86, 111, 136] with technical and political consequences [e.g., 63, 90], must therefore be applied to support meaningful participation. A fundamental challenge is to reveal how values and decisions change as a result of introducing or substituting a new technology: the handoff model that we discuss in the following section intervenes on this specific challenge.


Authors:

(1) Amina A. Abdu, University of Michigan, USA;

(2) Lauren M. Chambers, University of California, Berkeley, USA;

(3) Deirdre K. Mulligan, University of California, Berkeley, USA;

(4) Abigail Z. Jacobs, University of Michigan, USA.


This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

