
The Role of Boundary Objects in New Tech Adoption

Too Long; Didn't Read

The Census Bureau's use of differential privacy shows that successful algorithm adoption in government requires expert stakeholders and well-designed tools to support meaningful transparency and participation.


Abstract and 1. Introduction

2. Related Work

3. Theoretical Lenses

3.1. Handoff Model

3.2. Boundary objects

4. Applying the Theoretical Lenses and 4.1 Handoff Triggers: New tech, new threats, new hype

4.2. Handoff Components: Shifting experts, techniques, and data

4.3. Handoff Modes: Abstraction and constrained expertise

4.4 Handoff Function: Interrogating the how and 4.5. Transparency artifacts at the boundaries: Spaghetti at the wall

5. Uncovering the Stakes of the Handoff

5.1. Confidentiality is the tip of the iceberg

5.2. Data Utility

5.3. Formalism

5.4. Transparency

5.5. Participation

6. Beyond the Census: Lessons for Transparency and Participation and 6.1 Lesson 1: The handoff lens is a critical tool for surfacing values

6.2 Lesson 2: Beware objects without experts

6.3 Lesson 3: Transparency and participation should center values and policy

7. Conclusion

8. Research Ethics and Social Impact

8.1. Ethical concerns

8.2. Positionality

8.3. Adverse impact statement

Acknowledgments and References


7 CONCLUSION

The adoption of differential privacy by the U.S. Census Bureau marked a pivotal shift in the Bureau's practices around transparent and participatory algorithmic governance. The complex nature of this adoption and its subsequent impacts revealed the ways in which handoffs in governmental algorithm adoption must be mediated by different stakeholders with different levels of expertise, including through the use of carefully designed boundary objects, to allow for meaningful participation. The lessons learned here apply more broadly to processes of algorithmic adoption, to well-intentioned (and carefully planned) shifts toward transparency, and to practices for successful handoffs in modern algorithmic governance.
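For readers encountering the technique at the center of this adoption for the first time, the sketch below illustrates the basic Laplace mechanism for differentially private counts (the textbook construction of Dwork et al., reference [43]). This is an illustrative toy, not the Census Bureau's TopDown Algorithm, which is a far more elaborate system built on related ideas; the function name and parameters here are our own.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.

    A counting query changes by at most 1 when any one person's record
    is added or removed, so adding noise drawn from Laplace(1/epsilon)
    satisfies epsilon-differential privacy.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means a stronger privacy guarantee but noisier
# statistics -- the privacy/accuracy tension at the heart of the
# Bureau's policy debates:
for eps in (10.0, 1.0, 0.1):
    print(f"epsilon={eps}: noisy count = {laplace_count(1000, eps):.1f}")
```

The single parameter epsilon is what makes the trade-off legible as a policy choice rather than an ad hoc technical setting, which is precisely why its selection for the 2020 Census drew such sustained stakeholder scrutiny.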

ACKNOWLEDGMENTS

We would like to thank Jeremy Seeman and the participants at Yale ISP's 2023 Data (Re)Makes the World Conference and the 2023 Privacy Law Scholars Conference. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 2146752. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Deirdre K. Mulligan is a Professor at UC Berkeley School of Information, a faculty director of the Berkeley Center for Law and Technology, and currently serving as Principal Deputy US Chief Technology Officer in the White House Office of Science and Technology Policy. The content herein represents the personal views of the authors and is not intended to reflect the views of the United States Government or any Federal agency.

REFERENCES

[1] 2021. Alabama v. U.S. Dep’t of Commerce. 546 F. Supp. 3d 1057 (M.D. Ala.).


[2] Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, and David G. Robinson. 2020. Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20). Association for Computing Machinery, New York, NY, USA, 252–260. https://doi.org/10.1145/3351095.3372871


[3] John Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Micah Heineck, Christine Heiss, Robert Johns, Daniel Kifer, Philip Leclerc, Ashwin Machanavajjhala, Brett Moran, William Sexton, Matthew Spence, and Pavel Zhuravlev. 2022. The 2020 Census Disclosure Avoidance System TopDown Algorithm. Harvard Data Science Review Special Issue 2 (June 2022). https://hdsr.mitpress.mit.edu/pub/7evz361i.


[4] John M. Abowd. 2018. Disclosure Avoidance for Block Level Data and Protection of Confidentiality in Public Tabulations. https://www2.census.gov/cac/sac/meetings/2018-12/abowd-disclosure-avoidance.pdf


[5] John M. Abowd. 2018. Protecting the Confidentiality of America’s Statistics: Adopting Modern Disclosure Avoidance Methods at the Census Bureau. https://www.census.gov/newsroom/blogs/research-matters/2018/08/protecting_the_confi.html Section: Government.


[6] John M. Abowd. 2021. Declaration of John M. Abowd. In State of Alabama v. U.S. Department of Commerce. https://censusproject.files.wordpress.com/2021/04/2021.04.13-abowd-declaration-alabama-v.-commerce-ii-final-signed.pdf


[7] John M. Abowd and Michael B. Hawes. 2023. Confidentiality Protection in the 2020 US Census of Population and Housing. Annual Review of Statistics and Its Application 10, 1 (March 2023), 119–144. https://doi.org/10.1146/annurev-statistics-010422-034226


[8] John M Abowd and Ian M Schmutte. 2019. An economic analysis of privacy protection and statistical accuracy as social choices. American Economic Review 109, 1 (2019), 171–202.


[9] John M. Abowd and Victoria A. Velkoff. 2019. Balancing privacy and accuracy: New opportunity for disclosure avoidance analysis. Census Blogs (2019).


[10] John M. Abowd and Victoria A. Velkoff. 2020. Modernizing disclosure avoidance: What we’ve learned, where we are now. Census Blogs (2020).


[11] Madeleine Akrich. 1992. The De-Scription of Technical Objects. In Shaping Technology / Building Society: Studies in Sociotechnical Change, Wiebe E. Bijker, John Law, Trevor Pinch, and Rebecca Slayton (Eds.). MIT Press, Cambridge, MA, USA, 208.


[12] Kevin Allis. 2020. [Letter from Kevin Allis to Steven D. Dillingham]. https://archive.ncai.org/policy-research-center/research-data/recommendations/NCAI_Letter_to_US_Census_Burea


[13] Mike Ananny and Kate Crawford. 2016. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20, 3 (2016), 973–989.


[14] Solon Barocas and Moritz Hardt. 2014. Scope. https://www.fatml.org/schedule/2014/page/scope-2014


[15] Andrew Bell, Ian Solano-Kamaiko, Oded Nov, and Julia Stoyanovich. 2022. It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 248–266. https://doi.org/10.1145/3531146.3533090


[16] Joseph R. Biden. 2023. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. (2023).


[17] Abeba Birhane, William Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. 2022. Power to the people? opportunities and challenges for participatory AI. Equity and Access in Algorithms, Mechanisms, and Optimization (2022), 1–8.


[18] Dan Bouk and danah boyd. 2021. Democracy’s Data Infrastructure. http://knightcolumbia.org/content/democracys-data-infrastructure


[19] danah boyd and Jayshree Sarathy. 2022. Differential Perspectives: Epistemic Disconnects Surrounding the U.S. Census Bureau’s Use of Differential Privacy. Harvard Data Science Review Special Issue 2 (June 2022). https://doi.org/10.1162/99608f92.66882f0e


[20] Jay Breidt, Deborah Balk, John Czajka, Kathy Pettit, Allison Plyer, Kunal Talwar, Richelle Winkler, and Joe Whitley. 2020. Differential Privacy Working Group Deliverables: Report of the CSAC Differential Privacy Working Group. https://www2.census.gov/cac/sac/differential-privacy-wg-deliverables.pdf


[21] Jenna Burrell. 2016. How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (June 2016), 205395171562251. https://doi.org/10.1177/2053951715622512


[22] Pat Cantwell. 2021. How We Complete the Census When Households or Group Quarters Don’t Respond. https://www.census.gov/newsroom/blogs/random-samplings/2021/04/imputation-when-households-or-group-quarters-dont-respond.html Section: Government.


[23] Paul R. Carlile. 2002. A pragmatic view of knowledge and boundaries: Boundary objects in new product development. Organization Science 13, 4 (2002), 442–455.


[24] Stephanie Russo Carroll, Ibrahim Garba, Oscar L Figueroa-Rodríguez, Jarita Holbrook, Raymond Lovett, Simeon Materechera, Mark Parsons, Kay Raseroka, Desi Rodriguez-Lonebear, Robyn Rowe, et al. 2020. The CARE principles for indigenous data governance. Data Science Journal 19 (2020), 43–43.


[25] Danielle Keats Citron. 2007. Technological due process. Wash. UL Rev. 85 (2007), 1249.


[26] Cary Coglianese and David Lehr. 2019. Transparency and algorithmic governance. Administrative law review 71, 1 (2019), 1–56.


[27] European Commission. 2021. Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. Eur Comm 106 (2021), 1–108.


[28] A Feder Cooper, Emanuel Moss, Benjamin Laufer, and Helen Nissenbaum. 2022. Accountability in an algorithmic society: relationality, responsibility, and robustness in machine learning. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 864–876.


[29] Ned Cooper, Tiffanie Horne, Gillian R Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, and Lauren Wilcox. 2022. A systematic review and thematic analysis of community-collaborative approaches to computing research. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–18.


[30] Eric Corbett and Emily Denton. 2023. Interrogating the T in FAccT. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 1624–1634.


[31] Eric Corbett, Emily Denton, and Sheena Erete. 2023. Power and Public Participation in AI. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. 1–13.


[32] Sasha Costanza-Chock. 2020. Design justice: Community-led practices to build the worlds we need. The MIT Press.


[33] Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. 2021. Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir". arXiv preprint arXiv:2111.01122 (2021).


[34] Deloitte. 2020. Trustworthy AI: Bridging the ethics gap surrounding AI. https://www2.deloitte.com/us/en/pages/deloitte-analytics/solutions/ethics-of-ai-framework.html


[35] William Deringer. 2018. Calculated values: Finance, politics, and the quantitative age. Harvard University Press.


[36] Uma Desai. 2019. uscensusbureau/census-dp. https://github.com/uscensusbureau/census-dp


[37] Alain Desrosières. 1998. The politics of large numbers: A history of statistical reasoning. Harvard University Press.


[38] Nicholas Diakopoulos. 2016. Accountability in algorithmic decision making. Commun. ACM 59, 2 (2016), 56–62.


[39] Nicholas Diakopoulos and Michael Koliska. 2017. Algorithmic transparency in the news media. Digital journalism 5, 7 (2017), 809–828.


[40] Irit Dinur and Kobbi Nissim. 2003. Revealing information while preserving privacy. In Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM, San Diego, California, 202–210. https://doi.org/10.1145/773153.773173


[41] Cynthia Dwork, Gary King, Ruth Greenwood, William T. Adler, and Joel Alvarez. 2021. Re: Request for release of "noisy measurements file" by September 30 along with redistricting data products. https://gking.harvard.edu/files/gking/files/2021.08.12_group_letter_to_abowd_re_noisy_measurements.pdf


[42] Cynthia Dwork, Nitin Kohli, and Deirdre Mulligan. 2019. Differential Privacy in Practice: Expose your Epsilons! Journal of Privacy and Confidentiality 9, 2 (Oct. 2019). https://doi.org/10.29012/jpc.689


[43] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. Calibrating Noise to Sensitivity in Private Data Analysis. In Theory of Cryptography (Lecture Notes in Computer Science), Shai Halevi and Tal Rabin (Eds.). Springer, Berlin, Heidelberg, 265–284. https://doi.org/10.1007/11681878_14


[44] Upol Ehsan, Q Vera Liao, Michael Muller, Mark O Riedl, and Justin D Weisz. 2021. Expanding explainability: Towards social transparency in ai systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–19.


[45] John Eltinge, Robert Sienkiewicz, Michael B. Hawes, Quentin Brummet, Edward Mulrow, Kurt Wolter, David Van Riper, Tracy Kugler, Johnathan Schroeder, José Pacas, Steven Ruggles, Brian Asquith, Brad Hershbien, Shane Reed, and Steve Yesiltepe. 2019. Differential Privacy for 2020 US Census. https://assets.pubpub.org/j2yr11kl/11587735061843.pdf


[46] Wendy Nelson Espeland and Mitchell L. Stevens. 1998. Commensuration as a social process. Annual Review of Sociology 24, 1 (1998), 313–343.


[47] Wendy Nelson Espeland and Berit Irene Vannebo. 2007. Accountability, quantification, and law. Annu. Rev. Law Soc. Sci. 3 (2007), 21–43.


[48] International Organization for Standardization. 2020. ISO/IEC TR 24028:2020 Overview of trustworthiness in artificial intelligence. https://www.iso.org/standard/77608.html


[49] Jody Freeman. 1997. Collaborative governance in the administrative state. UCLA L. Rev. 45 (1997), 1.


[50] Simson L. Garfinkel, John M. Abowd, and Sarah Powazek. 2018. Issues Encountered Deploying Differential Privacy. In Proceedings of the 2018 Workshop on Privacy in the Electronic Society. ACM, Toronto Canada, 133–137. https://doi.org/10.1145/3267323.3268949


[51] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12 (Nov. 2021), 86–92. https://doi.org/10.1145/3458723


[52] Ruobin Gong. 2022. Transparent privacy is principled privacy. Harvard Data Science Review Special Issue 2 (June 2022). https://doi.org/10.1162/99608f92.b5d3faaa


[53] Ben Green. 2019. "Good" isn’t good enough. In Conference and Workshop on Neural Information Processing Systems (AI for Social Good Workshop). Vancouver. https://aiforsocialgood.github.io/neurips2019/accepted/track3/pdfs/67_aisg_neurips2019.pdf


[54] Kenneth Haase. 2021. uscensusbureau/DAS_2020_Redistricting_Production_Code. https://github.com/uscensusbureau/DAS_2020_Redistricting_Production_Code


[55] Sam Haney, William Sexton, Ashwin Machanavajjhala, Michael Hay, and Gerome Miklau. 2021. Differentially Private Algorithms for 2020 Census Detailed DHC Race & Ethnicity. https://doi.org/10.48550/arXiv.2107.10659 arXiv:2107.10659 [cs, stat].


[56] Michael Hawes. 2021. The Census Bureau’s Simulated Reconstruction-Abetted Re-identification Attack on the 2010 Census. https://www.census.gov/data/academy/webinars/2021/disclosure-avoidance-series/simulated-reconstruction-abetted-re-identification-attack-on-the-2010-census.html Section: Government.


[57] Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2020. The Dataset Nutrition Label: A Framework to Drive Higher Data Quality Standards. In Data Protection and Democracy, Dara Hallinan, Ronald Leenes, Serge Gutwirth, and Paul De Hert (Eds.). Data Protection and Privacy, Vol. 12. Bloomsbury Publishing, 1–26. Google-Books-ID: F2HRDwAAQBAJ.


[58] V Joseph Hotz, Christopher R Bollinger, Tatiana Komarova, Charles F Manski, Robert A Moffitt, Denis Nekipelov, Aaron Sojourner, and Bruce D Spencer. 2022. Balancing data privacy and usability in the federal statistical system. Proceedings of the National Academy of Sciences 119, 31 (2022), e2104906119.


[59] V. Joseph Hotz and Joseph Salvo. 2022. A Chronicle of the Application of Differential Privacy to the 2020 Census. Harvard Data Science Review Special Issue 2 (June 2022). https://hdsr.mitpress.mit.edu/pub/ql9z7ehf.


[60] Jessica Hullman. 2022. Show me the noisy numbers!(or not). https://statmodeling.stat.columbia.edu/2022/12/28/show-me-the-noisy-numbers-or-not/


[61] Ben Hutchinson, Andrew Smart, Alex Hanna, Emily Denton, Christina Greer, Oddur Kjartansson, Parker Barnes, and Margaret Mitchell. 2021. Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 560–575.


[62] Abigail Z Jacobs. 2021. Measurement as governance in and for responsible AI. arXiv preprint arXiv:2109.05658 (2021).


[63] Abigail Z Jacobs and Deirdre K Mulligan. 2022. The Hidden Governance in AI. The Regulatory Review (July 2022).


[64] Julia Jahansoozi. 2006. Organization-stakeholder relationships: exploring trust and transparency. Journal of management development 25, 10 (2006), 942–955.


[65] Ron S Jarmin, John M Abowd, Robert Ashmead, Ryan Cumings-Menon, Nathan Goldschlag, Michael B Hawes, Sallie Ann Keller, Daniel Kifer, Philip Leclerc, Jerome P Reiter, et al. 2023. An in-depth examination of requirements for disclosure risk assessment. Proceedings of the National Academy of Sciences 120, 43 (2023), e2220558120.


[66] Sheila Jasanoff. 2016. Reclaiming the Future. In The Ethics of Invention: Technology and the Human Future. W. W. Norton & Company, New York, 211–245.


[67] Margot E. Kaminski. 2020. Understanding transparency in algorithmic accountability. In Cambridge Handbook of the Law of Algorithms, Woodrow Barfield (Ed.). Cambridge University Press, 20–34.


[68] Davinder Kaur, Suleyman Uslu, Kaley J Rittichier, and Arjan Durresi. 2022. Trustworthy artificial intelligence: a review. ACM Computing Surveys (CSUR) 55, 2 (2022), 1–38.


[69] Sallie Ann Keller and John M. Abowd. 2023. Database reconstruction does compromise confidentiality. Proceedings of the National Academy of Sciences 120, 12 (March 2023), e2300976120. https://doi.org/10.1073/pnas.2300976120 Publisher: Proceedings of the National Academy of Sciences.


[70] Christopher T Kenny, Shiro Kuriwaki, Cory McCartan, Evan TR Rosenman, Tyler Simko, and Kosuke Imai. 2022. Comment: The Essential Role of Policy Evaluation for the 2020 Census Disclosure Avoidance System. arXiv preprint arXiv:2210.08383 (2022).


[71] Christopher T. Kenny, Shiro Kuriwaki, Cory McCartan, Evan T. R. Rosenman, Tyler Simko, and Kosuke Imai. 2021. The use of differential privacy for census data and its impact on redistricting: The case of the 2020 U.S. Census. Science Advances 7, 41 (Oct. 2021). https://doi.org/10.1126/sciadv.abk3283


[72] Chris Kimble, Corinne Grenier, and Karine Goglio-Primard. 2010. Innovation and knowledge sharing across professional boundaries: Political interplay between boundary objects and brokers. International Journal of Information Management 30, 5 (2010), 437–444.


[73] Daniel N Kluttz, Nitin Kohli, and Deirdre K Mulligan. 2022. Shaping our tools: Contestability as a means to promote responsible algorithmic decision making in the professions. Ethics of Data and Analytics. Auerbach Publications (2022), 420–428.


[74] Joshua A Kroll. 2018. The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, 2133 (2018), 20180084.


[75] Joshua A Kroll. 2021. Outlining traceability: A principle for operationalizing accountability in computing systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 758–771.


[76] Joshua A Kroll, Joanna Huey, Solon Barocas, Edward W Felten, Joel R Reidenberg, David G Robinson, and Harlan Yu. 2017. Accountable Algorithms. University of Pennsylvania Law Review 165, 3 (2017), 633.


[77] Ann Langley, Kajsa Lindberg, Bjørn Erik Mørk, Davide Nicolini, Elena Raviola, and Lars Walter. 2019. Boundary Work among Groups, Occupations, and Organizations: From Cartography to Process. Academy of Management Annals 13, 2 (July 2019), 704–736. https://doi.org/10.5465/annals.2017.0089 Publisher: Academy of Management.


[78] Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland, and Patrick Vinck. 2018. Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology 31 (2018), 611–627.


[79] Nancy G Leveson. 2016. Engineering a safer world: Systems thinking applied to safety. The MIT Press.


[80] Karen Levy, Kyla E Chasalow, and Sarah Riley. 2021. Algorithms and decision-making in the public sector. Annual Review of Law and Social Science 17 (2021), 309–334.


[81] Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, and Meeyoung Cha. 2022. The conflict between explainable and accountable decision-making algorithms. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2103–2113.


[82] Michele Loi, Andrea Ferrario, and Eleonora Viganò. 2021. Transparency as design publicity: explaining and justifying inscrutable algorithms. Ethics and Information Technology 23, 3 (2021), 253–263.


[83] Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, and Hanna Wallach. 2022. Assessing the Fairness of AI Systems: AI Practitioners’ Processes, Challenges, and Needs for Support. Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (2022), 1–26.


[84] Kirsten Martin. 2019. Ethical implications and accountability of algorithms. Journal of Business Ethics 160 (2019), 835–850.


[85] Laura McKenna. 2018. Disclosure Avoidance Techniques Used for the 1970 through 2010 Decennial Censuses of Population and Housing. Technical Report. U.S. Census Bureau Research & Methodology Directorate. https://www2.census.gov/ces/wp/2018/CES-WP-18-47.pdf


[86] Jacob Metcalf, Emanuel Moss, and danah boyd. 2019. Owning ethics: Corporate logics, silicon valley, and the institutionalization of ethics. Social Research: An International Quarterly 86, 2 (2019), 449–476.


[87] minutephysics. 2019. Protecting Privacy with MATH (Collab with the Census). https://www.youtube.com/watch?v=pT19VwBAqKA


[88] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). Association for Computing Machinery, New York, NY, USA, 220–229. https://doi.org/10.1145/3287560.3287596


[89] Deirdre K Mulligan and Kenneth A Bamberger. 2018. Saving governance-by-design. California Law Review 106, 3 (2018), 697–784.


[90] Deirdre K. Mulligan and Kenneth A. Bamberger. 2019. Procurement as policy: Administrative process for machine learning. Berkeley Tech. LJ 34 (2019), 773.


[91] Deirdre K. Mulligan and Helen Nissenbaum. 2020. The concept of handoff as a model for ethical analysis and design. The Oxford handbook of ethics of AI 1, 1 (2020), 233.


[92] Priyanka Nanayakkara and Jessica Hullman. 2022. What’s driving conflicts around differential privacy for the U.S. Census. IEEE Security & Privacy 01 (2022), 2–11.


[93] National Academies of Sciences, Engineering, and Medicine. 2020. 2020 Census Data Products: Data Needs and Privacy Considerations: Proceedings of a Workshop. National Academies Press.


[94] National Conference of State Legislatures. 2021. Differential Privacy for Census Data Explained. https://www.ncsl.org/technology-and-communication/differential-privacy-for-census-data-explained


[95] NCAI Policy Research Center. 2021. Differential Privacy and the 2020 Census: A Guide to the Data Analyses and Impacts on AI/AN Data. Research Policy Update. National Congress of American Indians, Washington, D.C. https://archive.ncai.org/policy-research-center/research-data/prc-publications/NCAI_PRC_2020_Census_Guide_to_Data_and_Impacts_5_17_2021_FINAL.pdf


[96] Helen Nissenbaum. 2004. Privacy as contextual integrity. Wash. L. Rev. 79 (2004), 119.


[97] Steven A. Ochoa and Terry Ao Minnis. 2021. Impact of Differential Privacy & the 2020 Census on Latinos, Asian Americans and Redistricting. https://www.maldef.org/wp-content/uploads/2021/04/FINAL-MALDEF-AAJC-Differential-Privacy-Preliminary-Report-4.5.2021-1.pdf


[98] OECD. 2021. Tools for trustworthy AI: A framework to compare implementation tools for trustworthy AI systems. OECD Digital Economy Papers 312 (Jun 2021). https://doi.org/10.1787/008232ec-en


[99] National Institute of Standards and Technology. 2023. Artificial Intelligence Risk Management Framework (AI RMF 1.0). (Jan 2023). https://doi.org/10.6028/nist.ai.100-1


[100] Jennifer Pierre, Roderic Crooks, Morgan Currie, Britt Paris, and Irene Pasquetto. 2021. Getting Ourselves Together: Data-centered participatory design research & epistemic burden. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–11.


[101] Theodore M. Porter. 1995. Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.


[102] Inioluwa Deborah Raji, Andrew Smart, Rebecca N White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. 2020. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 33–44.


[103] Brad Rawlins. 2008. Give the emperor a mirror: Toward developing a stakeholder measurement of organizational transparency. Journal of public relations research 21, 1 (2008), 71–99.


[104] David Gerald Robinson. 2022. Voices in the code: a story about people, their values, and the algorithm they made. Russell Sage Foundation, New York.


[105] Simo Sarkki, Hannu I. Heikkinen, Teresa Komu, Mari Partanen, Karoliina Vanhanen, and Elise Lepy. 2020. How boundary objects help to perform roles of science arbiter, honest broker, and issue advocate. Science and Public Policy 47, 2 (2020), 161–171.


[106] Andrew K. Schnackenberg and Edward C. Tomlinson. 2016. Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships. Journal of Management 42, 7 (2016), 1784–1810.


[107] Mike Schneider. 2021. Census releases guidelines for controversial privacy tool. https://apnews.com/article/business-census-2020-55519b7534bd8d61028020d79854e909 Section: Voting rights.


[108] Jeremy Seeman. 2023. Framing Effects in the Operationalization of Differential Privacy Systems as Code-Driven Law. In International Conference on Computer Ethics, Vol. 1.


[109] Jeremy Seeman and Daniel Susser. 2023. Between Privacy and Utility: On Differential Privacy in Theory and Practice. ACM J. Responsib. Comput. (Oct. 2023). https://doi.org/10.1145/3626494 Just Accepted.


[110] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency. 59–68.


[111] Mona Sloane and Emanuel Moss. 2022. Introducing a Practice-Based Compliance Framework for Addressing New Regulatory Challenges in the AI Field. TechReg Chronicle (2022).


[112] Mona Sloane, Emanuel Moss, Olaitan Awomolo, and Laura Forlano. 2022. Participation is not a design fix for machine learning. In Equity and Access in Algorithms, Mechanisms, and Optimization. 1–6.


[113] Mona Sloane, Ian René Solano-Kamaiko, Jun Yuan, Aritra Dasgupta, and Julia Stoyanovich. 2023. Introducing contextual transparency for automated decision systems. Nature Machine Intelligence 5, 3 (2023), 187–195.


[114] Mona Sloane and Janina Zakrzewski. 2022. German AI Start-Ups and “AI Ethics”: Using A Social Practice Lens for Assessing and Implementing Socio-Technical Innovation. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 935–947.


[115] Susan Leigh Star. 2010. This is not a boundary object: Reflections on the origin of a concept. Science, Technology, & Human Values 35, 5 (2010), 601–617.


[116] Susan Leigh Star and James R. Griesemer. 1989. Institutional ecology, ‘translations’ and boundary objects: Amateurs and professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Social Studies of Science 19, 3 (1989), 387–420.


[117] Ryan Steed, Terrance Liu, Zhiwei Steven Wu, and Alessandro Acquisti. 2022. Policy impacts of statistical uncertainty and privacy. Science 377, 6609 (2022), 928–931. https://doi.org/10.1126/science.abq4481


[118] Cass R Sunstein. 2002. The cost-benefit state: the future of regulatory protection. American Bar Association.


[119] Latanya Sweeney. 2000. Simple Demographics Often Identify People Uniquely. Working Paper. Carnegie Mellon University Data Privacy Lab, Pittsburgh. https://dataprivacylab.org/projects/identifiability/paper1.pdf


[120] U.S. Census Bureau. 2018. Soliciting Feedback From Users on 2020 Census Data Products. https://www.federalregister.gov/documents/2018/07/19/2018-15458/soliciting-feedback-from-u


[121] U.S. Census Bureau. 2020. 2020 Census Tribal Consultations with Federally Recognized Tribes. Report. U.S. Census Bureau. https://www.census.gov/content/dam/Census/library/publications/2020/dec/census-federal-tc-final-report-2020-508.pdf


[122] U.S. Census Bureau. 2020. Invariants Set for 2020 Census Data Products. https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/discl


[123] U.S. Census Bureau. 2021. Census Bureau Sets Key Parameters to Protect Privacy in 2020 Census Results. https://www.census.gov/newsroom/press-releases/2021/2020-census-key-parameters.html Section: Government.


[124] U.S. Census Bureau. 2021. Disclosure Avoidance for the 2020 Census: An Introduction. Handbook. US Government Publishing Office, Washington, D.C. https://www2.census.gov/library/publications/decennial/2020/2020-census-disclosure-avoidance-handbook.pdf


[125] U.S. Census Bureau. 2023. 2020 Decennial Census: Processing the Count: Disclosure Avoidance Modernization. https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/disclosure-avoidance.html


[126] U.S. Census Bureau. 2023. Coming This Spring: New 2010 Redistricting and DHC "Production Settings" Demonstration Microdata with Noisy Measurement Files. https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/disclosure-avoidance/newsletters/new-2010-redistricting-dh Section: Government.


[127] U.S. Census Bureau. 2023. Disclosure Avoidance Webinar Series. https://www.census.gov/data/academy/webinars/series/disclosure-avoidance.html Section: Government.


[128] U.S. Census Bureau. 2023. Why the Census Bureau Chose Differential Privacy. Brief C2020BR-03. U.S. Census Bureau. https://www2.census.gov/library/publications/decennial/2020/census-briefs/c2020br-03.pdf


[129] Kristen Vaccaro, Karrie Karahalios, Deirdre K Mulligan, Daniel Kluttz, and Tad Hirsch. 2019. Contestability in algorithmic systems. In Conference companion publication of the 2019 on computer supported cooperative work and social computing. 523–527.


[130] David Van Riper, Jonathan Schroeder, and Steven Ruggles. 2021. Feedback on the April 2021 Census Demonstration Files. https://users.pop.umn.edu/~ruggl001/Articles/IPUMS_response_to_Census.pdf


[131] Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 chi conference on human factors in computing systems. 1–14.


[132] Salome Viljoen. 2021. A relational theory of data governance. Yale Law Journal 131 (2021), 573.


[133] Maranke Wieringa. 2020. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 1–18.


[134] Aaron R. Williams and Claire McKay Bowen. 2023. The promise and limitations of formal privacy. Wiley Interdisciplinary Reviews: Computational Statistics (2023), e1615.


[135] Richmond Y. Wong. 2020. Values by Design Imaginaries: Exploring Values Work in UX Practice. PhD Dissertation. University of California, Berkeley, Berkeley, California.


[136] Richmond Y Wong, Michael A Madaio, and Nick Merrill. 2023. Seeing like a toolkit: How toolkits envision the work of AI ethics. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (2023), 1–27.


[137] Larry Wright, Jr. 2022. Letter from National Congress of American Indians CEO to to US Census Director. https://www.ncai.org/policy-research-center/research-data/prc-publications/20220728_NCAI_Letter_to_US_Census_Bureau_FINAL.pdf


[138] Felix T Wu. 2013. Defining Privacy and Utility in Data Sets. University of Colorado Law Review 84 (2013), 1117–1177.


Authors:

(1) AMINA A. ABDU, University of Michigan, USA;

(2) LAUREN M. CHAMBERS, University of California, Berkeley, USA;

(3) DEIRDRE K. MULLIGAN, University of California, Berkeley, USA;

(4) ABIGAIL Z. JACOBS, University of Michigan, USA.


This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.

