
Abstract

Artificial intelligence (AI) is transforming how governments work, from distribution of public benefits, to identifying enforcement targets, to meting out sanctions. But given AI's twin capacity to cause and cure error, bias, and inequity, there is little consensus about how to regulate its use. This review advances debate by lifting up research at the intersection of computer science, organizational behavior, and law. First, pushing past the usual catalogs of algorithmic harms and benefits, we argue that what makes government AI most concerning is its steady advance into discretion-laden policy spaces where we have long tolerated less-than-full legal accountability. The challenge is how, but also whether, to fortify existing public law paradigms without hamstringing government or stymieing useful innovation. Second, we argue that sound regulation must connect emerging knowledge about internal agency practices in designing and implementing AI systems to longer-standing lessons about the limits of external legal constraints in inducing organizations to adopt desired practices. Meaningful accountability requires a more robust understanding of organizational behavior and law as AI permeates bureaucratic routines.

/content/journals/10.1146/annurev-lawsocsci-120522-091626

  • Article Type: Review Article