
Abstract

This article surveys the use of algorithmic systems to support decision-making in the public sector. Governments adopt, procure, and use algorithmic systems to support their functions within several contexts—including criminal justice, education, and benefits provision—with important consequences for accountability, privacy, social inequity, and public participation in decision-making. We explore the social implications of municipal algorithmic systems across a variety of stages, including problem formulation, technology acquisition, deployment, and evaluation. We highlight several open questions that require further empirical research.

DOI: 10.1146/annurev-lawsocsci-041221-023808
Published: 2021-10-13

Article Type: Review Article