Abstract

This review surveys contemporary challenges in the field of technology and human rights. The increased use of artificial intelligence (AI) in decision making in the public and private sectors—e.g., in criminal justice, employment, public services, and financial contexts—poses significant threats to human rights. AI obscures and attenuates responsibility for harms in ways that undermine traditional mechanisms for holding wrongdoers accountable. Further, technologies that scholars and practitioners once thought would democratize human rights fact finding have been weaponized by state and non-state actors. They are now used to surveil and track citizens and to spread disinformation that undermines public trust in knowledge. Addressing these challenges requires efforts to ensure that the development and implementation of new technologies respect and promote human rights. Traditional distinctions between public and private must be updated to remain relevant in the face of deeply enmeshed state and corporate action in connection with technological innovation.

