
Abstract

While generative AI shares some similarities with previous technological breakthroughs, it also raises unique challenges for containing social and economic harms. State approaches to AI governance vary; some lay a foundation for transnational governance whereas others do not. We consider some technical dimensions of AI safety in both open and closed systems, as well as the ideas that are presently percolating to safeguard their future development. Examining initiatives for the global community and for the coalition of open societies, we argue for building a dual-track interactive strategy for containing AI's potentially nightmarish unintended consequences. We conclude that AI safety is AI governance, which means that pluralist efforts to bridge gaps between theory and practice and the STEM–humanities divide are critical for democratic sustainability.


Literature Cited

  1. 60 Minutes. 2022. TikTok in China versus the United States. 60 Minutes, Nov. 8. https://www.youtube.com/watch?v=0j0xzuh-6rY
  2. Acemoğlu D. 2021. Harms of AI. NBER Work. Pap. 29247
  3. Acemoğlu D. 2022. Harms of AI. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.65
  4. AI Safety Summit. 2023. The Bletchley Declaration by countries attending the AI Safety Summit, 1–2 November 2023. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
  5. Allen D, Hubbard S, Lim W, Stanger A, Wagman S, Zalesne K. 2024. A roadmap for governing AI: technology governance and power sharing liberalism. Work. Pap., Ash Cent. Democr. Gov. Innov., Harvard Kennedy School, Cambridge, MA. https://ash.harvard.edu/publications/roadmap-governing-ai-technology-governance-and-power-sharing-liberalism
  6. Altman S, Brockman G, Sutskever I. 2023. Governance of superintelligence. OpenAI, May 22. https://openai.com/blog/governance-of-superintelligence
  7. Amodei D. 2023. Written testimony of Dario Amodei, PhD, co-founder and CEO, Anthropic: for a hearing on “Oversight of A.I.: Principles for Regulation” before the Judiciary Committee Subcommittee on Privacy, Technology, and the Law. US Senate, July 25. https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf
  8. Anderljung M, Barnhart J, Korinek A, Leung J. 2023. Frontier AI regulation: managing emerging risks to public safety. arXiv:2307.03718 [cs.CY]
  9. Anderljung M, Hazell J. 2023. Protecting society from AI misuse: When are restrictions on capabilities warranted? arXiv:2303.09377 [cs.AI]
  10. Anthropic. 2023a. Frontier model security. Anthropic, July 25. https://www.anthropic.com/news/frontier-model-security
  11. Anthropic. 2023b. Frontier threats: red teaming for AI safety. Anthropic, July 26. https://www.anthropic.com/news/frontier-threats-red-teaming-for-ai-safety
  12. Arsenault A, Kreps S. 2022. AI and international politics. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.49
  13. Askell A, Brundage M, Hadfield G. 2019. The role of cooperation in responsible AI development. arXiv:1907.04534 [cs.CY]
  14. Bajraktari Y, Gable M, Ponmakha A. 2023. Generative AI: the future of innovation power. Rep., Spec. Compet. Stud. Proj., Arlington, VA. https://www.scsp.ai/wp-content/uploads/2023/09/GenAI-web.pdf
  15. Barrett AM, Hendrycks D, Newman J, Nonnecke B. 2023. Actionable guidance for high-consequence AI risk management: towards standards addressing AI catastrophic risks. arXiv:2206.08966 [cs.CY]
  16. Belfield H. 2022. Compute and antitrust. Verfassungsblog: On Matters Constitutional, Aug. 19. https://verfassungsblog.de/compute-and-antitrust/
  17. Blumenthal R, Hawley J. 2023. Bipartisan framework for U.S. AI Act. Sep. 7. Richard Blumenthal, U.S. Senator for Connecticut. https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf
  18. Boix C. 2022. AI and the economic and informational foundations of democracy. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.64
  19. Bommasani R, Hashimoto T, Ho D, Schaake M, Liang P. 2023b. Towards compromise: a concrete two-tier proposal for foundation models in the EU AI Act. Work. Pap., Cent. Res. Found. Models, Stanford Univ., Stanford, CA. https://crfm.stanford.edu/2023/12/01/ai-act-compromise.html
  20. Bommasani R, Kapoor S, Zhang D, Narayanan A, Liang P. 2023c. June 12 letter to the Department of Commerce. Cent. Res. Found. Models, Stanford Univ., Stanford, CA. https://hai.stanford.edu/sites/default/files/2023-06/Reponse-to-NTIAs-.pdf
  21. Bommasani R, Klyman K, Zhang D, Liang P. 2023a. Do foundation model providers comply with the draft EU AI Act? Work. Pap., Cent. Res. Found. Models, Stanford Univ., Stanford, CA
  22. Boulanin V, Saalman L, Topychkanov P, Su F, Peldán C. 2020. Artificial Intelligence, Strategic Stability and Nuclear Risk. Stockholm: Stockholm Int. Peace Res. Inst. https://www.sipri.org/publications/2020/policy-reports/artificial-intelligence-strategic-stability-and-nuclear-risk
  23. Bradford A. 2020. The Brussels Effect: How the European Union Rules the World. New York: Oxford Univ. Press
  24. Bradford A. 2023a. Digital Empires: The Global Battle to Regulate Technology. New York: Oxford Univ. Press
  25. Bradford A. 2023b. The race to regulate artificial intelligence. Foreign Aff., June 27. https://www.foreignaffairs.com/united-states/race-regulate-artificial-intelligence
  26. Bremmer I, Suleyman M. 2023. The AI power paradox. Foreign Aff., Aug. 16. https://www.foreignaffairs.com/world/artificial-intelligence-power-paradox
  27. Bucknall B, Trager R. 2023. Structured access for third-party research on frontier AI models: investigating researchers' model access requirements. Rep., Cent. Gov. AI, Oxford Univ., Oxford, UK. https://cdn.governance.ai/Structured_Access_for_Third-Party_Research.pdf
  28. Bullock JB, Chen Y-C, Himmelreich J, Hudson VM, Korinek A, et al., eds. 2022. The Oxford Handbook of AI Governance. New York: Oxford Univ. Press
  29. Casper S, Davies X, Shi C, Gilbert TK, Scheurer J, et al. 2023. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv:2307.15217 [cs.AI]
  30. Cent. AI Safety. 2023. Statement on AI risk. Cent. AI Safety. https://www.safe.ai/statement-on-ai-risk
  31. Clement-Jones L. 2023. The Westminster Parliament's impact on UK AI strategy. In Missing Links in AI Governance, ed. B Prud'homme, C Régis, G Farnadi, pp. 191–209. Paris: UNESCO
  32. Dafoe A. 2018. AI governance: a research agenda. Rep., Future Humanit. Inst., Univ. Oxford, Oxford, UK. https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf
  33. Dafoe A. 2022. AI governance: overview and theoretical lenses. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.2
  34. Danks D. 2022. Governance via explainability. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.11
  35. Danzig R. 2018. Technology roulette: managing loss of control as many militaries pursue technological superiority. Rep., Cent. New Am. Secur., Washington, DC. https://www.cnas.org/publications/reports/technology-roulette
  36. Davidson T. 2023. What a compute-centric framework says about takeoff speeds. LessWrong Online Forum, Jan. 22. https://www.lesswrong.com/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff
  37. Ding J. 2022. Dueling perspectives in AI and U.S.–China relations: technonationalism vs. technoglobalism. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.53
  38. Dobbe R. 2022. System safety and artificial intelligence. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.67
  39. Egan J, Heim L. 2023. Oversight for frontier AI through a know-your-customer scheme for compute providers. arXiv:2310.13625 [cs.CY]
  40. Erdil E, Besiroglu T. 2023. Explosive growth from AI automation: a review of the arguments. arXiv:2309.11690 [econ.GN]
  41. Espinoza J, Criddle C, Liu Q. 2023. The global race to set the rules for AI. Financ. Times, Sep. 13. https://www.ft.com/content/59b9ef36-771f-4f91-89d1-ef89f4a2ec4e
  42. European Parliament. 2024. The principle of subsidiarity. Fact Sheet, Eur. Union. https://www.europarl.europa.eu/erpl-app-public/factsheets/pdf/en/FTU_1.2.2.pdf
  43. Fischer S-C, Leung J, Anderljung M, O'Keefe C, Torges S, et al. 2021. AI policy levers: a review of the U.S. government's tools to shape AI research, development, and deployment. Rep., Cent. Gov. AI, Future Humanit. Inst., Univ. Oxford, Oxford, UK. https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/AI-Policy-Levers-A-Review-of-the-U.S.-Governments-tools-to-shape-AI-research-development-and-deployment—-Fischer-et-al.pdf
  44. Fist T, Heim L, Schneider J. 2023. Chinese firms are evading chip controls. Foreign Policy, June 21. https://foreignpolicy.com/2023/06/21/china-united-states-semiconductor-chips-sanctions-evasion/
  45. Future of Life Inst. 2023. Pause giant AI experiments: an open letter. Future of Life, Mar. 22. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  46. Ganguli D, Hernandez D, Lovitt L, DasSarma N, Henighan T, et al. 2022. Predictability and surprise in large generative models. In FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 1747–64. New York: Assoc. Comput. Mach. https://dl.acm.org/doi/pdf/10.1145/3531146.3533229
  47. Gopal A, Helm-Burger N, Justen L, Soice EH, Tzeng T, et al. 2023. Will releasing the weights of future large language models grant widespread access to pandemic agents? arXiv:2310.18233 [cs.AI]
  48. Graham L, Warren E. 2023. When it comes to Big Tech, enough is enough. N. Y. Times, July 27. https://www.nytimes.com/2023/07/27/opinion/lindsey-graham-elizabeth-warren-big-tech-regulation.html
  49. Gray ML, Suri S. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt
  50. Guszcza J, Danks D, Fox CR, Hammond KJ, Ho DE, et al. 2022. Hybrid intelligence: a paradigm for more responsible practice. Soc. Sci. Res. Netw. http://dx.doi.org/10.2139/ssrn.4301478
  51. gwern. 2023. Douglas Hofstadter changes his mind on deep learning & AI risk. LessWrong Online Forum, July 2. https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai
  52. Harari Y. 2017. Reboot for the AI revolution. Nature 550:324–27
  53. Hawley J. 2021. The Tyranny of Big Tech. Washington, DC: Regnery
  54. Heidegger M. 1977. The Question Concerning Technology, and Other Essays, transl. W Lovitt. New York: Garland
  55. Ho L, Barnhart J, Trager R, Bengio Y, Brundage M, et al. 2023. International institutions for advanced AI. arXiv:2307.04699 [cs.CY]
  56. Hoffman M. 2023. The EU AI Act: a primer. CSET Blog, Sep. 26, Cent. Secur. Emerg. Technol., Georgetown Univ., Washington, DC. https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/
  57. Hoos H, Irgens M. 2023. “AI made in Europe”—boost it or lose it. CLAIRE Statement on Future of AI in Europe, June 26. Confed. Lab. Artif. Intell. Eur., The Hague, Neth. https://claire-ai.org/wp-content/uploads/2023/06/CLAIRE-Statement-on-Future-of-AI-in-Europe-2023-1.pdf
  58. Horowitz M, Pindyck S, Mahoney C. 2022. AI, the international balance of power, and national security strategy. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.55
  59. Horowitz M, Scharre P. 2021. AI and international stability: risks and confidence-building measures. Rep., Cent. New Am. Secur., Jan. 12. https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures
  60. Jervis R. 1997. System Effects: Complexity in Political and Social Life. Princeton, NJ: Princeton Univ. Press
  61. Kauffman S. 2008. Reinventing the Sacred: A New View of Science, Reason, and Religion. New York: Basic Books
  62. Khan LM. 2017. Amazon's antitrust paradox. Yale Law J. 126(3):710–805
  63. Knight W. 2023. OpenAI's CEO says the age of giant AI models is already over. WIRED, Apr. 17. https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/
  64. Kokas A. 2023. Trafficking Data: How China Is Winning the Battle for Digital Sovereignty. New York: Oxford Univ. Press
  65. Kurzweil R. 2006. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin
  66. Landemore H. 2020. Open Democracy: Reinventing Popular Rule for the Twenty-First Century. Princeton, NJ: Princeton Univ. Press
  67. Lanier J, Stanger A. 2024. The one internet hack that could save everything. WIRED, Feb. 13. https://www.wired.com/story/the-one-internet-hack-that-could-save-everything-section-230/
  68. Lanier J, Weyl EG. 2018. A blueprint for a better digital society. Harvard Bus. Rev., Sep. 26. https://hbr.org/2018/09/a-blueprint-for-a-better-digital-society
  69. Larsen B. 2022. The geopolitics of AI and the rise of digital sovereignty. Brookings, Dec. 8. https://www.brookings.edu/articles/the-geopolitics-of-ai-and-the-rise-of-digital-sovereignty/
  70. Lazar S. 2022. Power and AI: nature and justification. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.12
  71. Lee EA. 2023. Deep neural networks, explanations, and rationality. Lecture presented at the 2nd Digital Humanism Summer School, Sep. 6. https://www.youtube.com/watch?v=yva7kPCQ2lc
  72. Lessig L. 2006. Code: Version 2.0. New York: Basic Books
  73. Lipton E. 2023. A.I. brings the robot wingman to aerial combat. N. Y. Times, Aug. 27. https://www.nytimes.com/2023/08/27/us/politics/ai-air-force.html
  74. Maas MM, Villalobos JJ. 2023. International AI institutions: a literature review of models, examples, and proposals. Soc. Sci. Res. Netw. https://doi.org/10.2139/ssrn.4579773
  75. Maslej N, Fattorini L, Brynjolfsson E, Etchemendy J, Ligett K, et al. 2023. The AI Index 2023 annual report. Rep., Inst. Hum.-Cent. Artif. Intell., Stanford Univ., Stanford, CA. https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
  76. Matthews D. 2023. The AI rules that US policymakers are considering, explained. Vox, Aug. 1. https://www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable
  77. McKean BL. 2022. Disorienting Neoliberalism: Global Justice and the Outer Limit of Freedom. New York: Oxford Univ. Press
  78. Microsoft. 2023. Microsoft, Anthropic, Google, and OpenAI launch Frontier Model Forum. Microsoft On the Issues Blog, July 26. https://blogs.microsoft.com/on-the-issues/2023/07/26/anthropic-google-microsoft-openai-launch-frontier-model-forum/
  79. Miller C. 2022. Chip War: The Fight for the World's Most Critical Technology. New York: Scribner
  80. Mitchell M. 2020. Artificial Intelligence: A Guide for Thinking Humans. New York: Picador
  81. Mitchell M. 2023. Can large language models reason? AI: A Guide for Thinking Humans Substack, Sep. 10. https://aiguide.substack.com/p/can-large-language-models-reason
  82. Mullainathan S, Obermeyer Z. 2021. On the inequity of predicting A while hoping for B. AEA Pap. Proc. 111:37–42
  83. Noble S. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press
  84. O'Neil C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown
  85. Ord T. 2022. Lessons from the development of the atomic bomb. Rep., Cent. Gov. AI, Oxford Univ., Oxford, UK. https://cdn.governance.ai/Ord_lessons_atomic_bomb_2022.pdf
  86. Ovadya A. 2022. Bridging-based ranking. Rep., Belfer Cent. Sci. Int. Aff., Harvard Kennedy Sch., Cambridge, MA. https://www.belfercenter.org/sites/default/files/files/publication/TAPP-Aviv_BridgingBasedRanking_FINAL_220518_0.pdf
  87. Park PS, Goldstein S, O'Gara A, Chen M, Hendrycks D. 2023. AI deception: a survey of examples, risks, and potential solutions. arXiv:2308.14752 [cs.CY]
  88. Patel D, Ahmad A. 2023. Google: “We have no moat, and neither does OpenAI.” SemiAnalysis, May 4. https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
  89. Pearson M, Rithmire M, Tsai K. 2022. China's party-state capitalism and international backlash: from interdependence to insecurity. Int. Secur. 47(2):135–76
  90. Rees T. 2022. Non-human words: on GPT-3 as a philosophical laboratory. Daedalus 151:168–82
  91. Rescorla M. 2020. The computational theory of mind. In Stanford Encyclopedia of Philosophy Archive, ed. EN Zalta. https://plato.stanford.edu/archives/fall2020/entries/computational-mind/
  92. Russell S. 2022. Banning lethal autonomous weapons: an education. Issues Sci. Technol. Spring:60–65. https://issues.org/wp-content/uploads/2022/04/60-65-Russell-Lethal-Autonomous-Weapons-Spring-2022.pdf
  93. Sandbrink JB. 2023. Artificial intelligence and biological misuse: differentiating risks of language models and biological design tools. arXiv:2306.13952 [cs.CY]
  94. Schaeffer R, Miranda B, Koyejo S. 2023. Are emergent abilities of large language models a mirage? arXiv:2304.15004 [cs.AI]
  95. Schlott L. 2023. China is hurting our kids with TikTok but protecting its own youth with Douyin. N. Y. Post, Feb. 26. https://nypost.com/2023/02/25/china-is-hurting-us-kids-with-tiktok-but-protecting-its-own/
  96. Seger E, Dreksler N, Moulange R, Dardaman E, Schuett J, et al. 2023. Open-sourcing highly capable foundation models. Rep., Cent. Gov. AI, Oxford Univ., Oxford, UK. https://cdn.governance.ai/Open-Sourcing_Highly_Capable_Foundation_Models_2023_GovAI.pdf
  97. Sheehan M. 2023. Reverse engineering Chinese AI governance: China's AI regulations and how they get made. Rep., Carnegie Endow. Int. Peace, Washington, DC. https://carnegieendowment.org/files/202307-Sheehan_Chinese%20AI%20gov.pdf
  98. Shevlane T, Farquhar S, Garfinkel B, Phuong M, Whittlestone J, et al. 2023. Model evaluation for extreme risks. arXiv:2305.15324 [cs.AI]
  99. Smith G, Kessler S, Alstott J, Mitre J. 2023. Industry and Government Collaboration on Security Guardrails for AI Systems: Summary of the AI Safety and Security Workshops. Santa Monica, CA: RAND
  100. Soice EH, Rocha R, Cordova K, Specter M, Esvelt KM. 2023. Can large language models democratize access to dual-use biotechnology? arXiv:2306.03809 [cs.CY]
  101. Stanger A. 2019. Whistleblowers: Honesty in America from Washington to Trump. New Haven, CT: Yale Univ. Press
  102. Stanger A. 2020. Consumers versus citizens in democracy's public sphere. Commun. ACM 63(7):29–31
  103. Stanger A. 2021. Digital humanism and democracy in geopolitical context. Digital Humanism Sem. Ser., TU Wien, June 15. https://www.youtube.com/watch?v=ELeR1ViNiUQ
  104. Stanger A. 2024. The First Amendment meets Big Tech. Daedalus. In press
  105. Stiglitz JE. 2002. Globalization and Its Discontents. New York: W. W. Norton
  106. Strittmatter K. 2020. We Have Been Harmonized: Life in China's Surveillance State. New York: Custom House
  107. Susskind RE, Susskind D. 2022. The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford, UK: Oxford Univ. Press
  108. Tang A, Landemore H. 2021. Taiwan's digital democracy, collaborative civic technologies, and beneficial information flows. Rep., Cent. Gov. AI, Oxford Univ., Oxford, UK, Mar. 9. https://www.governance.ai/post/audrey-tang-and-helene-landemore-on-taiwans-digital-democracy-collaborative-civic-technologies-and-beneficial-information-flows
  109. Teachout Z. 2020. Break 'Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money. New York: All Points Books
  110. Toner H, Xiao J, Ding J. 2023. The illusion of China's AI prowess. Foreign Aff., June 2. https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation
  111. Touvron H, Martin L, Stone K, Albert P, Almahairi A, et al. 2023. Llama 2: open foundation and fine-tuned chat models. arXiv:2307.09288 [cs.CL]
  112. Trager R, Harack B, Reuel A, Carnegie A, Heim L, et al. 2023. International governance of civilian AI: a jurisdictional certification approach. arXiv:2308.15514 [cs.AI]
  113. Viehoff J. 2022. Beyond justice: artificial intelligence and the value of community. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.71
  114. Vipra J, Myers West S. 2023. Computational Power and AI. AI Now Inst. https://ainowinstitute.org/wp-content/uploads/2023/09/AI-Now_Computational-Power-an-AI.pdf
  115. Von der Leyen U. 2023. Statement by President von der Leyen at Session III of the G20, “One Future.” Eur. Comm., Sep. 10, New Delhi. https://ec.europa.eu/commission/presscorner/detail/en/statement_23_4424
  116. von Neumann J. 1963. Collected Works, Vol. VI, pp. 504–19. Oxford, UK: Pergamon
  117. Vredenburgh K. 2022. Fairness. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.8
  118. Wei J, Tay Y, Bommasani R, Raffel C, Zoph B, et al. 2022. Emergent abilities of large language models. arXiv:2206.07682 [cs.CL]
  119. Weissinger L. 2022. AI, complexity, and regulation. See Bullock et al. 2022, https://doi.org/10.1093/oxfordhb/9780197579329.013.66
  120. White House. 2023a. Biden-Harris administration launches artificial intelligence cyber challenge to protect America's critical software. Statement, Aug. 9, Briefing Room, White House, Washington, DC. https://www.whitehouse.gov/briefing-room/statements-releases/2023/08/09/biden-harris-administration-launches-artificial-intelligence-cyber-challenge-to-protect-americas-critical-software/
  121. White House. 2023b. Blueprint for an AI Bill of Rights. White Pap., Off. Sci. Technol. Policy, White House, Washington, DC. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  122. White House. 2023c. President Biden issues executive order on safe, secure, and trustworthy artificial intelligence. Fact Sheet, Oct. 30, Briefing Room, White House, Washington, DC. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
  123. White House. 2023d. Biden-Harris administration secures voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI. Fact Sheet, July 21, Briefing Room, White House, Washington, DC. https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/
  124. White House. 2023e. Executive Order on the safe, secure, and trustworthy development of artificial intelligence. Exec. Order, Oct. 30, Briefing Room, White House, Washington, DC. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  125. Zuboff S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs
  • Article Type: Review Article