
Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

