Abstract

Here we review work examining reactions to machines replacing humans in both professional and personal domains. Using a mind-role fit perspective, we synthesize findings from several decades of research across multiple disciplines to characterize how people respond, and are likely to respond, to machines replacing humans. We propose that as intelligent machines have evolved to possess “minds,” both the range of roles they can fill and the scope of people's reactions to this replacement have expanded. Additionally, we suggest that people's reactions to machine replacement depend on the fit between the perceived mind of the machine and their ideal conception of the mind suited to that particular role. Our review organizes the literature on machine replacement into three distinct phases: the pre-2000s era, when machines were perceived as mindless tools; the 2000s, when research explored the extent to which machines are perceived as possessing minds; and the 2010s, marked by the proliferation of artificial intelligence and the emergence of reactions such as algorithm aversion and algorithm appreciation. Finally, we suggest that perceptions of mind-role fit are shaped by three key factors: how the individual interacting with the machine is involved in or affected by its introduction, the characteristics of the machine itself, and the nature of the task the machine is intended to perform.

