Abstract

The integration of more artificial intelligence (AI)–enabled robots into the military and first response domains is necessary to support long-duration deployments in uncertain and dynamic environments while lessening humans' exposure to threats and dangers. The effective integration of AI-enabled robots as teammates with humans will provide support and enhance overall mission performance; however, the majority of current research on human–robot interaction focuses only on the robot team supervisor. The true integration of robots into military and first response missions will require a breadth of human roles that span from the highest command level to the dismounted in situ personnel working directly with robots. Humans in all roles within the hierarchy must be able to understand and maintain direct control of their robot teammates. This article maps existing human roles from the literature to a military mission, presents technical challenges associated with this future human–robot teaming, and provides potential solutions and recommendations to propel the field toward human–robot teams that can achieve domain-relevant missions.

