
Abstract

Early research on physical human–robot interaction (pHRI) has necessarily focused on device design—the creation of compliant and sensorized hardware, such as exoskeletons, prostheses, and robot arms, that enables people to safely come into contact with robotic systems and to communicate about their collaborative intent. As hardware capabilities have become sufficient for many applications, and as computing has become more powerful, algorithms that support fluent and expressive use of pHRI systems have begun to play a prominent role in determining the systems' usefulness. In this review, we describe a selection of representative algorithmic approaches that regulate and interpret pHRI, tracing the progression from algorithms based on physical analogies, such as admittance control, to computational methods based on higher-level reasoning that take advantage of multimodal communication channels. Existing algorithmic approaches largely enable task-specific pHRI, but they do not generalize to versatile human–robot collaboration. Throughout the review and in our discussion of next steps, we therefore argue that emergent embodied dialogue—bidirectional, multimodal communication that can be learned through continuous interaction—is one of the next frontiers of pHRI.
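As a minimal illustration of the "physical analogy" class of algorithms mentioned above (this sketch is not from the review itself), an admittance controller maps a measured interaction force to a commanded motion through a virtual mass–damper model, M·dv/dt + B·v = f_ext. The function name and the one-dimensional setup below are illustrative assumptions; real controllers operate on multi-axis wrenches and joint-space commands.

```python
def admittance_step(f_ext, v, M=2.0, B=10.0, dt=0.001):
    """One integration step of a 1-DOF virtual mass-damper admittance law.

    Solves M * dv/dt + B * v = f_ext for the acceleration, then performs
    explicit Euler integration to produce the next commanded velocity.
    f_ext: measured interaction force [N]; v: current commanded velocity [m/s];
    M: virtual mass [kg]; B: virtual damping [N*s/m]; dt: timestep [s].
    """
    a = (f_ext - B * v) / M
    return v + a * dt

# Under a constant push, the commanded velocity converges to the
# steady-state value f_ext / B (here 5.0 / 10.0 = 0.5 m/s).
v = 0.0
for _ in range(10000):
    v = admittance_step(f_ext=5.0, v=v)
print(round(v, 3))  # 0.5
```

Lowering the virtual mass M or damping B makes the robot feel lighter and more responsive to the human's applied force, at the cost of amplifying sensor noise—one of the stability/transparency trade-offs this class of controllers must balance.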

DOI: 10.1146/annurev-control-070122-102501

Literature Cited

  1. 1.
    Haddadin S, Croft E. 2016. Physical Human–Robot Interaction Cham, Switz: Springer
  2. 2.
    Mathewson KW, Parker ASR, Sherstan C, Edwards AL, Sutton RS, Pilarski PM. 2022. Communicative capital: a key resource for human-machine shared agency and collaborative capacity. Neural Comput. Appl. In press. https://doi.org/10.1007/s00521-022-07948-1
    [Google Scholar]
  3. 3.
    Scott-Phillips T. 2014. Speaking Our Minds: Why Human Communication Is Different, and How Language Evolved to Make It Special London: Bloomsbury
  4. 4.
    Sebanz N, Bekkering H, Knoblich G. 2006. Joint action: bodies and minds moving together. Trends Cogn. Sci. 10:70–76
    [Google Scholar]
  5. 5.
    Knoblich G, Butterfill S, Sebanz N. 2011. Psychological research on joint action: theory and data. Psychol. Learn. Motiv. 54:59–101
    [Google Scholar]
  6. 6.
    Pezzulo G, Dindo H. 2011. What should I do next? Using shared representations to solve interaction problems. Exp. Brain Res. 211:613–30
    [Google Scholar]
  7. 7.
    Bicho E, Erlhagen W, Louro L, Costa e Silva E. 2011. Neuro-cognitive mechanisms of decision making in joint action: a human–robot interaction study. Hum. Mov. Sci. 30:846–68
    [Google Scholar]
  8. 8.
    Pezzulo G, Donnarumma F, Dindo H. 2013. Human sensorimotor communication: a theory of signaling in online social interactions. PLOS ONE 8:e79876
    [Google Scholar]
  9. 9.
    Wolpert DM, Doya K, Kawato M. 2003. A unifying computational framework for motor control and social interaction. Philos. Trans. R. Soc. Lond. B 358:593–602
    [Google Scholar]
  10. 10.
    Moon A, Parker CA, Croft EA, Van der Loos HM. 2013. Design and impact of hesitation gestures during human-robot resource conflicts. J. Hum.-Robot Interact. 2:18–40
    [Google Scholar]
  11. 11.
    Hogan N. 1985. Impedance control: an approach to manipulation: part I—theory. J. Dyn. Syst. Meas. Control 107:1–7
    [Google Scholar]
  12. 12.
    Gopinath D, Jain S, Argall BD. 2016. Human-in-the-loop optimization of shared autonomy in assistive robotics. Robot. Autom. Lett. 2:247–54
    [Google Scholar]
  13. 13.
    Castellini C, Artemiadis P, Wininger M, Ajoudani A, Alimusaj M et al. 2014. Proceedings of the first workshop on peripheral machine interfaces: going beyond traditional surface electromyography. Front. Neurorobot. 8:22
    [Google Scholar]
  14. 14.
    Whitney DE. 1977. Force feedback control of manipulator fine motions. J. Dyn. Syst. Meas. Control 99:91–97
    [Google Scholar]
  15. 15.
    Marchal-Crespo L, Reinkensmeyer DJ. 2009. Review of control strategies for robotic movement training after neurologic injury. J. NeuroEng. Rehabil. 6:20
    [Google Scholar]
  16. 16.
    Newman WS. 1992. Stability and performance limits of interaction controllers. J. Dyn. Syst. Meas. Control 114:563–70
    [Google Scholar]
  17. 17.
    Keemink AQ, van der Kooij H, Stienen AH. 2018. Admittance control for physical human–robot interaction. Int. J. Robot. Res. 37:1421–44
    [Google Scholar]
  18. 18.
    He W, Xue C, Yu X, Li Z, Yang C 2020. Admittance-based controller design for physical human–robot interaction in the constrained task space. IEEE Trans. Autom. Sci. Eng. 17:1937–49
    [Google Scholar]
  19. 19.
    Chen X, Wang N, Cheng H, Yang C 2020. Neural learning enhanced variable admittance control for human–robot collaboration. IEEE Access 8:25727–37
    [Google Scholar]
  20. 20.
    Abdallah M, Chen A, Campeau-Lecours A, Gosselin C. 2022. How to reduce the impedance for pHRI: admittance control or underactuation?. Mechatronics 84:102768
    [Google Scholar]
  21. 21.
    van der Linde RQ, Lammertse P 2003. HapticMaster – a generic force controlled robot for human interaction. Ind. Robot 30:515–24
    [Google Scholar]
  22. 22.
    Patton JL, Stoykov ME, Kovic M, Mussa-Ivaldi FA. 2006. Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors. Exp. Brain Res. 168:368–83
    [Google Scholar]
  23. 23.
    Fan T, Alwala KV, Xiang D, Xu W, Murphey T, Mukadam M. 2021. Revitalizing optimization for 3D human pose and shape estimation: a sparse constrained formulation. 2021 IEEE/CVF International Conference on Computer Vision11437–46. Piscataway, NJ: IEEE
    [Google Scholar]
  24. 24.
    Williams HE, Chapman CS, Pilarski PM, Vette AH, Hebert JS. 2019. Gaze and Movement Assessment (GaMA): inter-site validation of a visuomotor upper limb functional protocol. PLOS ONE 14:e0219333
    [Google Scholar]
  25. 25.
    Rudenko A, Palmieri L, Herman M, Kitani KM, Gavrila DM, Arras KO. 2020. Human motion trajectory prediction: a survey. Int. J. Robot. Res. 39:895–935
    [Google Scholar]
  26. 26.
    Letham B, Rudin C, Madigan D. 2013. Sequential event prediction. Mach. Learn. 93:357–80
    [Google Scholar]
  27. 27.
    Kalinowska A, Berrueta TA, Zoss A, Murphey T. 2019. Data-driven gait segmentation for walking assistance in a lower-limb assistive device. 2019 International Conference on Robotics and Automation1390–96. Piscataway, NJ: IEEE
    [Google Scholar]
  28. 28.
    Pérez-D'Arpino C, Shah JA. 2015. Fast target prediction of human reaching motion for cooperative human-robot manipulation tasks using time series classification. 2015 IEEE International Conference on Robotics and Automation6175–82. Piscataway, NJ: IEEE
    [Google Scholar]
  29. 29.
    Lasota PA, Shah JA. 2017. A multiple-predictor approach to human motion prediction. 2017 IEEE International Conference on Robotics and Automation2300–7. Piscataway, NJ: IEEE
    [Google Scholar]
  30. 30.
    Pilarski PM, Dawson MR, Degris T, Carey JP, Chan KM et al. 2013. Adaptive artificial limbs: a real-time approach to prediction and anticipation. Robot. Autom. Mag. 20:153–64
    [Google Scholar]
  31. 31.
    Liu H, Wang L. 2018. Gesture recognition for human-robot collaboration: a review. Int. J. Ind. Ergon. 68:355–67
    [Google Scholar]
  32. 32.
    Fleming A, Stafford N, Huang S, Hu X, Ferris DP, Huang HH. 2021. Myoelectric control of robotic lower limb prostheses: a review of electromyography interfaces, control paradigms, challenges and future directions. J. Neural Eng. 18:041004
    [Google Scholar]
  33. 33.
    Veeriah V, Pilarski PM, Sutton RS. 2016. Face valuing: training user interfaces with facial expressions and reinforcement learning Paper presented at the Interactive Machine Learning Workshop, 25th International Joint Conference on Artificial Intelligence New York, NY: July 9
  34. 34.
    Palinko O, Rea F, Sandini G, Sciutti A. 2016. Robot reading human gaze: why eye tracking is better than head tracking for human-robot collaboration. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems5048–54. Piscataway, NJ: IEEE
    [Google Scholar]
  35. 35.
    Admoni H, Scassellati B. 2017. Social eye gaze in human-robot interaction: a review. J. Hum.-Robot Interact. 6:25–63
    [Google Scholar]
  36. 36.
    Rastgoo R, Kiani K, Escalera S. 2021. Sign language recognition: a deep survey. Expert Syst. Appl. 164:113794
    [Google Scholar]
  37. 37.
    Shehata AW, Williams HE, Hebert JS, Pilarski PM. 2021. Machine learning for the control of prosthetic arms: using electromyographic signals for improved performance. IEEE Signal Process. Mag. 38:446–53
    [Google Scholar]
  38. 38.
    Micera S, Carpaneto J, Raspopovic S. 2010. Control of hand prostheses using peripheral information. Rev. Biomed. Eng. 3:48–68
    [Google Scholar]
  39. 39.
    Lenzi T, De Rossi SMM, Vitiello N, Carrozza MC. 2011. Proportional EMG control for upper-limb powered exoskeletons. 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society628–31. Piscataway, NJ: IEEE
    [Google Scholar]
  40. 40.
    Bi L, Feleke AG, Guan C. 2019. A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration. Biomed. Signal Process. Control 51:113–27
    [Google Scholar]
  41. 41.
    Hargrove LJ, Simon AM, Young AJ, Lipschutz RD, Finucane SB et al. 2013. Robotic leg control with EMG decoding in an amputee with nerve transfers. N. Engl. J. Med. 369:1237–42
    [Google Scholar]
  42. 42.
    Chowdhury RH, Reaz MB, Ali MABM, Bakar AA, Chellappan K, Chang TG. 2013. Surface electromyography signal processing and classification techniques. Sensors 13:12431–66
    [Google Scholar]
  43. 43.
    Hargrove LJ, Miller LA, Turner K, Kuiken TA. 2017. Myoelectric pattern recognition outperforms direct control for transhumeral amputees with targeted muscle reinnervation: a randomized clinical trial. Sci. Rep. 7:13840
    [Google Scholar]
  44. 44.
    Resnik L, Huang HH, Winslow A, Crouch DL, Zhang F, Wolk N. 2018. Evaluation of EMG pattern recognition for upper limb prosthesis control: a case study in comparison with direct myoelectric control. J. NeuroEng. Rehabil. 15:23
    [Google Scholar]
  45. 45.
    Kristoffersen MB, Franzke AW, Van Der Sluis CK, Murgia A, Bongers RM. 2020. Serious gaming to generate separated and consistent EMG patterns in pattern-recognition prosthesis control. Biomed. Signal Process. Control 62:102140
    [Google Scholar]
  46. 46.
    Lalitharatne TD, Teramoto K, Hayashi Y, Kiguchi K. 2013. Towards hybrid EEG-EMG-based control approaches to be used in bio-robotics applications: current status, challenges and future directions. Paladyn J. Behav. Robot. 4:147–54
    [Google Scholar]
  47. 47.
    DelPreto J, Salazar-Gomez AF, Gil S, Hasani R, Guenther FH, Rus D. 2020. Plug-and-play supervisory control using muscle and brain signals for real-time gesture and error detection. Auton. Robots 44:1303–22
    [Google Scholar]
  48. 48.
    Cipriani C, Zaccone F, Micera S, Carrozza MC. 2008. On the shared control of an EMG-controlled prosthetic hand: analysis of user-prosthesis interaction. IEEE Trans. Robot. 24:170–84
    [Google Scholar]
  49. 49.
    Al-Quraishi MS, Elamvazuthi I, Daud SA, Parasuraman S, Borboni A. 2018. EEG-based control for upper and lower limb exoskeletons and prostheses: a systematic review. Sensors 18:3342
    [Google Scholar]
  50. 50.
    McMullen DP, Hotson G, Katyal KD, Wester BA, Fifer MS et al. 2013. Demonstration of a semi-autonomous hybrid brain–machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic. IEEE Trans. Neural Syst. Rehabil. Eng. 22:784–96
    [Google Scholar]
  51. 51.
    Herff C, Krusienski DJ, Kubben P. 2020. The potential of stereotactic-EEG for brain-computer interfaces: current progress and future directions. Front. Neurosci. 14:123
    [Google Scholar]
  52. 52.
    Wolpaw JR, McFarland DJ. 2004. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. PNAS 101:17849–54
    [Google Scholar]
  53. 53.
    Jeong JH, Shim KH, Kim DJ, Lee SW. 2020. Brain-controlled robotic arm system based on multi-directional CNN-BiLSTM network using EEG signals. IEEE Trans. Neural Syst. Rehabil. Eng. 28:1226–38
    [Google Scholar]
  54. 54.
    Millán JDR, Renkens F, Mourino J, Gerstner W. 2004. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Trans. Biomed. Eng. 51:1026–33
    [Google Scholar]
  55. 55.
    Batzianoulis I, Iwane F, Wei S, Correia CGPR, Chavarriaga R et al. 2021. Customizing skills for assistive robotic manipulators, an inverse reinforcement learning approach with error-related potentials. Commun. Biol. 4:1406
    [Google Scholar]
  56. 56.
    Argall BD, Billard AG. 2010. A survey of tactile human–robot interactions. Robot. Auton. Syst. 58:1159–76
    [Google Scholar]
  57. 57.
    Gopinath D, Javaremi MN, Argall B. 2021. Customized handling of unintended interface operation in assistive robots. 2021 IEEE International Conference on Robotics and Automation10406–12. Piscataway, NJ: IEEE
    [Google Scholar]
  58. 58.
    Javaremi MN, Young M, Argall BD. 2019. Interface operation and implications for shared-control assistive robots. 2019 IEEE 16th International Conference on Rehabilitation Robotics232–39. Piscataway, NJ: IEEE
    [Google Scholar]
  59. 59.
    Devlin J, Chang MW, Lee K, Toutanova K. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1: Long and Short Papers4171–86. Minneapolis, MN: Assoc. Comput. Linguist.
    [Google Scholar]
  60. 60.
    Alayrac JB, Donahue J, Luc P, Miech A, Barr I et al. 2022. Flamingo: a visual language model for few-shot learning. arXiv:2204.14198 [cs.CV]
  61. 61.
    Tellex S, Gopalan N, Kress-Gazit H, Matuszek C. 2020. Robots that use language. Annu. Rev. Control Robot. Auton. Syst. 3:25–55
    [Google Scholar]
  62. 62.
    Collins S, Ruina A, Tedrake R, Wisse M. 2005. Efficient bipedal robots based on passive-dynamic walkers. Science 307:1082–85
    [Google Scholar]
  63. 63.
    Todorov E. 2004. Optimality principles in sensorimotor control. Nat. Neurosci. 7:907–15
    [Google Scholar]
  64. 64.
    Fitzsimons K, Murphey TD. 2022. Ergodic shared control: closing the loop on pHRI based on information encoded in motion. ACM Trans. Hum.-Robot Interact. 11:37
    [Google Scholar]
  65. 65.
    Gielniak MJ, Thomaz AL. 2011. Spatiotemporal correspondence as a metric for human-like robot motion. 2011 6th ACM/IEEE International Conference on Human-Robot Interaction77–84. Piscataway, NJ: IEEE
    [Google Scholar]
  66. 66.
    Fitzsimons K, Acosta AM, Dewald JP, Murphey TD. 2019. Ergodicity reveals assistance and learning from physical human-robot interaction. Sci. Robot. 4:eaav6079
    [Google Scholar]
  67. 67.
    Kalinowska A, Rudy K, Schlafly M, Fitzsimons K, Dewald JP, Murphey TD. 2020. Shoulder abduction loading affects motor coordination in individuals with chronic stroke, informing targeted rehabilitation. 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics1010–17. Piscataway, NJ: IEEE
    [Google Scholar]
  68. 68.
    Kalinowska A, Schlafly M, Rudy K, Dewald J, Murphey TD 2022. Measuring interaction bandwidth during physical human-robot collaboration. Robot. Autom. Lett. 7:12467–74
    [Google Scholar]
  69. 69.
    Edwards AL, Dawson MR, Hebert JS, Sherstan C, Sutton RS et al. 2016. Application of real-time machine learning to myoelectric prosthesis control: a case series in adaptive switching. Prosthet. Orthot. Int. 40:573–81
    [Google Scholar]
  70. 70.
    Edwards AL, Hebert JS, Pilarski PM. 2016. Machine learning and unlearning to autonomously switch between the functions of a myoelectric arm. 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics514–21. Piscataway, NJ: IEEE
    [Google Scholar]
  71. 71.
    Günther J, Kearney A, Dawson MR, Sherstan C, Pilarski PM 2018. Predictions, surprise, and predictions of surprise in general value function architectures. Proceedings of the AAAI Fall Symposium on Reasoning and Learning in Real-World Systems for Long-Term Autonomy KH Wray, JA Shah, P Stone, SJ Witwicki, S Zilberstein 22–29. Palo Alto, CA: AAAI Press
    [Google Scholar]
  72. 72.
    Bethel CL, Salomon K, Murphy RR, Burke JL. 2007. Survey of psychophysiology measurements applied to human-robot interaction. RO-MAN 2007: The 16th IEEE International Symposium on Robot and Human Interactive Communication732–37. Piscataway, NJ: IEEE
    [Google Scholar]
  73. 73.
    Peternel L, Tsagarakis N, Caldwell D, Ajoudani A 2018. Robot adaptation to human physical fatigue in human–robot co-manipulation. Auton. Robots 42:1011–21
    [Google Scholar]
  74. 74.
    O'Neill C, Proietti T, Nuckols K, Clarke ME, Hohimer CJ et al. 2020. Inflatable soft wearable robot for reducing therapist fatigue during upper extremity rehabilitation in severe stroke. Robot. Autom. Lett. 5:3899–906
    [Google Scholar]
  75. 75.
    Zhang J, Fiers P, Witte KA, Jackson RW, Poggensee KL et al. 2017. Human-in-the-loop optimization of exoskeleton assistance during walking. Science 356:1280–84
    [Google Scholar]
  76. 76.
    Ding Y, Kim M, Kuindersma S, Walsh CJ. 2018. Human-in-the-loop optimization of hip assistance with a soft exosuit during walking. Sci. Robot. 3:eaar5438
    [Google Scholar]
  77. 77.
    Li K, Zhang J, Wang L, Zhang M, Li J, Bao S 2020. A review of the key technologies for sEMG-based human-robot interaction systems. Biomed. Signal Process. Control 62:102074
    [Google Scholar]
  78. 78.
    Swerdloff MM, Hargrove LJ. 2020. Quantifying cognitive load using EEG during ambulation and postural tasks. 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society2849–52. Piscataway, NJ: IEEE
    [Google Scholar]
  79. 79.
    Luo R, Wang Y, Weng Y, Paul V, Brudnak MJ et al. 2019. Toward real-time assessment of workload: a Bayesian inference approach. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 63:196–200
    [Google Scholar]
  80. 80.
    Koenig A, Novak D, Omlin X, Pulfer M, Perreault E et al. 2011. Real-time closed-loop control of cognitive load in neurological patients during robot-assisted gait training. IEEE Trans. Neural Syst. Rehabil. Eng. 19:453–64
    [Google Scholar]
  81. 81.
    Losey DP, McDonald CG, Battaglia E, O'Malley MK. 2018. A review of intent detection, arbitration, and communication aspects of shared control for physical human–robot interaction. Appl. Mech. Rev. 70:010804
    [Google Scholar]
  82. 82.
    Li Q, Chen W, Wang J 2011. Dynamic shared control for human-wheelchair cooperation. 2011 IEEE International Conference on Robotics and Automation4278–83. Piscataway, NJ: IEEE
    [Google Scholar]
  83. 83.
    Dragan AD, Srinivasa SS. 2013. A policy-blending formalism for shared control. Int. J. Robot. Res. 32:790–805
    [Google Scholar]
  84. 84.
    Kalinowska A, Fitzsimons K, Dewald J, Murphey TD 2018. Online user assessment for minimal intervention during task-based robotic assistance. Robotics: Science and Systems XIV H Kress-Gazit, S Srinivasa, T Howard, N Atanasov, pap. 46 N.p.: Robot. Sci. Syst. Found.
    [Google Scholar]
  85. 85.
    Ezeh C, Trautman P, Devigne L, Bureau V, Babel M, Carlson T. 2017. Probabilistic versus linear blending approaches to shared control for wheelchair driving. 2017 International Conference on Rehabilitation Robotics835–40. Piscataway, NJ: IEEE
    [Google Scholar]
  86. 86.
    Jeon HJ, Losey DP, Sadigh D 2020. Shared autonomy with learned latent actions. Robotics: Science and Systems XVI M Toussaint, A Bicchi, T Hermans, pap. 11 N.p.: Robot. Sci. Syst. Found.
    [Google Scholar]
  87. 87.
    Gopinath DE, Argall BD. 2020. Active intent disambiguation for shared control robots. IEEE Trans. Neural Syst. Rehabil. Eng. 28:1497–506
    [Google Scholar]
  88. 88.
    Abbink DA, Carlson T, Mulder M, De Winter JC, Aminravan F et al. 2018. A topology of shared control systems–finding common ground in diversity. IEEE Trans. Hum.-Mach. Syst. 48:509–25
    [Google Scholar]
  89. 89.
    Emken JL, Harkema SJ, Beres-Jones JA, Ferreira CK, Reinkensmeyer DJ. 2007. Feasibility of manual teach-and-replay and continuous impedance shaping for robotic locomotor training following spinal cord injury. IEEE Trans. Biomed. Eng. 55:322–34
    [Google Scholar]
  90. 90.
    Bortole M, Del Ama A, Rocon E, Moreno JC, Brunetti F, Pons JL. 2013. A robotic exoskeleton for overground gait rehabilitation. 2013 IEEE International Conference on Robotics and Automation3356–61. Piscataway, NJ: IEEE
    [Google Scholar]
  91. 91.
    Tchimino J, Markovic M, Dideriksen JL, Dosen S. 2021. The effect of calibration parameters on the control of a myoelectric hand prosthesis using EMG feedback. J. Neural Eng. 18:046091
    [Google Scholar]
  92. 92.
    Ang KK, Guan C. 2016. EEG-based strategies to detect motor imagery for control and rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 25:392–401
    [Google Scholar]
  93. 93.
    Mondini V, Kobler RJ, Sburlea AI, Müller-Putz GR. 2020. Continuous low-frequency EEG decoding of arm movement for closed-loop, natural control of a robotic arm. J. Neural Eng. 17:046031
    [Google Scholar]
  94. 94.
    Thompson A, Loke LY, Argall B. 2022. Control interface remapping for bias-aware assistive teleoperation. 2022 International Conference on Rehabilitation Robotics Piscataway, NJ: IEEE https://doi.org/10.1109/ICORR55369.2022.9896567
    [Google Scholar]
  95. 95.
    Erdogan A, Argall BD. 2017. The effect of robotic wheelchair control paradigm and interface on user performance, effort and preference: an experimental assessment. Robot. Auton. Syst. 94:282–97
    [Google Scholar]
  96. 96.
    Musić S, Hirche S. 2017. Control sharing in human-robot team interaction. Annu. Rev. Control 44:342–54
    [Google Scholar]
  97. 97.
    Argall BD 2015. Turning assistive machines into assistive robots. Quantum Sensing and Nanophotonic Devices XII M Razeghi, E Tournié, GJ Brown 413–24. Proc. SPIE 9370 Bellingham, WA: SPIE
    [Google Scholar]
  98. 98.
    Reinkensmeyer DJ, Boninger ML. 2012. Technologies and combination therapies for enhancing movement training for people with a disability. J. NeuroEng. Rehabil. 9:17
    [Google Scholar]
  99. 99.
    Blank AA, French JA, Pehlivan AU, O'Malley MK. 2014. Current trends in robot-assisted upper-limb stroke rehabilitation: promoting patient engagement in therapy. Curr. Phys. Med. Rehabil. Rep. 2:184–95
    [Google Scholar]
  100. 100.
    Lo AC, Guarino PD, Richards LG, Haselkorn JK, Wittenberg GF et al. 2010. Robot-assisted therapy for long-term upper-limb impairment after stroke. N. Engl. J. Med. 362:1772–83
    [Google Scholar]
  101. 101.
    Klamroth-Marganska V, Blanco J, Campen K, Curt A, Dietz V et al. 2014. Three-dimensional, task-specific robot therapy of the arm after stroke: a multicentre, parallel-group randomised trial. Lancet Neurol. 13:159–66
    [Google Scholar]
  102. 102.
    Iosa M, Morone G, Cherubini A, Paolucci S. 2016. The three laws of neurorobotics: a review on what neurorehabilitation robots should do for patients and clinicians. J. Med. Biol. Eng. 36:1–11
    [Google Scholar]
  103. 103.
    Danzl MM, Etter NM, Andreatta RD, Kitzman PH. 2012. Facilitating neurorehabilitation through principles of engagement. J. Allied Health 41:35–41
    [Google Scholar]
  104. 104.
    Wolbrecht ET, Chan V, Reinkensmeyer DJ, Bobrow JE. 2008. Optimizing compliant, model-based robotic assistance to promote neurorehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 16:286–97
    [Google Scholar]
  105. 105.
    Cheng N, Phua KS, Lai HS, Tam PK, Tang KY et al. 2020. Brain-computer interface-based soft robotic glove rehabilitation for stroke. IEEE Trans. Biomed. Eng. 67:3339–51
    [Google Scholar]
  106. 106.
    Balasubramanian S, Garcia-Cossio E, Birbaumer N, Burdet E, Ramos-Murguialday A. 2018. Is EMG a viable alternative to BCI for detecting movement intention in severe stroke?. IEEE Trans. Biomed. Eng. 65:2790–97
    [Google Scholar]
  107. 107.
    Dukelow SP, Herter TM, Moore KD, Demers MJ, Glasgow JI et al. 2010. Quantitative assessment of limb position sense following stroke. Neurorehabil. Neural Repair 24:178–87
    [Google Scholar]
  108. 108.
    Tyryshkin K, Coderre AM, Glasgow JI, Herter TM, Bagg SD et al. 2014. A robotic object hitting task to quantify sensorimotor impairments in participants with stroke. J. NeuroEng. Rehabil. 11:47
    [Google Scholar]
  109. 109.
    McPherson JG, Ellis MD, Harden RN, Carmona C, Drogos JM et al. 2018. Neuromodulatory inputs to motoneurons contribute to the loss of independent joint control in chronic moderate to severe hemiparetic stroke. Front. Neurol. 9:470
    [Google Scholar]
  110. 110.
    Argall BD. 2018. Autonomy in rehabilitation robotics: an intersection. Annu. Rev. Control Robot. Auton. Syst. 1:441–63
    [Google Scholar]
  111. 111.
    Polygerinos P, Wang Z, Galloway KC, Wood RJ, Walsh CJ. 2015. Soft robotic glove for combined assistance and at-home rehabilitation. Robot. Auton. Syst. 73:134–43
    [Google Scholar]
  112. 112.
    Anam K, Al-Jumaily AA. 2012. Active exoskeleton control systems: state of the art. Procedia Eng. 41:988–94
    [Google Scholar]
  113. 113.
    Xiloyannis M, Alicea R, Georgarakis AM, Haufe FL, Wolf P et al. 2021. Soft robotic suits: state of the art, core technologies, and open challenges. IEEE Trans. Robot. 38:1342–62
    [Google Scholar]
  114. 114.
    Sun Y, Tang Y, Zheng J, Dong D, Chen X, Bai L 2022. From sensing to control of lower limb exoskeleton: a systematic review. Annu. Rev. Control 53:83–96
    [Google Scholar]
  115. 115.
    Dollar AM, Herr H. 2008. Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art. IEEE Trans. Robot. 24:144–58
    [Google Scholar]
  116. 116.
    De Looze MP, Bosch T, Krause F, Stadler KS, O'Sullivan LW. 2016. Exoskeletons for industrial application and their potential effects on physical work load. Ergonomics 59:671–81
    [Google Scholar]
  117. 117.
    Walsh CJ, Endo K, Herr H. 2007. A quasi-passive leg exoskeleton for load-carrying augmentation. Int. J. Humanoid Robot. 4:487–506
    [Google Scholar]
  118. 118.
    Gull MA, Bai S, Bak T. 2020. A review on design of upper limb exoskeletons. Robotics 9:16
    [Google Scholar]
  119. 119.
    Sawers A, Bhattacharjee T, McKay JL, Hackney ME, Kemp CC, Ting LH. 2017. Small forces that differ with prior motor experience can communicate movement goals during human-human physical interaction. J. NeuroeEng. Rehabil. 31:8
    [Google Scholar]
  120. 120.
    Takagi A, Ganesh G, Yoshioka T, Kawato M, Burdet E. 2017. Physically interacting individuals estimate the partner's goal to enhance their movements. Nat. Hum. Behav. 1:0054
    [Google Scholar]
  121. 121.
    Fitzsimons K, Kalinowska A, Dewald JP, Murphey TD. 2020. Task-based hybrid shared control for training through forceful interaction. Int. J. Robot. Res. 39:1138–54
    [Google Scholar]
  122. 122.
    Jarrasse N, Sanguineti V, Burdet E. 2014. Slaves no longer: review on role assignment for human–robot joint motor action. Adapt. Behav. 22:70–82
    [Google Scholar]
  123. 123.
    Liu M, Wang D, Huang H. 2015. Development of an environment-aware locomotion mode recognition system for powered lower limb prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 24:434–43
    [Google Scholar]
  124. 124.
    Farina D, Vujaklija I, Brånemark R, Bull AM, Dietl H et al. 2021. Toward higher-performance bionic limbs for wider clinical use. Nat. Biomed. Eng. In press. https://doi.org/10.1038/s41551-021-00732-x
    [Google Scholar]
  125. 125.
    Igual C, Pardo LA Jr., Hahne JM, Igual J 2019. Myoelectric control for upper limb prostheses. Electronics 8:1244
    [Google Scholar]
  126. 126.
    Kuiken TA, Li G, Lock BA, Lipschutz RD, Miller LA et al. 2009. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA 301:619–28
    [Google Scholar]
  127. 127.
    Hebert JS, Olson JL, Morhart MJ, Dawson MR, Marasco PD et al. 2013. Novel targeted sensory reinnervation technique to restore functional hand sensation after transhumeral amputation. IEEE Trans. Neural Syst. Rehabil. Eng. 22:765–73
    [Google Scholar]
  128. 128.
    Ortiz-Catalan M, Mastinu E, Sassu P, Aszmann O, Brånemark R. 2020. Self-contained neuromusculoskeletal arm prostheses. N. Engl. J. Med. 382:1732–38
    [Google Scholar]
  129. 129.
    Simon A, Turner K, Miller L, Potter B, Beachler M et al. 2022. User performance with a transradial multi-articulating hand prosthesis during pattern recognition and direct control home use. TechRxiv 19859281. https://doi.org/10.36227/techrxiv.19859281
  130. 130.
    Schofield JS, Evans KR, Carey JP, Hebert JS. 2014. Applications of sensory feedback in motorized upper extremity prosthesis: a review. Expert Rev. Med. Devices 11:499–511
    [Google Scholar]
  131. 131.
    Barontini F, Catalano MG, Grioli G, Bianchi M, Bicchi A. 2021. Wearable integrated soft haptics in a prosthetic socket. Robot. Autom. Lett. 6:1785–92
    [Google Scholar]
  132. 132.
    Battaglia E, Clark JP, Bianchi M, Catalano MG, Bicchi A, O'Malley MK. 2019. Skin stretch haptic feedback to convey closure information in anthropomorphic, under-actuated upper limb soft prostheses. IEEE Trans. Haptics 12:508–20
    [Google Scholar]
  133. 133.
    Lasota PA, Rossano GF, Shah JA. 2014. Toward safe close-proximity human-robot interaction with standard industrial robots. 2014 IEEE International Conference on Automation Science and Engineering, pp. 339–44. Piscataway, NJ: IEEE
  134.
    Liu H, Wang L. 2021. Collision-free human-robot collaboration based on context awareness. Robot. Comput.-Integr. Manuf. 67:101997
  135.
    Trautman P, Ma J, Murray RM, Krause A. 2015. Robot navigation in dense human crowds: statistical models and experimental studies of human–robot cooperation. Int. J. Robot. Res. 34:335–56
  136.
    Chen C, Liu Y, Kreiss S, Alahi A. 2019. Crowd-robot interaction: crowd-aware robot navigation with attention-based deep reinforcement learning. 2019 International Conference on Robotics and Automation, pp. 6015–22. Piscataway, NJ: IEEE
  137.
    Mörtl A, Lawitzky M, Kucukyilmaz A, Sezgin M, Basdogan C, Hirche S. 2012. The role of roles: physical cooperation between humans and robots. Int. J. Robot. Res. 31:1656–74
  138.
    Ghadirzadeh A, Bütepage J, Maki A, Kragic D, Björkman M. 2016. A sensorimotor reinforcement learning framework for physical human-robot interaction. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2682–88. Piscataway, NJ: IEEE
  139.
    Edsinger A, Kemp CC. 2007. Human-robot interaction for cooperative manipulation: handing objects to one another. RO-MAN 2007: The 16th IEEE International Symposium on Robot and Human Interactive Communication, pp. 1167–72. Piscataway, NJ: IEEE
  140.
    Ravichandar H, Polydoros AS, Chernova S, Billard A. 2020. Recent advances in robot learning from demonstration. Annu. Rev. Control Robot. Auton. Syst. 3:297–330
  141.
    Pastor P, Hoffmann H, Asfour T, Schaal S. 2009. Learning and generalization of motor skills by learning from demonstration. 2009 IEEE International Conference on Robotics and Automation, pp. 763–68. Piscataway, NJ: IEEE
  142.
    Fitzgerald T, Goel A, Thomaz A. 2018. Human-guided object mapping for task transfer. ACM Trans. Hum.-Robot Interact. 7:17
  143.
    Lauretti C, Cordella F, Ciancio AL, Trigili E, Catalan JM et al. 2018. Learning by demonstration for motion planning of upper-limb exoskeletons. Front. Neurorobot. 12:5
  144.
    Brown DS, Goo W, Niekum S. 2020. Better-than-demonstrator imitation learning via automatically-ranked demonstrations. Proceedings of the Conference on Robot Learning, ed. LP Kaelbling, D Kragic, K Sugiura, pp. 330–59. Proc. Mach. Learn. Res. 100. N.p.: PMLR
  145.
    Myers V, Biyik E, Anari N, Sadigh D. 2022. Learning multimodal rewards from rankings. Proceedings of the 5th Conference on Robot Learning, ed. A Faust, D Hsu, G Neumann, pp. 342–52. Proc. Mach. Learn. Res. 164. N.p.: PMLR
  146.
    Niekum S, Chitta S, Barto AG, Marthi B, Osentoski S. 2013. Incremental semantically grounded learning from demonstration. Robotics: Science and Systems IX, ed. P Newman, D Fox, D Hsu, pap. 48. N.p.: Robot. Sci. Syst. Found.
  147.
    Bobu A, Bajcsy A, Fisac JF, Deglurkar S, Dragan AD. 2020. Quantifying hypothesis space misspecification in learning from human–robot demonstrations and physical corrections. IEEE Trans. Robot. 36:835–54
  148.
    Kalinowska A, Prabhakar A, Fitzsimons K, Murphey T. 2021. Ergodic imitation: learning from what to do and what not to do. 2021 IEEE International Conference on Robotics and Automation, pp. 3648–54. Piscataway, NJ: IEEE
  149.
    Herlant LV, Holladay RM, Srinivasa SS. 2016. Assistive teleoperation of robot arms via automatic time-optimal mode switching. 2016 11th ACM/IEEE International Conference on Human-Robot Interaction, pp. 35–42. Piscataway, NJ: IEEE
  150.
    Hogan N, Sternad D. 2012. Dynamic primitives of motor behavior. Biol. Cybern. 106:727–39
  151.
    Losey DP, Srinivasan K, Mandlekar A, Garg A, Sadigh D. 2020. Controlling assistive robots with learned latent actions. 2020 IEEE International Conference on Robotics and Automation, pp. 378–84. Piscataway, NJ: IEEE
  152.
    Lasota PA, Fong T, Shah JA. 2017. A survey of methods for safe human-robot interaction. Found. Trends Robot. 5:261–349
  153.
    Heinzmann J, Zelinsky A. 2003. Quantitative safety guarantees for physical human-robot interaction. Int. J. Robot. Res. 22:479–504
  154.
    ISO. 2016. ISO/TS 15066:2016: Robots and robotic devices – collaborative robots. Stand., ISO, Geneva, Switz. https://www.iso.org/standard/62996.html
  155.
    Folkestad C, Chen Y, Ames AD, Burdick JW. 2020. Data-driven safety-critical control: synthesizing control barrier functions with Koopman operators. IEEE Control Syst. Lett. 5:2012–17
  156.
    Kulic D, Croft EA. 2007. Affective state estimation for human–robot interaction. IEEE Trans. Robot. 23:991–1000
  157.
    Brown DS, Schneider J, Dragan A, Niekum S. 2021. Value alignment verification. Proceedings of the 38th International Conference on Machine Learning, ed. M Meila, T Zhang, pp. 1105–15. Proc. Mach. Learn. Res. 139. N.p.: PMLR
  158.
    Kress-Gazit H, Eder K, Hoffman G, Admoni H, Argall B et al. 2021. Formalizing and guaranteeing human-robot interaction. Commun. ACM 64:978–84
  159.
    Gielniak MJ, Thomaz AL. 2012. Enhancing interaction through exaggerated motion synthesis. 2012 7th ACM/IEEE International Conference on Human-Robot Interaction, pp. 375–82. Piscataway, NJ: IEEE
  160.
    Dragan AD, Bauman S, Forlizzi J, Srinivasa SS. 2015. Effects of robot motion on human-robot collaboration. 2015 10th ACM/IEEE International Conference on Human-Robot Interaction, pp. 51–58. Piscataway, NJ: IEEE
  161.
    Hoffman G, Ju W. 2014. Designing robots with movement in mind. J. Hum.-Robot Interact. 3:91–122
  162.
    Rojas RA, Ruiz Garcia MA, Wehrle E, Vidoni R. 2019. A variational approach to minimum-jerk trajectories for psychological safety in collaborative assembly stations. IEEE Robot. Autom. Lett. 4:823–29
  163.
    Winfield L, Glassmire J, Colgate JE, Peshkin M. 2007. T-PaD: tactile pattern display through variable friction reduction. Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pp. 421–26. Piscataway, NJ: IEEE
  164.
    Culbertson H, Nunez CM, Israr A, Lau F, Abnousi F, Okamura AM. 2018. A social haptic device to create continuous lateral motion using sequential normal indentation. 2018 IEEE Haptics Symposium, pp. 32–39. Piscataway, NJ: IEEE
  165.
    Salvato M, Williams SR, Nunez CM, Zhu X, Israr A et al. 2021. Data-driven sparse skin stimulation can convey social touch information to humans. IEEE Trans. Haptics 15:392–404
  166.
    Parker AS, Edwards AL, Pilarski PM. 2019. Exploring the impact of machine-learned predictions on feedback from an artificial limb. 2019 IEEE 16th International Conference on Rehabilitation Robotics, pp. 1239–46. Piscataway, NJ: IEEE
  167.
    Brenneis DJ, Parker AS, Johanson MB, Butcher A, Davoodi E et al. 2022. Assessing human interaction in virtual reality with continually learning prediction agents based on reinforcement learning algorithms: a pilot study. Paper presented at the Adaptive and Learning Agents Workshop, International Conference on Autonomous Agents and Multiagent Systems, Auckland, N.Z., May 9–10
  168.
    Farkhatdinov I, Ryu JH, An J. 2010. A preliminary experimental study on haptic teleoperation of mobile robot with variable force feedback gain. 2010 IEEE Haptics Symposium, pp. 251–56. Piscataway, NJ: IEEE
  169.
    Dominijanni G, Shokur S, Salvietti G, Buehler S, Palmerini E et al. 2021. The neural resource allocation problem when enhancing human bodies with extra robotic limbs. Nat. Mach. Intell. 3:850–60
  170.
    Lee JM, Gebrekristos T, De Santis D, Javaremi MN, Gopinath D et al. 2022. Learning to control complex rehabilitation robot using high-dimensional interfaces. bioRxiv 2022.03.07.483341. https://doi.org/10.1101/2022.03.07.483341
  171.
    Kirby S, Hurford JR. 2002. The emergence of linguistic structure: an overview of the iterated learning model. Simulating the Evolution of Language, ed. A Cangelosi, D Parisi, pp. 121–47. London: Springer
  172.
    Tzorakoleftherakis E, Murphey TD, Scheidt RA. 2016. Augmenting sensorimotor control using goal-aware vibrotactile stimulation during reaching and manipulation behaviors. Exp. Brain Res. 234:2403–14
  173.
    Wagner K, Reggia JA, Uriagereka J, Wilkinson GS. 2003. Progress in the simulation of emergent communication and language. Adapt. Behav. 11:37–69
  174.
    Kalinowska A, Davoodi E, Mathewson KW, Murphey T, Pilarski PM. 2022. Towards situated communication in multi-step interactions: time is a key pressure in communication emergence. Proc. Annu. Meet. Cogn. Sci. Soc. 44:615–20
  175.
    Lazaridou A, Baroni M. 2020. Emergent multi-agent communication in the deep learning era. arXiv:2006.02419 [cs.CL]
  176.
    Crandall JW, Oudah M, Ishowo-Oloko F, Abdallah S, Bonnefon JF et al. 2018. Cooperating with machines. Nat. Commun. 9:233
  177.
    Tedersoo L, Küngas R, Oras E, Köster K, Eenmaa H et al. 2021. Data sharing practices and data availability upon request differ across scientific disciplines. Sci. Data 8:192
  178.
    Furmanek MP, Mangalam M, Yarossi M, Lockwood K, Tunik E. 2022. A kinematic and EMG dataset of online adjustment of reach-to-grasp movements to visual perturbations. Sci. Data 9:23
  179.
    Newman BA, Aronson RM, Srinivasa SS, Kitani K, Admoni H. 2022. HARMONIC: a multimodal dataset of assistive human–robot collaboration. Int. J. Robot. Res. 41:3–11
  180.
    Rabadi MH, Rabadi FM. 2006. Comparison of the action research arm test and the Fugl-Meyer assessment as measures of upper-extremity motor weakness after stroke. Arch. Phys. Med. Rehabil. 87:962–66
  181.
    Yuan L, Gao X, Zheng Z, Edmonds M, Wu YN et al. 2022. In situ bidirectional human-robot value alignment. Sci. Robot. 7:eabm4183