Abstract

The ability of a robot to build a persistent, accurate, and actionable model of its surroundings from sensor data, in a timely manner, is crucial for autonomous operation. While representing the world as a point cloud might be sufficient for localization, denser scene representations are required for obstacle avoidance. Higher-level semantic information, in turn, is often essential for breaking a complex task, such as cooking, down into the steps needed to complete it autonomously. The looming question, then, is: What is a suitable scene representation for the robotic task at hand? This survey provides a comprehensive review of the key approaches and frameworks driving progress in robotic spatial perception, with a particular focus on the historical evolution of, and current trends in, scene representation. By categorizing scene modeling techniques into three main types (metric, metric–semantic, and metric–semantic–topological), we discuss how spatial perception frameworks are transitioning from purely geometric models of the world to richer data structures that incorporate higher-level concepts such as object instances and places. Special emphasis is placed on real-time simultaneous localization and mapping, its integration with deep learning for improved robustness and scene understanding, and its ability to handle dynamic scenes, as these are among the most active topics in robotics research today. We conclude with a discussion of open challenges and future research directions in the quest to develop robust and scalable spatial perception systems suitable for long-term autonomy.
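
To make this taxonomy concrete, below is a minimal, illustrative Python sketch of how the three representation tiers might be organized as data structures. All class and field names are hypothetical and do not come from any particular system covered by the survey; they serve only to show how a metric layer, semantic object instances, and a topological place graph can be layered within one model.

```python
# Illustrative sketch only: hypothetical names, not an API from the survey.
from dataclasses import dataclass, field


@dataclass
class MetricLayer:
    """Purely geometric model, e.g., a point cloud or occupied voxels."""
    points: list[tuple[float, float, float]] = field(default_factory=list)


@dataclass
class ObjectInstance:
    """Metric-semantic addition: geometry tagged with a class label."""
    label: str  # e.g., "stove"
    centroid: tuple[float, float, float]


@dataclass
class Place:
    """Topological addition: a node abstracting a traversable region."""
    name: str
    neighbors: list[str] = field(default_factory=list)  # edges to other places
    objects: list[ObjectInstance] = field(default_factory=list)


@dataclass
class SceneGraph:
    """Metric-semantic-topological model: the three tiers linked together."""
    metric: MetricLayer = field(default_factory=MetricLayer)
    places: dict[str, Place] = field(default_factory=dict)

    def add_edge(self, a: str, b: str) -> None:
        # Topological edges let a planner reason symbolically,
        # e.g., "go from the hallway to the kitchen."
        self.places[a].neighbors.append(b)
        self.places[b].neighbors.append(a)


if __name__ == "__main__":
    g = SceneGraph()
    g.places["kitchen"] = Place(
        "kitchen", objects=[ObjectInstance("stove", (1.0, 2.0, 0.0))]
    )
    g.places["hallway"] = Place("hallway")
    g.add_edge("kitchen", "hallway")
    print(g.places["kitchen"].neighbors)  # ['hallway']
```

In a hierarchy of this kind, a task planner can reason over place nodes and object labels, while geometric queries such as collision checks still fall through to the dense metric layer.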
