RGB-D terrain perception and dense mapping for legged robots

Dominik Belter; Przemysław Łabecki; Péter Fankhauser; Roland Siegwart

International Journal of Applied Mathematics and Computer Science (2016)

  • Volume: 26, Issue: 1, pages 81-97
  • ISSN: 1641-876X

Abstract

This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism by incorporating a formal treatment of the spatial uncertainty. Moreover, this paper presents uncertainty models for a structured light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The model for the uncertainty of the stereo vision camera is based on uncertainty propagation from calibration, through undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used for the construction of a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.
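The abstract's central ideas can be illustrated with a short sketch. Below is a minimal Python illustration, assuming a pinhole stereo model and placeholder calibration values: a quadratic depth-noise model for Kinect-like sensors in the spirit of Khoshelham and Elberink (2012), first-order (Jacobian) propagation of pixel and disparity noise to the covariance of a triangulated 3D point following the classical treatment of Matthies and Shafer (1987), and a variance-weighted elevation-cell update in the spirit of Fankhauser et al. (2014). All constants and function names here are hypothetical; the paper propagates uncertainty further back, from calibration through undistortion and rectification, which this sketch omits.

import numpy as np

def kinect_depth_sigma(z, k=2.85e-5):
    # Structured-light RGB-D sensors exhibit random depth error that
    # grows roughly quadratically with distance (Khoshelham and
    # Elberink, 2012); the coefficient k is an assumed placeholder,
    # not a value taken from the paper.
    return k * z**2  # sigma_z [m] for depth z [m]

def stereo_point_covariance(u, v, d, f, b, sigma_px=0.5):
    # Pinhole stereo triangulation: X = b*u/d, Y = b*v/d, Z = f*b/d,
    # with focal length f [px], baseline b [m], disparity d [px].
    # Assumes independent, equal noise sigma_px on u, v and d -- a
    # simplification of the paper's full calibration-to-3D pipeline.
    J = np.array([
        [b / d, 0.0,   -b * u / d**2],   # d(X)/d(u, v, d)
        [0.0,   b / d, -b * v / d**2],   # d(Y)/d(u, v, d)
        [0.0,   0.0,   -f * b / d**2],   # d(Z)/d(u, v, d)
    ])
    return (sigma_px**2) * J @ J.T       # 3x3 covariance of (X, Y, Z)

def fuse_cell(h, var, z_meas, var_meas):
    # One-dimensional Kalman-style fusion of an elevation-cell
    # estimate (h, var) with a new height measurement -- a generic
    # variance-weighted update, not the paper's exact rule.
    gain = var / (var + var_meas)
    return h + gain * (z_meas - h), (1.0 - gain) * var

if __name__ == "__main__":
    C = stereo_point_covariance(u=20, v=0, d=15, f=580, b=0.06)
    print("stereo sigma_Z at d=15 px [m]:", np.sqrt(C[2, 2]))
    print("Kinect sigma_z at 2 m [m]:", kinect_depth_sigma(2.0))
    print("fused cell (h, var):", fuse_cell(h=0.10, var=0.01, z_meas=0.12, var_meas=0.02))

Per-point covariances of this kind are what the elevation-map update weighs against the existing cell estimate, so distant, noisy measurements perturb the map less than close, accurate ones.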

How to cite


Belter, Dominik, et al. "RGB-D terrain perception and dense mapping for legged robots." International Journal of Applied Mathematics and Computer Science 26.1 (2016): 81-97. <http://eudml.org/doc/276699>.

@article{DominikBelter2016,
abstract = {This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism by incorporating a formal treatment of the spatial uncertainty. Moreover, this paper presents uncertainty models for a structured light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The model for the uncertainty of the stereo vision camera is based on uncertainty propagation from calibration, through undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used for the construction of a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.},
author = {Dominik Belter and Przemysław Łabecki and Péter Fankhauser and Roland Siegwart},
journal = {International Journal of Applied Mathematics and Computer Science},
keywords = {RGB-D perception; elevation mapping; uncertainty; legged robots},
language = {eng},
number = {1},
pages = {81-97},
title = {RGB-D terrain perception and dense mapping for legged robots},
url = {http://eudml.org/doc/276699},
volume = {26},
year = {2016},
}

TY - JOUR
AU - Dominik Belter
AU - Przemysław Łabecki
AU - Péter Fankhauser
AU - Roland Siegwart
TI - RGB-D terrain perception and dense mapping for legged robots
JO - International Journal of Applied Mathematics and Computer Science
PY - 2016
VL - 26
IS - 1
SP - 81
EP - 97
AB - This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism by incorporating a formal treatment of the spatial uncertainty. Moreover, this paper presents uncertainty models for a structured light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The model for the uncertainty of the stereo vision camera is based on uncertainty propagation from calibration, through undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used for the construction of a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.
LA - eng
KW - RGB-D perception
KW - elevation mapping
KW - uncertainty
KW - legged robots
UR - http://eudml.org/doc/276699
ER -

References

  1. Belter, D., Łabecki, P. and Skrzypczyński, P. (2012). Estimating terrain elevation maps from sparse and uncertain multi-sensor data, IEEE 2012 International Conference on Robotics and Biomimetics, Guangzhou, China, pp. 715-722. 
  2. Belter, D., Łabecki, P. and Skrzypczyński, P. (n.d.). Adaptive motion planning for autonomous rough terrain traversal with a walking robot, Journal of Field Robotics, (in press). 
  3. Belter, D., Nowicki, M., Skrzypczyński, P., Walas, K. and Wietrzykowski, J. (2015). Lightweight RGB-D SLAM system for search and rescue robots, in R. Szewczyk, C. Zieliński and M. Kaliczyńska (Eds.), Recent Advances in Automation, Robotics and Measuring Techniques, Advances in Intelligent Systems and Computing, Vol. 351, Springer, Cham, pp. 11-21. 
  4. Belter, D. and Skrzypczyński, P. (2011a). Integrated motion planning for a hexapod robot walking on rough terrain, 18th IFAC World Congress, Milan, Italy, pp. 6918-6923. Zbl1243.68284
  5. Belter, D. and Skrzypczyński, P. (2011b). Rough terrain mapping and classification for foothold selection in a walking robot, Journal of Field Robotics 28(4): 497-528. Zbl1243.68284
  6. Belter, D. and Skrzypczyński, P. (2013). Precise self-localization of a walking robot on rough terrain using parallel tracking and mapping, Industrial Robot: An International Journal 40(3): 229-237. 
  7. Belter, D. and Walas, K. (2014). A compact walking robot-flexible research and development platform, in R. Szewczyk, C. Zieliński and M. Kaliczyńska (Eds.), Recent Advances in Automation, Robotics and Measuring Techniques, Advances in Intelligent Systems and Computing, Vol. 267, Springer, Cham, pp. 343-352. 
  8. Berger, M., Tagliasacchi, A., Seversky, L., Alliez, P., Levine, J., Sharf, A. and Silva, C. (2014). State of the art in surface reconstruction from point clouds, in S. Lefebvre and M. Spagnuolo (Eds.), Eurographics 2014 - State of the Art Reports, The Eurographics Association, Geneva. 
  9. Bloesch, M., Gehring, C., Fankhauser, P., Hutter, M., Hoepflinger, M.A. and Siegwart, R. (2013). State estimation for legged robots on unstable and slippery terrain, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, pp. 6058-6064. 
  10. Dey, T.K., Ge, X., Que, Q., Safa, I., Wang, L. and Wang, Y. (2012). Feature-preserving reconstruction of singular surfaces, Computer Graphics Forum 31(5): 1787-1796. 
  11. Dryanovski, I., Morris, W. and Xiao, J. (2010). Multi-volume occupancy grids: An efficient probabilistic 3D mapping model for micro aerial vehicles, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 1553-1559. 
  12. Fankhauser, P., Bloesch, M., Gehring, C., Hutter, M. and Siegwart, R. (2014). Robot-centric elevation mapping with uncertainty estimates, International Conference on Climbing and Walking Robots (CLAWAR), Poznań, Poland, pp. 433-440. 
  13. Handa, A., Whelan, T., McDonald, J. and Davison, A. (2014). A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM, IEEE International Conference on Robotics and Automation, ICRA, Hong Kong, China, pp. 1524-1531. 
  14. Hebert, M., Caillas, C., Krotkov, E. and Kweon, I. (1989). Terrain mapping for a roving planetary explorer, Proceedings of the IEEE International Conference on Robotics and Automation, Scottsdale, AZ, USA, pp. 997-1002. 
  15. Hornung, A., Wurm, K., Bennewitz, M., Stachniss, C. and Burgard, W. (2013). OctoMap: An efficient probabilistic 3D mapping framework based on octrees, Autonomous Robots 34(3): 189-206. 
  16. Hutter, M., Gehring, C., Bloesch, M., Hoepflinger, M.A., Remy, C.D. and Siegwart, R. (2012). StarlETH: A compliant quadrupedal robot for fast, efficient, and versatile locomotion, International Conference on Climbing and Walking Robots (CLAWAR), Baltimore, MD, USA, pp. 483-490. 
  17. Kleiner, A. and Dornhege, C. (2007). Real-time localization and elevation mapping within urban search and rescue scenarios, Journal of Field Robotics 24(8-9): 723-745. 
  18. Khoshelham, K. and Elberink, S. (2012). Accuracy and resolution of Kinect depth data for indoor mapping applications, Sensors 12(2): 1437-1454. 
  19. Kolter, J., Kim, Y. and Ng, A. (2009). Stereo vision and terrain modeling for quadruped robots, Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 1557-1564. 
  20. Konolige, K. (1997). Small vision systems: Hardware and implementation, 8th International Symposium on Robotics Research, Monterey, CA, USA, pp. 111-116. 
  21. Kweon, I. and Kanade, T. (1992). High-resolution terrain map from multiple sensor data, IEEE Transactions on Pattern Analysis and Machine Intelligence 14(2): 278-292. 
  22. Łabecki, P. and Belter, D. (2014). RGB-D based mapping method for a legged robot, in K. Tchoń and C. Zieliński (Eds.), Zeszyty Naukowe Politechniki Warszawskiej, Warsaw University of Technology Press, Warsaw, pp. 297-306, (in Polish). 
  23. Łabecki, P. and Skrzypczyński, P. (2013). Spatial uncertainty assessment in visual terrain perception for a mobile robot, in J. Korbicz and M. Kowal (Eds.), Intelligent Systems in Technical and Medical Diagnostics, Advances in Intelligent Systems and Computing, Vol. 230, Springer-Verlag, Berlin, pp. 357-368. 
  24. Matthies, L. and Shafer, S. (1987). Error modeling in stereo navigation, International Journal of Robotics and Automation 3(3): 239-248. 
  25. Nowicki, M. and Skrzypczyński, P. (2013). Combining photometric and depth data for lightweight and robust visual odometry, European Conference on Mobile Robots, Barcelona, Spain, pp. 125-130. 
  26. Park, J.-H., Shin, Y.-D., Bae, J.-H. and Baeg, M.-H. (2012). Spatial uncertainty model for visual features using a kinect sensor, Sensors 12(7): 8640-8662. 
  27. Pfaff, P., Triebel, R. and Burgard, W. (2007). An efficient extension to elevation maps for outdoor terrain mapping and loop closing, International Journal of Robotics Research 26(2): 217-230. 
  28. Plagemann, C., Mischke, S., Prentice, S., Kersting, K., Roy, N. and Burgard, W. (2008). Learning predictive terrain models for legged robot locomotion, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, pp. 3545-3552. Zbl1243.68295
  29. Poppinga, J., Birk, A. and Pathak, K. (2010). A characterization of 3D sensors for response robots, in J. Baltes et al. (Eds.), RoboCup 2009, Lecture Notes in Artificial Intelligence, Vol. 5949, Springer, Berlin, pp. 264-275. 
  30. Rusu, R., Sundaresan, A., Morisset, B., Hauser, K., Agrawal, M., Latombe, J.-C. and Beetz, M. (2009). Leaving flatland: Efficient real-time three-dimensional perception and motion planning, Journal of Field Robotics 26(10): 841-862. 
  31. Saarinen, J., Andreasson, H., Stoyanov, T. and Lilienthal, A.J. (2013). 3D normal distributions transform occupancy maps: An efficient representation for mapping in dynamic environments, International Journal of Robotics Research 32(14): 1627-1644. 
  32. Sahabi, H. and Basu, A. (1996). Analysis of error in depth perception with vergence and spatially varying sensing, Computer Vision and Image Understanding 63(3): 447-461. 
  33. Sharf, A., Lewiner, T., Shklarski, G., Toledo, S. and Cohen-Or, D. (2007). Interactive topology-aware surface reconstruction, ACM Transactions on Graphics 26(3), Article No. 43. 
  34. Skrzypczyński, P. (2007). Spatial uncertainty management for simultaneous localization and mapping, Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, pp. 4050-4055. 
  35. Skrzypczyński, P. (2009). Simultaneous localization and mapping: A feature-based probabilistic approach, International Journal of Applied Mathematics and Computer Science 19(4): 575-588, DOI: 10.2478/v10006-009-0045-z. Zbl1300.93157
  36. Stelzer, A., Hirschmüller, H. and Görner, M. (2012). Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain, International Journal of Robotics Research 31(4): 381-402. 
  37. Szeliski, R. (2011). Computer Vision: Algorithms and Applications, Springer, London. Zbl1219.68009
  38. Thrun, S., Burgard, W. and Fox, D. (2005). Probabilistic Robotics (Intelligent Robotics and Autonomous Agents), The MIT Press, Cambridge, MA. Zbl1081.68703
  39. Walas, K. and Belter, D. (2011). Supporting locomotive functions of a six-legged walking robot, International Journal of Applied Mathematics and Computer Science 21(2): 363-377, DOI: 10.2478/v10006-011-0027-9. Zbl1282.93191
  40. Walas, K. and Nowicki, M. (2014). Terrain classification using Laser Range Finder, 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, pp. 5003-5009. 
  41. Ye, C. and Borenstein, J. (2004). A novel filter for terrain mapping with laser rangefinders, IEEE Transactions on Robotics and Automation 20(5): 913-921. 
  42. Yoon, S., Hyung, S., Lee, M., Roh, K., Ahn, S., Gee, A., Bunnun, P., Calway, A. and Mayol-Cuevas, W. (2013). Real-time 3D simultaneous localization and map-building for a dynamic walking humanoid robot, Advanced Robotics 27(10): 759-772. 
  43. Zucker, M., Bagnell, J., Atkeson, C. and Kuffner, J. (2010). An optimization approach to rough terrain locomotion, IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, pp. 3589-3595. 
