Visual simultaneous localisation and map-building supported by structured landmarks

Robert Bączyk; Andrzej Kasiński

International Journal of Applied Mathematics and Computer Science (2010)

  • Volume: 20, Issue: 2, pages 281-293
  • ISSN: 1641-876X

Abstract

Visual simultaneous localisation and map-building (SLAM) systems that exploit landmarks other than point-wise environment features are rarely reported. This paper describes a method in which the operational map of the robot's surroundings is complemented with visible, structured, passive landmarks. These landmarks are used to improve the self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state vector compared with a vector composed of point-wise environment features only. Structured landmarks reduce the drift of the camera pose estimate and improve the reliability of the map built on-line. Results of simulation experiments demonstrating the advantages of this approach are presented.
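
To make the state-vector argument concrete, the short Python sketch below (not taken from the paper; the camera-state and landmark dimensionalities are illustrative assumptions, e.g. a MonoSLAM-style 13-element camera state, 3-D point features and a 6-DOF pose per structured landmark) compares the length of the Kalman-filter state vector when a few structured landmarks replace clusters of point-wise features.

# Minimal sketch, assuming illustrative dimensions; not the authors' implementation.
# Assumed: a MonoSLAM-style camera state (position, orientation quaternion,
# linear and angular velocity = 13 states), 3-D point features, and a 6-DOF
# pose for each structured landmark.
CAMERA_DIM = 13
POINT_DIM = 3
STRUCTURED_DIM = 6

def state_size(n_points, n_structured):
    """Length of the Kalman-filter state vector for a given feature mix."""
    return CAMERA_DIM + n_points * POINT_DIM + n_structured * STRUCTURED_DIM

# If each structured landmark (e.g. a planar marker of known geometry) stands in
# for a cluster of point features that would otherwise be mapped individually,
# the state vector shrinks considerably.
points_only = state_size(n_points=60, n_structured=0)   # 13 + 180 = 193 states
with_markers = state_size(n_points=20, n_structured=4)  # 13 + 60 + 24 = 97 states

# The filter covariance is quadratic in the state length, so the update cost
# drops roughly with the squared ratio of the two sizes.
print(points_only, with_markers, round((points_only / with_markers) ** 2, 1))

Because the filter covariance grows quadratically with the state length, even a modest reduction of this kind lowers the update cost noticeably, which is consistent with the benefits the abstract attributes to structured landmarks.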

How to cite

Bączyk, Robert, and Andrzej Kasiński. "Visual simultaneous localisation and map-building supported by structured landmarks." International Journal of Applied Mathematics and Computer Science 20.2 (2010): 281-293. <http://eudml.org/doc/207987>.

@article{RobertBączyk2010,
abstract = {Visual simultaneous localisation and map-building (SLAM) systems that exploit landmarks other than point-wise environment features are rarely reported. This paper describes a method in which the operational map of the robot's surroundings is complemented with visible, structured, passive landmarks. These landmarks are used to improve the self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state vector compared with a vector composed of point-wise environment features only. Structured landmarks reduce the drift of the camera pose estimate and improve the reliability of the map built on-line. Results of simulation experiments demonstrating the advantages of this approach are presented.},
author = {Robert Bączyk and Andrzej Kasiński},
journal = {International Journal of Applied Mathematics and Computer Science},
keywords = {mobile robots; navigation; SLAM; artificial landmarks},
language = {eng},
number = {2},
pages = {281-293},
title = {Visual simultaneous localisation and map-building supported by structured landmarks},
url = {http://eudml.org/doc/207987},
volume = {20},
year = {2010},
}

TY - JOUR
AU - Robert Bączyk
AU - Andrzej Kasiński
TI - Visual simultaneous localisation and map-building supported by structured landmarks
JO - International Journal of Applied Mathematics and Computer Science
PY - 2010
VL - 20
IS - 2
SP - 281
EP - 293
AB - Visual simultaneous localisation and map-building (SLAM) systems that exploit landmarks other than point-wise environment features are rarely reported. This paper describes a method in which the operational map of the robot's surroundings is complemented with visible, structured, passive landmarks. These landmarks are used to improve the self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state vector compared with a vector composed of point-wise environment features only. Structured landmarks reduce the drift of the camera pose estimate and improve the reliability of the map built on-line. Results of simulation experiments demonstrating the advantages of this approach are presented.
LA - eng
KW - mobile robots
KW - navigation
KW - SLAM
KW - artificial landmarks
UR - http://eudml.org/doc/207987
ER -

References

  1. Bailey, T. and Durrant-Whyte, H. (2006). Simultaneous localization and mapping (SLAM): Part II, IEEE Robotics & Automation Magazine 13(3): 108-117. 
  2. Bar-Shalom, Y., Kirubarajan, T. and Li, X.-R. (2002). Estimation with Applications to Tracking and Navigation, John Wiley & Sons, Inc., New York, NY. 
  3. Bączyk, R., Kasiński, A. and Skrzypczyński, P. (2003). Vision-based mobile robot localization with simple artificial landmarks, 7th International IFAC Symposium on Robot Control (SYROCO), Wrocław, Poland, pp. 217-222.
  4. Castle, R.O., Gawley, D.J., Klein, G. and Murray, D.W. (2007a). Towards simultaneous recognition, localization and mapping for hand-held and wearable cameras, International Conference on Robotics and Automation (ICRA), Rome, Italy, pp. 4102-4107. 
  5. Castle, R.O., Gawley, D.J., Klein, G. and Murray, D.W. (2007b). Video-rate recognition and localization for wearable cameras, Proceedings of the 18th British Machine Vision Conference (BMVC), Warwick, UK, pp. 1100-1109. 
  6. Civera, J., Davison, A. and Montiel, J. (2008). Inverse depth parametrization for monocular SLAM, IEEE Transactions on Robotics 24(5): 932-945. 
  7. Clemente, L.A., Davison, A., Reid, I., Neira, J. and Tardos, J. (2007). Mapping large loops with a single hand-held camera, Robotics: Science and Systems (RSS), Georgia Institute of Technology, Atlanta, GA.
  8. Davison, A. (2003). Real-time simultaneous localisation and mapping with a single camera, Ninth IEEE International Conference on Computer Vision (ICCV), Nice, France, Vol. 2, pp. 1403-1410. 
  9. Davison, A.J. and Murray, D.W. (2002). Simultaneous localisation and map-building using active vision, IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7): 865-880. 
  10. Davison, A.J., Reid, I.D., Molton, N.D. and Stasse, O. (2007). MonoSLAM: Real-time single camera SLAM, IEEE Transactions on Pattern Analysis and Machine Intelligence 29(6): 1052-1067. 
  11. Dissanayake, G., Newman, P., Clark, S., Durrant-Whyte, H.F. and Csorba, M. (2001). A solution to the simultaneous localization and map building (SLAM) problem, IEEE Transactions on Robotics and Automation 17(2): 229-241. 
  12. Durrant-Whyte, H. and Bailey, T. (2006). Simultaneous localization and mapping: Part I, IEEE Robotics & Automation Magazine 13(2): 99-110. 
  13. Eade, E. and Drummond, T. (2009). Edge landmarks in monocular SLAM, Image and Vision Computing 27(5): 588-596. 
  14. Gee, A.P., Chekhlov, D., Calway, A. and Mayol-Cuevas, W. (2008). Discovering higher level structure in visual SLAM, IEEE Transactions on Robotics 24(5): 980-990. 
  15. Gee, A.P. and Mayol-Cuevas, W. (2006). Real-time model-based SLAM using line segments, 2nd International Symposium on Visual Computing, Lake Tahoe, NV, USA, pp. 354-363. 
  16. Haralick, R.M. (2000). Performance characterization in computer vision, in R. Klette, H.S. Stiehl, M.A. Viergever and K.L. Vincken (Eds.), Proceedings of the Theoretical Foundations of Computer Vision, TFCV on Performance Characterization in Computer Vision, Kluwer B.V., Deventer, pp. 95-114.
  17. Neira, J., Davison, A.J. and Leonard, J.J. (2008). Guest editorial special issue on visual SLAM, IEEE Transactions on Robotics 24(5): 929-931. 
  18. Smith, P., Reid, I. and Davison, A.J. (2006). Real-time monocular SLAM with straight lines, British Machine Vision Conference (BMVC), Edinburgh, UK, Vol. 1, pp. 17-26. 
  19. Thrun, S., Montemerlo, M., Dahlkamp, H., Stavens, D., Aron, A., Diebel, J., Fong, P., Gale, J., Halpenny, M., Hoffmann, G., Lau, K., Oakley, C.M., Palatucci, M., Pratt, V., Stang, P., Strohband, S., Dupont, C., Jendrossek, L.-E., Koelen, C., Markey, C., Rummel, C., van Niekerk, J., Jensen, E., Alessandrini, P., Bradski, G.R., Davies, B., Ettinger, S., Kaehler, A., Nefian, A.V. and Mahoney, P. (2006). Stanley: The robot that won the DARPA Grand Challenge, Journal of Field Robotics 23(9): 661-692. 
