Abstract
Mobile robots need an internal representation of their environment to do useful things. Usually such a representation is some sort of geometric model. For our robot, which is equipped with a panoramic vision system, we chose an appearance model in which the sensory data (in our case the panoramic images) have to be modeled as a function of the robot position. Because images are very high-dimensional vectors, feature extraction is needed before the modeling step. Very often a linear dimension reduction is used, where the projection matrix is obtained from a Principal Component Analysis (PCA). PCA is optimal for the reconstruction of the data, but not necessarily the best linear projection for the localization task. We derived a method which extracts linear features that are optimal with respect to a risk measure reflecting the localization performance. We tested the method on a real navigation problem and compared it with an approach using PCA features.
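For concreteness, the sketch below illustrates the PCA baseline the abstract contrasts against: flattened panoramic images are projected onto their leading principal components, and a query image is localized by a nearest-neighbour lookup in the reduced feature space. This is a minimal illustrative sketch, not the authors' risk-optimized feature extraction; the nearest-neighbour step, the array shapes, and all function names are assumptions made for the example.

```python
# Minimal PCA-baseline sketch for appearance-based localization.
# Assumed setup: training panoramas flattened to vectors, each paired
# with the (x, y) robot position at which it was taken.
import numpy as np

def pca_projection(images, n_components):
    """images: (N, D) array of flattened training images."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Principal axes via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # (n_components, D) projection matrix
    features = centered @ basis.T    # (N, n_components) training features
    return mean, basis, features

def localize(query, mean, basis, features, positions):
    """Return the training position whose projected image is closest."""
    q = (query - mean) @ basis.T
    idx = np.argmin(np.linalg.norm(features - q, axis=1))
    return positions[idx]

# Usage with random stand-in data (real data would be panoramic images
# recorded along a robot trajectory together with the corresponding poses):
rng = np.random.default_rng(0)
train_images = rng.random((100, 64 * 256))   # 100 flattened panoramas
train_positions = rng.random((100, 2))       # (x, y) robot positions
mean, basis, feats = pca_projection(train_images, n_components=10)
estimate = localize(train_images[0], mean, basis, feats, train_positions)
```

The paper's contribution replaces the PCA projection matrix with one chosen to minimize a localization risk measure; only the baseline projection and lookup are sketched here.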
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Kröse, B.J.A., Vlassis, N., Bunschoten, R. (2002). Omnidirectional Vision for Appearance-Based Robot Localization. In: Hager, G.D., Christensen, H.I., Bunke, H., Klein, R. (eds) Sensor Based Intelligent Robots. Lecture Notes in Computer Science, vol 2238. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45993-6_3
DOI: https://doi.org/10.1007/3-540-45993-6_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-43399-6
Online ISBN: 978-3-540-45993-4