Detecting moving objects

International Journal of Computer Vision

Abstract

The detection of moving objects is important in many tasks. This paper examines moving object detection based primarily on optical flow. We conclude that in realistic situations, detection using visual information alone is quite difficult, particularly when the camera may also be moving. The availability of additional information about camera motion and/or scene structure greatly simplifies the problem. Two general classes of techniques are examined. The first is based upon the motion epipolar constraint—translational motion produces a flow field radially expanding from a “focus of expansion” (FOE). Epipolar methods depend on knowing at least partial information about camera translation and/or rotation. The second class of methods is based on comparison of observed optical flow with other information about depth, for example from stereo vision. Examples of several of these techniques are presented.
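As a rough illustration of the first class of techniques described above, the Python sketch below checks the translational epipolar (FOE) constraint: under pure camera translation with a known focus of expansion, flow at static points must point radially away from the FOE, so vectors that deviate strongly from that radial direction are candidates for independently moving objects. The function name, angular threshold, and the assumptions of pure translation and a known FOE are illustrative choices, not the paper's formulation.

```python
import numpy as np

def foe_inconsistency(points, flows, foe, angle_thresh_deg=15.0):
    """Flag flow vectors that violate the translational epipolar (FOE) constraint.

    points : (N, 2) array of pixel coordinates (x, y)
    flows  : (N, 2) array of optical flow vectors at those pixels
    foe    : (2,) assumed focus of expansion in image coordinates
    Returns a boolean mask; True marks candidate independently moving points.
    """
    radial = points - foe                          # direction static-scene flow should take
    radial_norm = np.linalg.norm(radial, axis=1)
    flow_norm = np.linalg.norm(flows, axis=1)
    valid = (radial_norm > 1e-6) & (flow_norm > 1e-6)

    # Angle between each observed flow vector and the radial direction from the FOE.
    cos_angle = np.einsum("ij,ij->i", radial, flows) / (radial_norm * flow_norm + 1e-12)
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # For a translating camera, static points flow away from the FOE; a large angular
    # deviation suggests an object with its own motion (or noisy flow).
    return valid & (angle_deg > angle_thresh_deg)
```

For the second class of techniques, one would instead compare the observed flow at each pixel with the flow predicted from independently measured depth (e.g., from stereo) and the known camera motion, flagging pixels where the two disagree by more than the expected measurement error.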


Additional information

A preliminary version of this article appeared in The Proceedings of the First International Conference on Computer Vision, London, June 1987.

Cite this article

Thompson, W.B., Pong, T.C. Detecting moving objects. Int J Comput Vision 4, 39–57 (1990). https://doi.org/10.1007/BF00137442
