Photo-consistent synthesis of motion blur and depth-of-field effects with a real camera model

Published: 01 September 2012

Abstract

Depth-of-field (DOF) and motion blur are important visual cues used in computer graphics and photography to convey focus of attention and object motion. In this work, we present a method for photo-realistic DOF and motion blur generation based on the characteristics of a real camera system. Both the depth-blur relation for different camera focus settings and the nonlinear intensity response of the image sensor are modeled. The camera parameters are calibrated and used for defocus and motion blur synthesis. Given a well-focused image of a real scene, DOF and motion blur effects are generated by post-processing. Experiments show that the proposed method produces more photo-consistent results than commonly used graphics models.
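The abstract describes the pipeline only at a high level. As a rough illustration of the underlying idea (and not the authors' calibrated method), the Python sketch below synthesizes DOF from a sharp image plus a per-pixel depth map using the classical thin-lens circle-of-confusion relation with a Gaussian point-spread function, and motion blur as a 1-D box filter along an assumed motion direction; the paper instead calibrates the camera's measured depth-blur relation and its nonlinear sensor response. All function names, default parameter values, and the layered-compositing scheme are assumptions introduced for illustration. The image is assumed to be an H×W×3 array and the depth map an H×W array in meters.

```python
# Illustrative sketch only -- NOT the paper's calibrated pipeline. It assumes a
# thin-lens camera, approximates the defocus PSF with a Gaussian, and ignores the
# nonlinear sensor response that the paper explicitly models. All parameter
# values and function names are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def coc_radius_px(depth_m, focus_m, focal_len_m=0.05, f_number=2.8, pixel_m=5e-6):
    """Circle-of-confusion radius in pixels from the thin-lens model."""
    aperture_m = focal_len_m / f_number
    # Lens-to-sensor distance when focused at focus_m (thin-lens equation).
    v_focus = focal_len_m * focus_m / (focus_m - focal_len_m)
    coc_m = aperture_m * v_focus * np.abs(1.0 / focus_m - 1.0 / depth_m)
    return coc_m / (2.0 * pixel_m)

def synthesize_dof(image, depth_m, focus_m, n_layers=8):
    """Depth-of-field by blurring discrete depth layers and compositing them."""
    img = image.astype(np.float64)
    out = np.zeros_like(img)
    weight = np.zeros(img.shape[:2])
    edges = np.linspace(depth_m.min(), depth_m.max(), n_layers + 1)
    for near, far in zip(edges[:-1], edges[1:]):
        mask = ((depth_m >= near) & (depth_m <= far)).astype(np.float64)
        # One blur radius per layer, taken at the layer's mid depth.
        sigma = float(coc_radius_px(0.5 * (near + far), focus_m))
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
        soft_mask = gaussian_filter(mask, sigma=sigma)
        out += blurred * soft_mask[..., None]
        weight += soft_mask
    return out / np.maximum(weight, 1e-6)[..., None]

def synthesize_motion_blur(image, length_px=15):
    """Horizontal motion blur: a 1-D box filter along an assumed motion direction."""
    kernel = np.ones(length_px) / length_px
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1,
        arr=image.astype(np.float64))
```

Quantizing depth into layers keeps the sketch simple; a spatially varying convolution driven by the per-pixel blur radius would be the more faithful, but slower, alternative.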


Cited By

  • (2024) Benchmarking Object Detection Robustness against Real-World Corruptions. International Journal of Computer Vision, 132(10), 4398-4416. https://doi.org/10.1007/s11263-024-02096-6. Online publication date: 1-Oct-2024.
  • (2017) FPGA-based methodology for depth-of-field extension in a single image. Digital Signal Processing, 70(C), 14-23. https://doi.org/10.1016/j.dsp.2017.07.014. Online publication date: 1-Nov-2017.

Information

      Published In

      Image and Vision Computing, Volume 30, Issue 9
      September 2012
      91 pages

      Publisher

      Butterworth-Heinemann

      United States

      Author Tags

      1. Depth-of-field
      2. Image synthesis
      3. Motion blur
