
H.264 Video Coding-Based Motion Estimation Architecture for Video Broadcasting from a Studio

Published in: Wireless Personal Communications

A Correction to this article was published on 15 February 2021


Abstract

Motion estimation (ME) in H.264 coding handles a huge amount of image data and demands heavy computation, so it is a strong candidate for improvement through hardware implementation and data-redundancy reduction. This paper proposes an H.264 coding-based motion estimation architecture for video broadcasting from a studio. To implement the motion estimation hardware, parallel processing is applied in the preprocessing and keypoint-finding stages, and the time-domain algorithm is replaced by a frequency-domain algorithm in order to isolate the informative data in the low-frequency range. The experimental results show that the proposed H.264 coding-based motion estimation architecture achieves a significant improvement while maintaining signal quality.
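As a rough illustration of the frequency-domain idea described above, the sketch below estimates the displacement between two image blocks by cross-correlating them through the FFT and keeping only low-frequency coefficients before the inverse transform. This is a minimal NumPy sketch of the general technique, not the paper's hardware design; the function names, the circular low-pass mask, and the cutoff ratio are illustrative assumptions.

```python
import numpy as np

def lowpass_mask(shape, cutoff_ratio=0.25):
    """Binary mask (in fftshift order) keeping only low spatial frequencies."""
    rows, cols = shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    radius = cutoff_ratio * min(rows, cols) / 2.0
    return ((y - cy) ** 2 + (x - cx) ** 2) <= radius ** 2

def estimate_motion_fft(ref_block, cur_block, cutoff_ratio=0.25):
    """Estimate the integer (dy, dx) shift of cur_block relative to ref_block.

    Cross-correlation is computed in the frequency domain; high-frequency
    coefficients are discarded so that only the low-frequency (most
    informative) content drives the match.
    """
    F_ref = np.fft.fft2(ref_block)
    F_cur = np.fft.fft2(cur_block)
    cross = F_cur * np.conj(F_ref)
    # The mask is built in centred (fftshift) order; move it back to FFT order.
    mask = np.fft.ifftshift(lowpass_mask(ref_block.shape, cutoff_ratio))
    corr = np.fft.ifft2(cross * mask).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map circular peak positions to signed displacements.
    if dy > ref_block.shape[0] // 2:
        dy -= ref_block.shape[0]
    if dx > ref_block.shape[1] // 2:
        dx -= ref_block.shape[1]
    return int(dy), int(dx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    cur = np.roll(ref, shift=(3, -5), axis=(0, 1))  # known shift of (3, -5)
    print(estimate_motion_fft(ref, cur))            # expected output: (3, -5)
```

One appeal of the masking step, consistent with the abstract's data-redundancy argument, is that coefficients outside the low-frequency region can simply be skipped, reducing the amount of data carried into the later matching stages.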




Acknowledgements

Financial support provided by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0144/2551) and by King Mongkut's University of Technology Thonburi is gratefully acknowledged.

Author information


Corresponding author

Correspondence to Kosin Chamnongthai.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original version of this article has been revised: The missing Acknowledgement section has been added.


About this article


Cite this article

Boonthep, N., Chamnongthai, K. & Phensadsaeng, P. H.264 Video Coding-Based Motion Estimation Architecture for Video Broadcasting from a Studio. Wireless Pers Commun 115, 2851–2874 (2020). https://doi.org/10.1007/s11277-020-07557-y
