Research article · DOI: 10.1145/3423328.3423500

VMP360: Adaptive 360° Video Streaming Based on Optical Flow Estimated QoE

Published: 15 October 2020

Abstract

By packing full panoramic content into a single frame and providing an immersive experience, 360° video has attracted great attention in both industry and academia. Viewport-driven tiling schemes have been introduced into 360° video processing to enable high-quality streaming. However, treating the viewport like a traditional streaming screen leads to frequent rebuffering or quality distortion, and thus to poor Quality of Experience (QoE). In this paper, we propose Viewpoint Movement Perception 360° Video Streaming (VMP360), an adaptive 360° video streaming system that exploits factors unique to how users perceive 360° video quality in order to improve overall QoE. By studying the relative moving speed and depth difference between the viewpoint and the surrounding content, the system evaluates perceived quality distortion based on optical flow estimation. Building on this, a novel QoE-aware 360° video quality metric, the Optical-flow-based Peak Signal-to-Noise Ratio (OPSNR), is defined. Applying OPSNR to the tiling process, VMP360 adopts a versatile-size tiling scheme, and Reinforcement Learning (RL) is then used to realize Adaptive Bit Rate (ABR) selection for the tiles. VMP360 is evaluated in a client-server streaming system against two prior schemes, Pano and Plato. Results show that the proposed scheme improves 360° video quality by 10.1% while maintaining the same rebuffering ratio as Pano and Plato, confirming that VMP360 can deliver high QoE for 360° video streaming. The code of a prototype can be found at https://github.com/buptexplorers/OFB-VR.
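The abstract describes OPSNR only at a high level: distortion is weighted by the relative motion (and depth difference) between the viewpoint and the surrounding content, as estimated from optical flow. The exact formula is not given here, so the following Python sketch is an illustration only, with an assumed weighting (faster-moving pixels count less toward perceived error) and OpenCV's Farneback estimator standing in for whatever flow estimator VMP360 actually uses; it is not the paper's OPSNR definition.

```python
# Minimal sketch of an optical-flow-weighted PSNR for one tile.
# Assumption: perceived error is down-weighted where motion is fast; this is
# an illustrative stand-in, not the OPSNR formula defined in the paper.
import cv2
import numpy as np

def motion_weighted_psnr(ref_prev, ref_curr, decoded_curr, eps=1e-8):
    """Score decoded_curr against ref_curr, weighting errors by inverse motion.

    ref_prev, ref_curr : consecutive reference (source) frames of one tile (BGR)
    decoded_curr       : the decoded/streamed version of ref_curr (BGR)
    """
    # Dense optical flow on the reference pair as a proxy for the relative
    # motion between the viewpoint and the content.
    g0 = cv2.cvtColor(ref_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(ref_curr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)          # per-pixel motion magnitude

    # Assumed weighting: fast-moving pixels contribute less to perceived error.
    w = 1.0 / (1.0 + speed)
    err = (ref_curr.astype(np.float64) - decoded_curr.astype(np.float64)) ** 2
    wmse = (w[..., None] * err).sum() / (w.sum() * err.shape[-1] + eps)
    return 10.0 * np.log10(255.0 ** 2 / (wmse + eps))
```

In a tiling pipeline of the kind the abstract outlines, such a per-tile score could be compared across candidate tile sizes and bitrates before handing the final decision to the RL-based ABR controller.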

References

[1]
V. Petrock. Virtual and augmented reality users 2019, March 2019.
[2]
Leire Amezua Hormaza, Wael M. Mohammed, Borja Ramis Ferrer, Ronal Bejarano, and Jose L. Martinez Lastra. On-line training and monitoring of robot tasks through virtual reality. In 2019 IEEE 17th International Conference on Industrial Informatics (INDIN), 2019.
[3]
T. Hatchard, F. Azmat, M. Al-Amin, Z. Rihawi, B. Ahmed, and A. Alsebae. Examining student response to virtual reality in education and training. In 2019 IEEE 17th International Conference on Industrial Informatics (INDIN), volume 1, pages 1145--1149, 2019.
[4]
Simone Mangiante, Guenter Klas, Amit Navon, Zhuang GuanHua, Ju Ran, and Marco Dias Silva. Vr is on the edge: How to deliver 360 videos in mobile networks. In Proceedings of the Workshop on Virtual Reality and Augmented Reality Network, pages 30--35, 2017.
[5]
Fanyi Duanmu, Eymen Kurdoglu, S Amir Hosseini, Yong Liu, and Yao Wang. Prioritized buffer control in two-tier 360 video streaming. In Proceedings of the Workshop on Virtual Reality and Augmented Reality Network, pages 13--18, 2017.
[6]
Chao Zhou, Mengbai Xiao, and Yao Liu. Clustile: Toward minimizing bandwidth in 360-degree video streaming. In IEEE INFOCOM 2018-IEEE Conference on Computer Communications, pages 962--970. IEEE, 2018.
[7]
Bo Han, Vijay Gopalakrishnan, Zhengye Liu, and Feng Qian. Priority-based tile transmission system and method for panoramic video streaming, March 5 2020. US Patent App. 16/122,584.
[8]
Xiaolan Jiang, Yi-Han Chiang, Yang Zhao, and Yusheng Ji. Plato: Learning-based adaptive streaming of 360-degree videos. In 2018 IEEE 43rd Conference on Local Computer Networks (LCN), pages 393--400. IEEE, 2018.
[9]
Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2462--2470, 2017.
[10]
Xavier Corbillon, Gwendal Simon, Alisa Devlic, and Jacob Chakareski. Viewport-adaptive navigable 360-degree video delivery. In 2017 IEEE international conference on communications (ICC), pages 1--7. IEEE, 2017.
[11]
Yixuan Ban, Lan Xie, Zhimin Xu, Xinggong Zhang, Zongming Guo, and Yue Wang. Cub360: Exploiting cross-users behaviors for viewport prediction in 360 video adaptive streaming. In 2018 IEEE International Conference on Multimedia and Expo (ICME), pages 1--6. IEEE, 2018.
[12]
Ching-Ling Fan, Jean Lee, Wen-Chih Lo, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. Fixation prediction for 360 video streaming in head-mounted virtual reality. In Proceedings of the 27th Workshop on Network and Operating Systems Support for Digital Audio and Video, pages 67--72, 2017.
[13]
Liyang Sun, Yixiang Mao, Tongyu Zong, Yong Liu, and Yao Wang. Flocking-based live streaming of 360-degree video. In Proceedings of the 11th ACM Multimedia Systems Conference, MMSys '20, pages 26--37, New York, NY, USA, 2020. Association for Computing Machinery.
[14]
X. Zhang, G. Cheung, P. Le Callet, and J. Z. G. Tan. Sparse directed graph learning for head movement prediction in 360 video streaming. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2678--2682, 2020.
[15]
Shu Shi, Varun Gupta, and Rittwik Jana. Freedom: Fast recovery enhanced vr delivery over mobile networks. In Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services, pages 130--141, 2019.
[16]
Vamsidhar Reddy Gaddam, Michael Riegler, Ragnhild Eg, Carsten Griwodz, and Pål Halvorsen. Tiling in interactive panoramic video: Approaches and evaluation. IEEE Transactions on Multimedia, 18(9):1819--1831, 2016.
[17]
Jian He, Mubashir Adnan Qureshi, Lili Qiu, Jin Li, Feng Li, and Lei Han. Rubiks: Practical 360-degree streaming for smartphones. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services, pages 482--494, 2018.
[18]
Yuanxing Zhang, Yushuo Guan, Kaigui Bian, Yunxin Liu, Hu Tuo, Lingyang Song, and Xiaoming Li. Epass360: Qoe-aware 360-degree video streaming over mobile devices. IEEE Transactions on Mobile Computing, 2020.
[19]
Yu Guan, Chengyuan Zheng, Xinggong Zhang, Zongming Guo, and Junchen Jiang. Pano: Optimizing 360 video streaming with a better understanding of quality perception. In Proceedings of the ACM Special Interest Group on Data Communication, pages 394--407. 2019.
[20]
Zesong Fei, Fei Wang, Jing Wang, and Xiang Xie. Qoe evaluation methods for 360-degree vr video transmission. IEEE Journal of Selected Topics in Signal Processing, 14(1):78--88, 2019.
[21]
Chengjun Guo, Ying Cui, and Zhi Liu. Optimal multicast of tiled 360 vr video. IEEE Wireless Communications Letters, 8(1):145--148, 2018.
[22]
Kaixuan Long, Ying Cui, Chencheng Ye, and Zhi Liu. Optimal transmission of multi-quality tiled 360 vr video by exploiting multicast opportunities. In 2019 IEEE Global Communications Conference (GLOBECOM), pages 1--6. IEEE, 2019.
[23]
Mingzhe Chen, Omid Semiari, Walid Saad, Xuanlin Liu, and Changchuan Yin. Federated deep learning for immersive virtual reality over wireless networks. In 2019 IEEE Global Communications Conference (GLOBECOM), pages 1--6. IEEE, 2019.
[24]
Nikil Jayant, James Johnston, and Robert Safranek. Signal compression based on models of human perception. Proceedings of the IEEE, 81(10):1385--1422, 1993.
[25]
Yin Zhao, Lu Yu, Zhenzhong Chen, and Ce Zhu. Video quality assessment based on measuring perceptual noise from spatial and temporal perspectives. IEEE Transactions on Circuits and Systems for Video Technology, 21(12):1890--1902, 2011.
[26]
Rakesh Agrawal, Johannes Gehrke, Dimitrios Gunopulos, and Prabhakar Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. In Proceedings of the 1998 ACM SIGMOD international conference on Management of data, pages 94--105, 1998.
[27]
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928--1937, 2016.
[28]
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[29]
Feng Qian, Bo Han, Qingyang Xiao, and Vijay Gopalakrishnan. Flare: Practical viewport-adaptive 360-degree video streaming for mobile devices. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, pages 99--114, 2018.
[30]
Xiaoqi Yin, Abhishek Jindal, Vyas Sekar, and Bruno Sinopoli. A control-theoretic approach for dynamic adaptive video streaming over http. In Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, pages 325--338, 2015.
[31]
Chenglei Wu, Zhihao Tan, Zhi Wang, and Shiqiang Yang. A dataset for exploring user behaviors in vr spherical video streaming. In Proceedings of the 8th ACM on Multimedia Systems Conference, pages 193--198, 2017.
[32]
Jeroen Van Der Hooft, Stefano Petrangeli, Tim Wauters, Rafael Huysegems, Patrice Rondao Alface, Tom Bostoen, and Filip De Turck. Http/2-based adaptive streaming of hevc video over 4g/lte networks. IEEE Communications Letters, 20(11):2177--2180, 2016.

Cited By

  • Off-Policy: Soft Actor-Critic-based Adaptive Streaming for 360-degree Video in Heterogeneous Wireless Networks. In 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP), pages 1--6. DOI: 10.1109/WCSP52459.2021.9613395. Online publication date: 20 October 2021.

    Information

    Published In

    cover image ACM Conferences
    QoEVMA'20: Proceedings of the 1st Workshop on Quality of Experience (QoE) in Visual Multimedia Applications
    October 2020
    60 pages
    ISBN:9781450381581
    DOI:10.1145/3423328

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 15 October 2020

    Author Tags

    1. adaptive 360 video streaming
    2. optical flow
    3. quality of experience (qoe)
    4. reinforcement learning (rl)

    Qualifiers

    • Research-article

    Funding Sources

    • Industrial Internet Research Institute (Jinan) of Beijing University of Posts and Telecommunications

    Conference

    MM '20

    Acceptance Rates

    Overall Acceptance Rate 14 of 20 submissions, 70%

