DOI: 10.1145/3379336.3381477
Poster

AI-Based 360-degree Video Generation from Monocular Video for Immersive Experience

Published: 17 March 2020
    Abstract

    We propose an artificial intelligence (AI)-based framework for generating 360-degree videos from videos recorded by monocular cameras. We also demonstrate immersive virtual-reality content generation using AI through a user-experience analysis that compares manually designed 360-degree videos with those generated by the proposed framework. Producing 360-degree videos conventionally requires special equipment, such as omnidirectional cameras. Because our framework applies to the vast number of existing cameras and videos, it increases the availability of 360-degree content. The framework operates in two steps. First, we generate a three-dimensional point cloud from the input video. Then, we apply AI-based methods to interpolate the sparse point cloud using geometric and semantic information. The framework is applicable to several uses, such as assisting the review of past traffic-accident videos and education that presents historical townscapes in 360 degrees.
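
    The paper itself includes no code; the sketch below is a minimal Python/NumPy illustration of the geometry behind the two-step pipeline the abstract describes: a sparse point cloud recovered from the monocular footage (e.g., by structure from motion) is reprojected into an equirectangular 360-degree frame, and the many empty pixels that result are exactly what the AI-based interpolation step must fill. The function names and image dimensions here are hypothetical, not the authors' implementation.

    import numpy as np

    def project_to_equirect(points_cam, width, height):
        # Map 3D points (N, 3), expressed relative to the virtual 360 camera,
        # to (u, v) pixel coordinates on an equirectangular image.
        x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
        r = np.linalg.norm(points_cam, axis=1) + 1e-9   # avoid division by zero
        lon = np.arctan2(x, z)                          # longitude in [-pi, pi]
        lat = np.arcsin(y / r)                          # latitude in [-pi/2, pi/2]
        u = (lon / np.pi + 1.0) * 0.5 * (width - 1)
        v = (lat / (np.pi / 2) + 1.0) * 0.5 * (height - 1)
        return np.stack([u, v], axis=1)

    def splat_points(points_cam, colors, width=2048, height=1024):
        # Rasterize colored points into a sparse 360-degree frame. The holes
        # left between projected points are what the second, AI-based step
        # (geometric and semantic interpolation) would need to fill in.
        pano = np.zeros((height, width, 3), dtype=np.uint8)
        uv = np.rint(project_to_equirect(points_cam, width, height)).astype(int)
        u = np.clip(uv[:, 0], 0, width - 1)
        v = np.clip(uv[:, 1], 0, height - 1)
        pano[v, u] = colors
        return pano

    Running splat_points on a structure-from-motion cloud of a street scene yields a panorama in which only a small fraction of pixels carry color; the sparsity of that splat is what motivates the learned interpolation step the abstract proposes.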


    Information

    Published In

    IUI '20 Companion: Companion Proceedings of the 25th International Conference on Intelligent User Interfaces
    March 2020
    153 pages
    ISBN: 978-1-4503-7513-9
    DOI: 10.1145/3379336
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. 360-degree video
    2. immersive VR
    3. neural networks

    Qualifiers

    • Poster
    • Research
    • Refereed limited

    Conference

    IUI '20

    Acceptance Rates

    Overall Acceptance Rate 746 of 2,811 submissions, 27%

