DOI: 10.1145/3372224.3419196

C-14: assured timestamps for drone videos

Published: 18 September 2020

Abstract

Inexpensive and highly capable unmanned aerial vehicles (aka drones) have enabled people to contribute high-quality videos at a global scale. However, a key challenge exists for accepting videos from untrusted sources: establishing when a particular video was taken. Once a video has been received or posted publicly, it is evident that the video was created before that time, but there are no current methods for establishing how long before that time it was made.
We propose C-14, a system that assures the earliest timestamp, tb, of drone-made videos. C-14 issues a challenge to an untrusted drone, requiring it to execute a sequence of motions, called a motion program, revealed only after tb. It then uses camera pose estimation techniques to verify that the resulting video matches the challenge motion program, thus assuring the video was taken after tb. We demonstrate the system on manually crafted programs representing a large space of possible motion programs. We also propose and evaluate an example algorithm that generates motion programs from a seed value released after tb. C-14 incorporates a number of compression and sampling techniques to reduce the computation required to verify videos. We can verify a 59-second video from an eight-motion, manually crafted program in 91 seconds of computation, with a false positive rate of one in 10^13 and no false negatives. We can also verify a 190-second video from an algorithmically derived, four-motion program in 158 seconds of computation, with a false positive rate of one in 10^5 and no false negatives.
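The abstract describes a challenge-response design: the motion program is derived from a seed value released only after tb, so a prover cannot record the required motions in advance, and any verifier can re-derive the same program from the public seed. A minimal sketch of that idea follows; the motion vocabulary, function names, and seed-expansion scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import hashlib
import random

# Illustrative motion vocabulary (assumed, not from the paper).
MOTIONS = ["ascend", "descend", "pan_left", "pan_right",
           "forward", "backward", "strafe_left", "strafe_right"]

def motion_program(seed: bytes, length: int = 4) -> list[str]:
    """Deterministically expand a public seed into a motion sequence.

    Hashing the seed and using it to drive a PRNG makes the program
    unpredictable before the seed is released, yet reproducible by
    any verifier afterwards.
    """
    rng = random.Random(hashlib.sha256(seed).digest())
    return [rng.choice(MOTIONS) for _ in range(length)]

# Challenger and verifier derive the identical program from the seed,
# so verification needs only the video and the public seed.
seed = b"beacon-value-released-after-tb"
assert motion_program(seed) == motion_program(seed)  # deterministic
print(motion_program(seed))
```

The key property this sketch captures is commitment ordering: because the seed is unknown before tb, a video exhibiting the derived motions could only have been recorded after tb.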


Cited By

  • (2024) Sharing instant delivery UAVs for crowdsensing: A data-driven performance study. Computers & Industrial Engineering 191, 110100. DOI: 10.1016/j.cie.2024.110100. Online publication date: May 2024.
  • (2022) G2Auth. Proceedings of the 20th Annual International Conference on Mobile Systems, Applications and Services, 84-98. DOI: 10.1145/3498361.3538941. Online publication date: 27 June 2022.
  • (2022) Authentication for drone delivery through a novel way of using face biometrics. Proceedings of the 28th Annual International Conference on Mobile Computing And Networking, 609-622. DOI: 10.1145/3495243.3560550. Online publication date: 14 October 2022.


Published In

MobiCom '20: Proceedings of the 26th Annual International Conference on Mobile Computing and Networking
April 2020
621 pages
ISBN: 9781450370851
DOI: 10.1145/3372224
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. UAV video
  2. drone video
  3. security and privacy
  4. video forensics

Qualifiers

  • Research-article

Conference

MobiCom '20

Acceptance Rates

Overall acceptance rate: 440 of 2,972 submissions (15%)


Article Metrics

  • Downloads (last 12 months): 25
  • Downloads (last 6 weeks): 4
Reflects downloads up to 06 Jan 2025
