DOI: 10.1145/2506364.2506366

Assessing internet video quality using crowdsourcing

Published: 22 October 2013

Abstract

In this paper, we present a subjective video quality evaluation system integrated with different crowdsourcing platforms. We evaluate the feasibility of replacing time-consuming and expensive traditional tests with a faster and less expensive crowdsourcing alternative. CrowdFlower and Amazon's Mechanical Turk were used as the crowdsourcing platforms for data collection. The collected data were compared with the formal subjective tests conducted by MPEG as part of the video standardization process, as well as with previous results from a study we ran at the university level. High-quality compressed videos with known Mean Opinion Scores (MOS) were used as references instead of the original lossless videos in order to overcome intrinsic bandwidth limitations. The bitrates for the experiment were chosen to target Internet use, since this is the environment in which users would evaluate the videos. The evaluations showed that the results are consistent with formal subjective evaluation scores and can be reproduced across different crowds with low variability, which makes this type of test setting very promising.
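As background for the abstract above: the Mean Opinion Score (MOS) is the arithmetic mean of the individual opinion scores that viewers assign to a test sequence, usually reported together with a confidence interval. A minimal Python sketch, assuming hypothetical ratings on a 5-point scale (the paper's actual rating scale and worker-screening procedure are not reproduced here):

    import statistics

    # Hypothetical worker ratings for one processed video sequence,
    # on a 5-point scale (1 = bad ... 5 = excellent).
    ratings = [4, 5, 3, 4, 4, 2, 5, 4]

    # The MOS is the arithmetic mean of the individual opinion scores.
    mos = statistics.mean(ratings)

    # A 95% confidence interval (normal approximation) conveys how much
    # the crowd's ratings vary around the MOS.
    ci95 = 1.96 * statistics.stdev(ratings) / len(ratings) ** 0.5

    print(f"MOS = {mos:.2f} +/- {ci95:.2f}")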

Published In

CrowdMM '13: Proceedings of the 2nd ACM international workshop on Crowdsourcing for multimedia
October 2013
44 pages
ISBN: 9781450323963
DOI: 10.1145/2506364
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. crowdsourcing
  2. internet video quality
  3. mean opinion score
  4. mos
  5. quality assessment
  6. subjective quality

Qualifiers

  • Research-article

Conference

MM '13: ACM Multimedia Conference
October 22, 2013
Barcelona, Spain

Acceptance Rates

CrowdMM '13 Paper Acceptance Rate: 8 of 16 submissions, 50%
Overall Acceptance Rate: 16 of 42 submissions, 38%

Cited By

  • (2023) Context-aware Big Data Quality Assessment: A Scoping Review. Journal of Data and Information Quality, 15(3), 1-33. DOI: 10.1145/3603707
  • (2023) Quality assessment of higher resolution images and videos with remote testing. Quality and User Experience, 8(1). DOI: 10.1007/s41233-023-00055-6
  • (2022) Analysis of Video Transmission Capabilities in a Simulated OFDM-Based Supplementary BPL-PLC System. Energies, 15(10), 3621. DOI: 10.3390/en15103621
  • (2022) Quality Analysis of Audio-Video Transmission in an OFDM-Based Communication System. Mobile and Ubiquitous Systems: Computing, Networking and Services, 724-736. DOI: 10.1007/978-3-030-94822-1_47
  • (2021) Towards High Resolution Video Quality Assessment in the Crowd. 2021 13th International Conference on Quality of Multimedia Experience (QoMEX), 1-6. DOI: 10.1109/QoMEX51781.2021.9465425
  • (2021) AVrate Voyager: an open source online testing platform. 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP), 1-6. DOI: 10.1109/MMSP53017.2021.9733561
  • (2020) Development of web-based crowdsourcing framework used for video quality assessment. 2020 18th International Conference on Emerging eLearning Technologies and Applications (ICETA), 718-723. DOI: 10.1109/ICETA51985.2020.9379172
  • (2019) Large-Scale Study of Perceptual Video Quality. IEEE Transactions on Image Processing, 28(2), 612-627. DOI: 10.1109/TIP.2018.2869673
  • (2018) Information Visualization Evaluation Using Crowdsourcing. Computer Graphics Forum, 37(3), 573-595. DOI: 10.1111/cgf.13444
  • (2018) Large Scale Subjective Video Quality Study. 2018 25th IEEE International Conference on Image Processing (ICIP), 276-280. DOI: 10.1109/ICIP.2018.8451467
