DOI: 10.1145/1772938.1772953 · research article

Modulating video credibility via visualization of quality evaluations

Published: 27 April 2010

Abstract

    In this work we develop and evaluate a method for the syndication and visualization of aggregate quality evaluations of informational video. We enable the sharing of knowledge between motivated media watchdogs and a wider population of casual users. We do this by developing simple visual cues, presented in-line with videos as they play, which indicate the aggregated activity level and polarity (i.e. positive/negative) of quality evaluations. In an experiment we show the potential of these visuals to engender constructive changes to the credibility of informational video under some circumstances. We discuss the limitations and future work associated with this approach toward video credibility modulation.
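    The paper does not publish an implementation, but the core aggregation it describes — turning timestamped positive/negative quality evaluations into per-segment activity-level and polarity signals for an in-line overlay — can be sketched roughly as follows. The function name, the fixed-width time binning, and the ±1 polarity encoding are all assumptions for illustration, not the authors' method.

    ```python
    from collections import defaultdict

    def aggregate_evaluations(annotations, bin_seconds=5):
        """Bucket timestamped quality evaluations into fixed-width time bins.

        annotations: iterable of (time_seconds, polarity) pairs, where
        polarity is +1 (positive) or -1 (negative).
        Returns {bin_index: (activity, polarity_score)}, where activity is
        the raw count of evaluations in the bin and polarity_score is the
        mean polarity in [-1, +1] -- e.g. activity could drive the size of
        a visual cue and polarity_score its color.
        """
        bins = defaultdict(list)
        for t, polarity in annotations:
            bins[int(t // bin_seconds)].append(polarity)
        return {idx: (len(p), sum(p) / len(p)) for idx, p in bins.items()}

    # Three evaluations in the first 5 s (two negative, one positive),
    # one positive evaluation at 12 s:
    cues = aggregate_evaluations([(1.0, -1), (2.5, -1), (4.0, +1), (12.0, +1)])
    # bin 0 -> activity 3, mean polarity -1/3; bin 2 -> activity 1, mean +1.0
    ```

    A real system would layer a syndication format and rendering on top of this, but any such aggregation reduces to a mapping from playback time to (activity, polarity) pairs of this kind.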

    References

    [1] Adler, B. T. and de Alfaro, L. A Content-Driven Reputation System for the Wikipedia. In Proceedings of the 16th International World Wide Web Conference (WWW), 2007.
    [2] Adler, B. T., Chatterjee, K., de Alfaro, L., Faella, M., Pye, I. and Raman, V. Assigning Trust to Wikipedia Content. In WikiSym 2008: International Symposium on Wikis, 2008.
    [3] boyd, d., Lee, H.-Y., Ramage, D. and Donath, J. Developing Legible Visualizations for Online Social Spaces. In HICSS, 2002.
    [4] Diakopoulos, N. and Essa, I. An Annotation Model for Making Sense of Information Quality in Online Video. In International Conference on the Pragmatic Web, Uppsala, Sweden, 2008.
    [5] Diakopoulos, N., Goldenberg, S. and Essa, I. Videolyzer: Quality Analysis of Online Informational Video for Bloggers and Journalists. In Conference on Human Factors in Computing Systems (CHI), Boston, MA, 2009.
    [6] Diakopoulos, N. and Shamma, A. Characterizing Debate Performance via Aggregated Twitter Sentiment. In Conference on Human Factors in Computing Systems (CHI), Atlanta, GA, 2010.
    [7] Few, S. Information Dashboard Design: The Effective Visual Communication of Data. O'Reilly Media, 2006.
    [8] Fogg, B. J. Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, 2003.
    [9] Fogg, B. J. Prominence-Interpretation Theory: Explaining How People Assess Credibility Online. In CHI '03 Extended Abstracts on Human Factors in Computing Systems, ACM, 2003, 722--723.
    [10] Fogg, B. J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., Paul, J., Rangnekar, A., Shon, J., Swani, P. and Treinen, M. What Makes Web Sites Credible? A Report on a Large Quantitative Study. In CHI '01: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, 2001, 61--68.
    [11] Josephson, S. and Holmes, M. E. Clutter or Content? How On-Screen Enhancements Affect How TV Viewers Scan and What They Learn. In Proc. Symposium on Eye Tracking Research & Applications, 2006, 155--162.
    [12] Kittur, A., Chi, E. H. and Suh, B. Crowdsourcing User Studies with Mechanical Turk. In Proceedings of CHI, 2008, 453--456.
    [13] Kittur, A., Suh, B. and Chi, E. H. Can You Ever Trust a Wiki? Impacting Perceived Trustworthiness in Wikipedia. In CSCW, San Diego, CA, 2008, 477--480.
    [14] Lampe, C. and Garrett, R. K. It's All News to Me: The Effect of Instruments on Ratings Provision. In Proceedings of HICSS, 2007.
    [15] Lerman, K. Social Information Processing in News Aggregation. IEEE Internet Computing, 16--28.
    [16] Lord, C. G., Ross, L. and Lepper, M. R. Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology, 37(11), 2098--2109.
    [17] Manjoo, F. True Enough: Learning to Live in a Post-Fact Society. John Wiley & Sons, 2008.
    [18] McCroskey, J. The Effects of Evidence in Persuasive Communication. The Journal of Western Speech Communications, 31, 189--199.
    [19] Miller, K. Communication Theories: Perspectives, Processes, and Contexts. McGraw-Hill, 2004.
    [20] Murakami, K., Nichols, E., Matsuyoshi, S., Sumida, A., Masuda, S., Inui, K. and Matsumoto, Y. Statement Map: Assisting Information Credibility Analysis by Visualizing Arguments. In Workshop on Information Credibility on the Web (WICOW), 2009.
    [21] Nakamura, S., Shimizu, M. and Tanaka, K. Can Social Annotation Support Users in Evaluating the Trustworthiness of Video Clips? In Workshop on Information Credibility on the Web (WICOW), 2008, 59--62.
    [22] Pentland, A. S. Honest Signals: How They Shape Our World. MIT Press, 2008.
    [23] Perry, D. K. Theory and Research in Mass Communication: Contexts and Consequences (2nd Edition). Lawrence Erlbaum Associates, 2002.
    [24] Pirolli, P., Wollny, E. and Suh, B. So You Know You're Getting the Best Possible Information: A Tool that Increases Wikipedia Credibility. In Proc. CHI, 2009, 1505--1508.
    [25] Ross, J., Irani, L., Silberman, M. S., Zaldivar, A. and Tomlinson, B. Who Are the Crowdworkers? Shifting Demographics in Mechanical Turk. In Conference on Human Factors in Computing Systems (CHI) alt.chi, 2010.
    [26] Shamma, A., Kennedy, L. and Churchill, E. Tweet the Debates. In ACM Multimedia Workshop on Social Media (WSM), 2009.
    [27] Sheng, V. S., Provost, F. and Ipeirotis, P. G. Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers. In KDD, 2008.
    [28] Smith, M. A. and Fiore, A. T. Visualization Components for Persistent Conversations. In Proc. CHI, 2001, 136--143.
    [29] Snow, R., O'Connor, B., Jurafsky, D. and Ng, A. Y. Cheap and Fast -- But Is It Good? Evaluating Non-Expert Annotations for Natural Language Tasks. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2008.
    [30] Stvilia, B., Twidale, M., Smith, L. and Gasser, L. Information Quality Work Organization in Wikipedia. Journal of the American Society for Information Science and Technology, 59(6), 983--1001.
    [31] Suh, B., Chi, E. H., Kittur, A. and Pendleton, B. A. Lifting the Veil: Improving Accountability and Social Transparency in Wikipedia with WikiDashboard. In Proc. CHI, 2008, 1037--1040.
    [32] Wyer, R. S., Jr. and Albarracín, D. Belief Formation, Organization, and Change: Cognitive and Motivational Influences. In Albarracín, D., Johnson, B. T. and Zanna, M. P., eds. The Handbook of Attitudes, Erlbaum, 2005.

    Cited By

    • (2017) Analysis on Effect of Disputed Topic Suggestion for User Behavior in Web Search. Transactions of the Japanese Society for Artificial Intelligence, 32(1). DOI: 10.1527/tjsai.WII-L
    • (2016) Can Disputed Topic Suggestion Enhance User Consideration of Information Credibility in Web Search? Proceedings of the 27th ACM Conference on Hypertext and Social Media, 169--177. DOI: 10.1145/2914586.2914592
    • (2015) Credibility in Information Retrieval. Foundations and Trends in Information Retrieval, 9(5), 355--475. DOI: 10.1561/1500000046

    Published In
    WICOW '10: Proceedings of the 4th workshop on Information credibility
    April 2010
    92 pages
    ISBN:9781605589404
    DOI:10.1145/1772938


    In-Cooperation

    • Institute for advanced analytics
    • Professional

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. credibility
    2. mechanical turk
    3. video annotation
    4. visualization


    Conference

    WWW '10
    Sponsor:
    WWW '10: The 19th International World Wide Web Conference
    April 27, 2010
    Raleigh, North Carolina, USA

    Acceptance Rates

    Overall Acceptance Rate 9 of 19 submissions, 47%


