
Novice and Expert Sensemaking of Crowdsourced Design Feedback

Published: 06 December 2017

Abstract

Online feedback exchange (OFE) systems are an increasingly popular way to test concepts with millions of target users before going to market. Yet we know little about how designers make sense of this abundant feedback. This empirical study investigates how expert and novice designers make sense of feedback in OFE systems. We observed that when feedback conflicted with frames originating from a participant's design knowledge, experts were more likely than novices to question the inconsistency, seeking critical information to expand their understanding of the design goals. Our results suggest that for OFE systems to be truly effective, they must support the nuanced sensemaking activities of both novice and expert users.


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 1, Issue CSCW
November 2017, 2095 pages
EISSN: 2573-0142
DOI: 10.1145/3171581

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. assessment
  2. crowdsourced feedback
  3. crowdsourcing
  4. design
  5. expert
  6. learning
  7. novice
  8. online feedback exchange
  9. sensemaking

Qualifiers

  • Research-article


