
Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing

Published: 24 June 2021

Abstract

Automated sound recognition tools can be a useful complement to d/Deaf and hard of hearing (DHH) people's typical communication and environmental awareness strategies. Pre-trained sound recognition models, however, may not meet the diverse needs of individual DHH users. While approaches from human-centered machine learning can enable non-expert users to build their own automated systems, end-user ML solutions that augment human sensory abilities present a unique challenge for users who have sensory disabilities: how can a DHH user, who has difficulty hearing a sound themselves, effectively record samples to train an ML system to recognize that sound? To better understand how DHH users can drive personalization of their own assistive sound recognition tools, we conducted a three-part study with 14 DHH participants: (1) an initial interview and demo of a personalizable sound recognizer, (2) a week-long field study of in situ recording, and (3) a follow-up interview and ideation session. Our results highlight a positive subjective experience when recording and interpreting training data in situ, but we uncover several key pitfalls unique to DHH users---such as inhibited judgement of representative samples due to limited audiological experience. We share implications of these results for the design of recording interfaces and human-in-the-loop systems that can support DHH users to build sound recognizers for their personal needs.

Supplementary Material

goodman (goodman.zip)
Supplemental movie, appendix, image, and software files for "Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing"




      Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 2
June 2021
932 pages
EISSN: 2474-9567
DOI: 10.1145/3472726

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 24 June 2021
      Published in IMWUT Volume 5, Issue 2


      Author Tags

      1. Deaf and hard of hearing
      2. accessibility
      3. field study
      4. sound recognition

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Article Metrics

      • Downloads (Last 12 months)127
      • Downloads (Last 6 weeks)14
      Reflects downloads up to 09 Nov 2024

Cited By
• (2024) A Review of Machine Learning Approaches for the Personalization of Amplification in Hearing Aids. Sensors 24, 5 (1546). https://doi.org/10.3390/s24051546. Online publication date: 28-Feb-2024
• (2024) "It's like Goldilocks:" Bespoke Slides for Fluctuating Audience Access Needs. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1-15. https://doi.org/10.1145/3663548.3675640. Online publication date: 27-Oct-2024
• (2024) The AI-DEC: A Card-based Design Method for User-centered AI Explanations. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1010-1028. https://doi.org/10.1145/3643834.3661576. Online publication date: 1-Jul-2024
• (2024) A Way for Deaf and Hard of Hearing People to Enjoy Music by Exploring and Customizing Cross-modal Music Concepts. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-17. https://doi.org/10.1145/3613904.3642665. Online publication date: 11-May-2024
• (2024) Technical Understanding from Interactive Machine Learning Experience: a Study Through a Public Event for Science Museum Visitors. Interacting with Computers 36, 3 (155-171). https://doi.org/10.1093/iwc/iwae007. Online publication date: 12-Mar-2024
• (2023) "It's Not an Issue of Malice, but of Ignorance". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 3 (1-31). https://doi.org/10.1145/3610901. Online publication date: 27-Sep-2023
• (2023) "Not There Yet": Feasibility and Challenges of Mobile Sound Recognition to Support Deaf and Hard-of-Hearing People. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-14. https://doi.org/10.1145/3597638.3608431. Online publication date: 22-Oct-2023
• (2023) Understanding Personalized Accessibility through Teachable AI: Designing and Evaluating Find My Things for People who are Blind or Low Vision. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-12. https://doi.org/10.1145/3597638.3608395. Online publication date: 22-Oct-2023
• (2023) AdaptiveSound: An Interactive Feedback-Loop System to Improve Sound Recognition for Deaf and Hard of Hearing Users. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-12. https://doi.org/10.1145/3597638.3608390. Online publication date: 22-Oct-2023
• (2023) PrISM-Tracker. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 4 (1-27). https://doi.org/10.1145/3569504. Online publication date: 11-Jan-2023
