
SilentSpeller: Towards mobile, hands-free, silent speech text entry using electropalatography

Published: 29 April 2022
  Abstract

    Speech is inappropriate in many situations, limiting when voice control can be used. Most unvoiced speech text entry systems cannot be used while on the go due to movement artifacts. Using a dental retainer with capacitive touch sensors, SilentSpeller tracks tongue movement, enabling users to type by spelling words without voicing. SilentSpeller achieves an average 97% character accuracy in offline isolated-word testing on a 1164-word dictionary. Walking has little effect on accuracy; average offline character accuracy was roughly equivalent on 107 phrases entered while walking (97.5%) or seated (96.5%). To demonstrate extensibility, the system was tested on 100 unseen words, leading to an average 94% accuracy. Live text entry speeds for seven participants averaged 37 words per minute at 87% accuracy. Comparing silent spelling to current practice suggests that SilentSpeller may be a viable alternative for silent mobile text entry.
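
    The accuracy and speed figures above follow the usual conventions of text entry research. As a rough illustration only (a minimal sketch using the standard minimum-string-distance and five-characters-per-word definitions, not the authors' evaluation code):

    # Minimal sketch of conventional text-entry metrics: character accuracy
    # via minimum string distance, and words per minute with one "word"
    # counted as five characters. Illustrative only.

    def min_string_distance(a: str, b: str) -> int:
        """Levenshtein distance: insertions, deletions, and substitutions."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def char_accuracy(reference: str, transcribed: str) -> float:
        """1 minus the normalized minimum string distance."""
        longest = max(len(reference), len(transcribed))
        if longest == 0:
            return 1.0
        return 1.0 - min_string_distance(reference, transcribed) / longest

    def words_per_minute(transcribed: str, seconds: float) -> float:
        """Entry rate, with the usual five-characters-per-word convention."""
        return (len(transcribed) / 5.0) / (seconds / 60.0)

    # Example: one wrong character in an 11-character phrase entered in 10 s.
    print(round(char_accuracy("hello world", "hello worlt"), 3))  # 0.909
    print(round(words_per_minute("hello world", 10.0), 1))        # 13.2 WPM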

    Supplementary Material

    MP4 File (3491102.3502015-video-figure.mp4)
    Video Figure
    MP4 File (3491102.3502015-video-preview.mp4)
    Video Preview
    MP4 File (3491102.3502015-talk-video.mp4)
    Talk Video



      Published In

      CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
      April 2022
      10459 pages
      ISBN:9781450391573
      DOI:10.1145/3491102
      This work is licensed under a Creative Commons Attribution 4.0 International License.


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. silent speech interface
      2. text entry
      3. wearable computing

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      CHI '22: CHI Conference on Human Factors in Computing Systems
      April 29 - May 5, 2022
      New Orleans, LA, USA

      Acceptance Rates

      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

