
Combining Different Modalities in Classifying Phonological Categories

  • Conference paper
Machine Learning and Interpretation in Neuroimaging (MLINI 2013, MLINI 2014)

Abstract

This paper concerns a new dataset we are collecting that combines three modalities (EEG, facial video, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all three modalities, and perform binary classification of phonological categories using combinations of these modalities. For example, a deep-belief network obtains accuracies over 90% in identifying consonants, significantly more accurate than two baseline support vector machines. These data may be used by the research community to learn multimodal relationships and to develop silent-speech and brain-computer interfaces.
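The fusion scheme the abstract describes — per-trial features from each modality combined before classification — can be illustrated with a minimal sketch. This is not the authors' pipeline: the feature dimensions, the synthetic data, and the nearest-centroid classifier (standing in for the paper's DBN and SVM models) are all hypothetical, chosen only to show early fusion by feature concatenation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical per-trial feature vectors for each modality.
eeg_feats   = rng.normal(size=(n_trials, 64))   # e.g. spectral power per channel
video_feats = rng.normal(size=(n_trials, 20))   # e.g. facial-landmark statistics
audio_feats = rng.normal(size=(n_trials, 13))   # e.g. mean MFCCs

# Binary labels, e.g. presence/absence of a phonological category.
labels = rng.integers(0, 2, size=n_trials)
# Shift the class means apart so the toy problem is learnable.
eeg_feats[labels == 1] += 1.0

# Early fusion: concatenate the modalities into one vector per trial.
X = np.concatenate([eeg_feats, video_feats, audio_feats], axis=1)

# Simple train/test split.
split = n_trials // 2
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = labels[:split], labels[split:]

# Nearest-centroid classifier as a stand-in for the real models.
c0 = X_tr[y_tr == 0].mean(axis=0)
c1 = X_tr[y_tr == 1].mean(axis=0)
pred = (np.linalg.norm(X_te - c1, axis=1)
        < np.linalg.norm(X_te - c0, axis=1)).astype(int)
accuracy = (pred == y_te).mean()
print(f"toy fused-modality accuracy: {accuracy:.2f}")
```

Late fusion (training one classifier per modality and combining their decisions) is the usual alternative to this concatenation scheme; the abstract's wording ("a combination of these modalities") is consistent with the early-fusion form sketched here.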



Acknowledgements

This research is funded by the Toronto Rehabilitation Institute, the Natural Sciences and Engineering Research Council of Canada (RGPIN 435874), and a grant from the Nuance Foundation. Data collection was assisted by Selvana Morcos, Aaron Marquis, Chaim Katz, and César Márquez-Chin.

Corresponding author

Correspondence to Shunan Zhao.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Zhao, S., Rudzicz, F. (2016). Combining Different Modalities in Classifying Phonological Categories. In: Rish, I., Langs, G., Wehbe, L., Cecchi, G., Chang, K.M., Murphy, B. (eds) Machine Learning and Interpretation in Neuroimaging. MLINI 2013, MLINI 2014. Lecture Notes in Computer Science, vol. 9444. Springer, Cham. https://doi.org/10.1007/978-3-319-45174-9_5


  • DOI: https://doi.org/10.1007/978-3-319-45174-9_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-45173-2

  • Online ISBN: 978-3-319-45174-9

  • eBook Packages: Computer Science (R0)
