
Spectral-temporal receptive fields and MFCC balanced feature extraction for robust speaker recognition

Published in: Multimedia Tools and Applications

Abstract

This paper proposes a speaker recognition system using acoustic features based on spectral-temporal receptive fields (STRFs). The STRF is derived from physiological models of the mammalian auditory system in the spectral-temporal domain. With the STRF, a signal is expressed by rate (in Hz) and scale (in cycles/octave), which characterize the temporal and spectral responses, respectively. This paper uses the proposed STRF-based feature to perform speaker recognition. First, the energy of each scale is calculated from the STRF representation. A logarithmic operation is then applied to the scale energies. Finally, a discrete cosine transform is applied to generate the proposed STRF feature. This paper also presents a feature set that combines the proposed STRF feature with conventional Mel-frequency cepstral coefficients (MFCCs). Support vector machines (SVMs) are adopted as the speaker classifiers. To evaluate the performance of the proposed system, experiments on a 36-speaker recognition task were conducted. Compared with the MFCC baseline, the proposed feature set increases the speaker recognition rate by 3.85% and 18.49% on clean and noisy speech, respectively. The experimental results demonstrate the effectiveness of adopting the STRF-based feature for speaker recognition.
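To make the feature pipeline described in the abstract concrete, the following minimal sketch (not the authors' implementation) illustrates the scale-energy, logarithm, and DCT steps, and how the resulting vector could be combined with MFCCs for SVM-based speaker classification. The array layout and the names `strf_feature`, `n_coeffs`, and `mfcc_vec` are illustrative assumptions; computing the STRF representation itself with an auditory model is outside the scope of this sketch.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def strf_feature(strf, n_coeffs=12):
    """Scale-energy feature sketched from the abstract's description.

    `strf` is assumed to be a magnitude array indexed as
    (time, frequency, rate, scale); obtaining it from speech with an
    auditory model is not shown here.
    """
    # 1) Energy of each scale channel (summed over time, frequency, and rate)
    scale_energy = np.sum(np.abs(strf) ** 2, axis=(0, 1, 2))
    # 2) Logarithmic compression of the scale energies
    log_energy = np.log(scale_energy + 1e-10)
    # 3) Discrete cosine transform; keep the first n_coeffs coefficients
    return dct(log_energy, type=2, norm='ortho')[:n_coeffs]

# Hypothetical usage: concatenate with an MFCC vector and train an SVM
# speaker classifier (names and shapes are illustrative only).
# combined = np.concatenate([strf_feature(strf), mfcc_vec])
# clf = SVC(kernel='rbf').fit(X_train, y_train)  # X_train: combined vectors per utterance
```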



Author information


Corresponding author

Correspondence to Pao-Chi Chang.


About this article


Cite this article

Wang, JC., Wang, CY., Chin, YH. et al. Spectral-temporal receptive fields and MFCC balanced feature extraction for robust speaker recognition. Multimed Tools Appl 76, 4055–4068 (2017). https://doi.org/10.1007/s11042-016-3335-0

