
Speaker Verification using Convolutional Neural Networks
Hossein Salehghaffari
Control/Robotics Research Laboratory (CRRL),
Department of Electrical and Computer Engineering,
NYU Tandon School of Engineering (Polytechnic Institute), NY 11201, USA
Email: h.saleh@nyu.edu
arXiv:1803.05427v2 [eess.AS] 10 Aug 2018

Abstract—In this paper, a novel Convolutional Neural Network architecture has been developed for speaker verification in order to simultaneously capture and discard speaker and non-speaker information, respectively. In the training phase, the network is trained to distinguish between different speaker identities in order to create the background model. One of the crucial parts is the creation of the speaker models. Most previous approaches create speaker models by averaging the speaker representations provided by the background model. We address this problem by further fine-tuning the trained model using a Siamese framework, generating a discriminative feature space that distinguishes between same and different speakers regardless of their identity. This provides a mechanism which simultaneously captures the speaker-related information and creates robustness to within-speaker variations. It is demonstrated that the proposed method outperforms the traditional verification methods which create speaker models directly from the background model.

I. INTRODUCTION

In speaker verification (SV), the identity of a query spoken utterance should be confirmed by comparing it to a gallery of known speakers. Speaker verification can be categorized into text-dependent and text-independent settings. In the text-independent setting, no restriction is placed on the utterances. On the other hand, in the text-dependent setting, all speakers repeat the same phrase. Due to the variational nature of the former setup, it is considered the more challenging task, since the system must be able to clearly distinguish between the speaker and non-speaker characteristics of the uttered phrases. The general procedure of speaker verification consists of three phases: development, enrollment, and evaluation. For development, a background model must be created for capturing the speaker-related information. In enrollment, the speaker models are created using the background model. Finally, in the evaluation, the query utterances are identified by comparing them to the existing speaker models created in the enrollment phase.

Recently, with the advent of deep learning in different applications such as speech, image recognition and network pruning [1]–[4], data-driven approaches using Deep Neural Networks (DNNs) have also been proposed for effective feature learning for Automatic Speech Recognition (ASR) [3] and Speaker Recognition (SR) [5], [6]. Although deep architectures have mostly been treated as black boxes, some approaches based on Information Theory [7] have been presented for multimodal feature extraction and have demonstrated promising results [8]. Some traditionally successful models for speaker verification are the Gaussian Mixture Model-Universal Background Model (GMM-UBM) [9] and the i-vector [10]. The main disadvantage of these models is their unsupervised nature, since they are not trained with an objective specific to the speaker verification setup. Some methods have been proposed to supervise the training of the aforementioned models, such as SVM-based GMM-UBMs [11] and PLDA for the i-vector model [12]. With the advent of Convolutional Neural Networks (CNNs) and their promising results for action recognition [13] and scene understanding [14], they have recently been proposed for speaker and speech recognition as well [6], [15].

In this work, we propose to use Siamese neural networks that operate on traditional speech features such as MFCCs (Mel Frequency Cepstral Coefficients), rather than on raw features, to obtain a higher-level representation of speaker-related characteristics. Moreover, we show the advantage of utilizing an effective pair selection method for verification purposes.
II. RELATED WORKS

Convolutional Neural Networks [16] have recently been used for speech recognition [17]. Deep models have effectively been proposed and utilized for the text-independent setup in some research efforts [5], [18]. Locally Connected Networks (LCNs) have been utilized for SV as well [19], although in [19] the setup is text-dependent. In some other works, such as [20], [21], deep networks have been employed as feature extractors to create speaker models for further evaluation. We investigate CNNs specifically trained end-to-end for verification purposes and furthermore employ them as feature extractors to distinguish between the speaker and non-speaker information.
III. SPEAKER VERIFICATION PROCEDURE AND PROTOCOL

The speaker verification protocol can be categorized into three phases: development, enrollment, and evaluation. A general view of the speaker verification protocol is depicted in Fig. 1. We explain these phases in this section, with a special emphasis on how they can be adapted to deep learning. Different research efforts have proposed a variety of methods for implementing and adapting this protocol, such as the i-vector [10], [22] and d-vector [6] systems.

[Fig. 1. General view of the speaker verification.]

Development: In the development stage, speaker utterances are utilized for creating a background model for speaker representation. Different elements, such as the representation level (frame- or utterance-level), the model type (e.g., deep networks or Bayesian models), and the training objective (loss function), determine the type of speaker representation. The main motivation behind employing DNNs is to use their architecture as a powerful speaker feature extractor.
Enrollment: In this stage, a distinct model should be created for each speaker identity. The speaker utterances are utilized for speaker model generation. In the case of DNNs, in this phase the speaker utterances are fed to the model created in the previous phase, and the outputs are aggregated in some manner to create the unique speaker model. The speaker representation provided by averaging the outputs of the DNN (called d-vectors) is a common choice [6], [19].
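As a concrete illustration of this enrollment step, the minimal sketch below averages per-utterance embeddings produced by a background network into a single speaker model; the function and variable names are hypothetical, and the embedding network is assumed to be given.

```python
import numpy as np

def enroll_speaker(embed_fn, enrollment_utterances):
    """Create a d-vector style speaker model by averaging embeddings.

    embed_fn: a function mapping one utterance feature map to a
              fixed-size embedding vector (the background model).
    enrollment_utterances: list of feature maps for one speaker.
    """
    embeddings = np.stack([embed_fn(u) for u in enrollment_utterances])
    speaker_model = embeddings.mean(axis=0)
    # L2-normalize so that later cosine scoring reduces to a dot product.
    return speaker_model / np.linalg.norm(speaker_model)
```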

Evaluation: During the evaluation phase, test utterances are fed to the model for speaker representation extraction. The query test sample is compared to all speaker models using a score function, and the one with the highest score is the predicted speaker. Considering the one-vs-all setup, this stage is equivalent to a binary classification problem in which the traditional Equal Error Rate (EER) is used for model evaluation. The false reject rate and the false accept rate are determined by a predefined threshold, and the operating point at which the two errors become equal is the EER. Usually, the simple cosine similarity is employed as the scoring function: the score measures the similarity between the representation of the test utterance and the targeted speaker model.
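A minimal sketch of this scoring and EER computation is given below; it assumes arrays of cosine scores for genuine and impostor trials and sweeps a threshold, which is one common (though not the only) way to estimate the EER.

```python
import numpy as np

def cosine_score(test_embedding, speaker_model):
    # Cosine similarity between a test utterance embedding and a speaker model.
    return np.dot(test_embedding, speaker_model) / (
        np.linalg.norm(test_embedding) * np.linalg.norm(speaker_model))

def equal_error_rate(genuine_scores, impostor_scores):
    # Sweep candidate thresholds and return the point where FAR and FRR meet.
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # false reject rate
        far = np.mean(impostor_scores >= t)  # false accept rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```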
IV. DATASET

We used the public VoxCeleb dataset for our experiments [23]. It contains around 140k utterances from 1,211 speakers for development and around 6k utterances from 40 speaker identities used for testing. The dataset is gender-balanced and spans a range of ethnicities and accents. The audio is extracted from videos uploaded to YouTube and has been captured in a wide variety of challenging multi-speaker settings, including background chatter, overlapping speech, channel noise and different recording qualities. The general statistics of the dataset are given in Table I.

TABLE I
STATISTICS OF THE VOXCELEB DATASET.

# of          Development    Test
Speakers      1,211          40
Videos        21,819         677
Utterances    139,124        6,255

V. ARCHITECTURE

The aim is to utilize CNNs as powerful feature extractors. The input pipeline and the specific architecture are explained in this section.

TABLE II
THE ARCHITECTURE USED FOR VERIFICATION PURPOSES.

Layer    Kernel   # Filters   Stride   Output size
Conv-1   7 × 7    32          2 × 2    32 × 17 × 47
Conv-2   5 × 5    64          1 × 1    64 × 13 × 43
Conv-3   3 × 3    128         1 × 1    128 × 11 × 41
Conv-4   3 × 3    256         1 × 1    256 × 9 × 39
Conv-5   3 × 3    256         1 × 1    256 × 7 × 37
fc-1     -        1024        -        -
fc-2     -        256         -        -
fc-3     -        1251        -        -

A. Input pipeline
The raw audio is extracted and re-sampled to 16 kHz. For spectrogram generation, 25 ms Hamming windows with a step size of 10 ms are used to compute a 512-point FFT spectrum. We use 1 second of the audio stream, which yields a spectrogram of size 256 × 100. No mean or variance normalization has been used, and no voice activity detection has been performed. From the generated spectrum, 40 log-energies of filter banks per Hamming window, alongside their first- and second-order derivatives, are computed to form a 3 × 40 × 100 input feature map. For feature extraction, the SpeechPy library [24] has been utilized.
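The sketch below reproduces this pipeline under the stated settings; it assumes SpeechPy's log filterbank-energy and derivative utilities (speechpy.feature.lmfe and speechpy.feature.extract_derivative_feature) behave as documented, so the exact keyword names and output shapes should be checked against the installed version, and the file name is a placeholder.

```python
import numpy as np
import speechpy
from scipy.io import wavfile

# Load the utterance (assumed already re-sampled to 16 kHz).
fs, signal = wavfile.read("utterance.wav")  # hypothetical file name

# 40 log filterbank energies, 25 ms frames with a 10 ms stride, 512-point FFT.
logenergy = speechpy.feature.lmfe(signal, sampling_frequency=fs,
                                  frame_length=0.025, frame_stride=0.01,
                                  num_filters=40, fft_length=512)

# First- and second-order derivatives stacked as a third axis:
# expected shape (num_frames, 40, 3), roughly 100 frames per second of audio.
features = speechpy.feature.extract_derivative_feature(logenergy)

# Rearrange the first 100 frames into the 3 x 40 x 100 map described above
# (channels first); shorter clips would need padding, not shown here.
feature_map = np.transpose(features[:100], (2, 1, 0))
print(feature_map.shape)  # expected: (3, 40, 100)
```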
B. Architecture Design

We use an architecture similar to VGG-M [25], widely used for image classification and speech-related applications [26]. The details are available in Table II. We modified the architecture with some considerations: (1) it should be adapted to our input pipeline, (2) we do not use any pooling layer, and (3) the size has been shrunk to obtain a smaller architecture for faster training, which we empirically found to be less prone to overfitting. The main reason for not pooling in time is to keep the temporal information, although it has been claimed that such pooling may increase robustness to temporal variations. In practice, we found that pooling in the time dimension degrades performance. Our observations have been further investigated and verified by [15] as well.
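For concreteness, a minimal tf.keras sketch of the network in Table II follows. The strides of Conv-2 through Conv-5, the 'valid' padding, the ReLU activations and the softmax output are inferred or assumed rather than stated in the table, so treat this as an approximation of the training graph, not the exact one.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_background_model(num_speakers=1251):
    """Approximate version of the CNN in Table II (channels-last layout)."""
    model = models.Sequential([
        layers.Input(shape=(40, 100, 3)),  # 40 filterbanks x 100 frames x 3 channels
        layers.Conv2D(32, (7, 7), strides=(2, 2), padding='valid', activation='relu'),
        layers.Conv2D(64, (5, 5), strides=(1, 1), padding='valid', activation='relu'),
        layers.Conv2D(128, (3, 3), strides=(1, 1), padding='valid', activation='relu'),
        layers.Conv2D(256, (3, 3), strides=(1, 1), padding='valid', activation='relu'),
        layers.Conv2D(256, (3, 3), strides=(1, 1), padding='valid', activation='relu'),
        layers.Flatten(),
        layers.Dense(1024, activation='relu'),              # fc-1
        layers.Dense(256, activation='relu'),               # fc-2: embedding layer
        layers.Dense(num_speakers, activation='softmax'),   # fc-3, as in Table II
    ])
    return model

model = build_background_model()
model.summary()  # spatial sizes should match Table II (e.g., 17 x 47 after Conv-1)
```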

C. The Verification Setup

A usual method is to train with a Softmax loss function for classification and to use the features extracted by the fully-connected layer prior to the Softmax. However, a reasonable criticism of this method is that the Softmax loss criterion tries to identify speakers rather than verify the claimed identity in a one-vs-all setup, which is inconsistent with the speaker verification protocol. Instead, we utilize a Siamese neural network as proposed in [27] and implemented in many research efforts [28]–[30].

The Siamese architecture consists of two identical CNNs. The main goal is to create a common feature subspace for discrimination between match and non-match pairs based on a distance metric. The model is demonstrated in Fig. 2. The general idea is that when the two inputs of a pair belong to the same identity, their distance in the common feature subspace should be as small as possible, and as large as possible otherwise. Assume X_{p1} and X_{p2} are the inputs of a pair and the distance between them in the output subspace is defined as D_W(X_{p1}, X_{p2}) (i.e., the l2-norm between the two embedding vectors); then the distance is computed as follows:

    D_W(X_{p1}, X_{p2}) = \| F_W(X_{p1}) - F_W(X_{p2}) \|_2 .        (1)

[Fig. 2. Siamese Model Framework.]
The system is trained using the contrastive cost function. The goal of the contrastive cost L_W(X, Y) is to minimize the loss in both scenarios of encountering match and non-match pairs, with the following definition:

    L_W(X, Y) = (1/N) \sum_{i=1}^{N} L_W(Y_i, (X_{p1}, X_{p2})_i),        (2)

where N is the number of training samples, i is the index of each sample, and L_W(Y_i, (X_{p1}, X_{p2})_i) is defined as follows:

    L_W(Y_i, (X_{p1}, X_{p2})_i) = Y * L_{gen}(D_W(X_{p1}, X_{p2})_i)
                                   + (1 - Y) * L_{imp}(D_W(X_{p1}, X_{p2})_i) + \lambda \|W\|_2 ,        (3)

in which the last term is the regularization term. L_{gen} and L_{imp} are defined as functions of D_W(X_{p1}, X_{p2}) by the following equations:

    L_{gen}(D_W(X_{p1}, X_{p2})) = (1/2) D_W(X_{p1}, X_{p2})^2
    L_{imp}(D_W(X_{p1}, X_{p2})) = (1/2) \max\{0, M - D_W(X_{p1}, X_{p2})\}^2 ,        (4)

where M is a predefined margin. The contrastive cost is implemented as a mapping criterion that is supposed to place match pairs nearby and non-match pairs at distant locations in the output manifold.
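A compact sketch of Eqs. (2)–(4) is given below for clarity; it operates on precomputed pair embeddings, uses an illustrative margin value, and leaves out the weight-decay term, which in practice would be handled by the optimizer or added separately.

```python
import numpy as np

def contrastive_loss(emb1, emb2, labels, margin=1.0):
    """Contrastive cost over a batch of embedding pairs.

    emb1, emb2: arrays of shape (N, d) produced by the shared network F_W.
    labels:     array of shape (N,), 1 for match (genuine) pairs, 0 otherwise.
    margin:     the predefined margin M in Eq. (4) (value here is illustrative).
    """
    d = np.linalg.norm(emb1 - emb2, axis=1)             # D_W for each pair, Eq. (1)
    l_gen = 0.5 * d ** 2                                 # genuine-pair term, Eq. (4)
    l_imp = 0.5 * np.maximum(0.0, margin - d) ** 2       # impostor-pair term, Eq. (4)
    per_pair = labels * l_gen + (1 - labels) * l_imp     # Eq. (3), without the lambda*||W|| term
    return per_pair.mean()                               # Eq. (2)
```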
VI. IMPLEMENTATION

We used TensorFlow [31] as the deep learning framework, and our model has been trained on an NVIDIA Pascal GPU. No data augmentation has been used for the development phase. Batch normalization has been employed to provide robustness to internal covariate shift and to be less affected by initialization [32]. For verification, after training the network as a classifier (initial learning rate = 0.001), we fine-tune the network by training the Siamese architecture (with an initial learning rate of 0.00001) for 20 epochs. Unlike the procedure used by [23], we do not freeze the weights of any layer during fine-tuning.
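The two-stage schedule described above can be sketched as follows. The base network here is a small stand-in for the Table II architecture, the Siamese wrapper and margin are illustrative, and Adam is an assumption since the paper does not name the optimizer; only the two learning rates, the 20 epochs and the absence of frozen layers come from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# Stand-in for the Table II network; the named layer is the 256-d embedding (fc-2).
base = models.Sequential([
    layers.Input(shape=(40, 100, 3)),
    layers.Conv2D(32, (7, 7), strides=(2, 2), activation='relu'),
    layers.Flatten(),
    layers.Dense(256, activation='relu', name='embedding'),
    layers.Dense(1251, activation='softmax'),
])

# Stage 1: train the background model as a speaker classifier (lr = 0.001).
base.compile(optimizer=optimizers.Adam(1e-3), loss='sparse_categorical_crossentropy')
# base.fit(train_features, speaker_ids, ...)

# Stage 2: reuse everything up to the embedding layer in a Siamese model and
# fine-tune all weights (nothing frozen) with the much smaller learning rate.
embedder = models.Model(base.input, base.get_layer('embedding').output)
x1 = layers.Input(shape=(40, 100, 3))
x2 = layers.Input(shape=(40, 100, 3))
dist = layers.Lambda(
    lambda t: tf.norm(t[0] - t[1], axis=1, keepdims=True))([embedder(x1), embedder(x2)])
siamese = models.Model([x1, x2], dist)

def contrastive(y_true, d, margin=1.0):  # margin value is illustrative
    y = tf.cast(y_true, d.dtype)
    return tf.reduce_mean(y * 0.5 * d**2
                          + (1.0 - y) * 0.5 * tf.maximum(0.0, margin - d)**2)

siamese.compile(optimizer=optimizers.Adam(1e-5), loss=contrastive)
# siamese.fit([pairs_a, pairs_b], pair_labels, epochs=20)
```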
VII. EXPERIMENTS

In this section, we describe the experiments performed for speaker verification and compare our proposed method to some of the existing methods.
A. Experimental Setup

For speaker verification, we followed the protocol provided by [23], in which the identities whose names start with 'E' are used for testing. These test subjects are not used for training; we only use them to create match and non-match pairs for verification purposes. As has been mentioned, the performance metric used in our evaluation phase is the EER, which is commonly used for verification systems.
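One simple way to form such trial pairs from the test identities is sketched below; the sampling strategy (random pairing with a balanced number of match and non-match trials) is an assumption for illustration, not the specific online pair selection procedure used in this work.

```python
import random
from collections import defaultdict

def make_trial_pairs(utterances, num_pairs_per_type=1000, seed=0):
    """Build match / non-match trials from (utterance_id, speaker_id) tuples."""
    rng = random.Random(seed)
    by_speaker = defaultdict(list)
    for utt, spk in utterances:
        by_speaker[spk].append(utt)

    speakers = [s for s, utts in by_speaker.items() if len(utts) >= 2]
    pairs = []
    for _ in range(num_pairs_per_type):
        # Match pair: two different utterances from the same speaker.
        spk = rng.choice(speakers)
        u1, u2 = rng.sample(by_speaker[spk], 2)
        pairs.append((u1, u2, 1))
        # Non-match pair: utterances from two different speakers.
        s1, s2 = rng.sample(list(by_speaker), 2)
        pairs.append((rng.choice(by_speaker[s1]), rng.choice(by_speaker[s2]), 0))
    return pairs
```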
B. Methods

Different methods have been implemented and used for comparison.

GMM-UBM: For the GMM-UBM method [9], MFCCs with 40 coefficients with cepstral mean and variance normalisation are used. For training the Universal Background Model (UBM), made of 512 mixture components, 20 iterations over the training data have been used.

I-vectors: The i-vector system has been widely known as one of the state-of-the-art representations, operating at the frame level as proposed in [10]. Probabilistic linear discriminant analysis (PLDA) has also been used on top of i-vectors for dimensionality reduction [33].
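A minimal sketch of this UBM training step with scikit-learn is shown below; the diagonal covariances, the GaussianMixture implementation and the file name are assumptions for illustration, since the text does not specify how the GMM is fitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# mfcc_frames: all development MFCC frames stacked row-wise, shape (num_frames, 40),
# already cepstral mean- and variance-normalised per utterance.
mfcc_frames = np.load("dev_mfcc_frames.npy")  # hypothetical file

# Universal Background Model: 512 diagonal-covariance mixtures, 20 EM iterations.
ubm = GaussianMixture(n_components=512, covariance_type="diag", max_iter=20, verbose=1)
ubm.fit(mfcc_frames)

# Speaker models are then usually derived from the UBM by MAP adaptation of the
# component means, which is not shown here.
```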
C. Results

We used different dimensions for our embedding layer. Moreover, we used i-vectors with and without PLDA to showcase the effect of probabilistic dimensionality reduction. The results are summarized in Table III.

TABLE III
VERIFICATION RESULTS (EER, %) FOR THE EVALUATED METHODS.

Model                       EER
GMM-UBM                     17.1
I-vectors                   12.8
I-vectors + PLDA [10]       11.5
CNN-2048                    11.3
CNN-256 + Pair Selection    10.5

VIII. CONCLUSION

We proposed an end-to-end architecture, together with an active learning procedure for pair selection, for the speaker verification application. It is observed that an effective online pair selection method, in addition to training the system in an end-to-end fashion, can outperform the traditional methods that use background models for speaker representation. The proposed CNN architecture has also been trained as a feature extractor on top of traditional speech features, rather than on the raw audio, to directly capture the inter-speaker and intra-speaker variations.
REFERENCES

[1] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[3] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
[4] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems, pp. 1135–1143, 2015.
[5] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 1695–1699, IEEE, 2014.
[6] E. Variani, X. Lei, E. McDermott, I. L. Moreno, and J. Gonzalez-Dominguez, “Deep neural networks for small footprint text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 4052–4056, IEEE, 2014.
[7] C. E. Shannon, “A mathematical theory of communication,” ACM SIGMOBILE Mobile Computing and Communications Review, vol. 5, no. 1, pp. 3–55, 2001.
[8] M. Gurban and J.-P. Thiran, “Information theoretic feature extraction for audio-visual speech recognition,” IEEE Transactions on Signal Processing, vol. 57, no. 12, pp. 4765–4776, 2009.
[9] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, “Speaker verification using adapted Gaussian mixture models,” Digital Signal Processing, vol. 10, no. 1-3, pp. 19–41, 2000.
[10] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
[11] W. M. Campbell, D. E. Sturim, and D. A. Reynolds, “Support vector machines using GMM supervectors for speaker verification,” IEEE Signal Processing Letters, vol. 13, no. 5, pp. 308–311, 2006.
[12] D. Garcia-Romero and C. Y. Espy-Wilson, “Analysis of i-vector length normalization in speaker recognition systems,” in Twelfth Annual Conference of the International Speech Communication Association, 2011.
[13] S. Ji, W. Xu, M. Yang, and K. Yu, “3D convolutional neural networks for human action recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 221–231, 2013.
[14] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3D convolutional networks,” in Computer Vision (ICCV), 2015 IEEE International Conference on, pp. 4489–4497, IEEE, 2015.
[15] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, “Convolutional neural networks for speech recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 10, pp. 1533–1545, 2014.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[17] T. N. Sainath, A.-r. Mohamed, B. Kingsbury, and B. Ramabhadran, “Deep convolutional neural networks for LVCSR,” in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8614–8618, IEEE, 2013.
[18] F. Richardson, D. Reynolds, and N. Dehak, “Deep neural network approaches to speaker and language recognition,” IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1671–1675, 2015.
[19] Y.-h. Chen, I. Lopez-Moreno, T. N. Sainath, M. Visontai, R. Alvarez, and C. Parada, “Locally-connected and convolutional neural networks for small footprint speaker recognition,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
[20] G. Heigold, I. Moreno, S. Bengio, and N. Shazeer, “End-to-end text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 5115–5119, IEEE, 2016.
[21] C. Zhang and K. Koishida, “End-to-end text-independent speaker verification with triplet loss on short utterances,” in Proc. of Interspeech, 2017.
[22] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, “Joint factor analysis versus eigenchannels in speaker recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1435–1447, 2007.
[23] A. Nagrani, J. S. Chung, and A. Zisserman, “VoxCeleb: a large-scale speaker identification dataset,” in INTERSPEECH, 2017.
[24] A. Torfi, “SpeechPy: A library for speech processing and recognition,” arXiv preprint arXiv:1803.01094, 2018.
[25] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: Delving deep into convolutional nets,” arXiv preprint arXiv:1405.3531, 2014.
[26] J. S. Chung and A. Zisserman, “Out of time: Automated lip sync in the wild,” in Workshop on Multi-view Lip-reading, ACCV, 2016.
[27] S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1, pp. 539–546, IEEE, 2005.
[28] X. Sun, A. Torfi, and N. Nasrabadi, “Deep Siamese convolutional neural networks for identical twins and look-alike identification,” Deep Learning in Biometrics, p. 65, 2018.
[29] R. R. Varior, M. Haloi, and G. Wang, “Gated Siamese convolutional neural network architecture for human re-identification,” in European Conference on Computer Vision, pp. 791–808, Springer, 2016.
[30] G. Koch, R. Zemel, and R. Salakhutdinov, “Siamese neural networks for one-shot image recognition,” in ICML Deep Learning Workshop, vol. 2, 2015.
[31] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015. Software available from tensorflow.org.
[32] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, pp. 448–456, 2015.
[33] P. Kenny, “Bayesian speaker verification with heavy-tailed priors,” in Odyssey, p. 14, 2010.
