Speaker adaptation in the maximum a posteriori framework based on the probabilistic 2-mode analysis of training models
EURASIP Journal on Audio, Speech, and Music Processing volume 2013, Article number: 7 (2013)
Abstract
In this article, we describe a speaker adaptation method based on the probabilistic 2-mode analysis of training models. Probabilistic 2-mode analysis is a probabilistic extension of multilinear analysis. We apply probabilistic 2-mode analysis to speaker adaptation by representing each of the hidden Markov model mean vectors of training speakers as a matrix, and derive the speaker adaptation equation in the maximum a posteriori (MAP) framework. The adaptation equation becomes similar to the speaker adaptation equation using the MAP linear regression adaptation. In the experiments, the adapted models based on probabilistic 2-mode analysis showed performance improvement over the adapted models based on Tucker decomposition, which is a representative multilinear decomposition technique, for small amounts of adaptation data while maintaining good performance for large amounts of adaptation data.
1 Introduction
In automatic speech recognition (ASR) systems using hidden Markov models (HMMs) [1], mismatches between the training and testing conditions lead to performance degradation. One such mismatch results from speaker variation. Thus, speaker adaptation techniques [2] are employed to transform a well-trained canonical model (e.g., a speaker-independent (SI) HMM) toward the target speaker. Speaker adaptation requires less adaptation data than is needed to build a speaker-dependent (SD) model. Among speaker adaptation techniques, eigenvoice (EV) adaptation [3] expresses the model of a new speaker as a linear combination of basis vectors, which are built from the principal component analysis (PCA) of the HMM mean vectors of training speakers.
In a similar approach, speaker adaptation based on tensor analysis using Tucker decomposition [4] was investigated in [5], where bases were constructed from the multilinear decomposition of a tensor consisting of the HMM mean vectors of training speakers. In that approach, all the training models were collectively arranged in a third-order tensor (3-D array) whose first, second, and third modes (dimensions) were for the mixture component, the dimension of the mean vector, and the training speaker, respectively. In [5], Tucker decomposition was used to build bases, and in the experiments, speaker adaptation using Tucker decomposition showed better performance than eigenvoice and maximum likelihood linear regression (MLLR) adaptation [6]. The improvement seemed attributable to the increased number of adaptation parameters and the compact bases. It was also noted in [5] that an increased number of adaptation parameters did not guarantee good performance when the amount of adaptation data was small (determining the proper number of adaptation parameters for given adaptation data is a model-order selection problem). Extending the tensor-based approach, a fourth mode for noise was added in [7] (so the training data became a 4-D array) so that the training models of various speakers and noise conditions were decomposed.
In this article, we describe a speaker adaptation method using probabilistic 2-mode analysis, which is an application of probabilistic tensor analysis (PTA) [8] to second-order tensors (i.e., matrices); PTA is an application of probabilistic PCA (PPCA) [9] to tensor objects. Using probabilistic 2-mode analysis, we derive bases from training models in a probabilistic framework and formulate the speaker adaptation equation in the maximum a posteriori (MAP) framework [10]. The speaker adaptation equation based on the probabilistic approach becomes similar to MAP linear regression (MAPLR) adaptation [11], as shown below. The experiments showed that the proposed method further improved the performance of speaker adaptation based on Tucker decomposition for small amounts of adaptation data.
The rest of this article is organized as follows. Section 2.1 explains some tensor algebra and tensor decomposition. Section 2.2 describes speaker adaptation using Tucker decomposition, with which the probabilistic 2-mode analysis based method is compared. Section 2.3 explains the probabilistic 2-mode analysis of a set of mean vectors of training HMMs, and Section 2.4 describes the construction of the probabilistic 2-mode model. Section 2.5 describes the estimation of the prior distribution of the adaptation parameter, and Section 2.6 describes the speaker adaptation in the MAP framework using the bases and the prior. We explain the experiments in Section 3 and conclude the article in Section 4. Some of the notation used in this article is summarized in Table 1.
2 Methods
2.1 Multilinear decomposition
Following the convention of multilinear algebra, in this article we denote vectors, matrices, and tensors by lowercase boldface letters (e.g., m), uppercase boldface letters (e.g., M), and calligraphic letters (e.g., ℳ), respectively.
A tensor is a multidimensional array, and an N-dimensional array is called an Nth-order tensor (or N-way array). The order of a tensor is the number of indices needed to address its elements; thus the order of a tensor ℳ ∈ ℝ^{I_1×I_2×⋯×I_N} is N. A scalar, a vector, and a matrix are zeroth-, first-, and second-order tensors, respectively. A third-order tensor requires three indices to address its elements, as depicted in Figure 1.
Tensor algebra is performed in terms of matrix and vector representations of tensors. The mode-n flattening (matricization) of a tensor ℳ, denoted M_(n), is obtained by reordering the elements so that the tensor element (i_1, i_2, …, i_N) is mapped to the matrix element (i_n, j), with j determined by the remaining indices (j = 1 + Σ_{k≠n} (i_k − 1) J_k, where J_k = Π_{m<k, m≠n} I_m). That is, all the column vectors (fibers) along mode n are arranged into a matrix. For example, a third-order tensor of size I×J×K can be flattened into an I×(JK), J×(KI), or K×(IJ) matrix, as depicted in Figure 2. The operation of mode-n flattening will be denoted as mat_n(·), i.e., M_(n) = mat_n(ℳ).
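As an illustration (not part of the original derivation), the following minimal NumPy sketch performs mode-n flattening by moving mode n to the front and reshaping; the ordering of the resulting columns is a convention and may differ from the one assumed above.

```python
import numpy as np

def mode_n_flatten(tensor, n):
    """Mode-n flattening: the columns of the result are the mode-n fibers.

    Column ordering follows NumPy's C order, which is one of several
    equivalent matricization conventions.
    """
    return np.moveaxis(tensor, n, 0).reshape(tensor.shape[n], -1)

# A third-order I x J x K tensor flattens to I x (JK), J x (KI), and K x (IJ).
M = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print(mode_n_flatten(M, 0).shape)  # (2, 12)
print(mode_n_flatten(M, 1).shape)  # (3, 8)
print(mode_n_flatten(M, 2).shape)  # (4, 6)
```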
Multiplication of a tensor and a matrix is performed by the n-mode product; the n-mode product of a tensor ℳ with a matrix U is denoted as ℳ ×_n U and is carried out by matrix multiplication in terms of flattened matrices:

(ℳ ×_n U)_(n) = U M_(n),

or elementwise,

(ℳ ×_n U)_{i_1 ⋯ i_{n−1} k i_{n+1} ⋯ i_N} = Σ_{i_n=1}^{I_n} m_{i_1 i_2 ⋯ i_N} u_{k i_n},

where m and u denote the elements of ℳ and U, respectively. If ℳ ∈ ℝ^{I_1×I_2×⋯×I_N} and U ∈ ℝ^{K_n×I_n}, then the dimension of ℳ ×_n U becomes I_1×I_2×⋯×I_{n−1}×K_n×I_{n+1}×⋯×I_N.
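The n-mode product can be sketched in the same illustrative style; the helper below contracts the second axis of U with mode n of the tensor.

```python
import numpy as np

def mode_n_product(tensor, U, n):
    """n-mode product: replace size I_n of mode n with U.shape[0].

    Implemented by contracting U's columns with the tensor's n-th axis.
    """
    out = np.tensordot(U, tensor, axes=(1, n))  # new axis appears first
    return np.moveaxis(out, 0, n)               # move it back to position n

M = np.random.randn(5, 6, 7)
U = np.random.randn(3, 6)                       # acts on mode 1 (size 6 -> 3)
print(mode_n_product(M, U, 1).shape)            # (5, 3, 7)
```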
As an extension of singular value decomposition (SVD) to tensor objects, Tucker decomposition decomposes a tensor as follows [4]:

ℳ ≈ 𝒢 ×_1 U_1 ×_2 U_2 ⋯ ×_N U_N,

where 𝒢 ∈ ℝ^{K_1×K_2×⋯×K_N}, U_n ∈ ℝ^{I_n×K_n}, and K_n ≤ I_n (n = 1, …, N). The core tensor 𝒢 and the mode matrices U_n correspond to the matrix of singular values and the orthonormal basis vectors in matrix SVD, respectively. An example of the Tucker decomposition of a third-order tensor is illustrated in Figure 3.
The core tensor 𝒢 and the mode matrices U_n in Tucker decomposition can be computed such that they minimize the reconstruction error

‖ℳ − 𝒢 ×_1 U_1 ×_2 U_2 ⋯ ×_N U_N‖²,

where the norm of a tensor is defined as the square root of the sum of the squares of all its elements. A representative technique for computing a Tucker decomposition is alternating least squares (ALS) [12]; the basic idea is to compute each mode matrix U_n in turn with the other mode matrices fixed. For more details on Tucker decomposition, refer to [4]. In the following sections, we explain probabilistic 2-mode analysis in the context of speaker adaptation.
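The following sketch, reusing the two helpers above, computes mode matrices and a core tensor by truncated higher-order SVD, a standard way to initialize (or approximate) the ALS solution; it is illustrative rather than the exact procedure of [12].

```python
import numpy as np

def hosvd(tensor, ranks):
    """Truncated HOSVD: mode matrices from the leading left singular vectors
    of each mode-n flattening, core from projecting onto those bases."""
    U = []
    for n, k in enumerate(ranks):
        u, _, _ = np.linalg.svd(mode_n_flatten(tensor, n), full_matrices=False)
        U.append(u[:, :k])
    core = tensor
    for n, u in enumerate(U):
        core = mode_n_product(core, u.T, n)      # project mode n onto its basis
    return core, U

T = np.random.randn(8, 10, 6)
core, (U1, U2, U3) = hosvd(T, ranks=(3, 4, 2))
approx = core
for n, u in enumerate((U1, U2, U3)):
    approx = mode_n_product(approx, u, n)        # reconstruct the approximation
print(core.shape, np.linalg.norm(T - approx) / np.linalg.norm(T))
```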
2.2 Speaker adaptation using Tucker decomposition
The probabilistic 2-mode analysis based method is a probabilistic extension of the Tucker decomposition based method; thus, we compare the probabilistic approach with the Tucker decomposition based method in the experiments. In this section, we explain the speaker adaptation based on the Tucker decomposition of training models in [5]. In this article, speaker adaptation is performed by updating the mean vectors of the output distributions of an HMM. The HMM mean vectors of each training speaker are arranged in an R×D matrix

M_s = [μ_{s;1} μ_{s;2} ⋯ μ_{s;R}]^T,

where μ_{s;r} denotes the mean vector corresponding to mixture r of the s-th training speaker model, R the number of mixture components, and D the dimension of the mean vector.
All the centered HMM mean vectors of the training speakers, M̃_s = M_s − M_mean (with M_mean the mean of the S training models), are collectively expressed as a third-order tensor 𝓜̃ ∈ ℝ^{R×D×S}, and we decompose the training tensor by Tucker decomposition as follows:

𝓜̃ ≈ 𝒢 ×_1 U_mixture ×_2 U_dim ×_3 U_speaker.    (11)

In the above equation, U_mixture ∈ ℝ^{R×K_R}, U_dim ∈ ℝ^{D×K_D}, and U_speaker ∈ ℝ^{S×K_S} are basis matrices for the mixture component, the dimension of the mean vector, and the training speaker, respectively (K_R ≤ R−1, K_D ≤ D−1, and K_S ≤ S−1); the core tensor 𝒢 ∈ ℝ^{K_R×K_D×K_S} is common across the mixture component, the dimension of the mean vector, and the training speaker. In Equation (11), the s-th row vector of U_speaker, denoted u_speaker;s, corresponds to the speaker weight of the s-th speaker; thus, the low-rank approximation of the s-th speaker model is given by

M_s ≈ M_mean + 𝒢 ×_1 U_mixture ×_2 U_dim ×_3 u_speaker;s.    (12)
If we define the augmented speaker weight W_s,aug ≡ mat_1(𝒢 ×_2 U_dim ×_3 u_speaker;s), Equation (12) becomes

M_s ≈ M_mean + U_mixture W_s,aug.    (13)

Thus, we express the model of a new speaker as

M_new = M_mean + 𝒢 ×_1 U_mixture ×_2 U_dim ×_3 w_new,    (14)

where w_new denotes the speaker weight of the new speaker.
For the given adaptation data O = {o_1, …, o_T}, we derive the equation for finding the speaker weight in a maximum likelihood (ML) criterion:
where γ_r(t) denotes the occupation probability of being at mixture r at time t given O, and C_r the covariance matrix of the r-th Gaussian component of the SI HMM (in this article, a diagonal covariance matrix is used); u_mixture;r and m_mean;r denote the r-th row vectors of U_mixture and M_mean, respectively. In the above equation, W_new,aug can be computed using a technique similar to MLLR adaptation, and the weight of the new speaker is then recovered from W_new,aug, which is plugged into Equation (14) to produce the model updated for the new speaker.
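To make the data layout of this section concrete, the sketch below (with made-up sizes and random stand-ins for the HMM mean vectors, reusing the hosvd helper above) stacks S centered training-model matrices into an R×D×S tensor and decomposes it; the ML estimation of the new speaker's weight from adaptation data is omitted.

```python
import numpy as np

R, D, S = 64, 39, 101          # mixtures, feature dimension, training speakers (illustrative)
K_R, K_D, K_S = 20, 35, 100    # reduced dimensions per mode

models = np.random.randn(S, R, D)                 # stand-ins for the HMM mean matrices M_s
M_mean = models.mean(axis=0)                      # mean model over training speakers
centered = np.stack([M - M_mean for M in models], axis=2)   # R x D x S training tensor

core, (U_mixture, U_dim, U_speaker) = hosvd(centered, ranks=(K_R, K_D, K_S))
# Row s of U_speaker plays the role of the speaker weight u_speaker;s in Equation (11).
print(U_mixture.shape, U_dim.shape, U_speaker.shape)   # (R, K_R) (D, K_D) (S, K_S)
```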
2.3 Probabilistic 2-mode analysis
The advantage of probabilistic 2-mode analysis over Tucker decomposition is similar to that of PPCA over standard PCA: probabilistic 2-mode analysis can deal with missing entries in the data tensor (although this capability is not used in our experiments). From a modeling perspective, probabilistic 2-mode analysis assumes a distribution over the latent variables, which makes it suitable for a MAP framework.
In this section, the ensemble of training models is expressed as the set of R×D matrices {M_1, …, M_S}.
Assuming the HMM mean vectors of the training speakers are drawn from a matrix-variate normal distribution [13], we derive the adaptation equation based on the probabilistic 2-mode analysis of training models. We use probabilistic 2-mode analysis, the second-order case of PTA [8], to decompose the training models expressed in matrix form. The latent tensor model is expressed as

ℳ = 𝒲 ×_1 U_1 ×_2 U_2 ⋯ ×_N U_N + ℳ_mean + ℰ,

where 𝒲 denotes the latent tensor, the U_n's the factor loading matrices, ℳ_mean the mean, and ℰ the error/noise process. The 2-mode case of the latent tensor model is given by

M = U_1 W U_2^T + M_mean + E,

which becomes, for the training models {M_1, …, M_S},

M_s = U_1 W_s U_2^T + M_mean + E_s,  s = 1, …, S,    (20)

where W_s ∈ ℝ^{K_R×K_D} denotes the latent matrix, U_1 ∈ ℝ^{R×K_R} and U_2 ∈ ℝ^{D×K_D} the factor loading matrices (K_R ≤ R−1 and K_D ≤ D−1), M_mean the mean, and E_s the error/noise process. (Mode matrices and dimensions are identified as follows: U_1 = U_mixture, U_2 = U_dim, I_1 = R, I_2 = D, K_1 = K_R, and K_2 = K_D.) The distribution of W_s is assumed to be a matrix-variate normal with zero mean and identity row and column covariances, i.e., vec(W_s) ∼ 𝒩(0, I_{K_D} ⊗ I_{K_R}), where ⊗ denotes the Kronecker product, and to be independent of E_s, whose elements follow 𝒩(0, σ²). Figure 4 shows the graphical model representing the probabilistic 2-mode model.
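As a sanity check of the generative assumptions, the following sketch draws one training model from the bilinear form of Equation (20) under the stated priors (identity covariances for W_s, isotropic noise); all sizes and values are illustrative.

```python
import numpy as np

R, D, K_R, K_D = 64, 39, 20, 35
sigma = 0.1                                      # noise standard deviation (illustrative)

U1 = np.linalg.qr(np.random.randn(R, K_R))[0]    # U_mixture: column-orthonormal loadings
U2 = np.linalg.qr(np.random.randn(D, K_D))[0]    # U_dim
M_mean = np.random.randn(R, D)

W_s = np.random.randn(K_R, K_D)                  # vec(W_s) ~ N(0, I (x) I)
E_s = sigma * np.random.randn(R, D)              # elementwise N(0, sigma^2) noise
M_s = U1 @ W_s @ U2.T + M_mean + E_s             # Equation (20): M_s = U1 W_s U2^T + M_mean + E_s
print(M_s.shape)                                 # (R, D) = (64, 39)
```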
In Equation (20), it is computationally intractable to calculate the U_n's simultaneously. So, the following decoupled predictive density is defined:
where μ_n and σ_n² denote the mean vector and the noise variance, respectively, for mode n, and the product of M with all the mode matrices except that of mode n is called the contracted n-mode product [14]. That is, the n-th probabilistic function is defined on M projected by all the U_j's except U_n.
By Bayes’ theorem, the n th posterior distribution can be expressed in terms of the decoupled likelihood function and the decoupled prior distribution:
Therefore, the decoupled predictive density is given by
(the evidence term is dropped for a fixed U_n). This is the 2-mode case of the PTA in [8]. In our case, Equation (24) is given by
Now, the U_n's are obtained by maximizing the posterior distribution of the mode matrices given the training models. The expectation-maximization (EM) algorithm [15] is applied to compute the U_n's. The application of the EM algorithm to construct the probabilistic 2-mode model is explained in the next section.
2.4 Construction of probabilistic 2-mode model for speaker adaptation
In Equation (20), for the given training models, the maximum likelihood (ML) estimate of M_mean is the sample mean of the training models, M_mean = (1/S) Σ_{s=1}^{S} M_s, and the U_n's and σ_n²'s can be estimated as follows. First, let us define the following: let t_{n;j} (j = 1, …, N_n) be the j-th column vector of the mode-n flattening of a centered training model projected by the mode matrix of the other mode, and let x_{n;j} be the j-th column vector of the correspondingly flattened latent matrix. Let us suppose t_{n;j} = U_n x_{n;j} + μ_n + ε_{n;j}, with x_{n;j} ∼ 𝒩(0, I) and ε_{n;j} ∼ 𝒩(0, σ_n² I). Then, by integrating out x_n, we obtain

p(t_{n;j} | U_n, σ_n²) = 𝒩(t_{n;j} | μ_n, C_n),  C_n = U_n U_n^T + σ_n² I.

Consequently, the decoupled likelihood in Equation (26) is the product of these Gaussian densities over j, and the logarithm of the right-hand side of Equation (26) becomes

L_n = −(N_n/2) { d_n ln(2π) + ln|C_n| + tr[C_n^{-1} S_n] },

where d_n denotes the dimension of t_{n;j}, S_n the sample covariance matrix of {t_{n;j}}, and tr[·] the trace of a matrix. Summing up over all the modes, we obtain the log-likelihood function of the posterior distribution.
The graphical model representation of the decoupled probabilistic model is shown in Figure 5.
We seek the U_n's that maximize the log-likelihood function in an alternating fashion. The mode matrices U_1 and U_2 are initialized with the results of the Tucker decomposition, which minimizes the reconstruction error. With the initial U_1 and U_2, the following procedure is performed for each mode (n = 1, 2).
Each training model is projected by the mode matrices of all modes except mode n and expressed as a mode-n flattened matrix; all the column vectors of these matrices constitute the training data set {t_{n;1}, …, t_{n;N_n}}. Then, with an initial estimate of σ_n² (e.g., 0.005 was used in the experiments), the EM algorithm is iterated as follows until U_n and σ_n² converge.
E-step: From Equation (31), the expectation 〈L_c〉 of the log-likelihood function of the complete data is taken with respect to the posterior distribution of the latent variables {x_{n;j}}. The required sufficient statistics are given, following Equation (29), as

〈x_{n;j}〉 = B_n^{-1} U_n^T (t_{n;j} − μ_n),  〈x_{n;j} x_{n;j}^T〉 = σ_n² B_n^{-1} + 〈x_{n;j}〉〈x_{n;j}〉^T,

where B_n = U_n^T U_n + σ_n² I.
M-step: The model parameters are updated by maximizing 〈L_c〉 with respect to U_n and σ_n². Setting ∂〈L_c〉/∂U_n = 0 produces

U_n ← [ Σ_j (t_{n;j} − μ_n) 〈x_{n;j}〉^T ] [ Σ_j 〈x_{n;j} x_{n;j}^T〉 ]^{−1}.

Next, setting ∂〈L_c〉/∂σ_n² = 0 produces

σ_n² ← (1/(N_n d_n)) Σ_j { ‖t_{n;j} − μ_n‖² − 2 〈x_{n;j}〉^T U_n^T (t_{n;j} − μ_n) + tr[ 〈x_{n;j} x_{n;j}^T〉 U_n^T U_n ] },

where the updated U_n is used in the second expression.
Essentially, the procedure applies PPCA to the data set {t_{n;j}} for each mode.
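A minimal sketch of these per-mode updates, which are the standard PPCA EM equations of [9], is given below; the projection that produces the data columns {t_{n;j}} for each mode is assumed to have been carried out already, and all names are illustrative.

```python
import numpy as np

def ppca_em(T, k, sigma2=0.005, n_iter=50):
    """PPCA via EM for data columns T (d x N): t_j = U x_j + mu + eps.

    Returns the factor loading U (d x k), the mean mu, and the noise
    variance sigma2, following Tipping & Bishop's update equations.
    """
    d, N = T.shape
    mu = T.mean(axis=1, keepdims=True)
    Tc = T - mu                                   # centered data
    U = np.random.randn(d, k) * 0.01              # initial loading matrix
    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables x_j
        B = U.T @ U + sigma2 * np.eye(k)          # k x k
        Binv = np.linalg.inv(B)
        X = Binv @ U.T @ Tc                        # <x_j>, shape k x N
        Sxx = N * sigma2 * Binv + X @ X.T          # sum_j <x_j x_j^T>
        # M-step: update U and sigma2
        U_new = (Tc @ X.T) @ np.linalg.inv(Sxx)
        sigma2 = (np.sum(Tc**2)
                  - 2.0 * np.sum((U_new.T @ Tc) * X)
                  + np.trace(Sxx @ U_new.T @ U_new)) / (N * d)
        U = U_new
    return U, mu, sigma2

U_n, mu_n, s2 = ppca_em(np.random.randn(39, 500), k=20)
print(U_n.shape, s2)
```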
2.5 Estimation of prior distribution
Given the model parameters, the weight matrix for the training speaker model M_s is obtained by
From the set of weight matrices {W_1, …, W_S}, the distribution of the weight is estimated. In deriving the adaptation equation in the MAP framework, the parameters of the prior distribution can be obtained in closed form if p(W) follows a conjugate distribution. Hence, we assume the prior distribution of the weight to be a matrix-variate normal with mean W_mean, row covariance Σ, and column covariance Ψ. Furthermore, the hyperparameters of p(W) can easily be estimated in an ML criterion if Ψ is known [16]. So, Ψ is assumed to be the identity matrix [17], and the remaining hyperparameters W_mean and Σ are estimated from {W_1, …, W_S} as the sample mean and the column-averaged sample row covariance, respectively.
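Under the Ψ = I assumption, the ML hyperparameters reduce to simple sample statistics over the training weight matrices; the sketch below computes the sample mean and the column-averaged row covariance (illustrative names and sizes).

```python
import numpy as np

def matrix_normal_prior(Ws):
    """ML hyperparameters of a matrix-variate normal prior with Psi = I:
    the sample mean and the row covariance averaged over columns."""
    Ws = np.asarray(Ws)                       # S x K_R x K_D stack of weight matrices
    S, K_R, K_D = Ws.shape
    W_mean = Ws.mean(axis=0)
    diffs = Ws - W_mean
    Sigma = sum(d @ d.T for d in diffs) / (S * K_D)   # K_R x K_R row covariance
    return W_mean, Sigma

W_mean, Sigma = matrix_normal_prior(np.random.randn(101, 20, 35))
print(W_mean.shape, Sigma.shape)              # (20, 35) (20, 20)
```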
2.6 Speaker adaptation in the MAP framework
Based on Equation (20), we express the model of a new speaker as

M_new = U_mixture W_new U_dim^T + M_mean,    (44)

where W_new denotes the weight matrix of the new speaker.
For the given adaptation data O = {o_1, …, o_T}, we estimate the adaptation parameter in a MAP criterion:
where Λ = {W_new} denotes the model parameter.
Using the EM algorithm, we obtain the following auxiliary Q-function to be optimized (discarding the terms that are independent of the model parameter):
where Λ and Λ̄ denote the current and updated model parameters, respectively. In finding the speaker weight, we compute W_new,aug ≡ W_new U_dim^T, from which W_new is obtained. Solving in this way, we can use the row-by-row technique of MLLR adaptation [6]. Setting the derivative of the Q-function with respect to W_new,aug to zero yields the following equation:
The above equation can be solved for W_new,aug in a way similar to MLLR adaptation [6]; we define the following:
where v_r(i,i) denotes the (i,i) element of V_r and w_{s;i} the i-th column vector of W_s,aug ≡ W_s U_dim^T. Then, the speaker weight can be computed:
where w_new,aug,(i) denotes the i-th row of W_new,aug and z_(i) the i-th row vector of Z. The method becomes similar to MAPLR adaptation in [11]. Finally, the speaker weight is obtained as

W_new = W_new,aug [U_dim^T]^+,

where [·]^+ denotes the pseudoinverse of a matrix. The weight is plugged into Equation (44) to produce the model adapted to the new speaker.
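The resulting update has the same per-row shape as MAPLR: data statistics regularized by the prior mean and precision. The sketch below shows only that generic shape, with hypothetical per-row accumulators G_i and z_i standing in for the quantities defined above.

```python
import numpy as np

def map_row_update(G_i, z_i, prior_mean_i, prior_cov):
    """One row of the MAP estimate: data statistics regularized by the prior.

    G_i, z_i are hypothetical per-row data accumulators standing in for the
    quantities defined in the text.
    Solves (G_i + prior_cov^{-1}) w = z_i + prior_cov^{-1} prior_mean_i.
    """
    P = np.linalg.inv(prior_cov)
    return np.linalg.solve(G_i + P, z_i + P @ prior_mean_i)

d = 21                                            # illustrative row dimension
G_i = np.random.randn(d, d)
G_i = G_i @ G_i.T + d * np.eye(d)                 # make the accumulator positive definite
z_i = np.random.randn(d)
row = map_row_update(G_i, z_i, np.zeros(d), np.eye(d))
print(row.shape)
```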
2.7 Speaker adaptation techniques compared in the experiments
In this section, we briefly review the speaker adaptation techniques compared with the probabilistic 2-mode analysis based method: eigenvoice adaptation [3], MLLR adaptation [6], and MAPLR adaptation [11].
In eigenvoice adaptation, the collection of HMM mean vectors of speaker s is arranged in an (RD)×1 supervector

μ_s = [μ_{s;1}^T μ_{s;2}^T ⋯ μ_{s;R}^T]^T.

Then, the set of S supervectors, {μ_1, …, μ_S}, is decomposed by PCA to produce the adaptation model

μ_new = μ̄ + Φ w,

where Φ = [φ_1 ⋯ φ_K] is the basis matrix consisting of the K dominant eigenvectors from PCA, μ̄ is the mean supervector, and w is the K×1 weight vector. The weight vector can be obtained by maximizing the likelihood of the adaptation data, where Φ_r and μ̄_r denote the D×K submatrix and the D×1 subvector corresponding to the r-th mixture of Φ and μ̄, respectively.
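For reference, the closed-form ML weight follows from accumulating the statistics below over frames and mixtures (diagonal covariances, as in the article); the inputs in this sketch are illustrative stand-ins.

```python
import numpy as np

def eigenvoice_weight(obs, gammas, Phi_r, mu_bar_r, inv_var_r):
    """ML eigenvoice weight: w = A^{-1} b with
    A = sum_{t,r} gamma_r(t) Phi_r^T C_r^{-1} Phi_r,
    b = sum_{t,r} gamma_r(t) Phi_r^T C_r^{-1} (o_t - mu_bar_r).
    Inputs: obs (T x D), gammas (T x R), Phi_r (R x D x K),
    mu_bar_r (R x D), inv_var_r (R x D, diagonal of C_r^{-1})."""
    T, D = obs.shape
    R, _, K = Phi_r.shape
    A = np.zeros((K, K))
    b = np.zeros(K)
    for r in range(R):
        g = gammas[:, r].sum()                    # total occupancy of mixture r
        weighted_obs = gammas[:, r] @ obs         # sum_t gamma_r(t) o_t, shape (D,)
        PC = Phi_r[r].T * inv_var_r[r]            # Phi_r^T C_r^{-1}, shape (K, D)
        A += g * (PC @ Phi_r[r])
        b += PC @ (weighted_obs - g * mu_bar_r[r])
    return np.linalg.solve(A, b)

T, D, R, K = 200, 39, 8, 10
w = eigenvoice_weight(np.random.randn(T, D), np.random.rand(T, R),
                      np.random.randn(R, D, K), np.random.randn(R, D),
                      np.ones((R, D)))
print(w.shape)   # (10,)
```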
In MLLR adaptation, the updated model for a new speaker is obtained by linearly transforming the SI model (assuming a global regression matrix):

μ̂_r = W_new ξ_r,  ξ_r = [ω, μ_{SI,r}^T]^T,

where μ_{SI,r} denotes the mean vector of the SI HMM corresponding to mixture r, and ω is the bias offset term: ω = 1 to include the bias and ω = 0 otherwise (ω = 1 in our experiments). The D×(D+1) transformation matrix W_new can be obtained in an ML criterion, which yields the following equation:
The above equation can be solved for W_new row by row:

w_new,(i)^T = G(i)^{-1} z_(i)^T,

where w_new,(i) and z_(i) denote the i-th row vectors of W_new and Z, respectively; G(i) and Z are defined as

G(i) = Σ_{r=1}^{R} v_r(i,i) ξ_r ξ_r^T  and  Z = Σ_{r=1}^{R} Σ_{t=1}^{T} γ_r(t) C_r^{-1} o_t ξ_r^T,  with V_r = Σ_{t=1}^{T} γ_r(t) C_r^{-1},

where v_r(i,i) denotes the (i,i) element of V_r.
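A compact sketch of the standard MLLR accumulation and per-row solve for a global regression matrix with diagonal covariances is given below; the variable names and inputs are illustrative.

```python
import numpy as np

def mllr_global_transform(obs, gammas, mu_si, inv_var, omega=1.0):
    """Global MLLR transform W (D x (D+1)) solved row by row:
    G(i) = sum_r v_r(i,i) xi_r xi_r^T,  Z = sum_{t,r} gamma_r(t) C_r^{-1} o_t xi_r^T,
    with xi_r = [omega, mu_SI,r^T]^T and diagonal covariances."""
    T, D = obs.shape
    R = mu_si.shape[0]
    xi = np.hstack([omega * np.ones((R, 1)), mu_si])      # R x (D+1) extended mean vectors
    occ = gammas.sum(axis=0)                               # sum_t gamma_r(t), shape (R,)
    v_diag = occ[:, None] * inv_var                        # v_r(i,i), shape (R, D)
    weighted_obs = gammas.T @ obs                          # sum_t gamma_r(t) o_t, shape (R, D)
    Z = (inv_var * weighted_obs).T @ xi                    # D x (D+1)
    W = np.zeros((D, D + 1))
    for i in range(D):
        G_i = (xi * v_diag[:, i:i + 1]).T @ xi             # (D+1) x (D+1)
        W[i] = np.linalg.solve(G_i, Z[i])
    return W

T, D, R = 300, 39, 64
W = mllr_global_transform(np.random.randn(T, D), np.random.rand(T, R),
                          np.random.randn(R, D), np.ones((R, D)))
print(W.shape)   # (39, 40)
```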
In MAPLR adaptation, a prior for the transformation matrix is used in the MLLR framework. The parameters of the prior are obtained from the MLLR transformation matrices of the training speakers, {W_1, …, W_S}, where w_{s,(i)} denotes the i-th row vector of W_s. Then, the transformation matrix for a new speaker is obtained in a MAP criterion; deriving the equation in the same way as above, the per-row solution combines G(i) and z_(i) with the prior statistics of the corresponding row.
3 Experiments
We carried out large-vocabulary continuous-speech recognition (LVCSR) experiments using the Wall Street Journal corpus WSJ0 [18]. In building the SI model, we used 12,754 utterances from 101 speakers of the corpus. As the acoustic feature vector, we used a 39-dimensional vector consisting of 13-dimensional mel-frequency cepstral coefficients (MFCCs) including the 0th cepstral coefficient, their derivative coefficients, and their acceleration coefficients. The feature vector was extracted with a 20-ms Hamming window and a frame shift of 10 ms. Using the HMM toolkit (HTK) [19], we built a tied-state triphone model (word-internal triphones) with 3472 tied states and 8-component Gaussian mixture output distributions.
To build the training models for constructing the bases, we transformed the SI model by MLLR adaptation [6] using 32 regression classes, followed by maximum a posteriori (MAP) adaptation [10]. We used the 101 adapted models to build the Tucker decomposition and probabilistic 2-mode based models as well as the eigenvoice model.
For adaptation and the recognition test, we used the Nov'92 5K non-verbalized adaptation and test sets. The number of test speakers was 8; the adaptation set was used for adaptation, and the test set of 330 sentences was used for the recognition test (the number of test utterances per speaker was about 40). The length of an adaptation sentence was about 6 s, and adaptation was performed in supervised mode. For the recognition test, we used the WSJ 5K non-verbalized closed-vocabulary set and the WSJ standard 5K non-verbalized closed bigram language model.
The word recognition accuracy of the SI model was 91.54%. Table 2 shows the results of the Tucker decomposition and probabilistic 2-mode based methods (K_S = 100 in the Tucker decomposition based model). In the table, the probabilistic 2-mode based method shows improved performance over the Tucker decomposition based method for small amounts of adaptation data, which can be clearly seen in Figure 6 for the Tucker decomposition and probabilistic 2-mode based models with (K_R = 20, K_D = 35). The results of MAPLR [11] are also shown in the figure. The use of the MAP framework contributes to improved performance for small amounts of adaptation data. The number of free parameters of each method is as follows: 20 · 35 for the Tucker 3-mode and probabilistic 2-mode based models, and 39 · 40 for MAPLR adaptation.

In Figure 7, the Tucker decomposition based method is compared with MLLR and eigenvoice adaptation. The figure shows that the Tucker decomposition based method outperforms MLLR and eigenvoice adaptation when more than one adaptation sentence is available; it can also be inferred from the figure that eigenvoice adaptation will outperform the Tucker decomposition based method or MLLR for sparser adaptation data. The p-values from the matched-pair t-test are shown in Table 3; although the values are not always small, the performance improvement of the probabilistic 2-mode based method appears meaningful. Additionally, Figure 8 shows the performance of the probabilistic 2-mode based model with (K_R = 20, K_D = 35), MLLR adaptation with a full regression matrix, and MAPLR adaptation for adaptation data of about 6–240 s; for 10 or more adaptation sentences (about 60 s or more), the probabilistic 2-mode based model shows performance comparable to MLLR and MAPLR adaptation. In Figure 8, the p-values are p < 0.01 for 1–5 adaptation sentences between the probabilistic 2-mode based model and MLLR adaptation, and p < 0.05 for 2–5 adaptation sentences between the probabilistic 2-mode based model and MAPLR adaptation. The number of free parameters of each method is summarized in Table 4.
We think that the performance improvement of the proposed method over MLLR or MAPLR adaptation comes from the use of basis vectors and a speaker weight of large dimension. We also think that the improvement of the probabilistic 2-mode based method in the MAP framework over the Tucker decomposition based method in the ML framework for small amounts of adaptation data (e.g., one adaptation sentence) is due to the constraint on the weight. When the amount of adaptation data is that small, the weight cannot be reliably estimated in the ML framework, where it is estimated from the adaptation data alone without any constraint, as is done in the Tucker decomposition based method. The results confirm that constraining the weight through the MAP framework can produce a better model when the amount of adaptation data is small.
The selection of appropriate dimensions of the model parameters (e.g., K_R and K_D) in the probabilistic 2-mode analysis depends on the training models and also on the available adaptation data. This selection affects the performance of the system, but how to choose the optimal values is not obvious and needs further study.
4 Conclusions
In this article, we applied probabilistic tensor analysis to the adaptation of HMM mean vectors to a new speaker. The training models consisted of the mean vectors of HMMs expressed in matrix form and the training set was decomposed by probabilistic 2-mode analysis. The prior distribution of the adaptation parameter was estimated from the training models. Then, the speaker adaptation equation was derived in the MAP framework. Compared with the speaker adaptation method based on Tucker 3-mode decomposition in the ML framework, the proposed method further improved the performance for small amounts of adaptation data.
Abbreviations
- ALS: Alternating Least Squares
- ASR: Automatic Speech Recognition
- EM: Expectation-Maximization
- HMM: Hidden Markov Model
- HTK: HMM Toolkit
- LVCSR: Large-Vocabulary Continuous-Speech Recognition
- MAP: Maximum A Posteriori
- MAPLR: Maximum A Posteriori Linear Regression
- MFCC: Mel-Frequency Cepstral Coefficient
- ML: Maximum Likelihood
- MLLR: Maximum Likelihood Linear Regression
- PCA: Principal Component Analysis
- PPCA: Probabilistic Principal Component Analysis
- PTA: Probabilistic Tensor Analysis
- SD: Speaker-Dependent
- SI: Speaker-Independent
- SVD: Singular Value Decomposition
References
Rabiner LR: A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77(2):257-286. 10.1109/5.18626
Gales M, Young S: The application of hidden Markov models in speech recognition. Found. Trends Signal, Process 2008, 1(3):195-304.
Kuhn R, Junqua J-C, Nguyen P, Niedzielski N: Rapid speaker adaptation in eigenvoice space. IEEE Trans. Speech Audio Process 2000, 8(6):695-707. 10.1109/89.876308
Kolda TG, Bader BW: Tensor decompositions and applications. SIAM Rev 2009, 51(3):455-500. 10.1137/07070111X
Jeong Y: Speaker adaptation based on the multilinear decomposition of training speaker models. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing. Dallas, TX; 2010:4870-4873.
Leggetter CJ, Woodland PC: Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Comput. Speech Lang 1995, 9(2):171-185. 10.1006/csla.1995.0010
Jeong Y: Acoustic model adaptation based on tensor analysis of training models. IEEE Signal Process. Lett 2011, 18(6):347-350.
Tao D, Song M, Li X, Shen J, Sun J, Wu X, Faloutsos C, Maybank SJ: Bayesian tensor approach for 3-D face modeling. IEEE Trans. Circ. Syst. Video Technol 2008, 18(10):1397-1410.
Tipping ME, Bishop CM: Probabilistic principal component analysis. J. R. Stat. Soc. Ser. B-Stat. Methodol 1999, 61(3):611-622. 10.1111/1467-9868.00196
Gauvain J-L, Lee C-H: Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Trans. Speech Audio Process 1994, 2(2):291-298. 10.1109/89.279278
Chesta C, Siohan O, Lee C-H: Maximum a posteriori linear regression for hidden Markov model adaptation. In Proceedings of EUROSPEECH. Budapest, Hungary; 1999:211-214.
Carroll JD, Chang JJ: Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart-Young” decomposition. Psychometrika 1970, 35(3):283-319. 10.1007/BF02310791
Gupta AK, Nagar DK: Matrix Variate Distributions. Boca Raton, FL: Chapman and Hall/CRC; 1999.
Bader BW, Kolda TG: Algorithm 862: MATLAB tensor classes for fast algorithm prototyping. ACM Trans. Math. Softw 2006, 32(4):635-653. 10.1145/1186785.1186794
Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B-Stat. Methodol 1977, 39(1):1-38.
Gupta AK, Varga T: Elliptically Contoured Models in Statistics. Norwell, MA: Kluwer; 1993.
Siohan O, Chesta C, Lee C-H: Joint maximum a posteriori adaptation of transformation and HMM parameters. IEEE Trans. Speech Audio Process 2001, 9(14):417-428.
Paul DB, Baker JM: The design for the Wall Street Journal-based CSR corpus. In Proceedings of DARPA Speech and Natural Language Workshop. Austin, TX; 1992:357-362.
Young S, Evermann G, Kershaw D, Moore G, Odell J, Ollason D, Povey D, Valtchev V, Woodland P: The HTK Book, Version 3.2. England: Cambridge University Engineering Department; 2002.
Additional information
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Jeong, Y. Speaker adaptation in the maximum a posteriori framework based on the probabilistic 2-mode analysis of training models. J AUDIO SPEECH MUSIC PROC. 2013, 7 (2013). https://doi.org/10.1186/1687-4722-2013-7
DOI: https://doi.org/10.1186/1687-4722-2013-7