
Speaker adaptation based on regularized speaker-dependent eigenphone matrix estimation

Abstract

Eigenphone-based speaker adaptation outperforms conventional maximum likelihood linear regression (MLLR) and eigenvoice methods when there is sufficient adaptation data. However, it suffers from severe over-fitting when only a few seconds of adaptation data are provided. In this paper, various regularization methods are investigated to obtain a more robust speaker-dependent eigenphone matrix estimation. Element-wise l1 norm regularization (known as lasso) encourages the eigenphone matrix to be sparse, which reduces the number of effective free parameters and improves generalization. Squared l2 norm regularization promotes an element-wise shrinkage of the estimated matrix towards zero, thus alleviating over-fitting. Column-wise unsquared l2 norm regularization (known as group lasso) acts like the lasso at the column level, encouraging column sparsity in the eigenphone matrix, i.e., preferring an eigenphone matrix with many zero columns as solution. Each column corresponds to an eigenphone, which is a basis vector of the phone variation subspace. Thus, group lasso tries to prevent the dimensionality of the subspace from growing beyond what is necessary. For nonzero columns, group lasso acts like a squared l2 norm regularization with an adaptive weighting factor at the column level. Two combinations of these methods are also investigated, namely elastic net (applying l1 and squared l2 norms simultaneously) and sparse group lasso (applying l1 and column-wise unsquared l2 norms simultaneously). Furthermore, a simplified method for estimating the eigenphone matrix in case of diagonal covariance matrices is derived, and a unified framework for solving various regularized matrix estimation problems is presented. Experimental results show that these methods improve the adaptation performance substantially, especially when the amount of adaptation data is limited. The best results are obtained when using the sparse group lasso method, which combines the advantages of both the lasso and group lasso methods. Using speaker-adaptive training, performance can be further improved.

1 Introduction

Model space speaker adaptation is an important technique in modern speech recognition systems. The basic idea is that, given some adaptation data, the parameters of a speaker-independent (SI) system are transformed to match the speaking pattern of an unknown speaker, resulting in a speaker-adapted (SA) system. In this paper, we focus on speaker adaptation of a conventional hidden Markov model-Gaussian mixture model (HMM-GMM) based speech recognition system. To deal with the scarcity of adaptation data, parameter sharing schemes are usually adopted. For example, in the eigenvoice method[1], the SA models are assumed to lie in a low-dimensional speaker subspace. The subspace bases are shared among all speakers, and a speaker-dependent coordinate vector is estimated for each unknown speaker. The maximum likelihood linear regression (MLLR) method[2] estimates a set of linear transformations that map an SI model into an SA model. The transformation matrices are shared among different HMM state components.

Recently, a novel phone subspace-based method, the eigenphone-based method, was proposed[3]. In contrast to the eigenvoice method, the phone variations of a speaker are assumed to lie in a low-dimensional subspace, called the phone variation subspace. The coordinates of the whole phone set are shared among different speakers. During speaker adaptation, a speaker-dependent eigenphone matrix representing the main phone variation patterns of a specific speaker is estimated. In[4], the ‘eigenphone’ was first introduced as a set of linear basis vectors of the phone space used in conjunction with eigenvoices. That set of basis vectors is obtained by a Kullback-Leibler divergence minimization algorithm for a closed set of training speakers; estimation of the eigenphones for unknown speakers was not studied. Kenny’s eigenphone method is therefore a multi-speaker modeling technique rather than a speaker adaptation technique in the usual sense. In our method, the speaker-independent phone coordinate matrix is obtained by principal component analysis (PCA), and speaker adaptation is performed by estimating a set of eigenphones for each unknown speaker using the maximum likelihood criterion.

Due to its more elaborate modeling, the eigenphone method outperforms both the eigenvoice and the MLLR methods when sufficient adaptation data are available. However, with limited adaptation data, the estimation suffers from severe over-fitting, resulting in very poor adaptation performance[3]. Even with a finely tuned Gaussian prior, the eigenphone matrix estimated by the maximum a posteriori (MAP) criterion still does not match the performance of the eigenvoice method.

In machine learning, regularization techniques are widely employed to address data scarcity and control model complexity. Recently, regularization has been widely adopted in speech processing and speech recognition applications. For instance, l1 and l2 regularization have been proposed for spectral denoising in speech recognition[5]. In[6], similar regularization methods are adopted to improve the estimation of state-specific parameters in the subspace Gaussian mixture model (SGMM). In[7], l1 regularization is used to reduce the number of nonzero connections of deep neural networks (DNNs) without sacrificing speech recognition performance. In[8], it was found that group sparse regularization can offer significant gains over efficient techniques such as the elastic net (a combination of l1 and l2 regularization) in noise-robust speech recognition.

In this paper, we investigate the regularized estimation of the speaker-dependent eigenphone matrix for speaker adaptation. Three regularization methods and their combinations are applied to improve the robustness of the eigenphone-based method. The l1 norm regularization constrains the sparsity of the matrix, which reduces the number of free parameters of each eigenphone and thus improves the robustness of the adaptation. The squared l2 norm prevents each eigenphone from becoming too large, yielding better generalization of the adapted model. Each column of the eigenphone matrix corresponds to one eigenphone and hence is a basis vector of the phone variation subspace; the number of nonzero columns therefore determines the dimension of the phone variation subspace. The column-wise unsquared l2 norm regularization forces some columns of the matrix to zero, effectively preventing the dimensionality of the phone variation subspace from growing beyond what is necessary. In this paper, all these regularization methods, as well as two combinations of them, namely the elastic net and the sparse group lasso, are presented in a unified framework. Accelerated proximal gradient descent is adopted to solve the resulting optimization problems in a flexible way.

In[9], a speaker-space compressive sensing method is used to perform speaker adaptation with an over-complete speaker dictionary in the case of limited adaptation data. In this paper, we discuss the phone-space speaker adaptation method, which obtains good performance when the adaptation data are sufficient. Various regularization methods are applied to improve performance when the adaptation data are insufficient. Although the speaker-space and phone-space methods can be combined using a hierarchical Bayesian framework[10], we do not pursue that combination in this paper.

In the next section, a brief overview of the eigenphone-based speaker adaptation method is given, a simplified method for row-wise estimation of the eigenphone matrix in the case of diagonal covariance matrices is derived, and comparisons between the eigenphone method and various existing methods are presented. A unified framework for regularized eigenphone estimation is proposed in Section 3, where the various regularization methods and their combinations are discussed in detail. The optimization of the eigenphone matrix using an accelerated incremental proximal gradient descent algorithm is given in Section 4. In Section 5, the different regularization methods are compared through experiments on supervised speaker adaptation of a Mandarin syllable recognition system and unsupervised speaker adaptation of an English large vocabulary speech recognition system using the Wall Street Journal (WSJ; New York, NY, USA) corpus. Finally, conclusions are given in Section 6.

2 Review of the eigenphone-based speaker adaptation method

2.1 Eigenphone-based speaker adaptation

Given a set of speaker-independent HMMs containing a total of $M$ mixture components across all states and models and a $D$-dimensional speech feature vector, let $\boldsymbol{\mu}_m$, $\boldsymbol{\mu}_m(s)$, and $\mathbf{u}_m(s) = \boldsymbol{\mu}_m(s) - \boldsymbol{\mu}_m$ denote the SI mean vector, the SA mean vector, and the phone variation vector for speaker $s$ and mixture component $m$, respectively. In the eigenphone-based speaker adaptation method, the phone variation vectors $\{\mathbf{u}_m(s)\}_{m=1}^{M}$ are assumed to lie in a speaker-dependent $N$-dimensional ($N \ll M$) phone variation subspace. The eigenphone decomposition of the phone variation matrix can be expressed as[3]:

$$
\mathbf{U}(s) = \begin{bmatrix} \mathbf{u}_1(s) & \mathbf{u}_2(s) & \cdots & \mathbf{u}_M(s) \end{bmatrix}
= \begin{bmatrix} \mathbf{v}_0(s) & \mathbf{v}_1(s) & \cdots & \mathbf{v}_N(s) \end{bmatrix}
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
l_{11} & l_{21} & \cdots & l_{M1} \\
l_{12} & l_{22} & \cdots & l_{M2} \\
\vdots & \vdots & \ddots & \vdots \\
l_{1N} & l_{2N} & \cdots & l_{MN}
\end{bmatrix}
= \mathbf{V}(s) \cdot \mathbf{L},
$$
(1)

where $\mathbf{v}_0(s)$ and $\{\mathbf{v}_n(s)\}_{n=1}^{N}$ denote the origin and the bases of the phone variation subspace of speaker $s$, respectively, and $[l_{m1}, l_{m2}, \ldots, l_{mN}]^T$ is the corresponding coordinate vector of mixture component $m$. We call $\{\mathbf{v}_n(s)\}_{n=0}^{N}$ the eigenphones of speaker $s$.

Equation 1 can be viewed as the decomposition of the phone variation matrix $\mathbf{U}(s)$ into the product of two low-rank matrices, $\mathbf{V}(s)$ and $\mathbf{L}$. Note that the phone coordinate matrix $\mathbf{L}$ is shared among all speakers, whereas the eigenphone matrix $\mathbf{V}(s)$ is speaker dependent. Given $\mathbf{L}$, speaker adaptation can be performed by estimating $\mathbf{V}(s')$ for each unknown speaker $s'$ during adaptation.

Suppose there are $S$ speakers in the training set. Stacking the phone variation matrices of all training speakers $\{\mathbf{U}(s)\}_{s=1}^{S}$ so that corresponding columns are concatenated, we obtain

$$
\mathbf{U} = \begin{bmatrix} \mathbf{U}(1) \\ \mathbf{U}(2) \\ \vdots \\ \mathbf{U}(S) \end{bmatrix}
= \begin{bmatrix}
\mathbf{u}_1(1) & \mathbf{u}_2(1) & \cdots & \mathbf{u}_M(1) \\
\mathbf{u}_1(2) & \mathbf{u}_2(2) & \cdots & \mathbf{u}_M(2) \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{u}_1(S) & \mathbf{u}_2(S) & \cdots & \mathbf{u}_M(S)
\end{bmatrix}
= \begin{bmatrix}
\mathbf{v}_0(1) & \mathbf{v}_1(1) & \cdots & \mathbf{v}_N(1) \\
\mathbf{v}_0(2) & \mathbf{v}_1(2) & \cdots & \mathbf{v}_N(2) \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{v}_0(S) & \mathbf{v}_1(S) & \cdots & \mathbf{v}_N(S)
\end{bmatrix}
\times
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
l_{11} & l_{21} & \cdots & l_{M1} \\
l_{12} & l_{22} & \cdots & l_{M2} \\
\vdots & \vdots & \ddots & \vdots \\
l_{1N} & l_{2N} & \cdots & l_{MN}
\end{bmatrix}
= \mathbf{V} \cdot \mathbf{L}.
$$
(2)

Note that the $n$th column of $\mathbf{V}$, which is the concatenation of the $n$th eigenphones of all speakers $\{\mathbf{v}_n(s)\}_{s=1}^{S}$, can be viewed as a basis vector of the column space of matrix $\mathbf{U}$. The $m$th column of the phone coordinate matrix $\mathbf{L}$ is the coordinate vector of mixture component $m$. Hence, $\mathbf{L}$ implicitly captures the correlation between different Gaussian components, which is speaker independent. From Equation 2, it can be observed that $\mathbf{L}$ can be calculated by performing PCA on the columns of matrix $\mathbf{U}$.
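As an illustration of this training-time step, the following is a minimal NumPy sketch of how L could be computed by PCA over the columns of U. The function name, array shapes, and the use of an SVD are illustrative assumptions, not the authors' implementation; in practice, U would be assembled from the phone variation matrices of the training speakers as described above.

```python
import numpy as np

def estimate_phone_coordinates(U, N):
    """Estimate the speaker-independent phone coordinate matrix L by PCA.

    U : (S*D, M) array whose m-th column stacks the phone variation
        vectors u_m(1), ..., u_m(S) of all S training speakers.
    N : dimensionality of the phone variation subspace.

    Returns L of shape (N+1, M); the first row is all ones and pairs with
    the subspace origin v_0 in the eigenphone matrix.
    """
    mean = U.mean(axis=1, keepdims=True)      # column mean: the stacked origins v_0(s)
    Uc = U - mean
    # Top-N principal directions of the columns of U (left singular vectors).
    left, _, _ = np.linalg.svd(Uc, full_matrices=False)
    basis = left[:, :N]
    coords = basis.T @ Uc                     # (N, M): coordinates l_mn of each component
    return np.vstack([np.ones((1, U.shape[1])), coords])
```

Only L is kept for adaptation; the stacked training-speaker eigenphones (the columns of `basis`) are discarded, since a new V(s) is estimated for each unknown speaker.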

During speaker adaptation, given some adaptation data, the eigenphone matrix $\mathbf{V}(s)$ is estimated using the maximum likelihood criterion. Let $\mathbf{O}(s) = \{\mathbf{o}(s,1), \mathbf{o}(s,2), \ldots, \mathbf{o}(s,T)\}$ denote the sequence of feature vectors of the adaptation data for speaker $s$. Using the expectation maximization (EM) algorithm, the auxiliary function to be minimized is given as follows:

$$
Q(\mathbf{V}(s)) = \frac{1}{2} \sum_{t} \sum_{m} \gamma_m(t) \left[\mathbf{o}(s,t) - \boldsymbol{\mu}_m(s)\right]^T \boldsymbol{\Sigma}_m^{-1} \left[\mathbf{o}(s,t) - \boldsymbol{\mu}_m(s)\right],
$$
(3)

where $\boldsymbol{\mu}_m(s) = \boldsymbol{\mu}_m + \mathbf{v}_0(s) + \sum_{n=1}^{N} l_{mn} \mathbf{v}_n(s)$, and $\gamma_m(t)$ is the posterior probability of being in mixture $m$ at time $t$ given the observation sequence $\mathbf{O}(s)$ and the current estimate of the SA model.
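Once V(s) has been estimated (as derived below), applying the adaptation amounts to rebuilding the component means from V(s) and the shared L. A short NumPy sketch follows; the shapes and the helper name are illustrative assumptions.

```python
import numpy as np

def adapt_means(mu, V, L_hat):
    """Speaker-adapted means mu_m(s) = mu_m + v_0(s) + sum_n l_mn v_n(s).

    mu    : (M, D) SI mean vectors
    V     : (D, N+1) eigenphone matrix [v_0(s), v_1(s), ..., v_N(s)]
    L_hat : (M, N+1) rows l_hat_m = [1, l_m1, ..., l_mN]
    """
    return mu + L_hat @ V.T   # row m adds V(s) @ l_hat_m to the SI mean
```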

Suppose the covariance matrix $\boldsymbol{\Sigma}_m$ is diagonal. Let $\sigma_{m,d}$ denote its $d$th diagonal element, and let $o_d(s,t)$, $\mu_{m,d}$, and $v_{n,d}(s)$ represent the $d$th components of $\mathbf{o}(s,t)$, $\boldsymbol{\mu}_m$, and $\mathbf{v}_n(s)$, respectively. After some mathematical manipulation, Equation 3 can be simplified to

$$
Q(\mathbf{V}(s)) = \frac{1}{2} \sum_{d} \sum_{t} \sum_{m} \gamma_m(t)\, \sigma_{m,d}^{-1} \left[ o'_{m,d}(s,t) - \hat{\mathbf{l}}_m^T \boldsymbol{\nu}_d(s) \right]^2,
$$
(4)

where $o'_{m,d}(s,t) = o_d(s,t) - \mu_{m,d}$, $\hat{\mathbf{l}}_m = [1, l_{m1}, l_{m2}, \ldots, l_{mN}]^T$, and $\boldsymbol{\nu}_d(s) = [v_{0,d}(s), v_{1,d}(s), v_{2,d}(s), \ldots, v_{N,d}(s)]^T$, which is the $d$th row of the eigenphone matrix $\mathbf{V}(s)$ written as a column vector.

Define

$$
\mathbf{A}_d = \sum_{t} \sum_{m} \gamma_m(t)\, \sigma_{m,d}^{-1}\, \hat{\mathbf{l}}_m \hat{\mathbf{l}}_m^T
$$

and

$$
\mathbf{b}_d = \sum_{t} \sum_{m} \gamma_m(t)\, \sigma_{m,d}^{-1}\, o'_{m,d}(s,t)\, \hat{\mathbf{l}}_m .
$$

Equation 4 can be further simplified to

$$
Q(\mathbf{V}(s)) = \sum_{d} \left[ \frac{1}{2}\, \boldsymbol{\nu}_d(s)^T \mathbf{A}_d\, \boldsymbol{\nu}_d(s) - \mathbf{b}_d^T \boldsymbol{\nu}_d(s) \right] + \text{Const}.
$$
(5)

Setting the derivative of (5) with respect to $\boldsymbol{\nu}_d(s)$ to zero yields $\hat{\boldsymbol{\nu}}_d(s) = \mathbf{A}_d^{-1} \mathbf{b}_d$. Because the feature dimensions are independent, $\{\hat{\boldsymbol{\nu}}_d(s)\}_{d=1}^{D}$ can be calculated in parallel very efficiently.
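A minimal NumPy sketch of this row-wise closed-form solution is given below. The array shapes and names are assumptions made for illustration; a practical implementation would accumulate A_d and b_d from the E-step statistics instead of materializing full posterior matrices.

```python
import numpy as np

def estimate_eigenphones_ml(obs, gammas, mu, var, L_hat):
    """Row-wise ML estimate of the eigenphone matrix V (Equations 4 and 5).

    obs    : (T, D) adaptation feature vectors o(s, t)
    gammas : (T, M) posteriors gamma_m(t)
    mu     : (M, D) SI mean vectors mu_m
    var    : (M, D) diagonal covariances sigma_{m,d}
    L_hat  : (M, N+1) rows are l_hat_m = [1, l_m1, ..., l_mN]

    Returns V of shape (D, N+1); its d-th row is nu_d = A_d^{-1} b_d.
    """
    D = obs.shape[1]
    V = np.zeros((D, L_hat.shape[1]))
    for d in range(D):
        w = gammas / var[:, d]                         # gamma_m(t) / sigma_{m,d}, shape (T, M)
        A_d = (L_hat.T * w.sum(axis=0)) @ L_hat        # sum_{t,m} w * l_hat_m l_hat_m^T
        o_prime = obs[:, d:d + 1] - mu[:, d][None, :]  # o'_{m,d}(s,t) = o_d(s,t) - mu_{m,d}
        b_d = L_hat.T @ (w * o_prime).sum(axis=0)      # sum_{t,m} w * o'_{m,d} * l_hat_m
        V[d] = np.linalg.solve(A_d, b_d)               # nu_d = A_d^{-1} b_d
    return V
```

Because each feature dimension d is handled independently, the loop over d can be parallelized directly, as noted above.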

It is well known that many conventional speaker adaptation methods, such as MLLR and the eigenvoice method, work substantially better in combination with speaker-adaptive training (SAT)[11]. The above eigenphone-based speaker adaptation method can also be combined with SAT. Initially, an SI model ΛSI is trained using all the training data. Then, the speaker-adapted model Λs for each training speaker s is obtained using conventional speaker adaptation methods such as MLLR + MAP. The phone coordinate matrix L is calculated using PCA on the columns of the training speaker phone variation matrix U (Equation 2). Let Λc denote the canonical model; then, the eigenphone-based SAT procedure can be summarized as follows:

  1. Initialize Λc with ΛSI.

  2. Given Λc and L, estimate the eigenphone matrices V(s) for each training speaker s using the corresponding speaker-dependent training data.

  3. Given the eigenphone matrices {V(s)} of all training speakers and L, re-estimate the canonical model Λc using all training data.

  4. Repeat steps 2 and 3 for a predefined number of iterations K.

Note that in step 3, the first-order statistic s m and second-order statistic S m of Gaussian component m are calculated as

$$
\mathbf{s}_m = \sum_{s} \sum_{t} \gamma_m(t)\, \mathbf{o}_m(s,t)
$$
(6)
$$
\mathbf{S}_m = \sum_{s} \sum_{t} \gamma_m(t)\, \mathbf{o}_m(s,t)\, \mathbf{o}_m(s,t)^T,
$$
(7)

where $\mathbf{o}_m(s,t) = \mathbf{o}(s,t) - \mathbf{V}(s)\,\hat{\mathbf{l}}_m$.
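For illustration, the accumulation of the statistics in Equations 6 and 7 could look like the following NumPy sketch. The shapes and names are assumptions, and only the diagonal of S_m is accumulated since diagonal covariance matrices are assumed throughout.

```python
import numpy as np

def accumulate_sat_stats(speaker_batches, M, D):
    """Accumulate the statistics of Equations 6 and 7 for the canonical model update.

    speaker_batches : iterable of (obs, gammas, V, L_hat) tuples, one per training speaker:
        obs    : (T, D) features of speaker s
        gammas : (T, M) posteriors gamma_m(t)
        V      : (D, N+1) eigenphone matrix of speaker s
        L_hat  : (M, N+1) shared phone coordinates (rows l_hat_m)

    Returns occupancies (M,), first-order stats s_m (M, D), and the diagonal of the
    second-order stats S_m (M, D), computed on o_m(s,t) = o(s,t) - V(s) l_hat_m.
    """
    occ, s1, s2 = np.zeros(M), np.zeros((M, D)), np.zeros((M, D))
    for obs, gammas, V, L_hat in speaker_batches:
        offsets = L_hat @ V.T                      # (M, D): row m is (V(s) l_hat_m)^T
        occ += gammas.sum(axis=0)
        for m in range(M):
            o_norm = obs - offsets[m]              # o_m(s,t) for all frames t
            g = gammas[:, m:m + 1]
            s1[m] += (g * o_norm).sum(axis=0)
            s2[m] += (g * o_norm ** 2).sum(axis=0)
    return occ, s1, s2
```

The canonical means, for instance, would then follow as `s1 / occ[:, None]`, with the usual care taken for low-occupancy components.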

2.2 Comparison with existing methods

Various adaptation methods have been proposed over the past two decades; they can be classified into three broad categories: MAP[12], MLLR[13], and speaker subspace-based methods[1]. In conventional MAP adaptation, the SA model parameters are estimated under the maximum a posteriori criterion with a conjugate prior distribution. The main advantage of MAP adaptation is its good asymptotic property: the MAP estimate approaches the maximum likelihood (ML) estimate when the adaptation data are sufficient. However, it is a local update of the model parameters, in which only parameters observed in the adaptation data can be modified from their prior means. The number of free parameters in MAP adaptation is fixed at M·D, so large amounts of adaptation data are required to obtain good performance. The chance of over-fitting is controlled by the prior weight: the larger the prior weight, the lower the chance of over-fitting.

Instead of estimating the SD model directly, the MLLR method estimates a set of linear transformations to transform an SI model into a new SA model. Using a regression class tree, the Gaussian components are grouped into classes, with each class having its own transformation matrix. The number of regression classes (denoted by R) can be adjusted automatically according to the amount of adaptation data. There are R·D² free parameters in MLLR, far fewer than in the MAP method, so MLLR has lower data requirements. However, its asymptotic behavior is poor, as the performance improvement saturates rapidly as the adaptation data increases. The chance of over-fitting is closely related to the number of regression classes R used. Moreover, the number of free parameters can only be an integer multiple of D², which restricts its flexibility.

Unlike MAP and MLLR, speaker subspace-based approaches treat the speaker adaptation problem differently. They assume that all SD models lie on a low-dimensional manifold, so that speaker adaptation amounts to estimating the local or global coordinates of the new SD model. A representative of these methods is the eigenvoice method[1], where the low-dimensional manifold is a linear subspace and a set of linear bases (called eigenvoices), which capture most of the variance of the SD model parameters, is obtained by principal component analysis. During speaker adaptation, the coordinate vector of a new SD model is estimated using the maximum likelihood criterion. The number of free parameters in the eigenvoice method equals the dimension (K) of the speaker subspace, which is much smaller than in the MAP and MLLR methods, so the eigenvoice method can yield good performance even when only a few seconds of adaptation data are provided. The chance of over-fitting is related to K, which can be adjusted according to the amount of adaptation data using a heuristic formula or a regularization method[9]. However, due to the strong subspace constraint, its performance is poor compared with that of the MLLR or MAP method when there is a sufficient amount of adaptation data.

In the eigenphone method, a phone variation subspace is assumed. Each speaker-dependent eigenphone matrix V(s) is of size (N + 1) × D, containing more free parameters than the eigenvoice method. By adjusting the dimensionality (N) of the phone variation subspace, the number of free parameters can be varied in integer multiples of D, so the eigenphone method is more flexible than the MLLR method. When a sufficient amount of adaptation data is available, better performance can be obtained with a large N (typically N=100). However, when the amount of adaptation data is limited, performance degrades quickly; the recognition rate can even fall below that of the unadapted SI model. To alleviate this over-fitting problem, a Gaussian prior is assumed and an MAP adaptation method is derived in[3]. In this paper, we address the problem using an explicit matrix regularization function.

The advantages of the MAP method, the eigenvoice method, and the eigenphone method can be combined using a probabilistic formulation and the Bayesian principle, resulting in a hierarchical Bayesian adaptation method[10]. This paper focuses on using various matrix regularization methods to improve the performance of the eigenphone method in the case of insufficient adaptation data. In the following sections, we omit the speaker identifier s for brevity, i.e., we write V for V(s) and v_n for v_n(s).

3 Regularized eigenphone matrix estimation

The core of the eigenphone adaptation method is the robust estimation of the eigenphone matrix V. This type of problem, the estimation of an unknown matrix from limited observation data, appears in many diverse fields, and regularization has proved to be an effective way to overcome data scarcity. For robust eigenphone matrix estimation, the regularized objective function to be minimized is as follows:

$$
Q'(\mathbf{V}) = Q(\mathbf{V}) + J(\mathbf{V}),
$$
(8)

where J(V) denotes a regularization function (known as regularizer) for V.

In this paper, we consider the following general regularization function:

$$
J(\mathbf{V}) = \lambda_1 \|\mathbf{V}\|_1 + \lambda_2 \|\mathbf{V}\|_2^2 + \lambda_3 \sum_{n=0}^{N} \|\mathbf{v}_n\|_2 ,
$$
(9)

where $\|\mathbf{V}\|_1 = \sum_{n=0}^{N} \|\mathbf{v}_n\|_1$ and $\|\mathbf{V}\|_2^2 = \sum_{n=0}^{N} \|\mathbf{v}_n\|_2^2$ denote the l1 norm and the squared l2 (Frobenius) norm of the matrix V, and $\|\mathbf{v}_n\|_1$ and $\|\mathbf{v}_n\|_2$ denote the l1 and l2 norms of the column vector $\mathbf{v}_n$. λ1, λ2, and λ3 are nonnegative weighting factors for the matrix l1 norm, the squared l2 norm, and the column-wise unsquared l2 norm, respectively.
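For concreteness, the mixed-norm regularizer of Equation 9 amounts to a few lines of NumPy; this is a sketch with assumed shapes, where the columns of V are the eigenphones v_n.

```python
import numpy as np

def regularizer(V, lam1, lam2, lam3):
    """Mixed-norm regularizer J(V) of Equation 9.

    V : (D, N+1) eigenphone matrix whose columns are the eigenphones v_n.
    """
    l1 = np.abs(V).sum()                           # ||V||_1
    l2_sq = (V ** 2).sum()                         # ||V||_2^2
    group = np.sqrt((V ** 2).sum(axis=0)).sum()    # sum_n ||v_n||_2
    return lam1 * l1 + lam2 * l2_sq + lam3 * group
```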

Different norms have different regularization effects. Equation 9 is a mixed-norm regularizer, with many well-known regularizers as special cases. This general form has the advantage that the various regularization problems can be solved in a unified framework using a single algorithm.

The l1 norm is the standard convex relaxation of the l0 norm. The l1 norm regularizer (J(V) with λ1>0 and λ2=λ3=0) is known as the lasso[14]; it drives an element-wise shrinkage of V towards zero, leading to a sparse matrix solution. l1 norm regularization has been widely used as an effective parameter selection method in compressive sensing, signal recovery, etc.

The squared l2 norm regularizer (J(V) with λ2>0 and λ1=λ3=0) is referred to as ridge regression[15] or weight decay in the literature. It penalizes large parameter values, enabling more robust estimation and preventing model over-fitting.

The column-wise l2 norm regularizer (J(V) with λ3>0 and λ1=λ2=0) is a variant of the group lasso[16], which acts like the lasso at the group level: due to the nondifferentiability of ||v_n||_2 at 0, an entire group of parameters may be set to zero at the same time[16]. Here, a ‘group’ corresponds to one column of the matrix V, and the group lasso is a good surrogate for column sparsity. Previous experiments on eigenphone-based speaker adaptation have shown that when the amount of adaptation data is sufficient, the number of eigenphones should be large; when less adaptation data is available, fewer eigenphones should be used, i.e., many eigenphones should be zero. In this situation, the optimal eigenphone matrix should exhibit ‘group sparsity’, where a group corresponds to an eigenphone vector. Hence, the group lasso is a good choice for eigenphone matrix regularization. Each eigenphone is of dimension D, and there are N eigenphones in the N-dimensional phone variation subspace. If we combine all eigenphones to form a dictionary, the dictionary is over-complete when N>D. Learning such an over-complete dictionary requires a large amount of adaptation data. The group lasso regularizer removes unnecessary eigenphones from the dictionary according to the amount of adaptation data available; when insufficient data are provided, the resulting dictionary may not even be complete, due to the effect of the nondifferentiable column-wise l2 norm penalty.

Each type of norm regularizer has its own strengths, and their combination through the generic form J(V) (Equation 9) is expected to yield better performance. Two typical combinations are the elastic net[17] and the sparse group lasso (SGL)[18].

The elastic net regularizer combines l1 and l2 norm regularization through linear combination and can be written as J(V) with λ1>0, λ2>0, and λ3=0. It has been successfully applied to many fields of speech processing and recognition, such as spectral denoising[5], sparse exemplar-based representation for speech[19], and robust estimation of parameters for the SGMM[6] and DNN[7].

The combination of the group lasso and the original lasso is referred to as SGL[18], which corresponds to J(V) with λ1>0, λ3>0, and λ2=0. The basic idea is as follows: the group lasso selects the best set of parameters through the nonzero columns of matrix V, and the lasso regularization further reduces the number of free parameters within each column, resulting in a matrix that is both column-wise sparse and sparse within columns. The sparse group lasso looks very similar to the elastic net regularizer but differs in that the column-wise l2 norm term is not squared, which makes it nondifferentiable at 0. However, we will show in Section 4 that within each nonzero group (i.e., eigenphone) it gives an ‘adaptive’ elastic net fit.

4 Optimization

There is no closed-form solution to the regularized objective function (8). Many numerical methods have been proposed in the literature to solve such regularization problems. For example, a gradient projection method was proposed in[20] to solve the lasso and elastic net problems for sparse reconstruction. The software tool SLEP[21] implements the sparse group lasso formulation using a version of the fast iterative shrinkage-thresholding algorithm (FISTA)[22]. Recently, a more efficient algorithm using an accelerated generalized gradient descent method has been proposed[18]. In this paper, for robust eigenphone matrix estimation with the regularization function J(V), we propose an accelerated version of the incremental proximal descent algorithm[23, 24], which is fast and flexible and can be viewed as a natural extension of the incremental gradient algorithm[25] and the FISTA algorithm[22].

For a convex regularizer $R(\mathbf{V})$, $\mathbf{V} \in \mathbb{R}^{N \times D}$, the proximal operator[26] is defined as

$$
\operatorname{prox}_R(\mathbf{V}) = \underset{\mathbf{X}}{\operatorname{argmin}}\ \frac{1}{2}\|\mathbf{X} - \mathbf{V}\|_2^2 + R(\mathbf{X}).
$$
(10)

The proximal operator for the l1 norm regularizer is the soft thresholding operator

$$
\operatorname{prox}_{\gamma\|\cdot\|_1}(\mathbf{V}) = \operatorname{sgn}(\mathbf{V}) \odot \left(|\mathbf{V}| - \gamma\right)_{+} ,
$$
(11)

where $\odot$ denotes the Hadamard (element-wise) product of two matrices and $(x)_+ = \max\{x, 0\}$. The sign function (sgn), product, and maximum are all taken element-wise.

The proximal operator for the squared l2 norm regularizer is the multiplicative shrinkage operator

$$
\operatorname{prox}_{\gamma\|\cdot\|_2^2}(\mathbf{V}) = \frac{1}{1 + 2\gamma}\,\mathbf{V}.
$$
(12)

For the column-wise group sparse regularizer, the proximal operator $\operatorname{prox}_{\gamma\|\cdot\|_2}$ is given by the following shrinkage operation on each column $\mathbf{v}_n$ of the parameter matrix[26]:

$$
\operatorname{prox}_{\gamma\|\cdot\|_2}(\mathbf{v}_n) = \left(1 - \frac{\gamma}{\|\mathbf{v}_n\|_2}\right)_{+} \mathbf{v}_n .
$$
(13)

The proximal operator (13) is sometimes called the block soft thresholding operator. In fact, when $\|\mathbf{v}_n\|_2 > \gamma$, the resulting $n$th column is nonzero and can be written as

$$
\operatorname{prox}_{\gamma\|\cdot\|_2}(\mathbf{v}_n) = \frac{1}{1 + \dfrac{\gamma}{\|\mathbf{v}_n\|_2 - \gamma}}\ \mathbf{v}_n .
$$
(14)

Comparing (14) with (12), it can be seen that for nonzero columns, the group lasso is equivalent to squared l2 norm regularization with a weighting factor of $\frac{\gamma}{2(\|\mathbf{v}_n\|_2 - \gamma)}$. The larger $\|\mathbf{v}_n\|_2$, the smaller the weighting factor of the equivalent squared l2 norm. So within each nonzero column, the weighting factor of the equivalent l2 norm is adaptive.
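All three proximal operators above have closed forms that can be evaluated element-wise or column-wise. A minimal NumPy sketch follows; the small constant guarding the division is an implementation assumption, not part of Equation 13.

```python
import numpy as np

def prox_l1(V, gamma):
    """Soft thresholding (Equation 11): element-wise shrinkage towards zero."""
    return np.sign(V) * np.maximum(np.abs(V) - gamma, 0.0)

def prox_l2_squared(V, gamma):
    """Multiplicative shrinkage (Equation 12)."""
    return V / (1.0 + 2.0 * gamma)

def prox_group(V, gamma):
    """Block soft thresholding (Equation 13), applied to each column v_n."""
    norms = np.sqrt((V ** 2).sum(axis=0))                         # ||v_n||_2 per column
    scale = np.maximum(1.0 - gamma / np.maximum(norms, 1e-12), 0.0)
    return V * scale                                              # columns with ||v_n||_2 <= gamma become zero
```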

In fact, the proximal operator of a convex function is a natural extension of the projection operator onto a convex set. The incremental proximal descent algorithm[24] can thus be viewed as a natural extension of the iterated projection algorithm, which activates each convex constraint set individually by means of its projection operator. In this paper, an accelerated version of the incremental proximal descent algorithm is introduced for the estimation of the eigenphone matrix V; it is summarized in Algorithm 1.

Algorithm 1 Accelerated Incremental Proximal Descent Algorithm for Regularized Eigenphone Matrix Estimation

In Algorithm 1, $\nabla Q(\mathbf{V})$ is the gradient of (5), which can be computed row-wise from $\nabla Q(\boldsymbol{\nu}_d) = \mathbf{A}_d \boldsymbol{\nu}_d - \mathbf{b}_d$. Step 6 is the normal gradient descent step on the original objective function Q(V). In steps 7, 8, and 9, the proximal operators of the element-wise l1 norm, the squared l2 norm, and the column-wise group sparse regularizer are applied in sequence. The initial descent step size η(0) is simply set to 1.0. In steps 10 to 14, we calculate the change of the regularized objective function (8), ΔQ(k+1), and reduce the current step size η(k) by a factor of θ (0<θ<1; θ=0.8 in our experiments) until ΔQ(k+1) falls below zero.

To accelerate convergence, a momentum term[27] is included in step 4. For fastest convergence, t(k) should increase as fast as possible. In step 15, t(k) is updated using the formula proposed in[22]. Note that when $k=0$, $\frac{t^{(k-1)}-1}{t^{(k)}} = 0$; as $k \to \infty$, $\frac{t^{(k-1)}-1}{t^{(k)}} \to 1$. This gives the nice property that as V approaches its optimal value, the momentum coefficient increases towards 1, which prevents unnecessary oscillations during the iterations and thus improves convergence speed. The whole procedure is iterated until the relative change of (8) is smaller than a predefined threshold ε=10⁻⁵ (step 17). In our experiments, the typical number of outer iterations is around 200. After suitable step sizes η(k) have been found in the first few iterations (typically k<10), the step size remains almost unchanged for the subsequent outer iterations; once k>10, the average number of inner iterations is close to one. Each iteration involves only a few element-wise matrix addition, multiplication, and thresholding operations, together with one evaluation of the objective function Q(k). Using any modern linear algebra software package, an efficient implementation of Algorithm 1 can be obtained.

Algorithm 1 is also very flexible. If λ2=0 and λ3=0, steps 8 and 9 can be omitted, resulting in an accelerated version of the iterative shrinkage-thresholding (IST) algorithm[28] for solving the lasso problem. If only one of steps 7, 8, and 9 is retained, it reduces to FISTA[22] for solving the lasso, ridge regression, and group lasso problems, respectively. If λ3=0 or λ2=0, the algorithm becomes the accelerated generalized gradient descent method for solving the elastic net and sparse group lasso problems[18], respectively.
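As an illustration of the procedure described above (the listing of Algorithm 1 itself is not reproduced in this text), the following NumPy sketch walks through the loop: a momentum step, a gradient step on Q, the three proximal maps applied in sequence, backtracking on the step size, and the FISTA-style update of t(k). It reuses the regularizer and prox_* helpers from the earlier sketches; the variable names, the stopping logic, and the step-size safeguard are assumptions based on the description, not the authors' code.

```python
import numpy as np

def solve_regularized(Q, grad_Q, V0, lam1, lam2, lam3,
                      theta=0.8, eps=1e-5, max_iter=500):
    """Sketch of the accelerated incremental proximal descent loop (Algorithm 1).

    Q, grad_Q : callables evaluating the data term (5) and its gradient
    V0        : initial eigenphone matrix, e.g. np.zeros((D, N + 1))
    """
    V_prev = V = V0.copy()
    t_prev = t = 1.0
    eta = 1.0                                          # initial step size eta^(0)
    obj_prev = Q(V) + regularizer(V, lam1, lam2, lam3)
    for _ in range(max_iter):
        # Momentum (extrapolation) step.
        Y = V + ((t_prev - 1.0) / t) * (V - V_prev)
        while True:
            W = Y - eta * grad_Q(Y)                    # gradient step on Q
            W = prox_l1(W, eta * lam1)                 # element-wise l1 prox
            W = prox_l2_squared(W, eta * lam2)         # squared l2 prox
            W = prox_group(W, eta * lam3)              # column-wise group prox
            obj = Q(W) + regularizer(W, lam1, lam2, lam3)
            if obj < obj_prev or eta < 1e-12:          # accept once (8) decreases
                break
            eta *= theta                               # otherwise shrink the step size
        V_prev, V = V, W
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        if abs(obj_prev - obj) <= eps * max(abs(obj_prev), 1e-12):
            break                                      # relative change below threshold
        obj_prev = obj
    return V
```

Dropping the unused proximal steps recovers the special cases listed above (IST/FISTA, elastic net, sparse group lasso) without changing the rest of the loop.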

5 Experiments

This section presents an experimental study evaluating the performance of the various regularized eigenphone speaker adaptation methods on a Mandarin Chinese continuous speech recognition task provided by Microsoft[29] (Redmond, WA, USA) and on the WSJ English large vocabulary continuous speech recognition task. Supervised and unsupervised speaker adaptation with varying amounts of adaptation data were evaluated. For both tasks, we compare the proposed methods with various conventional methods. In all methods, only the Gaussian means are updated. When comparing the results of two methods, statistical significance tests were performed using the suite of significance tests implemented by NIST[30]. Three significance tests were applied: the matched pair (MP) sentence segment test (word error), the signed paired (SI) comparison test (speaker word accuracy rate), and the Wilcoxon (WI) signed rank test (speaker word accuracy rate). We use these to define significant improvement at a 5% level of significance. All experiments were based on the standard HTK (v 3.4.1) tool set[31]. Detailed experimental setups and results are presented below for each task.

5.1 Experiments on the Mandarin Chinese task

5.1.1 Experimental setup

Supervised speaker adaptation experiments were performed on the Mandarin Chinese continuous speech recognition task provided by Microsoft[29]. The training set contains 19,688 sentences from 100 speakers with a total of 454,315 syllables (about 33 h in total). The testing set consists of 25 speakers, each contributing 20 sentences (the average sentence length is 5 s). The frame length and frame shift were set to 25 and 10 ms, respectively. Acoustic features were constructed from 13-dimensional Mel-frequency cepstral coefficients (MFCCs) and their first and second derivatives. The basic units for acoustic modeling are the 27 initial and 157 tonal final units of Mandarin Chinese, as described in[29]. Monophone models were first created using all 19,688 sentences. Then, all possible cross-syllable triphone expansions based on the full syllable dictionary were generated, resulting in 295,180 triphones, of which 95,534 actually occur in the training corpus. Each triphone was modeled by a 3-state left-to-right HMM without skips. After decision tree-based state clustering, the number of unique tied states was reduced to 2,392. We then used HTK's Gaussian splitting capability to incrementally increase the number of Gaussians per state to 8, resulting in 19,136 distinct Gaussian components in the SI model.

Standard regression class tree-based MLLR was used to obtain the 100 training speakers' SA models. HVite was used as the decoder with a fully connected syllable recognition network. All 1,679 tonal syllables are listed in the network, with any syllable allowed to follow any other syllable, a short pause, or silence. This recognition framework places the highest demands on the quality of the acoustic models. We drew 1, 2, 4, 6, 8, and 10 sentences randomly from each testing speaker for adaptation in supervised mode; the tonal syllable recognition rate was measured on the remaining 10 sentences. To ensure statistical robustness of the results, each experiment was repeated eight times using cross-validation, and the recognition rates were averaged. The recognition accuracy of the SI model is 53.04% (the baseline reference result reported in[29] is 51.21%).

For the purpose of comparison, we carried out experiments using conventional MLLR + MAP[32], eigenvoices[1], and the ML and MAP eigenphone methods[3] with varying parameter settings. The MAP eigenphone method is equivalent to the squared l2 norm regularized eigenphone method. Other regularization methods, namely the lasso, the elastic net, the group lasso and the sparse group lasso, were tested with a wide range of weighting factors. Experimental results are presented and compared in the following sections.

5.1.2 Speaker adaptation based on conventional methods

For MLLR + MAP adaptation, we experimented with different parameter settings. The best result was obtained with a prior weighting factor of 10 (for MAP) and a regression class tree with 32 base classes and three-block-diagonal transformation matrices (for MLLR). The number of transformation matrices is adjusted automatically based on the amount of adaptation data using the default setting of HTK. For eigenvoice adaptation, the dimension K of the speaker subspace was varied from 10 to 100. For the eigenphone-based method, both the ML and MAP estimation schemes were tested. For the MAP eigenphone method, σ⁻² denotes the inverse prior variance of the eigenphones; in fact, MAP estimation with a zero-mean Gaussian prior is equivalent to squared l2 norm regularized estimation with λ2=σ⁻².

The experimental results of the above methods are summarized in Table 1. Significance tests show that when the amount of adaptation data is sufficient (≥4 sentences) and the number (N) of eigenphones is 50, the ML eigenphone method outperforms the MAP + MLLR method significantly. However, when the adaptation data are limited to 1 or 2 sentences (about 5 to 10 s), the performance degrades quickly due to over-fitting. The situation is worse when a high-dimensional phone variation subspace (i.e., N=100) is used. Reducing the number of eigenphones improves the recognition rate; however, even with N=10, the performance is still worse than that of the SI model when only one adaptation sentence is available. MAP estimation using a Gaussian prior can alleviate over-fitting to some extent. To prevent performance degradation, a very small prior variance (i.e., a large weighting factor of the squared l2 norm regularizer) is required, which heavily limits the performance when a sufficient amount of adaptation data is available. This suggests that l2 regularization can only improve the performance in the case of a limited amount of adaptation data (no more than two sentences, about 10 s). In order to demonstrate the performance of the various regularization methods, the subsequent experiments all employ a large number of eigenphones (N=100).

Table 1 Average tonal syllable recognition rate (%) after speaker adaptation using conventional methods

5.1.3 Eigenphone speaker adaptation using lasso

The lasso regularizer (J(V) with λ1>0, λ2=λ3=0) leads to a sparse eigenphone matrix. To measure the sparseness of a matrix, we calculate its ‘overall sparsity’, defined as the percentage of nonzero elements in the matrix. The weighting factor (λ1) of the l1 norm is varied between 10 and 40. The experimental results are summarized in Table 2. For each experimental setting, the average overall sparsity of the eigenphone matrix over all testing speakers is shown in parentheses.

Table 2 Average tonal syllable recognition rate (%) after eigenphone-based speaker adaptation using lasso

Significance tests show that, compared with the ML eigenphone method, the l1 norm regularization method improves the performance significantly. It shows a performance gain over the MAP eigenphone method under almost every testing condition. The larger the weighting factor λ1, the sparser the resulting eigenphone matrix becomes. When the amount of adaptation data is limited to one, two, four, or six sentences, the best results are obtained with λ1=20; the relative improvements over the ML eigenphone method are 181.5%, 36.4%, 7.8%, and 2.5%, respectively. When the amount of adaptation data is increased to eight or ten sentences, a smaller weighting factor of 10 performs best, and the resulting recognition rates are still better than those of the ML eigenphone method, with relative improvements of 1.3% and 1.1%, respectively.

5.1.4 Eigenphone speaker adaptation using elastic net

For the elastic net method, λ1 was fixed at 10. All experiments were repeated with λ2 varying from 10 to 2,000. The results are summarized in Table 3; again, the average overall sparsity of the eigenphone matrix is shown in parentheses.

Table 3 Average tonal syllable recognition rate (%) after eigenphone-based speaker adaptation using elastic net

Unfortunately, the results in Table 3 show little improvement over the lasso method. The overall sparsity remains the same under all testing conditions. When the adaptation data is one sentence, even with a large weighting factor of λ2=2,000, the relative improvement over the lasso method is only 0.2%. We also set λ1 to 20, 30, and 40 and experimented with λ2 varying from 10 to 2,000; again, almost no improvement was observed over the results in Table 2. The squared l2 regularization term does not appear to help in combination with l1 regularization.

5.1.5 Eigenphone speaker adaptation using group lasso

As pointed out in Section 3, the group lasso regularizer leads to a column-wise group sparse eigenphone matrix, that is, a matrix with many zero columns. To measure the column-wise group sparseness of the eigenphone matrix, we calculate its ‘column sparsity’, defined as the percentage of nonzero columns in the matrix. In the group lasso experiments, the weighting factor (λ3) of the column-wise l2 norm is varied between 10 and 150. The results are summarized in Table 4. For each experimental setting, the average column sparsity of the eigenphone matrix over all testing speakers is shown in parentheses.
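For reference, both sparsity measures reported in this section reduce to simple element and column counts; a minimal NumPy sketch follows (the function names and the zero tolerance are illustrative assumptions).

```python
import numpy as np

def overall_sparsity(V, tol=0.0):
    """Percentage of nonzero elements of V (the 'overall sparsity' of Section 5.1.3)."""
    return 100.0 * np.sum(np.abs(V) > tol) / V.size

def column_sparsity(V, tol=0.0):
    """Percentage of nonzero columns of V (the 'column sparsity' defined above)."""
    col_norms = np.sqrt((V ** 2).sum(axis=0))
    return 100.0 * np.sum(col_norms > tol) / col_norms.size
```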

Table 4 Average tonal syllable recognition rate (%) after eigenphone-based speaker adaptation using group lasso

From Table 4, it can be observed that the group lasso method improves the recognition results compared with the ML eigenphone method, especially with limited adaptation data. Under all testing conditions, its best results are better than those of the MAP eigenphone method, i.e., the squared l2 regularization method. When the adaptation data is limited to one sentence and λ3 is larger than 120, the recognition rate is higher than the best result of the lasso method. However, when more adaptation data is provided, the group lasso method no longer outperforms the lasso method. The larger the weighting factor λ3, the larger the column sparsity of the eigenphone matrix. With two adaptation sentences or fewer, λ3 should be larger than 120 to obtain good column sparsity. With more adaptation data (more than four sentences), even with a large λ3 of 150, the column sparsity remains very small, that is, almost no column is set to zero. For these nonzero columns, the group lasso is equivalent to the ‘adaptive’ l2 regularization, and the recognition results are better than those obtained with the ML eigenphone method.

5.1.6 Eigenphone-based speaker adaptation using sparse group lasso

In the sparse group lasso experiments, we fixed λ1 at 10 and varied λ3 from 10 to 150, in the hope that the advantages of the lasso and group lasso methods would combine. The results are summarized in Table 5; the average overall sparsity and column sparsity of the eigenphone matrix are shown in parentheses.

Table 5 Average tonal syllable recognition rate (%) after eigenphone-based speaker adaptation using sparse group lasso

From Table 5, it can be seen that when the weighting factor λ3 is set to 20 to 30, the recognition results obtained by applying l1 regularization and column-wise l2 regularization simultaneously are better than those obtained using either regularizer alone. When the amount of adaptation data is limited to one or two sentences, the relative improvements over the lasso method are 1.63% and 0.54%, respectively; these results are comparable to the best results of the eigenvoice method. When more adaptation data is available, the relative improvement over the lasso method becomes smaller; however, compared with the group lasso method, the relative improvement is more significant when sufficient adaptation data is provided. The advantages of l1 regularization and column-wise l2 regularization thus combine well. Significance tests show that with λ1=10 and λ3=30, the sparse group lasso is significantly better than all other regularization methods under all testing conditions.

An interesting phenomenon is that, for all experimental settings, the overall sparsity is larger than that of the lasso method, while the column sparsity remains small when λ3≤40, that is, most of the columns remain nonzero. This observation implies that, compared with the lasso method, the performance improvement of the sparse group lasso should be attributed to the column-wise adaptive shrinkage property of the column-wise unsquared l2 norm regularizer.

5.2 Experiments on the WSJ task

This section gives the unsupervised speaker adaptation results on the WSJ 20K open vocabulary speech recognition task. A two-pass decoding strategy was adopted. For a batch of recognition data from one speaker, hypothesized transcriptions were obtained using the SI model in the first pass. Then, speaker adaptation was performed using the hypothesized transcriptions based on the SI model (without SAT) or the canonical model (with SAT). The final results were obtained in a second decoding pass using the adapted model.

The SI model was trained with the following configuration. The standard SI-284 WSJ training set was used, which consists of 7,138 WSJ0 utterances from 83 WSJ0 speakers and 30,275 WSJ1 utterances from 200 WSJ1 speakers; in total, about 70 h of read speech in 37,413 training utterances from 283 speakers. The acoustic features are the same as for the Mandarin Chinese task. There were 22,699 cross-word triphones based on 39 base phonemes, and these were tree-clustered into 3,339 tied states. At most 16 Gaussian components were estimated for each tied state, resulting in a total of 53,424 Gaussian components.

The WSJ1 Hub 1 development test data (denoted ‘si_dt_20’ in the WSJ1 corpus) were used for evaluation. For each of the 10 speakers, 40 sentences were selected randomly for testing, resulting in 52 min of read speech in 400 utterances. HDecode was used as the decoder, and the standard WSJ 20K-vocabulary trigram language model was used for compiling the decoding graph. Word error rate (WER) is used to evaluate the recognition results. The WER of the SI model is 14.71%.

Unsupervised speaker adaptation was performed with varying amounts of adaptation data. The testing data of each speaker was grouped into batches, with the batch size varying from 2 to 20 sentences. Different batches of data were used for adaptation and evaluation independently. The following five adaptation methods were tested for comparison:

  1. EV: The standard eigenvoice method.

  2. MLLR: The standard MLLR method.

  3. SAT + MLLR: The standard MLLR method with speaker-adaptive training.

  4. EP: The eigenphone method with or without regularization.

  5. SAT + EP: The eigenphone method with speaker-adaptive training.

For the eigenvoice method, the number (K) of eigenvoices was varied between 20 and 150. The best results were obtained with K=100 and K=120 for two and four adaptation sentences, respectively; when the amount of adaptation data is more than six sentences, K=150 yields the best performance. For the MLLR method, the best results were obtained with a regression class tree using 32 base classes and three-block-diagonal transformation matrices. For the eigenphone method, the dimension (N) of the phone variation subspace was set to 100. Different regularization methods were tested with a wide range of weighting factors^a. Again, the best results were obtained with the SGL method with λ1=10 and λ3=30. The results are summarized in Table 6, where ‘ML-EP’ denotes the eigenphone method with maximum likelihood estimation and ‘SGL-EP’ denotes the SGL-regularized eigenphone method. For brevity, only the best results of each method are shown in the table.

Table 6 Word error rate (%) after unsupervised speaker adaptation on the WSJ task

It can be seen that the eigenvoice method performs best when the amount of adaptation data is limited to two sentences, but it cannot match the other methods when more adaptation data becomes available. The ‘ML-EP’ method outperforms the MLLR method when more than six adaptation sentences are used, but severe over-fitting occurs when the amount of adaptation data is less than four sentences. With sparse group lasso regularization, the robustness of the eigenphone method is improved significantly^b. Compared with the ‘ML-EP’ method, the relative improvements are 13.7%, 3.7%, and 1.7% with two, four, and six adaptation sentences, respectively; with more adaptation data, the relative improvements are negligible.

Combined with SAT, significant performance improvements are observed for all methods, except for the ‘SAT + ML-EP’ method with insufficient adaptation data (less than four sentences), where the degradation is again due to severe over-fitting. The ‘SAT + SGL-EP’ method performs best under all testing conditions. With 2, 4, 6, 8, 10, and 20 adaptation sentences, the relative improvements over the ‘SAT + MLLR’ method are 0.3%, 0.4%, 0.6%, 1.8%, 1.7%, and 2.8%, and those over the ‘SAT + ML-EP’ method are 17.1%, 6.2%, 1.6%, 0.5%, 0.7%, and 0.0%, respectively.

6 Conclusion

In this paper, we investigate various regularization methods to improve the robustness of the eigenphone matrix estimation in eigenphone-based speaker adaptation. The l1 norm regularization (lasso) introduces sparseness, which reduces the number of free parameters and improves generalization. The squared l2 norm penalizes large values in the matrix, thus alleviating over-fitting. The column-wise unsquared l2 norm regularization (group lasso) forces many columns of the eigenphone matrix to zero, thus preventing the dimension of the phone variation subspace from being higher than necessary; for nonzero columns, the group lasso is equivalent to an adaptively weighted column-wise squared l2 norm regularizer. A unified framework for solving the various regularized matrix estimation problems is presented, and the performances of these regularization methods, including two combinations of them, i.e., the elastic net and the sparse group lasso, are compared on a supervised speaker adaptation task as well as an unsupervised speaker adaptation task with varying amounts of adaptation data. Compared with the maximum likelihood estimation method, significant performance improvements are observed with any of the regularization methods. Among them, the sparse group lasso method yields the best results, combining the advantages of the lasso and the group lasso methods in a consistent way. The group lasso plays an important role in the case of limited amounts of adaptation data, with the performance improvement attributed to its column-wise adaptive shrinkage property; with large amounts of adaptation data, the lasso appears to be more important than the group lasso. Combined with speaker-adaptive training, performance is further improved.

When the dimension (N) of the phone variation subspace is larger than the feature dimension D and the adaptation data is sufficient, the columns of the eigenphone matrix V form an over-complete dictionary, and the corresponding coordinate vector of each Gaussian component should be sparse. However, the matrix L obtained by PCA will not necessarily show any sparsity. Future work will focus on the estimation of a sparse coordinate matrix at training time to obtain further performance gains.

Endnotes

^a λ1, λ2, and λ3 were each varied between 0 and 1,000 with a step size of 10.

^b Again, all significance tests show that the differences between the results of the ‘ML-EP’ and ‘SGL-EP’ methods are significant under all testing conditions.

Abbreviations

DNN: deep neural network

EP: eigenphone

EV: eigenvoice

FISTA: fast iterative shrinkage-thresholding algorithm

HMM: hidden Markov model

HTK: hidden Markov toolkit

IST: iterative shrinkage-thresholding

MAP: maximum a posteriori

ML: maximum likelihood

MLLR: maximum likelihood linear regression

SA: speaker adapted

SAT: speaker-adaptive training

SD: speaker dependent

SGL: sparse group lasso

SGMM: subspace Gaussian mixture model

SI: speaker independent

SLEP: sparse learning with efficient projections

References

  1. Kuhn R, Junqua JC, Nguyen P, Niedzielski N: Rapid speaker adaptation in eigenvoice space. IEEE Trans. Speech Audio Process. 2000, 8(6):695-707. 10.1109/89.876308

  2. Gales MJF: Maximum likelihood linear transformations for HMM-based speech recognition. Comput. Speech Lang. 1998, 12(2):75-98. 10.1006/csla.1998.0043

  3. Zhang WL, Zhang WQ, Li BC: Speaker adaptation based on speaker-dependent eigenphone estimation. Paper presented at the IEEE Workshop on Automatic Speech Recognition and Understanding, Waikoloa, HI, USA, 11–15 Dec 2011.

  4. Kenny P, Boulianne G, Ouellet P, Dumouchel P: Speaker adaptation using an eigenphone basis. IEEE Trans. Speech Audio Process. 2004, 12(6):579-589. 10.1109/TSA.2004.825668

  5. Tan QF, Georgiou PG, Narayanan SS: Enhanced sparse imputation techniques for a robust speech recognition front-end. IEEE Trans. Audio Speech Lang. Process. 2011, 19(8):2418-2429.

  6. Lu L, Ghoshal A, Renals S: Regularized subspace Gaussian mixture models for speech recognition. IEEE Signal Process. Lett. 2011, 18(7):419-422.

  7. Yu D, Seide F, Li G, Deng L: Exploiting sparseness in deep neural networks for large vocabulary speech recognition. Paper presented at ICASSP, Kyoto, Japan, 25–30 Mar 2012.

  8. Tan QF, Narayanan SS: Novel variations of group sparse regularization techniques with applications to noise robust automatic speech recognition. IEEE Trans. Audio Speech Lang. Process. 2012, 20(4):1337-1346.

  9. Zhang WL, Qu D, Zhang WQ, Li BC: Rapid speaker adaptation using compressive sensing. Speech Commun. 2013, 55(10):950-963.

  10. Zhang WL, Zhang WQ, Li BC, Qu D, Johnson MT: Bayesian speaker adaptation based on a new hierarchical probabilistic model. IEEE Trans. Audio Speech Lang. Process. 2012, 20(7):2002-2015.

  11. Anastasakos T, McDonough J, Schwartz R, Makhoul J: A compact model for speaker-adaptive training. Paper presented at ICSLP, Philadelphia, PA, USA, 3–6 Oct 1996, 1137-1140.

  12. Lee CH, Lin CH, Juang BH: A study on speaker adaptation of the parameters of continuous density hidden Markov models. IEEE Trans. Acoust. Speech Signal Process. 1991, 39(4):806-814. 10.1109/78.80902

  13. Leggetter CJ, Woodland PC: Flexible speaker adaptation using maximum likelihood linear regression. Paper presented at the ARPA Spoken Language Technology Workshop, 22–25 Jan 1995.

  14. Tibshirani R: Regression shrinkage and selection via the LASSO. J. R. Stat. Soc. Ser. B 1996, 58:267-288.

  15. Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference and Prediction. Berlin: Springer; 2005.

  16. Yuan M, Lin Y: Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B 2007, 68:49-67.

  17. Zou H, Hastie T: Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67(2):301-320. 10.1111/j.1467-9868.2005.00503.x

  18. Simon N, Friedman J, Hastie T, Tibshirani R: A sparse-group lasso. J. Comput. Graph. Stat. 2013, 22(2):231-245. 10.1080/10618600.2012.681250

  19. Gemmeke JF, Virtanen T, Hurmalainen A: Exemplar-based sparse representations for noise robust automatic speech recognition. IEEE Trans. Audio Speech Lang. Process. 2011, 19(7):2067-2080.

  20. Figueiredo M, Nowak R, Wright S: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1(4):586-597.

  21. Liu J, Ji S, Ye J: SLEP: Sparse Learning with Efficient Projections. Tempe: Arizona State University; 2009.

  22. Beck A, Teboulle M: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2:183-202.

  23. Richard E, Savalle PA: Estimation of simultaneously sparse and low rank matrices. Paper presented at ICML, 26 June – 1 July 2012.

  24. Bertsekas DP: Incremental proximal methods for large scale convex optimization. Math. Program. 2011, 129(2):163-195. 10.1007/s10107-011-0472-0

  25. Blatt D, Hero AO, Gauchman H: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 2008, 18:29-51.

  26. Parikh N, Boyd S: Proximal algorithms. Found. Trends Optim. 2013, 1(3):1-108.

  27. Nesterov Y: A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Doklady 1983, 27:372-376.

  28. Daubechies I, Defrise M, De Mol C: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 2004, 57:1413-1457.

  29. Chang E, Shi Y, Zhou J, Huang C: Speech lab in a box: a Mandarin speech toolbox to jumpstart speech related research. Paper presented at Eurospeech, Aalborg, Denmark, 3–7 Sept 2001, 2799-2802.

  30. The National Institute of Standards and Technology: The NIST Scoring Toolkit (SCTK-2.4.0). ftp://jaguar.ncsl.nist.gov/pub/sctk-2.4.0-20091110-0958.tar.bz2. Accessed 25 Sept 2013.

  31. Young S, Evermann G, Gales M, Hain T, Kershaw D, Liu X, Moore G, Odell J, Ollason D, Valtchev V, Woodland P: The HTK Book (for HTK Version 3.4). Cambridge: Cambridge University Engineering Department; 2009.

  32. Digalakis VV, Neumeyer LG: Speaker adaptation using combined transformation and Bayesian methods. IEEE Trans. Speech Audio Process. 1996, 4(4):294-300. 10.1109/89.506933


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61175017 and No. 61005019) and the National High-Tech Research and Development Plan of China (No. 2012AA011603). The authors would like to thank the anonymous reviewers and Prof. Michael T. Johnson for their valuable suggestions that improved the presentation of the paper.

Author information


Corresponding author

Correspondence to Wen-Lin Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhang, WL., Zhang, WQ., Qu, D. et al. Speaker adaptation based on regularized speaker-dependent eigenphone matrix estimation. J AUDIO SPEECH MUSIC PROC. 2014, 11 (2014). https://doi.org/10.1186/1687-4722-2014-11
