# A signal subspace approach to spatio-temporal prediction for multichannel speech enhancement

- Adam Borowicz

**2015**:5

https://doi.org/10.1186/s13636-015-0051-z

© Borowicz; licensee Springer. 2015

**Received: **23 October 2014

**Accepted: **20 January 2015

**Published: **10 February 2015

## Abstract

The spatio-temporal-prediction (STP) method for multichannel speech enhancement has recently been proposed. This approach makes it theoretically possible to attenuate the residual noise without distorting speech. In addition, the STP method depends only on the second-order statistics and can be implemented using a simple linear filtering framework. Unfortunately, some numerical problems can arise when estimating the filter matrix in transients. In such a case, the speech correlation matrix is usually rank deficient, so that no solution exists. In this paper, we propose to implement the spatio-temporal-prediction method using a signal subspace approach. This allows for nullifying the noise subspace and processing only the noisy signal in the signal-plus-noise subspace. As a result, we are able to not only regularize the solution in transients but also to achieve higher attenuation of the residual noise. The experimental results also show that the signal subspace approach distorts speech less than the conventional method.

## 1 Introduction

Speech enhancement is important for many applications including mobile communications, speech coding, speech recognition, and hearing aids. The traditional objective of multichannel speech enhancement is to recover the source speech signal from the outputs of an array of microphones. It is usually achieved by using beamforming techniques [1-3]. The key idea of beamforming is to process the signals of a microphone array so as to extract the sounds that come from only one direction. In this way, it is possible to dereverberate speech, and the background noise arriving from other directions can be reduced as well. Unfortunately, in order to work reasonably well in a reverberant environment, these techniques usually require knowing the impulse responses of the acoustic room or their relative ratios. These parameters can be fixed, provided the geometry of the microphone array is known, or estimated adaptively [4], which, however, is in general a difficult task.

Recently, the objective of multichannel speech enhancement has been reformulated so that noise reduction can be achieved without dereverberating the speech. In contrast to the beamforming techniques, knowledge of the microphone array geometry is not required, and the optimal filter depends only on the second-order statistics of the noisy signal.

In [5], the authors presented the most common techniques of multichannel noise reduction based on linear filtering. In such solutions, the noise-free speech is estimated by a linear transformation of the observation vector. The simplest approach is to minimize the mean square error (MSE) between the noise-free and filtered speech signals at a given microphone, which leads to a multichannel version of the classical Wiener filter. In this case, some noise is reduced at the cost of the increased speech distortion, but we cannot explicitly control the trade-off between these quantities.

Speech estimation can also be considered as a constrained optimization problem, where the speech distortions are minimized subject to the residual noise power. This approach is used by the single-channel methods [6] and was implemented in a similar way using a signal subspace technique in [5]. Unlike the frequency domain methods, which are based on the discrete Fourier transform (DFT), the signal subspace approach decomposes the vector space of noisy signals into the speech-plus-noise subspace and noise-only subspace using the Karhunen-Loeve transform (KLT). Then, spectral weighting is performed only in the signal-plus-noise subspace. The components projected onto the noise-only subspace are simply nullified, which results in significantly better performance when compared to the conventional DFT-based methods, where the full-band (and thus erroneous) spectrum must be processed. Unfortunately, also in this case, it is impossible to reduce the residual noise without introducing speech distortions. Several single-channel approaches [7-9] that exploit the masking effects are known to make the speech distortion or the residual noise inaudible, but introducing psychoacoustics into multichannel speech enhancement is a challenging task. On the other hand, some hearing properties have been introduced in a beamforming technique [10], but the resulting improvement is not as great as in the single-channel case.

It seems that the major limitation of all these methods is that they use only temporal prediction. In fact, spatial correlations are implicitly embedded in the second-order statistics, or inter-channel correlation matrices, but are not explicitly used. Therefore, in [11,12], the authors proposed a novel technique based on spatio-temporal prediction (STP). A DFT-based implementation of this technique has also been proposed [13,14], but in this case, the algorithm has been restricted to use only spatial prediction. It has been verified experimentally that the STP approach outperforms the classical beamforming techniques in terms of noise reduction [11]. In [5], it was proved analytically that by using the STP method, it is theoretically possible to reduce the residual noise without distorting the speech. However, a major drawback of the STP method is its numerical instability, as this approach assumes that the speech correlation matrix is of full rank. Because this is not true for low-power speech at transients, the solution must be regularized empirically in practice. Alternatively, under uncertainty about the speech presence, conditional estimators can be used [15]. Even if the speech correlation matrix is of full rank, the STP method requires many microphones to effectively reduce the residual noise.

In this paper, we propose a signal-subspace implementation of the STP method. By decomposing the signal vector space, we are able to limit processing to the signal-plus-noise subspace only. Thus, the numerical problems can be avoided in a more natural way. Since the noisy speech projected onto the noise-only subspace can simply be nullified, the signal subspace approach allows for attenuating noise more, even for a small number of microphones. In addition, we have rederived the STP method using a notation slightly different from that in [5], in order to expose the possibility of denoising all microphone signals at once.

## 2 Signal model and linear filtering

We consider *N* microphones with arbitrary geometry and a single speech source *s*(*k*) located inside a reverberant enclosure. The observation signal at the *n*th microphone is given by:

$$ y_{n}(k) = a_{n} * s(k) + v_{n}(k) = x_{n}(k) + v_{n}(k), \quad n = 1,2,\ldots,N, $$

where *a*_{n} is the acoustic impulse response from the source to the *n*th microphone, ∗ denotes linear convolution, and *x*_{n}(*k*) and *v*_{n}(*k*) are, respectively, the noise-free speech and the noise components received by the *n*th microphone. Such a mixing model is illustrated in Figure 1.

The signals are processed in *L*-sample blocks. Thus, they can be represented using the vector-matrix notation as follows:

$$ \mathbf{y}_{n}(k) = \left[ y_{n}(k), y_{n}(k-1), \ldots, y_{n}(k-L+1) \right]^{T} = \mathbf{x}_{n}(k) + \mathbf{v}_{n}(k), $$

and the observation vectors of all microphones can be stacked into the *NL*-dimensional vector \(\mathbf{y}(k) = \left[\mathbf{y}_{1}^{T}(k), \mathbf{y}_{2}^{T}(k), \ldots, \mathbf{y}_{N}^{T}(k)\right]^{T}\).

An estimate of the noise-free speech at the *n*th microphone can be obtained using a linear transformation of the observation vector:

$$ \hat{\mathbf{x}}_{n}(k) = \mathbf{H}_{n}\, \mathbf{y}(k) = \mathbf{H}_{n} \left[ \mathbf{x}(k) + \mathbf{v}(k) \right], $$

where **x**(*k*) and **v**(*k*) denote the noise-free speech and the noise, respectively, and are defined similarly to Equation 2. **H**_{n} is a filtering matrix of size *L*×*LN*. The estimation error is defined by:

$$ \mathbf{e}_{n}(k) = \hat{\mathbf{x}}_{n}(k) - \mathbf{x}_{n}(k) = \underbrace{\left( \mathbf{H}_{n} - \mathbf{U}_{n} \right) \mathbf{x}(k)}_{\mathbf{e}_{x}(k)} + \underbrace{\mathbf{H}_{n}\, \mathbf{v}(k)}_{\mathbf{e}_{v}(k)}, $$

where \(\mathbf{U}_{n}\) is a selection matrix of size *L*×*LN* such that \(\mathbf{x}_{n}(k) = \mathbf{U}_{n}\mathbf{x}(k)\). The terms **e**_{x}(*k*) and **e**_{v}(*k*) denote the speech distortion and the residual noise, respectively.

We define the correlation matrix of a random vector **a** as:

$$ \mathbf{R}_{\mathbf{a}\mathbf{a}} = E\left\{ \mathbf{a}\mathbf{a}^{T} \right\}, $$

where *E*{·} is the expectation operator. Assuming that the speech and noise are short-term stationary and uncorrelated processes, the correlation matrix of the noisy speech can be written as:

$$ \mathbf{R}_{\mathbf{y}\mathbf{y}} = \mathbf{R}_{\mathbf{x}\mathbf{x}} + \mathbf{R}_{\mathbf{v}\mathbf{v}}. $$

Unless otherwise stated, all equations hold for any arbitrarily chosen point in time. Therefore, for the sake of brevity, the time index *k* is often omitted in the rest of this paper.
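The statistics above are the only quantities the method needs. A minimal numpy sketch of their estimation is given below; the function name, and the assumption that the observation vectors are already stacked row-wise into matrices, are illustrative choices, not code from the paper.

```python
import numpy as np

def correlation_matrices(Y, V):
    """Estimate the second-order statistics of Section 2.

    Y, V: arrays of shape (M, N*L) holding M stacked observation vectors
    y(k) of the noisy speech and of noise-only segments, respectively.
    """
    R_yy = Y.T @ Y / Y.shape[0]   # noisy-speech correlation matrix
    R_vv = V.T @ V / V.shape[0]   # noise correlation matrix (speech pauses)
    R_xx = R_yy - R_vv            # clean speech, since R_yy = R_xx + R_vv
    return R_yy, R_vv, R_xx
```

In practice `R_xx` obtained this way may be indefinite for low-power speech, which is precisely the numerical issue the rest of the paper addresses.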

## 3 Spatio-temporal prediction

The key assumption of the STP approach is that the noise-free speech **x**_{m}(*k*) can be predicted from the signal **x**_{n}(*k*) using a linear filter matrix **W**_{n,m} such that:

$$ \mathbf{x}_{m}(k) \approx \mathbf{W}_{n,m}\, \mathbf{x}_{n}(k), \quad m = 1,2,\ldots,N, $$

with **W**_{n,n} = **I**_{L}. The prediction matrices can be concatenated so as to form the *NL*×*L* matrix:

$$ \mathbf{W}_{n} = \left[ \mathbf{W}_{n,1}^{T}, \mathbf{W}_{n,2}^{T}, \ldots, \mathbf{W}_{n,N}^{T} \right]^{T}, $$

so that \(\mathbf{x}(k) \approx \mathbf{W}_{n}\, \mathbf{x}_{n}(k)\).

The filter **H**_{n} is designed to minimize the residual noise power subject to the no-distortion constraint \(\mathbf{H}_{n}\mathbf{W}_{n} = \mathbf{I}_{L}\), which yields:

$$ \mathbf{H}_{n} = \left( \mathbf{W}_{n}^{T} \mathbf{R}_{\mathbf{v}\mathbf{v}}^{-1} \mathbf{W}_{n} \right)^{-1} \mathbf{W}_{n}^{T} \mathbf{R}_{\mathbf{v}\mathbf{v}}^{-1}. $$

A solution exists if and only if **R**_{vv} is positive definite, and the matrix **W**_{n} is of rank *L*. As noise signals are usually stationary and have smooth spectra, **R**_{vv} has full rank and can be estimated using long-term averaging during speech pauses.

The matrices **W**_{n,m} for *m*≠*n* are not known and have to be estimated. They can be found by solving the following minimization problem:

$$ \mathbf{W}_{n,m} = \arg\min_{\mathbf{W}} E\left\{ \left\| \mathbf{x}_{m}(k) - \mathbf{W}\,\mathbf{x}_{n}(k) \right\|^{2} \right\}, $$

whose solution is:

$$ \mathbf{W}_{n,m} = \mathbf{R}_{\mathbf{x}_{m}\mathbf{x}_{n}}\, \mathbf{R}_{\mathbf{x}_{n}\mathbf{x}_{n}}^{-1} = \mathbf{R}_{\mathbf{x}\mathbf{x}}^{(m,n)} \left[ \mathbf{R}_{\mathbf{x}\mathbf{x}}^{(n,n)} \right]^{-1}, $$

where \(\mathbf{R}_{\mathbf{a}\mathbf{a}}^{(i,j)}\) denotes the (*i*,*j*)th *L*×*L* submatrix of the matrix **R**_{aa}. The correlation matrices of the clean speech are unknown, and the vectors **x**_{n}(*k*) cannot be observed directly, but by using Equation 8 we can write:

$$ \mathbf{R}_{\mathbf{x}\mathbf{x}} = \mathbf{R}_{\mathbf{y}\mathbf{y}} - \mathbf{R}_{\mathbf{v}\mathbf{v}}. $$

In order to obtain a full rank matrix **W**_{n,m}, the matrices \(\mathbf {R}_{\mathbf {x}_{m} \mathbf {x}_{n}}\) and \(\mathbf {R}_{\mathbf {x}_{n}\mathbf {x}_{n}}\) have to be positive definite. In [5], the authors suggest estimating the filter matrix (Equation 13) only when the speech source is active, using a voice activity detector (VAD), but this generally does not prevent the matrix **W**_{n,m} from being rank deficient. Moreover, such a technique can introduce discontinuity effects at transients and/or increased residual noise during silence intervals. For low-power speech signals, the covariance matrix of the clean speech is usually positive semi-definite, or at least ill-conditioned, which means that in practice the STP method is numerically stable only for high signal-to-noise ratios (SNRs). The simplest solution is to add some white noise to the speech signal, so that the inverses in Equation 13 and Equation 17 can be replaced with pseudoinverses and properly regularized [16]. However, all these approaches are rather empirical and need careful adjustment. Thus, we need a more robust solution that can be applied also to low-power speech signals, especially at low SNRs.
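The estimation steps of this section can be sketched as follows, with the pseudoinverse tolerance playing the role of the empirical regularization just discussed. The function name and the `rcond` knob are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def stp_prediction_matrix(R_yy, R_vv, n, N, L, rcond=1e-8):
    """Estimate the spatial prediction matrix W_n of Section 3.

    Each W_{n,m} is the MMSE predictor of x_m from x_n, computed from
    R_xx = R_yy - R_vv.  A regularized pseudoinverse stands in for the
    plain inverse, as suggested for rank-deficient speech statistics.
    """
    R_xx = R_yy - R_vv
    block = lambda i, j: R_xx[i*L:(i+1)*L, j*L:(j+1)*L]
    R_nn_pinv = np.linalg.pinv(block(n, n), rcond=rcond)  # regularized inverse
    # W_{n,n} = I by definition; W_{n,m} = R_{x_m x_n} R_{x_n x_n}^{-1}
    rows = [np.eye(L) if m == n else block(m, n) @ R_nn_pinv for m in range(N)]
    return np.vstack(rows)  # NL x L matrix W_n, so that x(k) ~ W_n x_n(k)
```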

## 4 Signal subspace approach

In the conventional STP method, data are processed in the vector space of the noisy speech. The key idea of the signal subspace approach is to decompose that vector space into the signal-plus-noise and noise-only subspaces and to process data only in the signal-plus-noise subspace, while the projection of the noisy signal onto the noise-only subspace is simply nullified. The dimensionality of the signal-plus-noise or, simply, signal subspace is closely related to the rank of the speech correlation matrix. Thus, by introducing the signal subspace approach to the STP method, we are able to not only increase the attenuation of the residual noise during silence intervals but also to avoid the ill-conditioning issues.

Since **R**_{vv} is positive definite, the matrices **R**_{xx} and **R**_{vv} can be jointly diagonalized [17,18], i.e.:

$$ \mathbf{R}_{\mathbf{x}\mathbf{x}} = \mathbf{B}^{T} \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{T} \mathbf{B}, \qquad \mathbf{R}_{\mathbf{v}\mathbf{v}} = \mathbf{B}^{T} \mathbf{B}, $$

where **V** denotes the orthogonal matrix of the eigenvectors, and *Λ* = diag{*λ*_{1},…,*λ*_{NL}} is the diagonal matrix of the corresponding eigenvalues. We also assume that the eigenvalues in *Λ* are arranged in descending order, i.e. *λ*_{i} ≥ *λ*_{j} for any *i*<*j*. The matrix **V** can also be interpreted as the KLT matrix of the whitened clean speech. Alternatively, it can be obtained using the eigendecomposition of the whitened noisy speech correlation matrix:

$$ \mathbf{B}^{-T} \mathbf{R}_{\mathbf{y}\mathbf{y}} \mathbf{B}^{-1} = \mathbf{V} \left( \boldsymbol{\Lambda} + \mathbf{I} \right) \mathbf{V}^{T}. $$

Applying the transform \(\mathbf{V}^{T}\mathbf{B}^{-T}\) to the noisy signal is equivalent to whitening the data before performing the subspace decomposition, so that the resulting coefficients are perfectly decorrelated in the transform domain, i.e.:

$$ E\left\{ \left( \mathbf{V}^{T}\mathbf{B}^{-T}\mathbf{y} \right) \left( \mathbf{V}^{T}\mathbf{B}^{-T}\mathbf{y} \right)^{T} \right\} = \boldsymbol{\Lambda} + \mathbf{I}. $$
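The joint diagonalization above can be computed by whitening with a Cholesky factor and then eigendecomposing the whitened speech statistics. The sketch below is one standard way to do this under the convention \(\mathbf{R}_{\mathbf{v}\mathbf{v}} = \mathbf{B}^{T}\mathbf{B}\); the function name is an assumption.

```python
import numpy as np

def joint_diagonalize(R_xx, R_vv):
    """Jointly diagonalize R_xx and R_vv via whitening (Section 4).

    With R_vv = B^T B (B taken as the transposed Cholesky factor), the
    whitened matrix B^{-T} R_xx B^{-1} is symmetric, so its eigen-
    decomposition V Lambda V^T has an orthogonal V -- the KLT of the
    whitened clean speech.
    """
    B = np.linalg.cholesky(R_vv).T      # upper triangular, R_vv = B.T @ B
    B_inv = np.linalg.inv(B)
    Wh = B_inv.T @ R_xx @ B_inv         # B^{-T} R_xx B^{-1}, symmetric
    lam, V = np.linalg.eigh(Wh)
    order = np.argsort(lam)[::-1]       # descending eigenvalues, as assumed
    return B, lam[order], V[:, order]
```

When the clean-speech statistics are rank deficient, the trailing eigenvalues come out (numerically) zero, which is exactly what the subspace dimension estimate below exploits.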

Since **R**_{vv} is positive definite and **R**_{xx} can be positive semi-definite, the dimension of the signal-plus-noise subspace is equal to the number of non-zero eigenvalues of the correlation matrix of the whitened clean speech. Assume that *NL* = *L*_{s} + *L*_{v}, where *L*_{s} and *L*_{v} denote the dimensions of the signal-plus-noise and noise-only subspaces, respectively. Thus, for *L*_{s} < *NL*, we can rewrite Equation 25 as follows:

The matrix *Σ*_{n} can be viewed as a reweighting matrix, with \(\mathbf {Q}_{n,1:L_{s}}\) denoting the sub-matrix of **Q**_{n} consisting of rows 1 to *L*_{s}. As can be seen, the noisy signal is transformed using a non-orthogonal matrix **B**^{−T}. The denoising is achieved by ‘reweighting’ the coefficients in the signal-plus-noise subspace using the matrix *Σ*_{n} and simply nullifying the noise-only subspace. In contrast to the conventional signal subspace approach, the reweighting matrix is not diagonal here but symmetric and idempotent.

Finally, the filtered signal is brought back to the time domain using the inverse transform **B**^{T}.

The dimension *L*_{s} can be estimated as the number of the strictly positive eigenvalues, according to the following rule:

$$ L_{s} = \#\left\{ i : \lambda_{i} > \theta \right\}, $$

where the threshold *θ* is some small positive number.

It can be noticed that \(\mathbf {Q}_{n}^{T} \mathbf {Q}_{n}\) is invertible as long as *L*_{s} ≥ *L*. However, even when this condition does not hold (which is fairly common at transients or during silence intervals), the inverse can easily be regularized. For example, if *L*_{s} = *L*, **Q**_{n,1:L} is a square matrix and *Σ*_{n} = **I**, which means that the filter nullifies the noise subspace without cleaning the signal-plus-noise subspace, i.e. the residual noise can be effectively reduced without distorting the speech.
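For the simplest reweighting just described (*Σ*_{n} = **I** on the signal-plus-noise subspace), the whole filter reduces to nullifying the noise-only subspace between the analysis and synthesis transforms. A sketch under that assumption:

```python
import numpy as np

def subspace_filter(B, V, lam, theta):
    """Noise-subspace-nulling filter, the simplest case discussed above.

    L_s is estimated by thresholding the eigenvalues at theta; the
    signal-plus-noise subspace is kept as-is (Sigma_n = I) and the
    noise-only subspace is nullified.  Analysis transform V^T B^{-T},
    synthesis B^T V, following this section's derivation.
    """
    Ls = int(np.sum(lam > theta))       # estimated subspace dimension
    g = np.zeros(len(lam))
    g[:Ls] = 1.0                        # unit gain on the signal subspace
    H = B.T @ V @ np.diag(g) @ V.T @ np.linalg.inv(B).T
    return H, Ls
```

With all eigenvalues above the threshold, the filter collapses to the identity, matching the *N*=1, *L*_{s}=*L* case noted below; the binary gains also make the resulting filter idempotent.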

If *N*=1 and *L*_{s}=*L*, then the filter matrix is simply the identity matrix. For *N*>1, it is possible to arrange the matrices **H**_{n}, *n*=1,2,…,*N*, into the single filter matrix:

$$ \mathbf{H} = \left[ \mathbf{H}_{1}^{T}, \mathbf{H}_{2}^{T}, \ldots, \mathbf{H}_{N}^{T} \right]^{T}, $$

so that the whole vector **x**(*k*) can be estimated as follows:

$$ \hat{\mathbf{x}}(k) = \mathbf{H}\, \mathbf{y}(k). $$

The matrix **H**_{P} can also be written in a more convenient form involving the operators ∘ and ⊗, which stand for the Hadamard and the Kronecker products, respectively, with **J**_{L×L} denoting the *L*×*L* matrix of ones.

To assess the performance of the filter **H**_{n}, we define the noise reduction factor:

$$ \xi_{\text{nr}}(\mathbf{H}_{n}) = \frac{\text{tr}\left( \mathbf{R}_{\mathbf{v}_{n}\mathbf{v}_{n}} \right)}{\text{tr}\left( \mathbf{H}_{n} \mathbf{R}_{\mathbf{v}\mathbf{v}} \mathbf{H}_{n}^{T} \right)}, $$

with *ξ*_{nr}(**H**_{n}) ≥ 1: the larger this factor, the lower the residual noise. Usually, the noise is reduced at the cost of attenuating speech. Therefore, in order to quantify this attenuation, we define the speech reduction factor:

$$ \xi_{\text{sr}}(\mathbf{H}_{n}) = \frac{\text{tr}\left( \mathbf{R}_{\mathbf{x}_{n}\mathbf{x}_{n}} \right)}{\text{tr}\left( \mathbf{H}_{n} \mathbf{R}_{\mathbf{x}\mathbf{x}} \mathbf{H}_{n}^{T} \right)}, $$

with *ξ*_{sr}(**H**_{n}) ≥ 1. The output SNR of the filter **H** can be expressed in the following way:

$$ \text{SNR}(\mathbf{H}) = \text{SNR} \cdot \frac{\xi_{\text{nr}}(\mathbf{H}_{n})}{\xi_{\text{sr}}(\mathbf{H}_{n})}, $$

where SNR stands for the input SNR.
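The two factors are straightforward to evaluate from the second-order statistics. A small sketch (names and the trace-ratio reading of the definitions are this sketch's assumptions):

```python
import numpy as np

def reduction_factors(H_n, R_xx, R_vv, n, L):
    """Noise- and speech-reduction factors in trace form.

    H_n has shape L x N*L; the numerators are the input powers at the
    reference microphone n, the denominators the output powers after
    filtering.  Their ratio is the SNR gain of the filter.
    """
    sl = slice(n*L, (n+1)*L)
    xi_nr = np.trace(R_vv[sl, sl]) / np.trace(H_n @ R_vv @ H_n.T)
    xi_sr = np.trace(R_xx[sl, sl]) / np.trace(H_n @ R_xx @ H_n.T)
    return xi_nr, xi_sr, xi_nr / xi_sr   # SNR(H) = SNR * xi_nr / xi_sr
```

As a sanity check, the trivial filter that merely selects microphone *n* gives both factors equal to one, i.e. no SNR change.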

For *L*_{s} ≥ *L*, the proposed approach is theoretically equivalent to the time-domain implementation of the STP method. In order to analyse the performance of the proposed implementation for *L*_{s} < *L*, we consider the case of white noise, for which \(\mathbf {R}_{\mathbf {v}_{n}\mathbf {v}_{n}} = \sigma _{\mathbf {v}_{n}}^{2} \mathbf {I}\). Because the inverse \(\left (\mathbf {Q}_{n}^{T} \mathbf {Q}_{n}\right)^{-1}\) does not exist for *L*_{s} < *L*, we use Equation 29. Then, by replacing *Σ*_{n} in Equation 26 with the identity matrix and by substituting it into Equation 34 and Equation 35, we obtain:

Since *ξ*_{nr}(**H**_{n}) > *ξ*_{sr}(**H**_{n}), we always have SNR(**H**) > SNR, i.e. an improvement of the SNR.

## 5 Simulations

Although a full evaluation of the proposed approach, including listening tests, is beyond the scope of this article, we have conducted some experiments using objective measurements. In this section, we compare the performances of the conventional time-domain implementation of the STP method and of the proposed approach based on the signal subspace.

### 5.1 Implementation

The noisy signals are segmented into frames of *N*_{f} samples with 50% overlap. Each frame is partitioned into *M* = *N*_{f} − *L* + 1 shorter overlapping *L*-dimensional vectors. The sequence of these vectors is arranged into the trajectory matrix of size *L*-by-*M*. The trajectory matrices for all microphones are concatenated together so as to form the noisy speech matrix **Y**(*k*) of size *LN*-by-*M*, so that:

Once all required parameters are estimated, the effective filter matrix **H**_{n} is computed, and then all in-frame vectors are processed using the same matrix, i.e. \(\hat {\mathbf {Y}}(k) = \mathbf {H}_{n} \mathbf {Y}(k)\). The enhanced vectors are obtained from the matrix \(\hat {\mathbf {Y}}(k)\) using the diagonal averaging technique [19]. Finally, the frames are multiplied by the Hanning window and synthesized using the overlap-add method.
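The trajectory-matrix construction and its inverse, diagonal averaging, can be sketched as follows for a single channel (function names are illustrative):

```python
import numpy as np

def trajectory_matrix(frame, L):
    """L-by-M trajectory (Hankel) matrix of an N_f-sample frame,
    with M = N_f - L + 1, as described above."""
    windows = np.lib.stride_tricks.sliding_window_view(frame, L)  # (M, L)
    return windows.T                                              # (L, M)

def diagonal_average(Y_hat):
    """Recover an N_f-sample frame from a (filtered) trajectory matrix
    by averaging along its anti-diagonals (the technique of [19])."""
    L, M = Y_hat.shape
    out = np.zeros(L + M - 1)
    cnt = np.zeros(L + M - 1)
    for i in range(L):
        idx = np.arange(M) + i          # sample indices covered by row i
        np.add.at(out, idx, Y_hat[i])
        np.add.at(cnt, idx, 1.0)
    return out / cnt
```

Diagonal averaging is the exact inverse of the construction when the matrix is untouched, so any distortion in the resynthesized frame comes from the filtering itself.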

The correlation matrix of the noisy speech is estimated as \(\mathbf{R}_{\mathbf{y}\mathbf{y}} \approx \frac{1}{M}\mathbf{Y}(k)\mathbf{Y}^{T}(k)\) from the matrix **Y**(*k*). This estimate is the basis for computing both the noise statistics and the KLT of the whitened signal (Equation 20). The matrix **R**_{vv} is estimated recursively, only during speech pauses, as:

$$ \hat{\mathbf{R}}_{\mathbf{v}\mathbf{v}}(k) = \left[ 1 - I(k) \right] \left[ \alpha\, \hat{\mathbf{R}}_{\mathbf{v}\mathbf{v}}(k-1) + (1-\alpha)\, \mathbf{R}_{\mathbf{y}\mathbf{y}}(k) \right] + I(k)\, \hat{\mathbf{R}}_{\mathbf{v}\mathbf{v}}(k-1), $$

where 0<*α*<1 is the forgetting factor, and *I*(*k*) is the VAD output of the *k*th frame. In our simulations, the VAD was not implemented, and the speech pause/activity regions were marked manually.
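The VAD-gated recursive update can be sketched as below; the exact blend of old and instantaneous estimates is this sketch's assumption, consistent with the forgetting-factor rule above.

```python
import numpy as np

def update_noise_stats(R_vv_prev, Y_frame, speech_active, alpha=0.75):
    """Recursive noise-statistics update with forgetting factor alpha.

    The estimate is refreshed from the current trajectory matrix
    Y_frame (shape N*L x M) only during speech pauses (I(k) = 0)
    and frozen otherwise.
    """
    if speech_active:                    # I(k) = 1: keep previous estimate
        return R_vv_prev
    M = Y_frame.shape[1]
    R_inst = Y_frame @ Y_frame.T / M     # instantaneous correlation estimate
    return alpha * R_vv_prev + (1.0 - alpha) * R_inst
```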

The matrix **B** is computed from the eigendecomposition \(\mathbf{R}_{\mathbf{v}\mathbf{v}} = \mathbf{V}_{v} \boldsymbol{\Lambda}_{v} \mathbf{V}_{v}^{T}\) in the following way:

$$ \mathbf{B} = \boldsymbol{\Lambda}_{v}^{1/2}\, \mathbf{V}_{v}^{T}, $$

so that \(\mathbf{R}_{\mathbf{v}\mathbf{v}} = \mathbf{B}^{T}\mathbf{B}\), where **V**_{v} denotes the orthogonal matrix of the eigenvectors, and *Λ*_{v} is the diagonal matrix of the corresponding eigenvalues.

In our simulations, we set *α*=0.75, *N*_{f}=400, and *L*=20. A proper choice of the value of the parameter *θ* seems to be crucial for the proposed implementation. In general, greater values of *θ* lead to cancellation of the residual noise, but special care must be taken because low-power speech components can also be nullified. Therefore, the simplest solution is to fix this threshold so that it is large enough to give *L*_{s}=0 (or, equivalently, *θ*≫*λ*_{1}) during speech pauses. We found empirically that its value depends mainly on the bias of the estimator of the noise correlation matrix, i.e. on the forgetting factor *α* and the frame/window size *N*_{f}. In Figure 2c, we present the variability of the estimated dimension of the signal-plus-noise subspace for *θ*=3. Further experiments show that the optimal value of *θ* (in terms of speech distortion) does not depend on the input SNR. It can be observed that *L*_{s} < *L* occurs fairly commonly, not only at transients but also during speech activity.

In the case of the conventional implementation, all inverses in Equation 17 and Equation 13 were replaced with pseudoinverses. They were computed using singular value decomposition (SVD), and all singular values less than some tolerance were treated as zeros. In fact, that tolerance plays the same role as the parameter *θ* in the signal subspace approach. Thus, by setting it sufficiently large, it is possible to increase noise reduction. Unfortunately, the speech reduction factor is also increased. Additionally, we have found empirically that the optimal tolerance is SNR dependent. Therefore, during our simulations, all SVD-based pseudoinverses were computed using the default tolerance set by MATLAB.

### 5.2 Objective evaluation

The microphones were placed along the *x*-axis, with spacing 0.1, beginning from the first microphone at the position (2.65, 4, 1). The locations of the microphones and the sound sources are shown in Figure 3. The source speech signal was sampled at 16 kHz. The signal was about 14 s long and comprised four short sentences uttered by male and female speakers (see Additional file 1). In order to represent general broadband signals, pink noise was chosen. The microphone signals were obtained by convolving the source speech signal with the generated impulse responses of a room and by adding noise signals at SNRs ranging from −5 to 20 dB, in accordance with Equation 1. An example noisy speech sample is provided as Additional file 2. In all experiments, we estimated the noise-free signal only at the first microphone, *n*=1, which served as the reference microphone.

The SNR-based measures were used for evaluating the objective performance. The speech distortion measure (SD) was defined as the segmental signal-to-noise ratio, in which the noise was identified with the difference between the source signal and enhanced speech. The higher the value of this factor, the better the performance. The amount of reduced noise was measured using the noise attenuation (NA) factor defined as the mean ratio between the input noise power and output noise power.
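The two objective measures can be computed as sketched below. The frame length and the usual segmental-SNR clamping range are assumptions of this sketch, not values stated in the paper.

```python
import numpy as np

def speech_distortion_sd(src, enh, frame=256, eps=1e-12):
    """Segmental-SNR speech-distortion (SD) measure described above:
    the 'noise' term is the source-minus-enhanced difference."""
    n = (min(len(src), len(enh)) // frame) * frame
    s = src[:n].reshape(-1, frame)
    d = (src[:n] - enh[:n]).reshape(-1, frame)
    seg = 10.0 * np.log10((np.sum(s**2, axis=1) + eps) /
                          (np.sum(d**2, axis=1) + eps))
    return float(np.mean(np.clip(seg, -10.0, 35.0)))  # clamp, an assumption

def noise_attenuation_na(v_in, v_out, eps=1e-12):
    """NA factor: ratio of input to output noise power, in dB."""
    return float(10.0 * np.log10((np.sum(v_in**2) + eps) /
                                 (np.sum(v_out**2) + eps)))
```

Higher SD means less distortion, and higher NA means more noise removed, matching the reading of Figures 4 and 5.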

First, we examined the influence of the parameter *θ* on speech distortion and noise attenuation. The measured speech distortion, which is shown in Figure 4a, indicates that the effect of the parameter *θ* is largely independent of the input SNR. The optimal value of *θ* is between 3 and 4 for all SNRs. On the other hand, the plot of the noise attenuation factor in Figure 4b demonstrates that the higher the value of *θ*, the higher the noise attenuation.

Next, the conventional and proposed methods were compared for *θ*=3 and *N*=2,3,…,8. For conciseness, we present in Figure 5 only the results of objective measurements of the systems with *N* = 2, 4, and 8 microphones. Example recordings of the speech enhanced using the conventional and the proposed method are provided as Additional files 3 and 4, respectively.

It can easily be seen that the proposed method outperforms the conventional one, as it provides lower speech distortion and higher noise attenuation. Surprisingly, the speech distortion for the system with *N*=2 microphones was lower than for the eight-microphone system, especially at high SNRs. A possible explanation of this phenomenon is that for more microphones, the correlation matrix is larger, which makes its estimation less accurate. In practice, it makes sense to use more microphones only in the conventional time-domain method (in order to improve the noise attenuation). Figure 5a shows that the speech distortion can also be decreased, but only at low SNRs.

Unlike the conventional method, the signal subspace approach does not require many microphones to work reasonably well. The proposed method removes the residual noise almost completely (NA = 70 to 90 dB) without introducing speech distortions or unnatural discontinuity effects at transients. This is not surprising, since the matrix *Σ*_{n} may contain only zeros during silence intervals, which is highly desirable in speech coding or automatic speech recognition (ASR) systems. On the other hand, complete cancellation of the noise is neither necessary nor desired in some applications, like mobile communication. In such cases, zero diagonal coefficients in *Σ*_{n} can be replaced with some small positive numbers.

Example results for *N*=4 are presented below. Once again, we see that the proposed method offers incomparably higher noise attenuation during both speech pauses and voice activity periods. Unlike the time-domain implementation, the signal subspace approach does not generate musical tones (random peaks in the time-frequency plane). However, one should remember that this is an idealized situation, because the VAD has not been implemented, and speech/pause frames were marked manually. In practice, the VAD is difficult to implement, and its performance generally depends on the input SNR. Therefore, we expect some performance drop in real applications.

## 6 Conclusions

We have shown that the STP method can be implemented using a signal subspace approach. The conditions for uniqueness of a solution have been provided. We proposed Equation 29 as a simple rule that can be used when the speech correlation matrix is rank deficient. It has been verified analytically that the proposed approach can reduce noise without distorting the speech (as long as the parameter *L*_{s} is not less than the true rank of **R**_{yy}). In order to estimate the dimension of the speech-plus-noise subspace, we also used a thresholding technique. However, we have found empirically that, unlike in the conventional SVD-based regularization, the corresponding threshold (the parameter *θ*) is not SNR dependent and can be set to a fixed value. The objective measurements show that the signal subspace approach outperforms the conventional one, providing higher noise attenuation and lower speech distortion. We have also reported that the proposed implementation does not require as many microphones as its time-domain counterpart to work reasonably well.

Listening tests are usually difficult and time-consuming; thus, they were not used to evaluate our approach.

In this article, we have introduced a novel notation that allows for estimating the speech signals at all microphones at once. This can potentially be useful if the system has to work as a preprocessor for a beamformer. Since the STP method relies only on the second-order statistics, it may find other applications in areas where multi-sensor data are processed, e.g. in electroencephalography, as a means of enhancing EEG signals. These points have not been discussed here, but they are promising directions for future work.

## Declarations

### Acknowledgements

This work was supported by the Polish National Science Centre under Decision No. DEC-2012/07/D/ST6/02454.

## References

- OL Frost, An algorithm for linearly constrained adaptive array processing. Proc. IEEE. 60, 926–935 (1972).
- LJ Griffiths, CW Jim, An alternative approach to linearly constrained adaptive beamforming. IEEE Trans. Antennas Propag. AP-30(1), 27–34 (1982).
- S Gannot, D Burshtein, E Weinstein, Signal enhancement using beamforming and nonstationarity with applications to speech. IEEE Trans. Signal Process. 49(8), 1614–1626 (2001).
- S Affes, Y Grenier, A signal subspace tracking algorithm for microphone array processing of speech. IEEE Trans. Speech Audio Process. 5(5), 425–437 (1997).
- Y Huang, J Benesty, J Chen, Analysis and comparison of multichannel noise reduction methods in a common framework. IEEE Trans. Audio, Speech, Lang. Process. 16(5), 957–968 (2008).
- Y Ephraim, HL Van Trees, A signal subspace approach for speech enhancement. IEEE Trans. Speech Audio Process. 3(4), 251–266 (1995).
- D Virette, P Scalart, C Lamblin, Analysis of background noise reduction techniques for robust speech coding. Proc. EUSIPCO. 3, 297–300 (2002).
- A Borowicz, A Petrovsky, Signal subspace approach for psychoacoustically motivated speech enhancement. Speech Comm. 53(2), 210–219 (2011).
- F Jabloun, B Champagne, Incorporating the human hearing properties in the signal subspace approach for speech enhancement. IEEE Trans. Speech Audio Process. 11(6), 700–708 (2003).
- A Borowicz, A Petrovsky, Incorporating auditory properties into generalised sidelobe canceller. Paper presented at the 20th European Signal Processing Conference (EUSIPCO), Bucharest, Romania, 27–31 August 2012.
- J Chen, J Benesty, Y Huang, A minimum distortion noise reduction algorithm with multiple microphones. IEEE Trans. Audio, Speech, Lang. Process. 16(3), 481–493 (2008).
- J Benesty, J Chen, Y Huang, *Microphone Array Signal Processing* (Springer, Berlin, Germany, 2008).
- B Cornelis, M Moonen, J Wouters, Comparison of frequency domain noise reduction strategies based on multichannel Wiener filtering and spatial prediction. Paper presented at the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, 19–24 April 2009.
- J Benesty, J Chen, EAP Habets, *Speech Enhancement in the STFT Domain. SpringerBriefs in Electrical and Computer Engineering* (Springer, Berlin, Germany, 2012).
- EAP Habets, A distortionless subband beamformer for noise reduction in reverberant environments. Paper presented at Proc. IWAENC, Tel Aviv, Israel, August 2010.
- PC Hansen, The truncated SVD as a method for regularization. BIT. 27, 534–553 (1987).
- Y Hu, PC Loizou, A generalized subspace approach for enhancing speech corrupted by colored noise. IEEE Trans. Speech Audio Process. 11(4), 334–341 (2003).
- H Lev-Ari, Y Ephraim, Extension of the signal subspace enhancement to colored noise. IEEE Signal Process. Lett. 10(4), 104–106 (2003).
- R Vetter, N Virag, P Renevey, JM Vesin, Single channel speech enhancement using principal component analysis and MDL subspace selection. Paper presented at Proc. EUROSPEECH, Budapest, Hungary, 5–9 September 1999.
- JB Allen, DA Berkley, Image method for efficiently simulating small-room acoustics. J. Acoust. Soc. Am. 65(4), 943 (1979).

## Copyright

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.