### 3.1 Proposed two-stage setup

Figure 3a shows a block diagram of the two stages of the proposed AENC scheme, and Figure 3b shows the adaptive filter-based AEC algorithm of the first stage in more detail. As in Figure 2, the microphone input *y*(*n*) can be described by (3). In the case of single-channel AEC, for example while delivering a lecture in a large conference hall, the microphone in front of the speaker receives input speech *s*(*n*) corrupted by noise *v*(*n*). Once this noise-corrupted speech is transmitted through the loudspeaker, an echo is generated, so after some initial delay the microphone receives both the noise-corrupted speech and the echo of previously uttered speech. The task of the AEC is to cancel the echo component of this input using an adaptive filter algorithm. In order to obtain adaptively an estimate \hat{x}_s(n)+\hat{x}_v(n) of the echo signal, we propose to utilize delayed versions of the previously echo-suppressed samples of the noisy speech as the reference signal [19]. A hat on a variable indicates an estimated value. The error signal *e*(*n*) thus obtained is given by

e(n) = y(n) - \left[\hat{x}_s(n) + \hat{x}_v(n)\right].

(6)

The estimate of the echo signal can be expressed as

\hat{x}_s(n) + \hat{x}_v(n) = \hat{\mathbf{w}}_n^T\left[\hat{\mathbf{s}}(n-k_0) + \hat{\mathbf{v}}(n-k_0)\right],

(7)

where \hat{\mathbf{w}}_n = \left[\hat{w}_n(1), \hat{w}_n(2), \dots, \hat{w}_n(p)\right]^T is the estimated coefficient vector. The task of the adaptive filter is to obtain an optimum \hat{\mathbf{w}}_n by minimizing the error in (6), i.e.,

e(n) = s(n) + v(n) + \delta_s(n) + \delta_v(n),

(8)

where \delta_s(n) = x_s(n) - \hat{x}_s(n) and \delta_v(n) = x_v(n) - \hat{x}_v(n) are the residual echoes of the speech and noise portions of the input signal, respectively; these residuals are assumed to exhibit the properties of white Gaussian noise. Next, *e*(*n*) is passed through a spectral subtraction-based single-channel ANC block, which produces an output \tilde{s}(n) \approx s(n) + \Psi(n) that closely resembles *s*(*n*) provided that the residual echo-noise term *Ψ*(*n*) is very small.

It should be noted that, unlike in the proposed AENC scheme, noise reduction could be carried out prior to the AEC block. However, because of possible nonlinearities introduced by such a prior noise reduction block, no proper reference would be available for the single-channel AEC block [17]. Hence, the arrangement shown in Figure 3a is adopted, in which the noise reduction block also serves as a post-processor that attenuates the residual echo.

### 3.2 Development of proposed gradient-based single-channel LMS AEC scheme

A delayed version of the adaptive filter output *e*(*n*) is proposed as the reference signal, and from (8), the filter output *e*(*n*) can be written as

e(n) = \hat{s}(n) + \hat{v}(n),

(9)

where \hat{s}(n) = s(n) + \delta_s(n) and \hat{v}(n) = v(n) + \delta_v(n). The objective of the adaptive filter is to minimize the mean square error, and using (6) it can be written as

E\{e^2(n)\} = E\{(s(n)+v(n))^2\} + E\{(x_s(n)+x_v(n)-\hat{x}_s(n)-\hat{x}_v(n))^2\} + 2E\{(s(n)+v(n))(x_s(n)+x_v(n)-\hat{x}_s(n)-\hat{x}_v(n))\},

(10)

where *E*{·} denotes the expectation operator. In (10), the basic definition of the cross-correlation operation is used; for example, the cross-correlation function between *s*(*n*) and *v*(*n*) is defined as

r_{sv}(m) = E\{s(n)\,v(n-m)\},

(11)
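As a rough numerical illustration (not part of the original development), the lagged cross-correlation in (11) can be estimated by a sample average; for two independent zero-mean white sequences the estimate stays near zero at every lag, which is the property exploited below. The helper name `cross_correlation` and the biased estimator are illustrative assumptions:

```python
import numpy as np

def cross_correlation(s, v, m):
    """Sample estimate of r_sv(m) = E{s(n) v(n-m)} for a single lag m >= 0."""
    n = len(s)
    # Average s(n) * v(n-m) over the samples where both indices are valid.
    return np.mean(s[m:n] * v[0:n - m])

rng = np.random.default_rng(0)
s = rng.standard_normal(100_000)
v = rng.standard_normal(100_000)

print(cross_correlation(s, v, 0))    # near 0: independent signals
print(cross_correlation(s, s, 0))    # near 1: the variance of s
print(cross_correlation(s, s, 500))  # near 0: white signal, nonzero lag
```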

where *m* denotes the lag. Using (4), (5), (7), and the above definition, the last term of (10) can be expressed as

2E\{(s(n)+v(n))(x_s(n)+x_v(n)-\hat{x}_s(n)-\hat{x}_v(n))\} = 2\sum_{k=1}^{p}\left\{(a_n(k)-\hat{w}_n(k))\left(r_{ss}(k_0+k)+r_{sv}(k_0+k)+r_{vs}(k_0+k)+r_{vv}(k_0+k)\right) - r_{s\delta_s}(k_0+k) - r_{s\delta_v}(k_0+k) - r_{v\delta_s}(k_0+k) - r_{v\delta_v}(k_0+k)\right\}.

(12)

Here, r_{ss}(k_0+k) corresponds to the (*k*_0+*k*)th lag of the correlation between *s*(*n*) and its previous samples *s*(*n*−*k*_0−*k*), and r_{sv}(k_0+k) corresponds to the (*k*_0+*k*)th lag of the cross-correlation between *s*(*n*) and *v*(*n*−*k*_0−*k*). In a similar way, r_{vs}(k_0+k), r_{vv}(k_0+k), r_{s\delta_s}(k_0+k), r_{s\delta_v}(k_0+k), r_{v\delta_s}(k_0+k), and r_{v\delta_v}(k_0+k) can be defined. It is well known that the cross-correlation between two uncorrelated signals decreases rapidly with increasing lag; in the ideal case, the cross-correlation function between two random noise signals is nonzero only at zero lag. Since *v*(*n*) is assumed to be white Gaussian noise and the value of *k*_0 is generally very large, the effect of the terms r_{sv}(k_0+k), r_{vs}(k_0+k), and r_{vv}(k_0+k) in (12) can be neglected. Moreover, because of the noise-like characteristics of \delta_s(n) and \delta_v(n), the terms r_{s\delta_s}(k_0+k), r_{s\delta_v}(k_0+k), r_{v\delta_s}(k_0+k), and r_{v\delta_v}(k_0+k) in (12) can be neglected as well. Hence, optimal filter performance occurs when r_{ss}(k_0+k) is minimum, i.e., the least possible correlation between *s*(*n*−*k*_0−*k*) and *s*(*n*) is desired. As a result, (10) reduces to

E\{e^2(n)\} = E\{(s(n)+v(n))^2\} + E\{[x_s(n)+x_v(n)-\hat{x}_s(n)-\hat{x}_v(n)]^2\} + 2\sum_{k=1}^{p}\left(a_n(k)-\hat{w}_n(k)\right)r_{ss}(k_0+k).

(13)

Here, the magnitude of r_{ss}(k_0+k) strongly depends on the speech characteristics and on the amount of flat delay *k*_0. For a reasonably large *k*_0, the effect of r_{ss}(k_0+k) in (13) can be neglected, and minimization of (13) results in

\frac{\partial E\{e^2(n)\}}{\partial \hat{\mathbf{w}}_n^T} = 0 \quad\Longrightarrow\quad E\left[\left\{x_s(n)+x_v(n)-\hat{x}_s(n)-\hat{x}_v(n)\right\}\left\{\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right\}\right] = 0.

(14)

Hence, we obtain

E\left\{(x_s(n)+x_v(n))\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right)\right\} = \hat{\mathbf{w}}_n^T E\left[\left\{\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right\}\left\{\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right\}^T\right].

(15)

The above equation is similar to the Wiener-Hopf equation, and its solution can be written as

\hat{\mathbf{w}}_n^T = \mathbf{R}_{(s+v)(s+v)}^{-1}(n-k_0)\,\mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0),

(16)
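To make the Wiener-Hopf solution of (16) concrete, the following sketch builds the correlation quantities from synthetic white data and solves the normal equations. The signal length, flat delay, filter order, and echo-path weights are all hypothetical values chosen for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
p, k0, N = 4, 50, 20_000

# Synthetic noisy signal u(n) = s(n) + v(n), white for simplicity.
u = rng.standard_normal(N)
a = np.array([0.5, -0.3, 0.2, -0.1])   # assumed "true" echo-path weights
# Echo x(n) = sum_k a(k) u(n - k0 - k), cf. the delayed reference in (7).
x = np.zeros(N)
for k in range(p):
    x[k0 + k + 1:] += a[k] * u[:N - k0 - k - 1]

# Rows of U are the delayed references u(n - k0 - k), k = 1..p.
U = np.stack([np.concatenate([np.zeros(k0 + k + 1), u[:N - k0 - k - 1]])
              for k in range(p)])
R = U @ U.T / N        # autocorrelation matrix R_(s+v)(s+v)
r = U @ x / N          # cross-correlation vector r_(xs+xv)(s+v)
w = np.linalg.solve(R, r)   # Wiener-Hopf solution (16)
print(np.round(w, 2))       # close to a
```

Because `u` is white, `R` is close to the identity and the solved weights recover the assumed echo path.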

where \mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0) consists of different lags of the cross-correlation between the echo signal x_s(n)+x_v(n) and the noisy input signal *s*(*n*)+*v*(*n*), while **R**_{(s+v)(s+v)} is the autocorrelation matrix of *s*(*n*)+*v*(*n*). Hence, even for the single-channel noise-corrupted AEC problem, the optimum solution \hat{\mathbf{w}}_n can be achieved under the assumptions stated earlier.

For iterative estimation of the optimal filter coefficients, the adaptive LMS algorithm is very popular: it is fast and efficient, and it requires neither correlation measurements nor matrix inversion [13]. The update equation of the LMS adaptive algorithm is generally expressed as

\hat{\mathbf{w}}_{n+1}^T = \hat{\mathbf{w}}_n^T - \mu\,\nabla\xi(n),

(17)

where *μ* is the step factor controlling the stability and rate of convergence, *ξ*(*n*) is the cost function, and ∇ is the gradient operator. The LMS algorithm simply approximates the mean square error by the square of the instantaneous error, i.e., *ξ*(*n*)=*e*^{2}(*n*), and therefore, from (6) and (7), the gradient of *ξ*(*n*) can be expressed as

\nabla\xi(n) = \frac{\partial \xi(n)}{\partial \hat{\mathbf{w}}_n^T} = -2e(n)\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right).

Thus, the update equation for the proposed single-channel LMS adaptive scheme can be written as

\hat{\mathbf{w}}_{n+1}^T = \hat{\mathbf{w}}_n^T + 2\mu e(n)\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right).

(18)
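A minimal sketch of the update (18) is given below. The flat delay `k0`, filter order `p`, step size `mu`, and the toy echo (a single delayed, attenuated copy of the signal) are hypothetical choices for illustration; the reference vector is built from the delayed, previously echo-suppressed output, as described above:

```python
import numpy as np

def lms_aec(y, k0=30, p=4, mu=0.005):
    """Single-channel LMS echo canceller: the reference is the delayed,
    previously echo-suppressed output e(n-k0-1) .. e(n-k0-p)."""
    N = len(y)
    w = np.zeros(p)
    e = np.zeros(N)
    for n in range(N):
        # Delayed echo-suppressed samples serve as the reference vector.
        ref = np.array([e[n - k0 - k] if n - k0 - k >= 0 else 0.0
                        for k in range(1, p + 1)])
        e[n] = y[n] - w @ ref           # error signal, cf. (6)
        w = w + 2 * mu * e[n] * ref     # gradient update, cf. (18)
    return e, w

# Toy setup: white "speech" plus an echo at lag k0+1 with gain 0.8.
rng = np.random.default_rng(4)
s = rng.standard_normal(20_000)
k0 = 30
y = s.copy()
y[k0 + 1:] += 0.8 * s[:-(k0 + 1)]
e, w = lms_aec(y, k0=k0, p=4, mu=0.005)
print(np.mean(y[-2000:]**2), np.mean(e[-2000:]**2))  # residual power drops
```

Early in the run the output `e` still contains echo, so the reference is imperfect; as the filter adapts, `e` approaches the echo-free signal and the leading weight approaches the echo gain.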

### 3.3 Convergence analysis of the proposed AEC scheme

Taking the expectation of both sides of the update equation (18), one can obtain

\underline{\hat{\mathbf{w}}}_{n+1}^T = \underline{\hat{\mathbf{w}}}_n^T + 2\mu E\left\{e(n)\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right)\right\}.

(19)

Here, an underline beneath \hat{\mathbf{w}}_n is introduced to represent the expected value E\{\hat{\mathbf{w}}_n\}. For the *k*th unknown weight (where *k*=1,2,…,*p*), using (6) and neglecting the effect of r_{ss}(k_0+k) as discussed in the previous subsection, the last term of (19) can be written as

2\mu E\left\{e(n)\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right)\right\} = 2\mu E\left\{\left[x_s(n)+x_v(n)-\hat{x}_s(n)-\hat{x}_v(n)\right]\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right)\right\}.

(20)

Based on the assumptions on cross-correlation terms stated in the previous subsection, one can obtain

E\left\{e(n)\left(\hat{\mathbf{s}}(n-k_0)+\hat{\mathbf{v}}(n-k_0)\right)\right\} = \mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0) - \mathbf{R}_{(s+v)(s+v)}(n-k_0)\,\hat{\mathbf{w}}_n^T.

(21)

Using (21), the update equation (19) can be written as

\underline{\hat{\mathbf{w}}}_{n+1}^T = \underline{\hat{\mathbf{w}}}_n^T - 2\mu\,\mathbf{R}_{(s+v)(s+v)}(n-k_0)\,\underline{\hat{\mathbf{w}}}_n^T + 2\mu\,\mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0).

(22)

Evaluating the homogeneous and particular solutions of (22), the total solution can be obtained as (see Appendix)

\underline{\hat{w}}_{n+1}^U(k) = C_k\left(1-2\mu\lambda(k)\right)^n + \frac{1}{\lambda(k)}\,r^U(n-k_0-k),

(23)

where *λ*(*k*) is the *k*th diagonal element of the eigenvalue matrix \mathbf{\Lambda} obtained by eigenvalue decomposition of \mathbf{R}_{(s+v)(s+v)}(n-k_0), and r^U(n-k_0-k) is the *k*th element of \mathbf{U}^T\mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0) = \mathbf{r}^U_{(x_s+x_v)(s+v)}(n-k_0), with the matrix **U** consisting of the corresponding eigenvectors. Since the homogeneous part (1−2*μ* *λ*(*k*))^{n} diminishes over the iterations, provided that |1−2*μ* *λ*(*k*)|<1, (23) can be expressed in matrix form as

\underline{\hat{\mathbf{w}}}^T = \mathbf{U}\mathbf{\Lambda}^{-1}\mathbf{U}^T\,\mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0) = \mathbf{R}^{-1}_{(s+v)(s+v)}(n-k_0)\,\mathbf{r}_{(x_s+x_v)(s+v)}(n-k_0).

(24)

Thus, the average value of the weight vector converges to the Wiener-Hopf solution, i.e., the optimum solution, as the number of iterations increases.
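As a small numerical check of this convergence, with a synthetic positive-definite **R** and a random **r** (both hypothetical stand-ins for the correlation quantities above), iterating the mean-weight recursion (22) drives the weights to the Wiener-Hopf solution of (24):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 3
A = rng.standard_normal((p, p))
R = A @ A.T / p + np.eye(p)       # a positive-definite "autocorrelation" matrix
r = rng.standard_normal(p)        # a "cross-correlation" vector
w_opt = np.linalg.solve(R, r)     # Wiener-Hopf solution, cf. (24)

mu = 0.05                         # step size; needs |1 - 2*mu*lambda(k)| < 1
w = np.zeros(p)
for _ in range(2000):
    w = w - 2 * mu * R @ w + 2 * mu * r   # mean-weight recursion (22)

print(np.max(np.abs(w - w_opt)))  # tiny: mean weights reach the Wiener solution
```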

### 3.4 Noise reduction in spectral domain

In the proposed AENC scheme, the ANC block operates frame by frame, performing noise reduction based on a single-channel spectral subtraction algorithm [20–22]. According to (9), for the *i*th frame, the error signal over the frame length can be written as

e_i(n) = \hat{s}_i(n) + \hat{v}_i(n).

(25)

The corresponding frequency-domain representation is given by

E_i(\omega) = \hat{S}_i(\omega) + \hat{V}_i(\omega).

(26)

The magnitude-squared spectrum of \hat{s}_i(n) can be written as

|\hat{S}_i(\omega)|^2 = |E_i(\omega)|^2 - |\hat{V}_i(\omega)|^2 - \hat{V}_i(\omega)\hat{S}_i^{\ast}(\omega) - \hat{S}_i(\omega)\hat{V}_i^{\ast}(\omega).

(27)

It is desired to choose an estimate \tilde{S}_i(\omega) that will minimize

\mathit{Err}_i(\omega) = \left|\,|\tilde{S}_i(\omega)|^2 - |\hat{S}_i(\omega)|^2\,\right|.

(28)

Since the noise is assumed to be zero mean and uncorrelated with the signal, the expected values of the last two terms of (27) can be neglected. Thus, (28) can be expressed as

\mathit{Err}_i(\omega) = \left|\,|\tilde{S}_i(\omega)|^2 - |E_i(\omega)|^2 + E\{|\hat{V}_i(\omega)|^2\}\,\right|.

(29)

This expression of \mathit{Err}_i(\omega) can be minimized by choosing

|\tilde{S}_i(\omega)|^2 = |E_i(\omega)|^2 - E\{|\hat{V}_i(\omega)|^2\}.

(30)

With an estimate of the noise spectrum E\{|\hat{V}_i(\omega)|^2\}, the signal spectrum \tilde{S}_i(\omega) can be computed as

\tilde{S}_i(\omega) = |\tilde{S}_i(\omega)|\,e^{j\,\mathrm{arg}[E_i(\omega)]},

(31)

where the phase arg[*E*_{i}(*ω*)] of the noise-corrupted signal is generally used in place of the clean-speech phase, an assumption that does not cause significant degradation in the intelligibility of the speech signal [20]. Thus, an estimate of the magnitude spectrum |\tilde{S}_i(\omega)| of the signal can be obtained provided an estimate of the noise spectrum E\{|\hat{V}_i(\omega)|^2\} is available; the latter is generally computed during periods when speech is known *a priori* not to be present.

The final output of the AENC system is the speech frame \tilde{s}_i(n), which consists of the original speech *s*_{i}(*n*) and a negligible amount of noise-like signal *Ψ*_{i}(*n*). The signal *Ψ*_{i}(*n*), although very weak, may contain some signature of the input noise *v*(*n*), the residual echo *δ*_{s}(*n*), and the residual noise *δ*_{v}(*n*). In order to overcome the problem of musical noise and to avoid the speech distortion caused by spectral over-subtraction, an overestimate of the noise power spectrum can be subtracted carefully such that the spectral floor is preserved [21]. Thus, (30) can be modified as

|\tilde{S}_i(\omega)|^2 = \begin{cases} |E_i(\omega)|^2 - \alpha_{ss}\,E\{|\hat{V}_i(\omega)|^2\}, & \text{if } |E_i(\omega)|^2 - \alpha_{ss}\,E\{|\hat{V}_i(\omega)|^2\} > \beta_{ss}\,E\{|\hat{V}_i(\omega)|^2\}, \\ \beta_{ss}\,E\{|\hat{V}_i(\omega)|^2\}, & \text{otherwise.} \end{cases}

(32)

Here, \alpha_{ss} is the subtraction factor and \beta_{ss} is the spectral floor parameter, with \alpha_{ss} \geq 1 and 0 \leq \beta_{ss} \leq 1. The task of noise power spectral density estimation is carried out based on the minimum statistics noise estimator proposed in [23], which can handle the time-varying nature of the noise.
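The spectral-domain processing of (31) and (32) can be sketched as follows. The frame length, over-subtraction factor, floor parameter, and toy signals are hypothetical; for simplicity, the noise PSD is estimated here by averaging synthetic noise-only frames rather than by the minimum statistics estimator of [23]:

```python
import numpy as np

def spectral_subtract(e_frame, noise_psd, alpha_ss=2.0, beta_ss=0.01):
    """Power spectral subtraction with over-subtraction factor alpha_ss >= 1
    and spectral floor 0 <= beta_ss <= 1, following (31)-(32)."""
    E = np.fft.rfft(e_frame)
    power = np.abs(E) ** 2
    subtracted = power - alpha_ss * noise_psd
    floor = beta_ss * noise_psd
    cleaned = np.where(subtracted > floor, subtracted, floor)   # (32)
    # (31): subtracted magnitude combined with the noisy phase.
    S = np.sqrt(cleaned) * np.exp(1j * np.angle(E))
    return np.fft.irfft(S, n=len(e_frame))

rng = np.random.default_rng(2)
N = 256
n = np.arange(N)
s = np.sin(2 * np.pi * 13 * n / N)        # toy "speech": one bin-centred tone
v = 0.3 * rng.standard_normal(N)
# Noise PSD estimated from noise-only frames, as done during speech pauses.
noise_psd = (np.abs(np.fft.rfft(0.3 * rng.standard_normal((50, N)),
                                axis=1)) ** 2).mean(axis=0)
out = spectral_subtract(s + v, noise_psd)
print(np.mean(v ** 2), np.mean((out - s) ** 2))  # noise power before vs after
```

Over-subtraction removes most of the broadband noise while the floor `beta_ss * noise_psd` prevents isolated negative bins from turning into musical noise.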