# Speech enhancement with an acoustic vector sensor: an effective adaptive beamforming and post-filtering approach

Yue Xian Zou^{1} (email author), Peng Wang^{1}, Yong Qing Wang^{1}, Christian H Ritz^{2} and Jiangtao Xi^{2}

*EURASIP Journal on Audio, Speech, and Music Processing* **2014**, 2014:17

https://doi.org/10.1186/1687-4722-2014-17

© Zou et al.; licensee Springer. 2014

**Received: **21 January 2014

**Accepted: **3 April 2014

**Published: **27 April 2014

## Abstract

Speech enhancement is in increasing demand in mobile communications and faces great challenges in real noisy environments. This paper develops an effective spatial-frequency domain speech enhancement method using a single acoustic vector sensor (AVS) in conjunction with minimum variance distortionless response (MVDR) spatial filtering and Wiener post-filtering (WPF) techniques. In remote speech applications, MVDR spatial filtering is effective in suppressing strong spatial interferences, and Wiener post-filtering is a popular and powerful estimator for further suppressing the residual noise, provided the power spectral density (PSD) of the target speech can be estimated properly. Using the favorable directional response of the AVS together with the trigonometric relations of its steering vectors, a closed-form estimation of the signal PSDs is derived, and the frequency response of the optimal Wiener post-filter is determined accordingly. Extensive computer simulations and a real experiment in an anechoic chamber have been carried out to evaluate the performance of the proposed algorithm. Results under several objective measures show that the proposed method offers good ability to suppress spatial interference while achieving log spectral deviation and perceptual evaluation of speech quality (PESQ) performance comparable to conventional methods. Moreover, a single-AVS solution is particularly attractive for hands-free speech applications due to its compact size.

## 1 Introduction

As the presence of background noise significantly deteriorates the quality and intelligibility of speech, enhancement of speech signals has been an important and challenging problem, and various methods have been proposed in the literature to tackle it. Spectral subtraction, Wiener filtering, and their variations [1] are commonly used for suppressing additive noise, but they cannot effectively suppress spatial interference. To eliminate spatial interferences, beamforming techniques applied to microphone array recordings can be employed [2–9]. Among these, the minimum variance distortionless response (MVDR) beamformer, also known as the Capon beamformer, and its equivalent generalized sidelobe canceller (GSC) work successfully in remote speech enhancement applications [2]. However, the performance of MVDR-type methods grows with the number of array sensors used, which limits their application when only a few sensors are available. Moreover, the MVDR beamformer is not effective at suppressing additive noise, leaving residual noise in its output. As a result, the well-known Wiener post-filter is normally employed to further reduce the residual noise at the output of the beamformer [7]. Recently, speech enhancement using the acoustic vector sensor (AVS) array has received research attention due to the merits of spatially co-located microphones and inherent signal time alignment [5, 10–12]. Compared with a traditional microphone array, the compact structure (occupying a volume of approximately 1 cm^{3}) makes the AVS much more attractive for portable speech enhancement applications. Research showed that the AVS array beamformer with the MVDR method [5, 10] successfully suppresses spatial interferences but fails to effectively suppress background noise.
The integrated MVDR and Wiener post-filtering method using an AVS array [12] offers good performance in terms of suppressing spatial interferences and additive background noise, but it requires more than two AVS units as well as a good voice activity detection (VAD) technique.

In this paper, we focus on developing a speech enhancement solution capable of effectively suppressing spatial interferences and additive noise at a lower computational cost using only one AVS unit. More specifically, by exploiting the unique spatial co-location property (the signal arrives at all sensors at the same time) and the trigonometric relations of the steering vectors of the AVS, a single AVS-based speech enhancement system is proposed. The norm-constrained MVDR method is employed to form the spatial filter, while the optimal Wiener post-filter is designed using a novel closed-form power spectral density (PSD) estimation method. The proposed solution does not depend on a VAD technique (for noise estimation) and hence has the advantages of small size, low computation cost, and the ability to suppress both spatial interferences and background noise.

The paper is organized as follows. The data model of an AVS and the frequency domain MVDR (FMV) with a single AVS are presented in Section 2. The detailed derivation of the closed-form estimation of the signal PSDs for an optimal Wiener post-filtering (WPF) using the AVS structure is given in Section 3. The proposed norm-constrained FMV-effective Wiener post-filtering (NCFMV-EWPF) algorithm for speech enhancement is presented in Section 4. Simulation results are presented in Section 5. Section 6 concludes our work.

## 2 Problem formulation

### 2.1 Data model for an AVS unit

An AVS unit consists of one omnidirectional sensor (the *o*-sensor) and three orthogonally oriented directional sensors, depicted as the *u*-sensor, *v*-sensor, and *w*-sensor, respectively. As an example, Figure 1 shows a data capture system with an AVS unit. In this paper, to keep the derivation of the algorithm clear, we assume that there is one target speech *s*(*t*) at (*θ*_{s}, *ϕ*_{s}) = (90°, *ϕ*_{s}) and one interference signal *s*_{i}(*t*) at (*θ*_{i}, *ϕ*_{i}) = (90°, *ϕ*_{i}) impinging on this AVS unit, where *ϕ*_{s}, *ϕ*_{i} ∈ [0°, 360°) are the azimuth angles. Since *s*(*t*) and *s*_{i}(*t*) arrive in the horizontal plane, we only need the *u*-sensor, *v*-sensor, and *o*-sensor of the AVS unit to capture the signals. The angle difference between *s*(*t*) and *s*_{i}(*t*) is defined as

$$\Delta\phi = \phi_s - \phi_i \qquad (1)$$

The data received by the AVS unit can then be modeled as

$$\mathbf{x}_{\mathrm{avs}}(t) = \mathbf{a}(\phi_s)s(t) + \mathbf{a}(\phi_i)s_i(t) + \mathbf{n}_{\mathrm{avs}}(t) \qquad (2)$$

$$\mathbf{a}(\phi) = [\cos\phi,\ \sin\phi,\ 1]^T \qquad (3)$$

$$\mathbf{x}_{\mathrm{avs}}(t) = [x_u(t),\ x_v(t),\ x_o(t)]^T, \quad \mathbf{n}_{\mathrm{avs}}(t) = [n_u(t),\ n_v(t),\ n_o(t)]^T \qquad (4)$$

where [.]^{T} denotes the vector/matrix transposition and **n**_{avs}(*t*) is assumed to be the additive white Gaussian noise at the AVS unit. Here, *x*_{u}(*t*), *x*_{v}(*t*), and *x*_{o}(*t*) are the received data of the *u*-, *v*-, and *o*-sensor, respectively, and *n*_{u}(*t*), *n*_{v}(*t*), and *n*_{o}(*t*) are the noise captured at the *u*-, *v*-, and *o*-sensor, respectively. The task of speech enhancement with an AVS is to estimate *s*(*t*) from **x**_{avs}(*t*).

In this study, without loss of generality, we follow the commonly used assumptions [4]: (1) *s*(*t*) and *s*_{i}(*t*) are mutually uncorrelated; (2) *n*_{u}(*t*), *n*_{v}(*t*), and *n*_{o}(*t*) are mutually uncorrelated.
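As a concrete illustration of this data model, the sketch below simulates the three captured channels, assuming the standard horizontal-plane AVS steering vector a(*ϕ*) = [cos *ϕ*, sin *ϕ*, 1]^{T} (the *u*- and *v*-sensors respond with cosine/sine gains and the *o*-sensor omnidirectionally). The function names, signals, and noise level are illustrative, not from the paper:

```python
import numpy as np

def avs_steering(phi_deg):
    """Steering vector of a horizontal-plane AVS (u-, v-, o-sensor),
    assuming the standard cosine/sine directional response."""
    phi = np.deg2rad(phi_deg)
    return np.array([np.cos(phi), np.sin(phi), 1.0])

def avs_capture(s, s_i, phi_s, phi_i, noise_std=0.01, seed=None):
    """Simulate x_avs(t) = a(phi_s)s(t) + a(phi_i)s_i(t) + n_avs(t)."""
    rng = np.random.default_rng(seed)
    return (np.outer(avs_steering(phi_s), s)
            + np.outer(avs_steering(phi_i), s_i)
            + noise_std * rng.standard_normal((3, len(s))))  # rows: u, v, o

# toy example: 1 kHz target at 45 deg, 300 Hz interferer at 135 deg
t = np.arange(1600) / 16000.0
x_avs = avs_capture(np.sin(2 * np.pi * 1000 * t),
                    np.sin(2 * np.pi * 300 * t), 45.0, 135.0)
```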

### 2.2 FMV with a single AVS

The FMV beamformer is formed with the two directional sensors (the *u*-sensor and *v*-sensor) of the AVS unit. From (2) to (4), the data received by the *u*-sensor and the *v*-sensor can be modeled in the frequency domain as [14]

$$\mathbf{X}(f) = \mathbf{a}(\phi_s)S(f) + \mathbf{a}(\phi_i)S_i(f) + \mathbf{N}(f)$$

where **X**(*f*) = [*X*_{u}(*f*), *X*_{v}(*f*)]^{T} and **N**(*f*) = [*N*_{u}(*f*), *N*_{v}(*f*)]^{T}, and here **a**(*ϕ*) = [cos *ϕ*, sin *ϕ*]^{T} contains only the *u*- and *v*-sensor responses. The beamforming is then performed by applying a complex weight to the captured signals, and the output of the FMV can be denoted as

$$Y(f) = \mathbf{w}^H(f)\mathbf{X}(f) \qquad (11)$$

where ^{H} denotes the Hermitian transposition and **w**^{H}(*f*) = [*w*_{u}(*f*), *w*_{v}(*f*)] is the weight vector of the FMV. Let us define

$$g(\phi_s, f) = \mathbf{w}^H(f)\mathbf{a}(\phi_s) \qquad (12)$$

$$g(\phi_i, f) = \mathbf{w}^H(f)\mathbf{a}(\phi_i) \qquad (13)$$

where *g*(*ϕ*_{s}, *f*) and *g*(*ϕ*_{i}, *f*) can be viewed as the spatial response gains of the FMV to the target spatial signal *S*(*f*) and the spatial interference signal *S*_{i}(*f*), respectively. Substituting (12) and (13) into (11), we get

$$Y(f) = g(\phi_s, f)S(f) + g(\phi_i, f)S_i(f) + \mathbf{w}^H(f)\mathbf{N}(f) \qquad (14)$$

The FMV keeps a distortionless response *g*(*ϕ*_{s}, *f*) = 1 for *S*(*f*) while minimizing the output signal power (*P*_{YY} = *E*[*Y*(*f*)*Y**(*f*)]) of the FMV to suppress other undesired sources. Hence, the optimal weight vector of the FMV can be obtained by solving the constrained optimization problem [2]:

$$\min_{\mathbf{w}}\ \mathbf{w}^H(f)\mathbf{R}_{\mathbf{x}}(f)\mathbf{w}(f) \quad \text{subject to} \quad \mathbf{w}^H(f)\mathbf{a}(\phi_s) = 1 \qquad (15)$$

where **R**_{x}(*f*) = *E*[**X**(*f*)**X**^{H}(*f*)] is the autocorrelation matrix of the received data of the FMV. The optimal solution of (15) is given as [2]

$$\mathbf{w}_{\mathrm{FMV}}(f) = \frac{\mathbf{R}_{\mathbf{x}}^{-1}(f)\,\mathbf{a}(\phi_s)}{\mathbf{a}^H(\phi_s)\,\mathbf{R}_{\mathbf{x}}^{-1}(f)\,\mathbf{a}(\phi_s)} \qquad (16)$$

Since **a**(*ϕ*_{s}) is fixed (the speech target is static), **w**_{FMV}(*f*) depends on the estimate of ${\mathbf{R}}_{\mathbf{x}}^{-1}\left(\mathit{f}\right)$. Several methods have been proposed to estimate **R**_{x}(*f*) [1]; the diagonal loading technique is one robust algorithm aiming at avoiding the singularity in (16), which leads to the norm-constrained FMV (NCFMV) shown in (17) [3]:

$$\mathbf{w}_{\mathrm{NC}}(f) = \frac{\left(\mathbf{R}_{\mathbf{x}}(f) + \gamma\mathbf{I}\right)^{-1}\mathbf{a}(\phi_s)}{\mathbf{a}^H(\phi_s)\left(\mathbf{R}_{\mathbf{x}}(f) + \gamma\mathbf{I}\right)^{-1}\mathbf{a}(\phi_s) + \sigma} \qquad (17)$$

where *γ* is the positive loading factor and *σ* is a small positive number that keeps the denominator from becoming zero. It is expected that the NCFMV will greatly suppress the unwanted spatial signals. With (17), (12), (13), and some simple manipulations, the output of the NCFMV can be derived as

$$Y(f) \approx S(f) + g(\phi_i, f)S_i(f) + \mathbf{w}_{\mathrm{NC}}^H(f)\mathbf{N}(f) \qquad (18)$$
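For a single frequency bin, the NCFMV weight computation in (17) can be sketched with numpy as below. The function name, the toy covariance, and the check of the distortionless constraint are ours, not from the paper:

```python
import numpy as np

def ncfmv_weights(R_x, a_s, gamma=1e-5, sigma=1e-5):
    """Norm-constrained MVDR weights for one frequency bin, following
    w = (R + gamma*I)^{-1} a / (a^H (R + gamma*I)^{-1} a + sigma)."""
    R_loaded = R_x + gamma * np.eye(R_x.shape[0])
    Ra = np.linalg.solve(R_loaded, a_s)          # (R + gamma*I)^{-1} a
    return Ra / (np.vdot(a_s, Ra).real + sigma)  # vdot conjugates 1st arg

# sanity check: the response g(phi_s) = w^H a(phi_s) should be ~1
phi_s = np.deg2rad(45.0)
a_s = np.array([np.cos(phi_s), np.sin(phi_s)])   # u/v steering (no delays)
R = np.array([[2.0, 0.3], [0.3, 1.5]])           # toy covariance matrix
w = ncfmv_weights(R, a_s)
g_s = np.vdot(w, a_s)                            # w^H a(phi_s)
```

Because of the small *σ* in the denominator, the constraint is met only approximately, which is the price of the norm constraint.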

### 2.3 The estimation of the power spectral density

As discussed above, the NCFMV is only effective in suppressing the spatial interferences. In this section, a new solution is proposed by incorporating the well-known Wiener post-filter (WPF) to further suppress the residual noise in the beamformer output *Y*(*f*) in (18).

To estimate *S*(*f*) from *Y*(*f*), the frequency response of the Wiener filter is given by [6, 8]

$$W_{\mathrm{pf}}(f) = \frac{\psi_{YS}(f)}{\psi_{YY}(f)} \approx \frac{\psi_{SS}(f)}{\psi_{YY}(f)} \qquad (19)$$

where *ψ*_{YS}(*f*) is the cross-power spectral density (CSD) of *S*(*f*) and *Y*(*f*) and *ψ*_{YY}(*f*) is the power spectral density (PSD) of *Y*(*f*). Generally, *S*(*f*) is considered uncorrelated with the interference and noise, so the second equality in (19) follows approximately from (18). From (19), it is clear that good estimates of *ψ*_{SS}(*f*) and *ψ*_{YY}(*f*) from *X*(*f*) and *Y*(*f*) are crucial to the performance of the WPF. Several PSD estimation algorithms have been proposed under different spatial-frequency joint estimation schemes. For single-channel applications, as an example, a voice activity detection (VAD) method is usually applied to segment noise and speech, and a spectral subtraction algorithm then removes noise components before estimating *ψ*_{SS}(*f*). Moreover, for microphone array post-filtering schemes, *ψ*_{SS}(*f*) can be estimated from the available multichannel signals, which are assumed to operate within an incoherent noise environment [6].
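Once estimates of *ψ*_{SS}(*f*) and *ψ*_{YY}(*f*) are available, applying the post-filter of (19) to the beamformer output is straightforward. A minimal sketch follows; the clamping of the gain to [0, 1] is a common practical safeguard we add, not part of (19):

```python
import numpy as np

def wiener_postfilter(Y, psd_ss, psd_yy, floor=1e-12):
    """Apply W_pf(f) = psd_SS(f) / psd_YY(f) to the beamformer output
    spectrum Y(f); the gain is clamped to [0, 1] for robustness."""
    W = np.clip(psd_ss / np.maximum(psd_yy, floor), 0.0, 1.0)
    return W * Y

# a speech-dominated bin passes through; a noise-only bin is zeroed
Y = np.array([1.0 + 1.0j, 2.0 + 0.0j])
Z = wiener_postfilter(Y, psd_ss=np.array([4.0, 0.0]),
                      psd_yy=np.array([4.0, 5.0]))
```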

Since the AVS unit co-locates the *u*-, *v*-, and *o*-sensor signals and there exists a trigonometric relationship between the steering vectors **a**(*ϕ*_{s}) and **a**(*ϕ*_{i}) of the AVS, in this paper we derive a closed-form solution to estimate *ψ*_{SS}(*f*) and *ψ*_{YY}(*f*) and thereby form an optimal WPF. The proposed system diagram is shown in Figure 2.

## 3 The formulation of the Wiener post-filter

### 3.1 Derivation of the estimate of CSD and PSD

Let us define the auxiliary spectra *α*(*f*) and *β*(*f*). For notational brevity, the frequency index *f* will be dropped in the following derivation. Ideally, the additive noises of the *u*-, *v*-, and *o*-sensors have the same power, and the cross- and auto-power spectral densities of *α* and *β* can then be expressed in terms of $\psi_{NN}$, $g(\phi_i)$, $\psi_{S_iS_i}$, and $\psi_{SS}$. Hence, using (28) and (26), the PSD of the noise can be derived first; the PSDs of the interference *S*_{i} and the target speech *S* then follow, respectively, as (33) and (34).

### 3.2 The proposed EWPF method and some discussions

Till now, we have mathematically derived closed-form expressions for *ψ*_{SS} in (34), *ψ*_{YY} in (27), and *W*_{pf} in (19). Since *Y*, *X*_{u}, *X*_{v}, and *X*_{o} can be measured, the estimates of *ψ*_{SS} and *ψ*_{YY} can be determined accordingly. Hence, (33), (34), (27), and (19) describe the basic form of our proposed effective Wiener post-filtering algorithm for further enhancing the speech with an AVS (termed EWPF for short). In the following, we discuss several practical aspects of the proposed EWPF method.

Firstly, the CSD *ψ*_{αβ}(*f*) needs to be estimated. A recursive update formula is a popular approach:

$$\hat{\psi}_{\alpha\beta}(f, l) = \lambda\,\hat{\psi}_{\alpha\beta}(f, l-1) + (1 - \lambda)\,\alpha(f, l)\,\beta^{*}(f, l) \qquad (35)$$

where *l* is the frame index and *λ* ∈ (0, 1] is the forgetting factor.
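In code, the recursive update of (35) is a one-liner per frame; with a stationary input it converges to the true cross-spectrum, as this illustrative sketch shows:

```python
import numpy as np

def update_csd(psi_prev, alpha_l, beta_l, lam=0.6):
    """One step of (35):
    psi(f, l) = lam * psi(f, l-1) + (1 - lam) * alpha(f, l) * conj(beta(f, l))."""
    return lam * psi_prev + (1.0 - lam) * alpha_l * np.conj(beta_l)

# with constant inputs the estimate converges to alpha * conj(beta)
psi = 0.0 + 0.0j
for _ in range(50):
    psi = update_csd(psi, 2.0 + 0.0j, 1.0 + 1.0j)
```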

Secondly, when Δ*ϕ* defined in (1) is close to or equal to 0, the denominator in (32) goes to 0. To avoid this situation, a small positive factor *σ*_{r} is added to the denominator of (32), giving (36).

Thirdly, analyzing the properties of *g*(*ϕ*_{i}), we observe the following: (1) If the target source *s*(*t*) is considered short-time spatially stationary (approximately true for speech applications), **w**_{NC} in (17) can be updated every *L*_{u} frames to reduce computational complexity. Therefore, from the definition in (13), the gain *g*(*ϕ*_{i}) remains unchanged within *L*_{u} frames. However, ${\widehat{\mathit{\psi}}}_{\mathit{\alpha \beta}}\left(\mathit{f},\mathit{l}\right)$ is estimated frame by frame via (35); therefore, a more accurate estimate of *g*(*ϕ*_{i}) can be achieved by averaging over *L*_{u} frames. (2) From (36), it is clear that a small denominator will lead to large variation of *g*(*ϕ*_{i}), reflecting incorrect estimates, since the NCFMV is designed to suppress rather than amplify the interference. Hence, it is reasonable to apply a clipping function *f*_{c}(*x*, *b*) (see (43)) to remove outliers in the estimate $\widehat{\mathit{g}}\left({\mathit{\varphi}}_{\mathit{i}}\right)$.
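These two observations can be sketched as follows. Since the exact clipping function (43) is not reproduced above, the magnitude-limit form used here is our assumption, as are the function names:

```python
import numpy as np

def clip_outliers(x, b):
    """Clipping f_c(x, b): values with magnitude above the threshold b
    are limited to b (one plausible form of (43); the exact definition
    is not reproduced in the text above)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) > b, b, x)

def smooth_gain(g_frames, b=6.0):
    """Estimate g(phi_i) by clipping per-frame values and averaging over
    the L_u frames during which the NCFMV weights are held fixed."""
    return clip_outliers(g_frames, b).mean()
```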

## 4 The proposed NCFMV-EWPF algorithm

To compute **w**_{NC} in (17), the estimate of **R**_{x}(*k*) is given in (37) [10], where *k* = 1, 2, …, *K* is the frequency bin index and *C*_{d} is a constant slightly greater than one that helps avoid matrix singularity. *F* is the number of frames used for estimating **R**_{x}(*k*); in our study it is set as *F* = 2*L*_{u}. Defining *X*_{u}(*k*, *l*) and *X*_{v}(*k*, *l*) as the *k*th spectral component of the *l*th frame of *x*_{u}(*n*) and *x*_{v}(*n*), respectively, **R**_{x}(*k*) is estimated from the *F* most recent fast Fourier transforms (FFTs). The robust estimation of *W*_{pf}(*k*) in (19) therefore requires a robust estimate of *g*(*ϕ*_{i}, *k*). Following the discussion in Section 3.2, we adopt the estimation in (42), where *L*_{1} = fix((*l* − 1)/*L*_{u})*L*_{u} + 1, *L*_{2} = fix((*l* − 1)/*L*_{u})*L*_{u} + *L*_{u}, fix(.) is the floor operation, *b* is a predefined threshold, and *f*_{c}(*x*, *b*) is the clipping function defined in (43).

For presentation completeness, the proposed NCFMV-EWPF algorithm is summarized in Algorithm 1.
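A per-bin sample covariance estimate over the *F* most recent FFT frames can be sketched as below; how exactly *C*_{d} enters (37) is our reading (a scaling of the diagonal), since the equation itself is not reproduced above:

```python
import numpy as np

def estimate_Rx(X_frames, C_d=1.1):
    """Sample covariance for one frequency bin k from the F most recent
    FFT frames. X_frames has shape (F, 2) with rows [X_u(k,l), X_v(k,l)].
    The diagonal is scaled by C_d > 1 to keep the matrix invertible
    (our reading of the loading constant in (37))."""
    F = X_frames.shape[0]
    R = X_frames.T @ X_frames.conj() / F   # (1/F) * sum_l x_l x_l^H
    R[np.diag_indices_from(R)] *= C_d
    return R

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], dtype=complex)  # F = 3
R = estimate_Rx(X)
```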

## 5 Simulation study

1. Output signal-to-interference-plus-noise ratio (SINR), defined as [7]

$$\mathrm{SINR} = 10\log\left(\left\|z_s(t)\right\|^2 / \left\|x_o(t) - z_s(t)\right\|^2\right) \qquad (44)$$

where *z*_{s}(*t*) is the enhanced speech output of the system and *x*_{o}(*t*) is the signal received by the *o*-sensor. Moreover, a segmental output SINR is calculated on a frame-by-frame basis and then averaged over the total number of frames to obtain a more accurate prediction of perceptual speech quality [7].
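The overall and segmental SINR of (44) can be computed as below; the base-10 logarithm and the 20 ms frame length at 16 kHz are our assumptions:

```python
import numpy as np

def output_sinr_db(z_s, x_o):
    """Output SINR of (44): 10*log10(||z_s||^2 / ||x_o - z_s||^2)."""
    return 10.0 * np.log10(np.sum(z_s**2) / np.sum((x_o - z_s)**2))

def segmental_sinr_db(z_s, x_o, frame_len=320):
    """Frame-wise SINR averaged over frames (segmental output SINR)."""
    n = (len(z_s) // frame_len) * frame_len
    z = z_s[:n].reshape(-1, frame_len)
    x = x_o[:n].reshape(-1, frame_len)
    num = np.sum(z**2, axis=1)
    den = np.sum((x - z)**2, axis=1)
    valid = (num > 0) & (den > 0)          # skip silent/degenerate frames
    return float(np.mean(10.0 * np.log10(num[valid] / den[valid])))
```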

2. Log spectral deviation (LSD), which measures speech distortion and is defined as [16]

$$\mathrm{LSD} = \left\|\ln\left(\psi_{ss}(f)/\psi_{zz}(f)\right)\right\| \qquad (45)$$

where *ψ*_{ss}(*f*) is the PSD of the target speech and *ψ*_{zz}(*f*) is the PSD of the enhanced speech. A smaller LSD indicates less speech distortion. Similar to the SINR, a segmental LSD is computed.
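A direct computation of (45) follows, using the L2 norm over frequency bins (the specific choice of norm is our assumption):

```python
import numpy as np

def lsd(psd_s, psd_z, floor=1e-12):
    """Log spectral deviation of (45): the norm of ln(psd_s(f)/psd_z(f)),
    taken here as the L2 norm over frequency bins of one frame."""
    ratio = np.maximum(psd_s, floor) / np.maximum(psd_z, floor)
    return float(np.linalg.norm(np.log(ratio)))
```

For a segmental LSD, this value is computed per frame and then averaged, mirroring the segmental SINR.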

In addition, we compared the Zelinski post-filter (ZPF) [4], NCFMV [5], and NCFMV-ZPF [6] algorithms with our proposed algorithm under the same conditions. The setup of the single AVS unit is shown in Figure 1.

Clean speech acts as the target speech *s*(*t*), and babble speech taken from the Noisex-92 database [18] acts as the interference speech *s*_{i}(*t*). One set of the typical waveforms used in our simulation studies is shown in Figure 3.

### 5.1 Experiments on simulated data

#### 5.1.1 Experiment 1: the SINR performance under different noise conditions

In this experiment, the parameters were set as *λ* = 0.6, *σ*_{r} = 10^{−3}, *L*_{u} = 4, *γ* = *σ* = 10^{−5}, *C*_{d} = 1.1, and *b* = 6, which produced the best experimental results under this specific setup. For the comparison algorithms, the parameter settings are the same as those in the corresponding papers. The experimental results are listed in Table 1.

**SINR-out for different algorithms (dB)**

| Algorithm | ZPF [4] | NCFMV [5] | NCFMV-ZPF [6] | NCFMV-EWPF | SINR-input (dB) |
|---|---|---|---|---|---|
| Trial 1 (n…) | 2.7 | 12.6 | 14.8 | | 0 |
| Trial 2 (n…) | 7.8 | 12.8 | 16.4 | | 5 |
| Trial 3 (n…) | 13.1 | 13.4 | 18.3 | | 10 |
| Trial 4 (n…) | 8.1 | 2.0 | 7.8 | | 0 |
| Trial 5 (n…) | | 6.5 | | 13.2 | 5 |
| Trial 6 (n…) | | 9.1 | | 16.5 | 10 |
| Trial 7 (n…) | 3.1 | 8.1 | 11.9 | | 0 |
| Trial 8 (n…) | 8.3 | 10.3 | 14.7 | | 5 |
| Trial 9 (n…) | 13.6 | 12.4 | 18.9 | | 10 |

In some trials, no spatial interference is present (*s*_{i}(*t*) = 0). The performance for trial 5 indicates that the proposed NCFMV-EWPF is not as effective as the ZPF in suppressing additive noise at higher SNR (SNR > 10 dB) when spatial interference is absent. Overall, these experimental results demonstrate the superior capability of the proposed NCFMV-EWPF in suppressing spatial interference and adverse additive noise. For visualization purposes, the results in Table 1 are also plotted in Figure 4, where the *x*-axis represents the SINR of the signal captured by the AVS (termed SINR-input) and the *y*-axis represents the SINR of the enhanced speech (termed SINR-out).

#### 5.1.2 Experiment 2: the impact of the angle between the target and interference speakers

This experiment evaluates the impact of the angle difference between the target and interference speakers (Δ*ϕ* = *ϕ*_{s} − *ϕ*_{i}) on the performance of the NCFMV-EWPF algorithm. The results of SINR-out versus Δ*ϕ* are shown in Figure 5, where the same experimental settings as in trial 7 of experiment 1 were adopted, except that the target speech location *ϕ*_{s} varied from (90°, 0°) to (90°, 360°) in 45° increments. From Figure 5, it is clear that when Δ*ϕ* → 0° (the target speaker moves closer to the interference speaker), the SINR-out of both algorithms drops significantly and almost goes to 0, meaning that speech enhancement is very limited under this condition. However, when Δ*ϕ* > 0°, the SINR-out gradually increases. It is encouraging to see that the SINR-out of our proposed NCFMV-EWPF algorithm is superior to that of the NCFMV algorithm at all angles. Moreover, the SINR-out of the NCFMV-EWPF algorithm stays at about 15 dB when Δ*ϕ* ≥ 45°.

#### 5.1.3 Experiment 3: SINR, LSD, and PESQ performance

This experiment evaluates the SINR, LSD, and PESQ performance with the angle difference fixed at Δ*ϕ* = 45°. The experimental results are given in Table 2. It can be seen that the overall performance of our proposed NCFMV-EWPF algorithm is superior to that of the other comparison algorithms, while its LSD and PESQ performance is comparable to that of the NCFMV-ZPF [6] algorithm. It is encouraging to see that the proposed NCFMV-EWPF algorithm effectively suppresses the interference and additive noise while maintaining good speech quality and low distortion.

### 5.2 Experiments on recorded data in an anechoic chamber

#### 5.2.1 Experiment 4: the SINR-out performance with different speakers

The AVS unit used for recording consists of two orthogonally oriented directional sensors (*u*-sensor and *v*-sensor) and one Knowles EK-3132 sensor (*o*-sensor) (Knowles Electronics Inc., Itasca, IL, USA). Recordings were made of 10 different speech sentences from the IEEE speech corpus [20] in an anechoic chamber, with background noise coming only from computer servers and air conditioning. The anechoic chamber thus approximates a noise-free field: **n**_{avs}(*t*) ≈ 0 while *s*_{i}(*t*) ≠ 0. The sampling rate was 48 kHz, later down-sampled to 16 kHz for speech enhancement. The speakers were placed in front of the AVS at a distance of 1 m. The target speech was located at a fixed position (90°, 45°), while the interference speech was located at (90°, 90°). Ten trials were carried out using the 10 different target speech sentences.

The per-trial results are plotted with the *x*-axis representing the trial index and the *y*-axis representing the SINR of the enhanced speech (in dB). It is clear that the proposed NCFMV-EWPF algorithm provides superior SINR-out performance for all trials when the SINR-input of the recorded data is about −5 dB. These experimental results with real recorded data further validate the effectiveness of the proposed NCFMV-EWPF in suppressing strong competing speech.

#### 5.2.2 Experiment 5: the impact of the angle between the target and interference speakers

This experiment investigates the impact of the angle difference between the target and interference speakers (Δ*ϕ* = |*ϕ*_{s} − *ϕ*_{i}|) on the performance of the NCFMV-EWPF algorithm with recorded data. The results of SINR-out versus Δ*ϕ* are shown in Figure 8, where the experimental setup is the same as that of experiment 4, except that the angle of the target speaker (*ϕ*_{s}) varies from (90°, 90°) to (90°, 0°) in 15° decrements.

From Figure 8, it is clear that the performance of the proposed NCFMV-EWPF algorithm is superior to that of the NCFMV algorithm for all Δ*ϕ* values. Compared to the results shown in Figure 5 for the simulated data, similar conclusions can be drawn. More specifically, with the recorded data, the proposed NCFMV-EWPF algorithm effectively enhances the target speech when Δ*ϕ* > 15°.

#### 5.2.3 Experiment 6: PESQ performance versus *Δϕ*

In this experiment, the angle of the interference speaker (*ϕ*_{i}) was fixed at (90°, 90°) and the angle of the target speaker (*ϕ*_{s}) varied from (90°, 90°) to (90°, 0°) in 15° decrements. The experimental results are given in Figure 9. It can be seen that the overall PESQ performance of our proposed NCFMV-EWPF algorithm is superior to that of the comparison algorithm for all angle differences. This experiment also demonstrates the ability of the proposed NCFMV-EWPF algorithm to effectively suppress interference and additive noise while maintaining good speech quality and low distortion when Δ*ϕ* > 15°.

## 6 Conclusions

In this paper, a novel speech enhancement algorithm, NCFMV-EWPF, has been derived for a single AVS unit based on an efficient closed-form estimation of the power spectral densities of the signals. Computer simulation results show that the proposed NCFMV-EWPF algorithm outperforms the existing ZPF, NCFMV, and NCFMV-ZPF algorithms in suppressing a competing speaker and a noise field. Results of real experiments show that, compared with the NCFMV algorithm, the proposed NCFMV-EWPF algorithm can effectively suppress competing speech and additive noise while maintaining good speech quality and low distortion. In addition, the NCFMV-EWPF algorithm does not require a VAD technique, which not only reduces computational complexity but also provides more robust performance in noisy environments: higher output SINR, less speech distortion, and better speech intelligibility. The approach developed in this paper is therefore expected to be a suitable solution for implementation within hands-free speech recording systems.

## Declarations

### Acknowledgements

This work is partially supported by the National Natural Science Foundation of China (No. 61271309) and the Shenzhen Science & Technology Fundamental Research Program (No. JCY201110006). It was also partially supported by the Australian Research Council Grant DP1094053.

## References

1. Boll S: Suppression of acoustic noise in speech using spectral subtraction. *IEEE Trans. Acoust. Speech Signal Process.* 1979, 27(2):113–120. doi:10.1109/TASSP.1979.1163209
2. Griffiths LJ, Jim CW: An alternative approach to linearly constrained adaptive beamforming. *IEEE Trans. Antennas Propag.* 1982, 30(1):27–34. doi:10.1109/TAP.1982.1142739
3. Zou YX, Chan SC, Wan B, Zhao J: Recursive robust variable loading MVDR beamforming in impulsive noise environment. Paper presented at the IEEE Asia Pacific Conference on Circuits and Systems, Macao; 2008:988–991.
4. Zelinski R: A microphone array with adaptive post-filtering for noise reduction in reverberant rooms. Paper presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New York; 1988.
5. Lockwood ME, Jones DL: Beamformer performance with acoustic vector sensors in air. *J. Acoust. Soc. Am.* 2006, 119:608–619. doi:10.1121/1.2139073
6. McCowan IA, Bourlard H: Microphone array post-filter based on noise field coherence. *IEEE Trans. Speech Audio Process.* 2003, 11(6):709–716. doi:10.1109/TSA.2003.818212
7. Benesty J, Sondhi MM, Huang Y: *Springer Handbook of Speech Processing*. Springer, Berlin-Heidelberg; 2008.
8. Vaseghi SV: *Advanced Digital Signal Processing and Noise Reduction*. 2nd edition. John Wiley & Sons, Chichester; 2000.
9. Bitzer J, Simmer KU, Kammeyer KD: Multichannel noise reduction algorithms and theoretical limits. Paper presented at the EURASIP European Signal Processing Conference (EUSIPCO), Rhodes; 1998.
10. Lockwood ME, Jones DL, Bilger RC, Lansing CR, O'Brien WD, Wheeler BC, Feng AS: Performance of time- and frequency-domain binaural beamformers based on recorded signals from real rooms. *J. Acoust. Soc. Am.* 2004, 115:379. doi:10.1121/1.1624064
11. Shujau M, Ritz CH, Burnett IS: Speech enhancement via separation of sources from co-located microphone recordings. Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas; 2010.
12. Wu PKT, Jin C, Kan A: A multi-microphone speech enhancement algorithm tested using acoustic vector sensor. Paper presented at the 12th International Workshop on Acoustic Echo and Noise Control, Tel Aviv-Jaffa; 2010.
13. Li B, Zou YX: Improved DOA estimation with acoustic vector sensor arrays using spatial sparsity and subarray manifold. Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto; 2012.
14. Shi W, Zou YX, Li B, Ritz CH, Shujau M, Xi J: Multisource DOA estimation based on time-frequency sparsity and joint inter-sensor data ratio with single acoustic vector sensor. Paper presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver; 2013.
15. Shujau M: In air acoustic vector sensors for capturing and processing of speech signals. Dissertation, University of Wollongong; 2011.
16. Gray R, Buzo A, Gray JA, Matsuyama Y: Distortion measures for speech processing. *IEEE Trans. Acoust. Speech Signal Process.* 1980, 28(4):367–376. doi:10.1109/TASSP.1980.1163421
17. ITU-T: Recommendation P.862 - Perceptual Evaluation of Speech Quality (PESQ): An Objective Method for End-to-End Speech Quality Assessment of Narrow-Band Telephone Networks and Speech Codecs. International Telecommunication Union - Telecommunication Standardization Sector, Geneva; 2001.
18. NOISEX-92. http://www.speech.cs.cmu.edu/comp.speech/Section1/Data/noisex.html
19. Ritz CH, Burnett IS: Separation of speech sources using an acoustic vector sensor. Paper presented at the IEEE International Workshop on Multimedia Signal Processing, Hangzhou; 2011.
20. IEEE Subcommittee: IEEE recommended practice for speech quality measurements. *IEEE Trans. Audio Electroacoust.* 1969, AU-17(3):225–246.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.