- Research Article
- Open Access
Monaural Voiced Speech Segregation Based on Dynamic Harmonic Function
© Xueliang Zhang et al. 2010
- Received: 17 September 2010
- Accepted: 2 December 2010
- Published: 12 December 2010
The correlogram is an important representation for periodic signals and is widely used in pitch estimation and source separation. For these applications, the major problems of the correlogram are its low resolution and redundant information. This paper proposes a voiced speech segregation system based on a newly introduced concept called the dynamic harmonic function (DHF). In the proposed system, conventional correlograms are further processed by replacing the autocorrelation function (ACF) with the DHF. The advantages of the DHF are that (1) the peak width is adjustable by controlling the variance of the Gaussian function and (2) the invalid peaks of the ACF, those not at the pitch period, tend to be suppressed. Based on the DHF, a pitch detection algorithm and an effective source segregation algorithm are proposed. Our system is systematically evaluated and compared with a correlogram-based system. Both the signal-to-noise ratio results and the perceptual evaluation of speech quality scores show that the proposed system yields substantially better performance.
- Harmonic Order
- Clean Speech
- Complex Tone
- Pitch Contour
- Pitch Period
In realistic environments, speech is often corrupted by acoustic interference, and many applications perform poorly on noisy speech. Noise reduction, or speech enhancement, is therefore valuable for systems such as speech recognizers and hearing aids. Numerous speech enhancement algorithms have been proposed in the literature. Methods such as independent component analysis and beamforming require multiple sensors, a requirement that cannot be met in many applications such as telecommunication. Spectral subtraction and subspace analysis, proposed for monaural speech enhancement, usually make strong assumptions about the acoustic interference, so these methods are limited to particular environments. Segregating speech from a single monaural recording has proven to be very challenging, and it remains an open problem in realistic environments.
In contrast to the limited performance of speech enhancement algorithms, human listeners with normal hearing are capable of dealing with sound intrusions, even under monaural conditions. According to Bregman, the human auditory system segregates a target sound from interference through a process called auditory scene analysis (ASA), which has two parts: (1) decomposition of the sound signal and (2) grouping of the components. Bregman held that component organization includes sequential organization over time and simultaneous organization across frequency. Efforts to simulate ASA inspired a new field, computational auditory scene analysis (CASA), which has attracted growing attention. Compared with other general methods, CASA can operate on single-channel input and makes no strong assumptions about prior knowledge of the noise.
A large proportion of sounds, such as vowels and musical tones, have harmonic structure. Their most distinctive characteristic is that they consist of a fundamental and several overtones, together called the harmonic series. A good deal of evidence suggests that harmonics tend to be perceived as a single sound, a phenomenon called the "harmonicity" principle in ASA. Pitch and harmonic structure provide an efficient mechanism for voiced speech segregation in CASA systems [8, 9]: the continuous variation of pitch supports sequential grouping, and harmonic structure supports simultaneous grouping. Licklider proposed that pitch could be extracted from nerve firing patterns by a running autocorrelation function applied to the activity of individual fibers. Licklider's theory was implemented by several researchers (e.g., [11–14]). Meddis and Hewitt implemented a similar computer model of harmonic perception. Specifically, their model first simulated the mechanical filtering of the basilar membrane to decompose the signal and then the mechanism of neural transduction at the hair cell. Their important innovation was to apply autocorrelation to model the analysis of neural firing rates in humans. These banks of autocorrelation functions (ACFs) are called correlograms, and they provide a simple route to pitch estimation and source separation. For pitch estimation, previous research showed that the peaks of summary correlograms indicate the pitch periods. Based on their experimental results, Meddis and Hewitt argued that their model could explain many pitch perception phenomena, including the missing fundamental, ambiguous pitch, the pitch of interrupted noise, inharmonic components, and the dominant region of pitch. For source separation, the method is to check directly whether the pitch period is close to a peak of the correlogram.
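These periodicity cues can be illustrated with a short sketch. The following Python example is an illustration, not the authors' implementation; for simplicity it skips the auditory filterbank and computes a single full-band ACF. It shows that the ACF of a complex tone peaks at the pitch period even when the fundamental itself is absent (the "missing fundamental" phenomenon mentioned above):

```python
import numpy as np

fs = 16000                                   # sampling rate (Hz)
t = np.arange(int(0.04 * fs)) / fs           # 40 ms of signal
# complex tone made of harmonics 2-4 of a 200 Hz fundamental:
# the 200 Hz component itself is missing
x = sum(np.sin(2 * np.pi * f * t) for f in (400.0, 600.0, 800.0))

W, max_lag = 320, 200                        # 20 ms window, lags up to 12.5 ms
r = np.array([np.dot(x[:W], x[tau:tau + W]) for tau in range(max_lag + 1)])
r /= r[0]                                    # normalize by the zero-lag energy

# pitch-period estimate: first local maximum above 0.8 in a 2-12.5 ms range
est = next(tau for tau in range(32, max_lag)
           if r[tau - 1] < r[tau] >= r[tau + 1] and r[tau] > 0.8)
print(est, fs / est)                         # lag of 80 samples -> 200 Hz pitch
```

The estimated lag of 80 samples corresponds to a 200 Hz pitch even though the signal contains no energy at 200 Hz.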
Owing to these advantages, the correlogram is widely used in pitch detection and speech separation algorithms [8, 9, 15].
However, the correlogram has some unsatisfactory properties. One, noted in previous work, is that the peak corresponding to the pitch period of a pure tone is rather wide. This leads to low resolution in pitch extraction, since mutual overlap between voices weakens their pitch cues. Several methods have been proposed to obtain narrower peaks, such as the "narrowed" ACF and the generalized correlation function. Another problem is the redundant information carried by the "invalid" peaks of the ACF: when using the correlogram to estimate pitch and separate sound sources, we care mainly about the ACF peak at the pitch period. For example, one algorithm uses the maximum peak of the summary correlogram to indicate the pitch period, but competing peaks at multiples of the pitch period may lead to subharmonic errors. To overcome these drawbacks, we must first make the peaks narrower and second remove or suppress the peaks that are not at the pitch period. We propose a novel feature, the dynamic harmonic function (DHF), to solve these two problems. The basic idea of the DHF is presented in the next section.
The rest of the paper is organized as follows. We first present the basic idea behind the DHF in Section 2. Section 3 gives an overview of our model and a detailed description. Our system is systematically evaluated and compared with the Hu and Wang model for speech segregation in Section 4, followed by the discussion in Section 5 and the conclusion in Section 6.
The DHF is defined as a Gaussian mixture function. The Gaussian means are set to the peak positions of the ACF, which carry the periodic information of the original signal in a certain frequency range. The peak width can be narrowed by adjusting the Gaussian variances, while the Gaussian mixture coefficients control the peak heights of the DHF. The problem is how to estimate the mixture coefficients; the basic idea is as follows.
3.1. Front-End Processing
3.1.1. Signal Decomposition
First, the input signal is decomposed by a 128-channel gammatone filterbank whose center frequencies are quasilogarithmically spaced from 80 Hz to 5 kHz and whose bandwidths are set according to the equivalent rectangular bandwidth (ERB). The gammatone filterbank simulates the characteristics of the basilar membrane in the cochlea. The outputs of the filterbank are then converted into neural firing rates by a hair cell model; the same processing is employed in [9, 15]. Amplitude modulation (AM) is important for channels dominated by multiple harmonics: psychoacoustic experiments have demonstrated that amplitude modulation, or beat rate, is perceived within a critical band whose harmonic partials are unresolved. The AM of a channel is obtained by performing the Hilbert transform on the gammatone filter output and then filtering the squared Hilbert envelope with a filter whose passband is 50–550 Hz. In the remainder of the paper, the gammatone filter output, the hair cell output, and the amplitude modulation of a channel are denoted by the corresponding symbols.
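As a sketch of how such quasilogarithmic spacing can be obtained, center frequencies can be placed uniformly on the ERB-rate scale. The standard Glasberg–Moore formulas are used here as an assumption; the exact constants used by the authors are not given in the text:

```python
import numpy as np

def erb_space(low_hz, high_hz, n):
    """Place n center frequencies uniformly on the ERB-rate scale."""
    # Glasberg-Moore ERB-rate: E(f) = 21.4 * log10(0.00437 f + 1)
    e = lambda f: 21.4 * np.log10(0.00437 * f + 1.0)
    e_inv = lambda x: (10.0 ** (x / 21.4) - 1.0) / 0.00437
    return e_inv(np.linspace(e(low_hz), e(high_hz), n))

cf = erb_space(80.0, 5000.0, 128)   # 128 channels from 80 Hz to 5 kHz
# spacing is nearly linear at low frequencies, logarithmic at high ones
```

The endpoints land exactly at 80 Hz and 5 kHz, and the channel density follows the frequency resolution of the cochlea.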
Time-frequency (T-F) units are then formed in each channel with a 20 ms window and a 10 ms shift; each T-F unit corresponds to one frequency channel and one time frame. The T-F units will be segregated into foreground and background according to their features.
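A minimal sketch of this framing (the function name is ours):

```python
import numpy as np

def tf_frames(x, fs, win_ms=20, hop_ms=10):
    """Split one channel's response into overlapping time frames."""
    win = int(fs * win_ms / 1000)    # 20 ms -> 320 samples at 16 kHz
    hop = int(fs * hop_ms / 1000)    # 10 ms -> 160 samples
    n = 1 + (len(x) - win) // hop    # number of complete frames
    return np.stack([x[i * hop:i * hop + win] for i in range(n)])

frames = tf_frames(np.zeros(16000), fs=16000)   # 1 s of signal
print(frames.shape)                              # (99, 320)
```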
3.1.2. Feature Extraction
where the lag indexes the candidate periods, the shift corresponds to 10 ms, and the window length to 20 ms.
In a unit dominated by unresolved harmonics, severe fluctuation of the envelope makes this ratio small. Hence, we regard a unit as unresolved if the ratio falls below a threshold and as resolved otherwise; the threshold was chosen according to our experiments.
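The text does not give the exact formula, so the sketch below uses one plausible formulation as an assumption: the frame energy of the filter response divided by the energy of the envelope fluctuation. A unit dominated by a single (resolved) harmonic has a nearly flat envelope and thus a large ratio, while beating between unresolved harmonics makes the envelope fluctuate and the ratio small:

```python
import numpy as np

def envelope(x):
    """Hilbert envelope via the FFT-based analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def carrier_to_envelope_ratio(x):
    """Frame energy over envelope-fluctuation energy (our formulation)."""
    env = envelope(x)
    fluct = env - env.mean()
    return np.sum(x ** 2) / (np.sum(fluct ** 2) + 1e-12)

fs = 16000
t = np.arange(320) / fs                                      # one 20 ms frame
resolved = np.sin(2 * np.pi * 1000 * t)                      # single harmonic
unresolved = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1100 * t)
# the resolved frame yields a far larger ratio than the beating frame
```

With a suitable threshold between the two ratio regimes, the classification follows directly.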
where the two correlated signals are the zero-mean, unit-variance versions of the responses being compared.
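A sketch of this cross-channel correlation between two normalized responses (an illustration of the formula above, not the authors' code):

```python
import numpy as np

def cross_channel_corr(x, y):
    """Correlation of zero-mean, unit-variance versions of two responses."""
    xn = (x - x.mean()) / x.std()
    yn = (y - y.mean()) / y.std()
    return float(np.mean(xn * yn))

fs = 16000
t = np.arange(320) / fs
a = np.sin(2 * np.pi * 1000 * t)          # adjacent channels driven by
b = np.sin(2 * np.pi * 1000 * t + 0.1)    # the same 1000 Hz component
c = np.sin(2 * np.pi * 3000 * t)          # a channel driven by another component
# corr(a, b) is close to 1, corr(a, c) is close to 0
```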
3.2. Dynamic Harmonic Function
where the lag is the same as in the ACF and the number of mixture components equals the number of peaks of the ACF.
In formula (6), four parameters must be computed: the number of components, the Gaussian means, the Gaussian variances, and the Gaussian mixture coefficients. The number of components equals the number of peaks of the ACF. The mean of the th Gaussian is set to the position of the th peak of the ACF. The Gaussian variances control the peak width of the DHF and are determined later. The following part shows how the mixture coefficients are estimated.
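The construction in formula (6) can be sketched as follows. As a placeholder, the mixture coefficients here are simply the ACF peak heights; in the paper they are estimated from the neighboring harmonics via formulas (8)–(10):

```python
import numpy as np

def dhf(acf, sigma=2.0):
    """Replace each ACF peak by a narrow Gaussian centered at its position."""
    lags = np.arange(len(acf))
    out = np.zeros(len(acf))
    for p in range(1, len(acf) - 1):
        if acf[p - 1] < acf[p] >= acf[p + 1]:    # a peak of the ACF
            # placeholder mixture coefficient: the peak height itself
            out += acf[p] * np.exp(-(lags - p) ** 2 / (2.0 * sigma ** 2))
    return out

tau = np.arange(121)
acf = np.cos(2 * np.pi * tau / 40)   # idealized ACF with period 40 samples
d = dhf(acf)
# d keeps sharp peaks at lags 40 and 80 and is near zero elsewhere;
# e.g. d[35] is far smaller than acf[35], showing the narrowing effect
```

Shrinking sigma narrows the peaks further, which is exactly the resolution advantage claimed for the DHF.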
Formula (10) shows that the th mixture coefficient depends on the presence of the neighboring harmonics. As seen in Figure 5, the second mixture coefficient of the DHF in (b) is large because channels (a) and (c) are dominated by the first and the third harmonic of the complex tone, whose pitch period is 5.0 ms. The fourth mixture coefficient, in contrast, is small because no channel is dominated by the third or the fifth harmonic of the corresponding candidate fundamental, whose frequencies are 300 Hz and 500 Hz, respectively.
From formulas (8)–(10), it can be seen that a mixture coefficient of the DHF does not depend on all of the related harmonics but only on the two nearest neighbours. One reason is to simplify the algorithm. The other is that previous psychoacoustic experiments showed that the nearest related harmonics have the strongest effect on harmonic fusion. In those experiments, a rich tone with 10 harmonics was alternated with a pure tone, and the task was to judge whether a harmonic of the rich tone could be captured by the pure tone. It was found that a harmonic was easier to capture out of the complex tone when its neighboring harmonics were removed. One of the conclusions was that "the greater the frequency separation between a harmonic and its nearest frequency neighbors, the easier it was to capture it out of the complex tone."
where the first symbol denotes the enhanced envelope ACF and the second the position of the th peak of the ACF.
3.3. Pitch Estimation
Pitch estimation in noisy environments is closely related to sound separation. On one hand, if the mixed sound has been separated, the pitch of each sound can be obtained relatively easily; on the other hand, pitch is a very efficient grouping cue for sound separation and is widely used in previous systems [8, 9, 15]. In the Hu and Wang model, a continuous pitch estimation method based on the correlogram is proposed, in which T-F units are merged into segments according to cross-channel correlation and temporal continuity. Each segment is expected to be dominated by a single voiced sound. They first use the longest segment as a criterion to initially separate the segments into foreground and background; the pitch contour is then formed from the units in the foreground, followed by sequential linear interpolation. More details can be found in their paper.
Clearly, the initial separation plays an important role in pitch estimation. Although the result of this simple decision can be adjusted in later stages, through iterative estimation and linear interpolation, to give an acceptable pitch contour, it does not fully satisfy the requirements of segregation and may deliver segments dominated by intrusions into the foreground. This inevitably affects the accuracy of the estimated pitch.
As a matter of fact, the pitch period is reflected by the ACF of each harmonic; the problem is that the ACF has multiple peaks. Pitch estimation would be simple if we could find the longest segment dominated not only by a single source but also by a single harmonic and also knew its harmonic order: we would only need to sum the corresponding peaks over the frames and take the position of the maximum peak as the pitch period. This process avoids source separation and pitch interpolation. Guided by this analysis, we try (1) to find the longest segment and (2) to estimate its harmonic order. In this subsection, we solve these two problems based on DHFs.
3.3.1. Initial Segmentation
As mentioned in Section 3.2, T-F units are classified as resolved or unresolved by the carrier-to-envelope energy ratio. Each resolved T-F unit is dominated by a single harmonic. In addition, because the passbands of adjacent channels overlap significantly, a resolved harmonic usually activates adjacent channels, which leads to high cross-channel correlations. Thus, only resolved T-F units with sufficiently high cross-channel correlation are considered. More specifically, a resolved unit is selected if its cross-channel correlation exceeds a threshold chosen to be slightly lower than that in the Hu and Wang model. Selected neighboring units are iteratively merged into segments. Finally, segments shorter than 30 ms are removed, since they are unlikely to arise from target speech. Figure 8(b) shows a segmentation result for the same signal as in Figure 8(a).
3.3.2. Harmonic Order Computation
Here, all the variances of the DHFs are set to 2.0 for the computation of the summary DHF. The results are not significantly affected when the variances lie in the range [2, 4]. Too large a value causes mutual interference between the peaks of different sources, while too small a value cannot accommodate the jitter of the peak positions in units dominated by the target speech.
3.3.3. Pitch Contour Tracking
(1) sum the th peak of the DHF of the T-F units in the longest segment at each frame, where the peak index is the harmonic order of the T-F unit;
(2) normalize the maximum of the summation at each frame to 1;
(3) find all peaks of the summation as pitch-period candidates at each frame;
(4) track the pitch contour among the candidates by dynamic programming,
where the first symbol denotes the summation at frame , the second its th peak, and the last the weight.
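The dynamic-programming step can be sketched as follows: each frame contributes the score of the chosen candidate peak, and jumps between consecutive pitch-period candidates are penalized by a weight times the lag difference. This is a simplified cost of our own; the paper's exact cost uses the symbols defined above:

```python
def track_pitch(candidates, weight=0.05):
    """Viterbi search over per-frame (lag, score) pitch-period candidates.

    Maximizes the sum of scores minus weight * |lag jump| between frames.
    """
    scores = [s for _, s in candidates[0]]
    back = []
    for t in range(1, len(candidates)):
        prev = candidates[t - 1]
        new_scores, new_back = [], []
        for lag, s in candidates[t]:
            costs = [scores[j] + s - weight * abs(lag - prev[j][0])
                     for j in range(len(prev))]
            j_best = max(range(len(costs)), key=costs.__getitem__)
            new_scores.append(costs[j_best])
            new_back.append(j_best)
        scores, back = new_scores, back + [new_back]
    # backtrack the best path of pitch periods
    j = max(range(len(scores)), key=scores.__getitem__)
    path = [candidates[-1][j][0]]
    for t in range(len(candidates) - 1, 0, -1):
        j = back[t - 1][j]
        path.append(candidates[t - 1][j][0])
    return path[::-1]

# frame 1 pins the pitch at lag 80; later frames offer a slightly higher
# subharmonic peak at 160, but the continuity penalty keeps the track at 80
cands = [[(80, 1.0)]] + [[(80, 0.9), (160, 1.0)]] * 3
print(track_pitch(cands))                    # [80, 80, 80, 80]
```

The continuity penalty is what suppresses frame-by-frame jumps to subharmonic candidates.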
3.4. Unit Labeling
The pitch computed above is used to label the T-F units according to whether the target speech dominates the unit responses. The mechanism of the Hu and Wang model is to test whether the pitch period is close to the maximum peak of the ACF, because for units dominated by target speech there should be a peak near the pitch period. The method employed here is similar but with some differences.
where ; is estimated pitch period at frame ; the variance for .
where ; the variance .
The variance of the DHF in each unit depends on the position of its first peak, which keeps the peak width of the DHF close to that of the ACF. The threshold is set according to our experimental results.
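A sketch of this labeling test; the threshold value and the scaling of the variance with the first peak position are illustrative assumptions of ours:

```python
import numpy as np

def label_unit(peak_lags, pitch_lag, theta=0.85):
    """Mark a unit as target if some ACF peak lies close to the pitch period."""
    sigma = 0.05 * peak_lags[0]   # spread tied to the first peak (assumption)
    d = min(abs(p - pitch_lag) for p in peak_lags)
    return float(np.exp(-d ** 2 / (2.0 * sigma ** 2))) > theta

peaks = [80, 160, 240]            # ACF peak positions of one unit
# a pitch period of 81 samples is close to the 80-sample peak -> target;
# a pitch period of 100 samples matches no peak -> interference
```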
3.5. Segregation Based on Segment
In this stage, units are segregated based on segments, which previous studies have shown to be more robust. Our method here is very similar to that of the Hu and Wang model.
3.5.1. Resolved Segment Grouping
A resolved segment generated in Section 3.3 is segregated into the foreground if more than half of its units are labeled as target; otherwise it is segregated into the background. The spectra of target and intrusion often overlap, so some resolved segments contain units dominated by the target as well as units dominated by the intrusion. Such segments are further divided according to the unit labels: their target units and intrusion units are merged into new segments according to frequency and time continuity. A new segment is retained in the foreground if it consists of target units and is longer than 50 ms, and added to the background if it consists of intrusion units and is longer than 50 ms; the remaining smaller segments are removed.
3.5.2. Unresolved Segment Grouping
Unresolved segments are formed from target-labeled unresolved T-F units according to frequency and time continuity. Segments longer than 30 ms are retained, and the units in the remaining small segments are iteratively merged into the large segments. Finally, the unresolved units in large segments are grouped into the foreground, and the rest into the background. This processing is similar to the Hu and Wang model.
The proposed model is evaluated on a corpus of 100 mixtures composed of ten voiced utterances mixed with ten different intrusions, collected by Cooke. In this dataset, the ten voiced utterances have continuous pitch throughout nearly their whole duration. The ten intrusions are N0, a 1 kHz pure tone; N1, white noise; N2, noise bursts; N3, "cocktail party" noise; N4, rock music; N5, siren; N6, trill telephone; N7, female speech; N8, male speech; and N9, another female speech. The ten voiced utterances are regarded as targets. The sampling rate of the corpus is 16 kHz.
There are two main reasons for using this dataset. First, the proposed system focuses on primitive-driven separation, and the system can obtain the pitch of a single source without schema-driven principles. Second, the dataset has been widely used to evaluate CASA-based separation systems [8, 9, 15], which facilitates comparison.
The first objective evaluation criterion is the signal-to-noise ratio (SNR) between the original signal and the distorted signal after segregation. Although SNR is a conventional measure for system evaluation, it is not always consistent with voice quality. Perceptual evaluation of speech quality (ITU-T P.862 PESQ, 2001) is therefore employed as a second objective criterion. ITU-T P.862 is an intrusive objective speech quality assessment algorithm; since the original speech before mixing is available, it is straightforward to apply it to obtain an intrusive quality evaluation of the separated speech.
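The SNR criterion can be sketched with the standard definition, comparing the resynthesized output against the reference signal:

```python
import numpy as np

def snr_db(reference, estimate):
    """SNR of an estimate against a reference signal, in dB."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

ref = np.ones(1000)
est = ref + 0.1                              # constant error of 0.1
print(round(snr_db(ref, est), 1))            # 20.0
```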
SNR Results. (Mixture: Original degraded speech; Hu-Wang: Hu and Wang model; Proposed: Proposed model; TP Hu-Wang: true pitch-based Hu and Wang model; TP proposed: true pitch-based proposed model; IBM: Ideal binary mask)
The proposed system is compared with the Hu and Wang model. We also show the performance of the ideal binary mask (IBM), obtained by calculating the local SNR in each T-F unit and selecting the units with SNR > 0 dB as the target. The SNR results of the IBM are the upper limit for all CASA-based systems that employ binary masks. Table 1 lists the SNR results, where each value is the average SNR of one kind of intrusion mixed with the ten target utterances, and the last column shows the average over all intrusions. As shown in Table 1, the proposed system improves the SNR for every intrusion and achieves an overall mean improvement of 13.01 dB over the unprocessed mixtures. Compared with the Hu and Wang model, the proposed model improves the separation results by about 1.48 dB in overall mean. The largest improvement occurs on the N2 mixtures, about 3.50 dB over the Hu and Wang model. Other large improvements (more than 1.0 dB) are obtained for harmonic sounds (N4, N5, N7, N8, and N9) and tone-like sounds (N0 and N6), while smaller improvements are obtained for broadband noises (e.g., N1 and N3).
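The IBM used for comparison can be sketched as follows: with the premixing signals available, each T-F unit is assigned to the target when its local SNR exceeds 0 dB, which is equivalent to the local target energy exceeding the local interference energy:

```python
import numpy as np

def ideal_binary_mask(target_energy, noise_energy):
    """1 where the local SNR exceeds 0 dB, else 0 (channels x frames)."""
    return (target_energy > noise_energy).astype(int)

# toy local-energy grids: 2 channels x 3 frames
e_target = np.array([[4.0, 1.0, 3.0],
                     [0.5, 9.0, 2.0]])
e_noise = np.array([[1.0, 2.0, 3.5],
                    [1.0, 1.0, 0.5]])
mask = ideal_binary_mask(e_target, e_noise)
print(mask)                                  # [[1 0 0]
                                             #  [0 1 1]]
```

Resynthesizing the mixture through this mask gives the oracle result used as the performance ceiling in Table 1.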
To compare the pitch detection algorithm and the T-F unit grouping method separately, we replace the estimated pitch with the true pitch (obtained from clean speech) for both the Hu and Wang model and the proposed system. From Table 1, we can see that the true pitch improves the SNR of the Hu and Wang model by 0.56 dB (from 11.12 dB to 11.68 dB), whereas the improvement for the true-pitch-based proposed system is tiny, about 0.07 dB; the only noticeable improvement is on N3, about 0.46 dB. The overall mean SNR of the true-pitch-based proposed system is about 1.00 dB higher than that of the true-pitch-based Hu and Wang model.
Although the conventional SNR is widely used, it does not reflect perceptual effects such as auditory masking. As the computational goal of CASA, the IBM directly corresponds to the auditory masking phenomenon. Recent psychoacoustic experiments have demonstrated that target speech reconstructed from the IBM can dramatically improve the intelligibility of speech masked by different types of noise, even in very noisy conditions. Li and Wang also systematically compared the performance of the IBM and the ideal ratio mask (IRM), and their results showed that the IBM is optimal as a computational goal in terms of SNR gain. Considering these advantages, we compute the SNR and PESQ score using the speech reconstructed from the IBM as the ground truth instead of the clean speech.
Compared with the results of the Hu and Wang model, the largest SNR gain, about 4 dB, is obtained on N0 (pure tone). By analyzing the segregated speech, we found that the Hu and Wang model groups many target units into the background, mainly because some segments include both target units and interference units. In our system, such segments are divided into smaller ones by harmonic order, which leads to the significant SNR gain. For N2 (click noise), the SNR gain is also due to the segmentation (see Figure 8); the difference is that the Hu and Wang model groups many interference units into the foreground. Note that the gains in PESQ score on these two noises differ, about 0.1 on N0 and 0.5 on N2, compared with the Hu and Wang model. This implies that the second type of error, grouping intrusion units into the foreground, has a greater impact on perceptual speech quality.
In sound separation, it matters whether a unit is dominated by a resolved harmonic or by unresolved harmonics, and previous research showed that this decision is very important. Resolved and unresolved harmonics are relative concepts that depend on the spacing of the harmonics and on the resolution of the gammatone filterbank. Therefore, the decision cannot be made from a unit's channel center frequency alone; a reasonable approach is to examine the filter response within the unit. As in previous research, cross-channel correlation is used, which measures the similarity between the responses of two adjacent filters and indicates whether the filters are responding to the same sound component. However, it is not reliable for some units, especially in the high-frequency region (as shown in Figure 8(a)). Hence, we use a more direct measurement, the carrier-to-envelope energy ratio, to help classify the units.
The ACF reflects the periodicity of the signal in a unit. According to the "harmonicity" principle, each peak position could be a pitch period, but only one of them corresponds to the true pitch period. The DHF tends to suppress the spurious peaks by exploiting the fact that voiced speech has consecutively numbered harmonics. In noisy environments, this leads to errors when both neighbors of a harmonic are masked at the same time; however, we found such cases to be relatively rare.
Pitch detection is another key stage of sound separation. Our algorithm uses only the longest resolved segment for pitch detection, which makes pitch tracking, otherwise a difficult problem, relatively easy. It should be pointed out that the robustness of the system may degrade when interfering sounds dominate the frequency regions of the resolved harmonics; however, resolved harmonics have larger energy than unresolved ones and are therefore more robust to noise. In addition, the DHF is built on the assumption of consecutively numbered harmonics; for sounds without this property, the DHF is inappropriate.
In this paper, we propose the dynamic harmonic function, which derives from the conventional correlogram. The DHF provides a uniform representation for both resolved and unresolved units. Based on the DHF, a pitch detection algorithm and a T-F unit grouping strategy are proposed. The results show that the proposed algorithm improves the SNR for a variety of noises over the Hu and Wang model.
This work was supported in part by the China National Natural Science Foundation (no. 60675026, no. 60121302, and no. 90820011), the 863 China National High Technology Development Projects (no. 20060101Z4073 and no. 2006AA01Z194), and the National Grand Fundamental Research 973 Program of China (no. 2004CB318105).
- Benesty J, Makino S, Chen J: Speech Enhancement. Springer, Berlin, Germany; 2005.
- Barros AK, Rutkowski T, Itakura F, Ohnishi N: Estimation of speech embedded in a reverberant and noisy environment by independent component analysis and wavelets. IEEE Transactions on Neural Networks 2002, 13(4):888-893. doi:10.1109/TNN.2002.1021889
- Brandstein M, Ward D: Microphone Arrays: Signal Processing Techniques and Applications. Springer, Berlin, Germany; 2001.
- Boll SF: Suppression of acoustic noise in speech using spectral subtraction. IEEE Transactions on Acoustics, Speech, and Signal Processing 1979, 27(2):113-120. doi:10.1109/TASSP.1979.1163209
- Ephraim Y, Van Trees HL: Signal subspace approach for speech enhancement. IEEE Transactions on Speech and Audio Processing 1995, 3(4):251-266. doi:10.1109/89.397090
- Bregman AS: Auditory Scene Analysis. MIT Press, Cambridge, Mass, USA; 1990.
- Wang DL, Brown GJ: Computational Auditory Scene Analysis: Principles, Algorithms and Applications. Wiley-IEEE Press, New York, NY, USA; 2006.
- Cooke MP: Modeling Auditory Processing and Organization. Cambridge University Press, Cambridge, UK; 1993.
- Hu G, Wang DL: Monaural speech segregation based on pitch tracking and amplitude modulation. IEEE Transactions on Neural Networks 2004, 15(5):1135-1150. doi:10.1109/TNN.2004.832812
- Licklider JCR: A duplex theory of pitch perception. Experientia 1951, 7(4):128-134. doi:10.1007/BF02156143
- Lyon RF: Computational models of neural auditory processing. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '84), 1984, 41-44.
- Weintraub M: A theory and computational model of auditory monaural sound separation. Ph.D. dissertation, Dept. of Electrical Engineering, Stanford University, Stanford, Calif, USA; 1985.
- Slaney M, Lyon RF: A perceptual pitch detector. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, April 1990, 357-360.
- Meddis R, Hewitt MJ: Virtual pitch and phase sensitivity of a computer model of the auditory periphery. I: Pitch identification. Journal of the Acoustical Society of America 1991, 89(6):2866-2882. doi:10.1121/1.400725
- Wang DL, Brown GJ: Separation of speech from interfering sounds based on oscillatory correlation. IEEE Transactions on Neural Networks 1999, 10(3):684-697. doi:10.1109/72.761727
- Wu M, Wang DL, Brown GJ: A multipitch tracking algorithm for noisy speech. IEEE Transactions on Speech and Audio Processing 2003, 11(3):229-241. doi:10.1109/TSA.2003.811539
- de Cheveigné A: Pitch and the narrowed autocoincidence histogram. Proceedings of the International Conference on Music Perception and Cognition, Kyoto, Japan, 1989, 67-70.
- Brown JC, Puckette MS: Calculation of a "narrowed" autocorrelation function. Journal of the Acoustical Society of America 1989, 85(4):1595-1601. doi:10.1121/1.397363
- Xu JW, Principe JC: A pitch detector based on a generalized correlation function. IEEE Transactions on Audio, Speech and Language Processing 2008, 16(8):1420-1432.
- De Boer E, De Jongh HR: On cochlear encoding: potentialities and limitations of the reverse-correlation technique. Journal of the Acoustical Society of America 1978, 63(1):115-135. doi:10.1121/1.381704
- Meddis R: Simulation of auditory-neural transduction: further studies. Journal of the Acoustical Society of America 1988, 83(3):1056-1063. doi:10.1121/1.396050
- Tolonen T, Karjalainen M: A computationally efficient multipitch analysis model. IEEE Transactions on Speech and Audio Processing 2000, 8(6):708-716. doi:10.1109/89.876309
- Zhang X, Liu W, Li P, Xu B: Monaural voiced speech segregation based on elaborate harmonic grouping strategy. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), April 2009, 4661-4664.
- Wang DL: On ideal binary masks as the computational goal of auditory scene analysis. In Speech Separation by Humans and Machines. Edited by Divenyi P. Kluwer Academic Publishers, Boston, Mass, USA; 2005:181-197.
- Li N, Loizou PC: Factors influencing intelligibility of ideal binary-masked speech: implications for noise reduction. Journal of the Acoustical Society of America 2008, 123(3):1673-1682. doi:10.1121/1.2832617
- Li Y, Wang D: On the optimality of ideal binary time-frequency masks. Speech Communication 2009, 51(3):230-239. doi:10.1016/j.specom.2008.09.001
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.