
Bayesian group sparse learning for music source separation

Abstract

Nonnegative matrix factorization (NMF) is developed for parts-based representation of nonnegative signals under a sparseness constraint. The signals are adequately represented by a set of basis vectors and the corresponding weight parameters. NMF has been successfully applied to blind source separation and many other signal processing systems. Typically, controlling the degree of sparseness and characterizing the uncertainty of model parameters are two critical issues for model regularization using NMF. This paper presents Bayesian group sparse learning for NMF and applies it to single-channel music source separation. The method reconstructs the rhythmic or repetitive signal from a common subspace spanned by bases shared across the whole signal and simultaneously decodes the harmonic or residual signal from an individual subspace consisting of separate bases for different signal segments. A Laplacian scale mixture distribution is introduced for sparse coding given a sparseness control parameter. The relevance of basis vectors for reconstructing the two groups of music signals is automatically determined. A Markov chain Monte Carlo procedure is presented to infer the two sets of model parameters and hyperparameters by sampling from the conditional posterior distributions. Experiments on separating single-channel audio signals into rhythmic and harmonic source signals show that the proposed method outperforms baseline NMF, Bayesian NMF, and other group-based NMF methods in terms of signal-to-interference ratio.

1 Introduction

Many problems in audio, speech, and music processing can be tackled through matrix factorization. Different cost functions and constraints lead to different factorized matrices. This procedure can identify underlying sources from mixed signals through blind source separation [1]. Nonnegative matrix factorization (NMF) is designed to find an approximate factorization $X \approx AS$ of a data matrix $X$ into a basis matrix $A$ and a weight matrix $S$, which are all nonnegative [2]. Several divergence measures have been proposed to derive solutions to NMF [3, 4]. NMF provides a useful learning tool for clustering as well as for classification. When a portion of labeled data is available, semi-supervised NMF can be applied to build an improved classification system [5]. Different from standard principal component analysis (PCA) and independent component analysis (ICA), NMF only allows additive combination due to the nonnegative constraints on the matrices $A$ and $S$. Nevertheless, nonnegative PCA and nonnegative ICA were proposed for blind source separation in the presence of nonnegative image and music sources [6].

On the other hand, NMF conducts a parts-based sparse representation where only a few components or bases are relevant for representing the input nonnegative matrix $X$. The sparseness constraint is imposed in the objective function [2]. An automatic relevance determination (ARD) scheme [7–9] is developed to determine the relevant bases for sparse representation. Such sparse coding is efficient and robust. However, controlling the sparseness or smoothness is influential for system performance. Bayesian learning is beneficial for dealing with sparse representation [9] and model regularization [7]. In [10], Bayesian learning was performed for sparse representation of image data where a Laplacian distribution was used as the prior density, which corresponds to performing $\ell_1$-regularized optimization. In addition, the group-based NMF [11] was proposed to capture the intra-subject and inter-subject variations in EEG signals. In [12], group sparse NMF was proposed by minimizing the Itakura-Saito divergence between $X$ and $AS$. In [13], NMF was applied to drum source separation where the factorized components were partitioned into rhythmic sources and harmonic sources. No Bayesian learning was performed in [11–13].

More recently, a Bayesian NMF approach [14] was proposed for model selection and image reconstruction. This approach inferred the NMF model by a variational Bayes method and a Markov chain Monte Carlo (MCMC) algorithm. In [15], a Bayesian NMF with gamma priors for source signals and mixture weights was implemented through an MCMC algorithm. In [16], a Bayesian NMF with a Gaussian likelihood and exponential priors was constructed for image feature extraction, where the posterior distribution was approximated by a Gibbs sampling procedure. In [17], a Bayesian approach for blind separation of linear mixtures of sources was developed; the Student t distribution for mixture weights was introduced to achieve sparse basis representation, and underdetermined noisy mixtures were separated. However, the case of nonnegative sources was not considered. Besides, single-channel source separation is known to be an underdetermined problem. In [18], harmonic structure information was adopted to estimate the demixed instrumental sources. In [19], NMF was applied to single-channel speech separation, where the speech of the target speaker was enhanced over that of the masking speaker by using sparse dictionaries learned at the phoneme level for individual speakers.

This paper addresses the problem of underdetermined source separation based on NMF with an application to music source separation [20]. The uses of NMF and Bayesian theory for source separation are not new, since there have been many related studies [11–13, 15]. However, to the best of our knowledge, the novelty of this paper is to propose Bayesian group sparse (BGS) learning using the Laplacian distribution and the Laplacian scale mixture (LSM) distribution and to apply it to single-channel music signal separation. We present a group-based NMF where groups of common bases and individual bases are estimated for blind separation of rhythmic sources and harmonic sources, respectively. Bayesian sparse learning is developed by introducing LSM distributions as the priors for the two groups of reconstruction weights. Gamma priors are used to represent the two groups of nonnegative basis components. The BGS-NMF algorithm is accordingly established. An MCMC algorithm is derived to infer the BGS-NMF parameters and hyperparameters according to full Bayesian theory. The rhythmic sources and harmonic sources are reconstructed from the relevant bases in the common subspace and individual subspace, respectively. In the experiments, the proposed BGS-NMF is evaluated and compared with other NMF methods for single-channel separation of audio signals into rhythmic and harmonic signals. From this comparative study, we find that the improvement of separation performance benefits from Bayesian modeling, group basis representation, and sparse signal reconstruction. Sparser priors identify fewer but more relevant bases and correspondingly lead to better performance in terms of signal-to-interference ratio.

The remainder of this paper is organized as follows. In the next section, related studies on NMF and group basis representation are surveyed, and some Bayesian learning approaches are addressed. Section 3 describes the construction of the BGS-NMF model as well as the inference procedure based on the MCMC algorithm. The conditional posterior distributions of the different parameters and hyperparameters are derived for the sampling procedure. Section 4 reports a series of experiments on underdetermined music source separation with different music sources. The convergence of MCMC sampling is investigated, and the evaluation of the demixed signals in terms of signal-to-interference ratio is reported. Finally, the conclusions drawn from this study are provided in Section 5.

2 Background survey

In what follows, nonnegative matrix factorization (NMF) and its extensions to different regularization functions are introduced. Several approaches to group basis representation are addressed and group sparse coding is surveyed. Then, Bayesian learning methods for matrix factorization and other related tasks are reviewed.

2.1 Nonnegative matrix factorization

NMF is a linear model where the observed signals, factorized signals, and source signals are all assumed to be nonnegative. Given a data matrix $X = \{X_{ik}\}$, NMF estimates two factorized matrices $A = \{A_{ij}\}$ and $S = \{S_{jk}\}$ by minimizing the reconstruction error between $X$ and $AS$. In [2], the sparseness constraint was imposed on the minimization of an objective function $F$ based on a regularized error function

$\| X - AS \|^2 + \eta_a \sum_{i,j} f(A_{ij}) + \eta_s \sum_{j,k} f(S_{jk})$
(1)

where $\eta_a \ge 0$ and $\eta_s \ge 0$ are regularization parameters and different sparseness measures could be used, e.g., $f(S_{jk}) = |S_{jk}|$, $f(S_{jk}) = S_{jk}$, $f(S_{jk}) = S_{jk} \ln(S_{jk})$, etc. Several extensions of NMF have been proposed. In [21], the nonnegative matrix partial co-factorization (NMPCF) was proposed for rhythmic source separation. Given the magnitude spectrogram as input data matrix $X$, NMPCF decomposes the music signal into a drum or rhythmic part and a residual or harmonic part, $X \approx A_r S_r + A_h S_h$, with the factorized matrices including the basis matrix and weight matrix for the rhythmic source $\{A_r, S_r\}$ and for the harmonic source $\{A_h, S_h\}$. The prior knowledge from a drum-only signal $Y \approx A_r S_r$, given the same rhythmic bases $A_r$, is incorporated in the joint minimization of two Euclidean error functions

$\| X - A_r S_r - A_h S_h \|^2 + \eta \| Y - A_r S_r \|^2$
(2)

where $\eta$ is a trade-off between the first and the second reconstruction errors due to $X$ and $Y$, respectively. In [22], the mixed signals were divided into $L$ segments. Each segment $X^{(l)}$ is decomposed into common and individual parts which reflect the rhythmic and harmonic sources, respectively. The common bases $A_r$ are shared across segments due to the high temporal repeatability of rhythmic sources. The individual bases $A_h^{(l)}$ are separate for each segment $l$ due to the changing frequency content and low temporal repeatability. The resulting objective function consists of a weighted Euclidean error function and the regularization terms due to the bases $A_r$ and $A_h^{(l)}$, which are expressed by

$\sum_{l=1}^{L} \omega^{(l)} \| X^{(l)} - A_r S_r^{(l)} - A_h^{(l)} S_h^{(l)} \|^2 + \eta L \| A_r \|^2 + \eta \sum_{l=1}^{L} \| A_h^{(l)} \|^2$
(3)

where $\{\omega^{(l)}, S_r^{(l)}, S_h^{(l)}\}$ denote the segment-dependent weights and the weight matrices for the common bases and individual bases, respectively. This is an NMPCF for $L$ segments. The solutions to these NMFs are derived and implemented by multiplicative update rules so that the nonnegative constraints are met for the individual model parameters. For example, the gradient of the objective function $F$ with respect to a nonnegative parameter $A$ is divided into positive and negative terms, $\frac{\partial F}{\partial A} = \left[\frac{\partial F}{\partial A}\right]^{+} - \left[\frac{\partial F}{\partial A}\right]^{-}$ where $\left[\frac{\partial F}{\partial A}\right]^{+} > 0$ and $\left[\frac{\partial F}{\partial A}\right]^{-} > 0$. The multiplicative update rule is given by

$A \leftarrow A \otimes \left[\frac{\partial F}{\partial A}\right]^{-} \oslash \left[\frac{\partial F}{\partial A}\right]^{+}$
(4)

where $\otimes$ and $\oslash$ denote element-wise multiplication and division, respectively.
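To make the update rule concrete, the following minimal numpy sketch applies (4) to the sparse objective (1) with $f(\cdot) = |\cdot|$. The function name, the random initialization, and the small constant added for numerical stability are our own choices, not part of [2].

```python
import numpy as np

def sparse_nmf(X, D, eta_a=0.0, eta_s=0.1, n_iter=200, eps=1e-9, seed=0):
    """Sparse NMF by the multiplicative rule (4) applied to objective (1) with f(.) = |.|.

    X : (N, M) nonnegative data matrix; D : number of bases."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    A = rng.random((N, D)) + eps
    S = rng.random((D, M)) + eps
    for _ in range(n_iter):
        # gradient wrt S splits into positive part 2*A^T A S + eta_s and negative part 2*A^T X
        # (|S_jk| = S_jk for nonnegative S, so the penalty contributes the constant eta_s)
        S *= (2.0 * A.T @ X) / (2.0 * (A.T @ A) @ S + eta_s + eps)
        # gradient wrt A splits into positive part 2*A S S^T + eta_a and negative part 2*X S^T
        A *= (2.0 * X @ S.T) / (2.0 * A @ (S @ S.T) + eta_a + eps)
    return A, S
```

Since every factor in the ratio is nonnegative, the updates keep $A$ and $S$ nonnegative without any explicit projection.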

2.2 Group basis representation

The signal reconstruction methods in (2) and (3) correspond to group basis representation where two groups of bases, $A_r$ and $A_h^{(l)}$, are applied. The separation of a single-channel mixed signal into two source signals is achieved, and the issue of underdetermined source separation is resolved. In [11], the group-based NMF (GNMF) was developed by conducting group analysis and constructing two groups of bases. The intra-subject variations for a subject in different trials and the inter-subject variations for different subjects could be compensated. Given $L$ subjects or segments, the $l$th segment is generated by $X^{(l)} \approx A_r^{(l)} S_r^{(l)} + A_h^{(l)} S_h^{(l)}$, where $A_r^{(l)}$ denotes the common bases which capture the intra- and inter-subject variations and $A_h^{(l)}$ denotes the individual bases which reflect the residual information. In general, the different common bases $A_r^{(l)}$ should be close together since they represent the shared information in the mixed signal. Contrarily, the individual bases $A_h^{(l)}$ characterize individual features which should be discriminated and mutually far apart [11]. The objective function of GNMF is formed by

$\sum_{l=1}^{L} \| X^{(l)} - A_r^{(l)} S_r^{(l)} - A_h^{(l)} S_h^{(l)} \|^2 + \eta_a \sum_{l=1}^{L} \| A_r^{(l)} \|^2 + \eta_a \sum_{l=1}^{L} \| A_h^{(l)} \|^2 + \eta_{a_r} \sum_{l=1}^{L} \sum_{m=1}^{L} \| A_r^{(l)} - A_r^{(m)} \|^2 - \eta_{a_h} \sum_{l=1}^{L} \sum_{m=1}^{L} \| A_h^{(l)} - A_h^{(m)} \|^2 .$
(5)

In (5), the second and third terms are seen as $\ell_2$ regularization functions, the fourth term enforces the distance between different common bases to be small, and the fifth term enforces the distance between different individual bases to be large. Regularization parameters $\{\eta_a, \eta_{a_r}, \eta_{a_h}\}$ are used. The NMPCFs in [21, 22] and the GNMF in [11] did not consider sparsity in group basis representation.

More generally, a group sparse coding algorithm [23] was proposed for basis representation of group instances $\{X_k, k \in G\}$ where the objective function is defined by

$\sum_{k \in G} \| X_k - \sum_{j=1}^{|D|} S_{jk} A_j \|^2 + \eta \sum_{j=1}^{|D|} \| S_j \| .$
(6)

All the instances within a group $G$ share the same dictionary $D$ with basis vectors $\{A_j\}_{j=1}^{|D|}$. The weight matrix $\{S_j\}_{j=1}^{|D|}$ consists of nonnegative vectors $S_j = [S_{j1}, \ldots, S_{j|G|}]^T$. The weight parameters $\{S_{jk}\}$ are estimated for different group instances $k \in G$ using different bases $j \in D$. In (6), an $\ell_1$ regularization term is incorporated to carry out group sparse coding. The group sparsity was further extended to structural sparsity for dictionary learning and basis representation. Nevertheless, nonnegative constraints were not imposed on the bases $\{A_j\}$ and observed signals $\{X_k\}$. Basically, all the above-mentioned methods [2, 11, 21–24] did not apply a probabilistic framework, and no Bayesian learning was considered.
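As a reading aid, a small sketch of evaluating (6) is given below. It assumes $\|S_j\|$ is the Euclidean norm of the weights of basis $j$ across the group; the variable names are illustrative only and are not from [23].

```python
import numpy as np

def group_sparse_objective(X_group, A, S, eta=0.1):
    """Evaluate objective (6) for one group.

    X_group : (N, |G|) columns are the instances X_k of the group
    A       : (N, |D|) dictionary, columns A_j
    S       : (|D|, |G|) nonnegative weights, S[j, k] = S_jk
    """
    recon_err = np.sum((X_group - A @ S) ** 2)           # sum_k ||X_k - sum_j S_jk A_j||^2
    group_pen = eta * np.sum(np.linalg.norm(S, axis=1))  # eta * sum_j ||S_j|| over group members
    return recon_err + group_pen
```

The penalty sums one norm per basis, so a basis is either used across the whole group or switched off for all of its instances, which is the group-sparsity effect discussed above.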

2.3 Bayesian learning approaches

Model regularization is critical for improving the generalization of a learning machine to new data [7]. Conducting Bayesian learning compensates for the variations of the estimated parameters and accordingly improves model regularization. Typically, NMF and group basis representation are viewed as learning machines based on a set of bases. Following the perspective of relevance vector machines [8, 9], Bayesian sparse learning is beneficial for identifying relevant bases for regularized basis representation. To do so, sparse priors based on the Student t distribution [17] and the Laplacian distribution [10, 25] can act as regularization functions and be merged with the likelihood function to form a posterior probability. Maximizing the logarithm of the posterior probability is equivalent to minimizing the $\ell_1$-regularized error function if the Laplacian prior is applied. The hyperparameters of the sparse priors then serve as regularization parameters which control the trade-off between a reconstruction error function and a sparsity-favorable penalty function.

In the literature, a probabilistic matrix factorization (PMF) [26] for $X = A^T S$ was proposed by assuming Gaussian noise for each independent entry of the data matrix $X = \{X_{ik}\}$, i.e., $p(X \mid A, S, \alpha) = \prod_{i=1}^{N} \prod_{k=1}^{M} \mathcal{N}(X_{ik} \mid A_i^T S_k, \alpha^{-1})$, and assuming Gaussian priors $p(A \mid \alpha_a) = \prod_{i=1}^{N} \mathcal{N}(A_i \mid 0, \alpha_a^{-1} I)$ and $p(S \mid \alpha_s) = \prod_{k=1}^{M} \mathcal{N}(S_k \mid 0, \alpha_s^{-1} I)$ where $\{\alpha, \alpha_a, \alpha_s\}$ is a set of precision parameters of the Gaussians. Here, $A_i$ denotes the $i$th column of $A$ and $S_k$ denotes the $k$th column of $S$. Learning for PMF is equivalent to maximizing the log posterior

$\ln p(A, S \mid X, \alpha, \alpha_a, \alpha_s) = \ln p(X \mid A, S, \alpha) + \ln p(A \mid \alpha_a) + \ln p(S \mid \alpha_s) + C$
(7)

with respect to $A$ and $S$. In (7), $C$ is a constant. This optimization reduces to minimizing the sum-of-squares error function with quadratic regularization terms

$\sum_{i=1}^{N} \sum_{k=1}^{M} \left( X_{ik} - A_i^T S_k \right)^2 + \eta_a \sum_{i=1}^{N} \| A_i \|^2 + \eta_s \sum_{k=1}^{M} \| S_k \|^2 .$
(8)

The regularization terms are determined from the hyperparameters by $\eta_a = \alpha_a / \alpha$ and $\eta_s = \alpha_s / \alpha$. Bayesian learning of PMF was performed through an MCMC algorithm where Gaussian-Wishart priors for the Gaussian mean vectors and precision matrices were assumed. PMF imposed no nonnegativity constraint on the matrices, and no sparse learning was considered.
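For completeness, the following short sketch evaluates the regularized error (8) for given factors. The array shapes and the function name are assumptions made for illustration; they are not taken from [26].

```python
import numpy as np

def pmf_map_objective(X, A, S, eta_a=0.01, eta_s=0.01):
    """Evaluate the PMF MAP objective (8) for X ~ A^T S with Gaussian priors.

    X : (N, M) data matrix
    A : (D, N) factor whose columns are A_i
    S : (D, M) factor whose columns are S_k"""
    recon = np.sum((X - A.T @ S) ** 2)   # sum_{i,k} (X_ik - A_i^T S_k)^2
    reg_a = eta_a * np.sum(A ** 2)       # eta_a * sum_i ||A_i||^2
    reg_s = eta_s * np.sum(S ** 2)       # eta_s * sum_k ||S_k||^2
    return recon + reg_a + reg_s
```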

In [27], a full Bayesian NMF was implemented to determine the number of bases according to the marginal likelihood. Furthermore, a Bayesian nonparametric approach to NMF was proposed in [28] where the model structure was determined through gamma process NMF. This method was applied to find both the latent sources in spectrograms and their number. In [25], the group sparse coding of [23] was given a Bayesian interpretation, but Bayesian sparse learning was only developed for single-sample basis representation. In [29], group sparse priors were presented for maximum a posteriori estimation of the covariance matrix used in a Gaussian graphical model. More recently, group sparse hidden Markov models (HMMs) [30] were proposed to represent a sequence of observations and have been successfully applied to speech recognition. A set of common bases was shared for the representation of speech samples across HMM states, while sets of individual bases were employed to represent speech samples within individual HMM states. Bayesian group sparse learning was performed for speech recognition [30] and signal separation [20] by using the Laplacian scale mixture distribution.

3 Bayesian group sparse matrix factorization

Previous NMF methods [11, 13, 21] were developed to extract task-specific nonnegative factors, but they did not simultaneously consider the uncertainty of the model parameters and control the sparsity of the weight parameters. In [23, 25], group sparse coding and its Bayesian extension did not impose nonnegative constraints on the data matrix $X$ and the factorized matrices $A$ and $S$. This paper presents a new Bayesian group sparse learning for NMF (denoted by BGS-NMF) and applies it to single-channel music source separation.

3.1 Model construction

In this study, the magnitude spectrogram $X = \{X^{(l)}\}$ of a mixed audio signal is calculated and chopped into $L$ segments for implementation of the BGS-NMF algorithm. The audio signal is assumed to be mixed from two kinds of source signals. One is the rhythmic or repetitive source signal and the other is the harmonic or residual source signal. As illustrated in Figure 1, BGS-NMF aims to decompose the nonnegative matrix $X^{(l)} \in \mathbb{R}_+^{N \times M}$ of the $l$th segment into a product of two nonnegative matrices $A^{(l)} S^{(l)}$. A linear decomposition model is constructed in the form

$X^{(l)} = A_r S_r^{(l)} + A_h^{(l)} S_h^{(l)} + E^{(l)}$
(9)
Figure 1. Illustration for group basis representation. There are $|D|$ bases in the dictionary.

where $A_r \in \mathbb{R}_+^{N \times D_r}$ denotes the shared basis matrix for all segments $\{X^{(l)}, l = 1, \ldots, L\}$, and $A_h^{(l)} \in \mathbb{R}_+^{N \times D_h}$ and $E^{(l)}$ denote the individual basis matrix and the noise matrix for a given segment $l$, respectively. Typically, common bases capture the repetitive patterns which occur continuously in different segments of a whole signal. Individual bases are used to compensate for the residual information that the common bases cannot handle. Without loss of generality, the common bases and individual bases are applied to recover the rhythmic signal and the harmonic signal, respectively, from a mixed audio signal. Such a signal recovery problem can be interpreted from the perspective of a subspace approach. Namely, an observed signal is demixed into one signal from the principal subspace spanned by the common bases and another signal from the minor subspace spanned by the individual bases [31]. Moreover, the sparseness constraint is imposed on two groups of reconstruction weights $S_r^{(l)} \in \mathbb{R}_+^{D_r \times M}$ and $S_h^{(l)} \in \mathbb{R}_+^{D_h \times M}$. It is assumed that the reconstruction weights of the rhythmic sources $S_r^{(l)}$ and harmonic sources $S_h^{(l)}$ are independent, but dependencies between reconstruction weights within each group are allowed. Assuming that the $k$th noise vector $E_k^{(l)}$ is Gaussian distributed with zero mean and $N \times N$ diagonal covariance matrix $\Sigma^{(l)} = \mathrm{diag}\{[\Sigma^{(l)}]_{ii}\}$, which is shared by all samples within a segment $l$, the likelihood function of an audio signal segment $X^{(l)}$ is expressed by

$p\left( X^{(l)} \mid \Theta^{(l)} \right) = \prod_{i=1}^{N} \prod_{k=1}^{M} \mathcal{N}\left( X_{ik}^{(l)} \mid [A_r S_r^{(l)}]_{ik} + [A_h^{(l)} S_h^{(l)}]_{ik}, [\Sigma^{(l)}]_{ii} \right) .$
(10)

The BGS-NMF model is therefore constructed with parameters $\Theta^{(l)} = \{ A_r, A_h^{(l)}, S_r^{(l)}, S_h^{(l)}, \Sigma^{(l)} \}$.
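For clarity, a short sketch of evaluating the logarithm of the likelihood (10) for one segment is given below. Passing the per-row noise variances as a vector is our own layout assumption, not a requirement of the model.

```python
import numpy as np

def bgsnmf_loglik(X, A_r, S_r, A_h, S_h, sigma2):
    """Log of the Gaussian likelihood (10) for one segment.

    X : (N, M) segment spectrogram; A_r @ S_r + A_h @ S_h is its reconstruction.
    sigma2 : length-N vector of noise variances [Sigma]_ii, shared across the M columns."""
    mean = A_r @ S_r + A_h @ S_h
    resid2 = (X - mean) ** 2
    M = X.shape[1]
    return (-0.5 * np.sum(resid2 / sigma2[:, None])
            - 0.5 * M * np.sum(np.log(2.0 * np.pi * sigma2)))
```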

3.2 Priors for Bayesian group sparse learning

From a Bayesian perspective, the uncertainties of the BGS-NMF parameters, expressed by prior densities, are considered to assure model regularization. In the BGS-NMF model, the common bases $A_r$ are constructed to represent the characteristics of repetitive patterns across data segments, while the individual bases $A_h^{(l)}$ are estimated to reflect the unique information in each segment $l$. Sparsity control is enforced on the corresponding reconstruction weights $S_r^{(l)}$ and $S_h^{(l)}$ so that relevant bases are retrieved for group basis representation. In accordance with [15], the nonnegative basis parameters are assumed to be gamma distributed by

$p(A_r) = \prod_{i=1}^{N} \prod_{j=1}^{D_r} \mathcal{G}\left( [A_r]_{ij} \mid \alpha_{rj}, \beta_{rj} \right)$
(11)
$p(A_h^{(l)}) = \prod_{i=1}^{N} \prod_{j=1}^{D_h} \mathcal{G}\left( [A_h^{(l)}]_{ij} \mid \alpha_{hj}^{(l)}, \beta_{hj}^{(l)} \right)$
(12)

where $\Phi_a^{(l)} = \{\{\alpha_{rj}, \beta_{rj}\}, \{\alpha_{hj}^{(l)}, \beta_{hj}^{(l)}\}\}$ denotes the hyperparameters of the gamma distributions and $\{D_r, D_h\}$ denote the numbers of common bases and individual bases, respectively. The gamma distribution is an exponential-family distribution for nonnegative data. Its two parameters $\{\alpha, \beta\}$ can be adjusted to fit different shapes of distributions. In (11) and (12), all entries of the matrices $A_r$ and $A_h^{(l)}$ are assumed to be independent.

Importantly, we control the sparsity of the reconstruction weights by using a prior density based on the Laplacian scale mixture (LSM) distribution [25]. The LSM of a reconstruction weight of a common basis is constructed by $[S_r^{(l)}]_{jk} = (\lambda_{rj}^{(l)})^{-1} u_{rj}^{(l)}$ where $u_{rj}^{(l)}$ follows a Laplacian distribution $p(u_{rj}^{(l)}) = \frac{1}{2}\exp\{-|u_{rj}^{(l)}|\}$ with scale 1 and $\lambda_{rj}^{(l)}$ is an inverse scale parameter. Accordingly, the parameter $[S_r^{(l)}]_{jk}$ has a Laplacian distribution

$p\left( [S_r^{(l)}]_{jk} \mid \lambda_{rj}^{(l)} \right) = \frac{\lambda_{rj}^{(l)}}{2} \exp\left\{ -\lambda_{rj}^{(l)} [S_r^{(l)}]_{jk} \right\}$
(13)

which is controlled by a positive continuous mixture parameter $\lambda_{rj}^{(l)} \ge 0$. Considering a gamma distribution for the inverse scale parameter, i.e., $p(\lambda_{rj}^{(l)}) = \mathcal{G}(\lambda_{rj}^{(l)} \mid \gamma_{rj}^{(l)}, \delta_{rj}^{(l)})$, the marginal distribution of a reconstruction weight can be calculated by [25]

$p\left( [S_r^{(l)}]_{jk} \right) = \int_0^{\infty} p\left( [S_r^{(l)}]_{jk} \mid \lambda_{rj}^{(l)} \right) p\left( \lambda_{rj}^{(l)} \right) d\lambda_{rj}^{(l)} = \frac{ \gamma_{rj}^{(l)} \left( \delta_{rj}^{(l)} \right)^{\gamma_{rj}^{(l)}} }{ 2 \left( \delta_{rj}^{(l)} + [S_r^{(l)}]_{jk} \right)^{\gamma_{rj}^{(l)} + 1} } .$
(14)

In (13) and (14), the constraint $[S_r^{(l)}]_{jk} \ge 0$ has been considered. This LSM distribution is obtained by exploiting the property that the gamma distribution is the conjugate prior for the Laplacian distribution. In an image coding application, the LSM distribution was estimated and measured to be sparser than the Laplacian distribution by approximately a factor of 2 [25]. Figure 2 compares Gaussian, Laplacian, and LSM distributions with specific parameters. In this example, LSM is the sharpest of these distributions. In addition, a truncated LSM prior for the nonnegative parameter $[S_r^{(l)}]_{jk} \in \mathbb{R}_+$ is adopted, namely, the density over negative values is forced to be zero. The sparse prior for the reconstruction weight of an individual basis $[S_h^{(l)}]_{jk}$ is also expressed by an LSM distribution with hyperparameters $\{\gamma_{hj}^{(l)}, \delta_{hj}^{(l)}\}$. The hyperparameters of BGS-NMF are formed by $\Phi^{(l)} = \{\Phi_a^{(l)}, \Phi_s^{(l)} = \{\gamma_{rj}^{(l)}, \delta_{rj}^{(l)}, \gamma_{hj}^{(l)}, \delta_{hj}^{(l)}\}\}$. Figure 3 displays a graphical representation for the construction of BGS-NMF with parameters $\Theta^{(l)}$ and hyperparameters $\Phi^{(l)}$.
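The hierarchical construction of the LSM prior can be illustrated with the short sketch below, which evaluates the marginal density (14) and draws nonnegative LSM variates by first sampling $\lambda$ from a gamma distribution (shape $\gamma$, rate $\delta$) and then scaling the magnitude of a unit Laplacian. This is a sketch of the generative reading of (13) and (14) under these assumptions, not the authors' implementation.

```python
import numpy as np

def lsm_pdf(s, gamma_, delta_):
    """Marginal LSM density of (14): gamma * delta^gamma / (2 * (delta + |s|)^(gamma + 1))."""
    return gamma_ * delta_ ** gamma_ / (2.0 * (delta_ + np.abs(s)) ** (gamma_ + 1.0))

def sample_nonneg_lsm(gamma_, delta_, size, rng=None):
    """Draw nonnegative LSM variates hierarchically: lambda ~ Gamma(shape=gamma, rate=delta),
    then s = u / lambda with u ~ Exponential(1), i.e., the magnitude of a unit Laplacian."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.gamma(shape=gamma_, scale=1.0 / delta_, size=size)
    u = rng.exponential(scale=1.0, size=size)
    return u / lam
```

Plotting a histogram of `sample_nonneg_lsm(2.0, 1.0, 10000)` against `lsm_pdf` reproduces the heavy-tailed, sharply peaked shape shown in Figure 2.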

Figure 2. Comparison of Gaussian, Laplacian, and LSM distributions.

Figure 3. A graphical representation for BGS-NMF.

By combining the likelihood function in (10) and the prior densities in (11) to (13), the negative logarithm of the posterior distribution, $-\ln p(A_r, A_h^{(l)}, S_r^{(l)}, S_h^{(l)} \mid X)$, can be calculated and arranged as a new objective function expressed by

$\sum_{l=1}^{L} \sum_{i=1}^{N} \sum_{k=1}^{M} \left( X_{ik}^{(l)} - [A_r S_r^{(l)}]_{ik} - [A_h^{(l)} S_h^{(l)}]_{ik} \right)^2 + \eta_a L \sum_{i=1}^{N} \sum_{j=1}^{D_r} \left( (1 - \alpha_{rj}) \ln [A_r]_{ij} + \beta_{rj} [A_r]_{ij} \right) + \eta_a \sum_{l=1}^{L} \sum_{i=1}^{N} \sum_{j=1}^{D_h} \left( (1 - \alpha_{hj}^{(l)}) \ln [A_h^{(l)}]_{ij} + \beta_{hj}^{(l)} [A_h^{(l)}]_{ij} \right) + \eta_{s_r} \sum_{l=1}^{L} \sum_{j=1}^{D_r} \sum_{k=1}^{M} [S_r^{(l)}]_{jk} + \eta_{s_h} \sum_{l=1}^{L} \sum_{j=1}^{D_h} \sum_{k=1}^{M} [S_h^{(l)}]_{jk}$
(15)

where $\{\eta_a, \eta_{s_r}, \eta_{s_h}\}$ denote the regularization parameters for the two groups of bases and reconstruction weights. Some BGS-NMF parameters and hyperparameters have been absorbed into these regularization parameters. Compared with the objective functions (3) for NMPCF, (5) for GNMF, and (8) for PMF, the optimization of (15) for BGS-NMF leads to two groups of signals which are reconstructed from the sparse common bases $A_r$ and the sparse individual bases $A_h^{(l)}$. The regularization terms due to the two groups of gamma-distributed bases are additionally considered. Different from Bayesian NMF (BNMF) [15], BGS-NMF conducts group sparse learning which not only characterizes the within-segment harmonic information but also represents the across-segment rhythmic regularity. Sparse sets of basis vectors are further determined for sparse representation. Basically, BGS-NMF follows a general objective function. By applying different hyperparameter values $\{\alpha_{rj}, \beta_{rj}, \alpha_{hj}^{(l)}, \beta_{hj}^{(l)}\}$, probability structures, and prior distributions for $\{A_r, A_h^{(l)}, S_r^{(l)}, S_h^{(l)}\}$, BGS-NMF can be specialized to recover the solutions of NMF [2], NMPCF [21], GNMF [11], PMF [26], and BNMF [15]. Notably, the objective function in (15) is written for a comparative study among different methods and only covers BGS-NMF with the Laplacian prior. BGS-NMF algorithms with the Laplacian prior and the LSM prior are both implemented in the experiments. Nevertheless, in what follows, we address the model inference procedure for BGS-NMF with the LSM prior.
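For reference, a sketch of evaluating (15) over all segments is given below. Storing the per-segment quantities in Python lists and the per-basis gamma hyperparameters as vectors is our own assumption for illustration.

```python
import numpy as np

def bgsnmf_objective(X_segs, A_r, A_h, S_r, S_h, alpha_r, beta_r, alpha_h, beta_h,
                     eta_a=1.0, eta_sr=1.0, eta_sh=1.0):
    """Evaluate the negative log-posterior objective (15).

    X_segs, A_h, S_r, S_h : lists indexed by segment l; A_r is shared across segments
    alpha_r, beta_r       : (D_r,) gamma hyperparameters of the common bases
    alpha_h, beta_h       : lists of (D_h,) hyperparameters, one pair per segment"""
    L = len(X_segs)
    # gamma-prior term of the shared common bases (second term of (15))
    obj = eta_a * L * np.sum((1.0 - alpha_r) * np.log(A_r) + beta_r * A_r)
    for l in range(L):
        # reconstruction error of segment l (first term of (15))
        obj += np.sum((X_segs[l] - A_r @ S_r[l] - A_h[l] @ S_h[l]) ** 2)
        # gamma-prior term of the individual bases (third term)
        obj += eta_a * np.sum((1.0 - alpha_h[l]) * np.log(A_h[l]) + beta_h[l] * A_h[l])
        # Laplacian sparsity penalties on the nonnegative weights (fourth and fifth terms)
        obj += eta_sr * np.sum(S_r[l]) + eta_sh * np.sum(S_h[l])
    return obj
```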

3.3 Model inference

The full Bayesian treatment of the BGS-NMF model, based on the posterior distribution of parameters and hyperparameters $p(\Theta, \Phi \mid X)$, is not analytically tractable, so a stochastic optimization scheme is adopted. We develop an MCMC sampling algorithm for approximate inference which iteratively generates samples of the parameters $\Theta$ and hyperparameters $\Phi$ according to the posterior distribution and forms the estimates from those samples once the chain has converged. The key idea of MCMC sampling is to simulate a stationary ergodic Markov chain whose samples asymptotically follow the posterior distribution $p(\Theta, \Phi \mid X)$. The estimates of the parameters $\Theta$ and hyperparameters $\Phi$ are then computed via Monte Carlo integration over the simulated Markov chains. For simplicity, the segment index $l$ is omitted in the derivation of the MCMC algorithm for BGS-NMF. At each new iteration $t+1$, the BGS-NMF parameters $\Theta^{(t+1)}$ and hyperparameters $\Phi^{(t+1)}$ are sequentially sampled in the order $\{A_r, S_r, A_h, S_h, \Sigma, \alpha_r, \beta_r, \alpha_h, \beta_h, \lambda_r, \lambda_h, \gamma_r, \delta_r, \gamma_h, \delta_h\}$ according to their corresponding conditional posterior distributions. In this subsection, we describe the calculation of the conditional posterior distributions for the BGS-NMF parameters $\{A_r, S_r, A_h, S_h, \Sigma\}$. The conditional posterior distributions of the hyperparameters $\{\alpha_r, \beta_r, \alpha_h, \beta_h, \lambda_r, \lambda_h, \gamma_r, \delta_r, \gamma_h, \delta_h\}$ are derived in the Appendix.

1. Sampling of $[A_r]_{ij}$. First, the common basis parameter $[A_r^{(t+1)}]_{ij}$ is sampled from the conditional posterior distribution

$p\left( [A_r]_{ij} \mid X_i^T, \Theta_{A_{rij}}^{(t)}, \Phi_{A_{rij}}^{(t)} \right) \propto p\left( X_i^T \mid \Theta_{A_{rij}}^{(t)} \right) p\left( [A_r]_{ij} \mid \Phi_{A_{rij}}^{(t)} \right)$
(16)

where $\Theta_{A_{rij}}^{(t)} = \{ [A_r^{(t+1)}]_{i(1:j-1)}, [A_r^{(t)}]_{i(j+1:D_r)}, S_r^{(t)}, A_h^{(t)}, S_h^{(t)}, \Sigma^{(t)} \}$ and $\Phi_{A_{rij}}^{(t)} = \{ \alpha_{rj}^{(t)}, \beta_{rj}^{(t)} \}$. Here, $X_i$ denotes the $i$th row vector of $X$. Notably, for each sampling step, we use the preceding bases $[A_r^{(t+1)}]_{i(1:j-1)}$ from the new iteration $t+1$ and the subsequent bases $[A_r^{(t)}]_{i(j+1:D_r)}$ from the current iteration $t$. The likelihood function can be arranged as a Gaussian distribution of $[A_r]_{ij}$

$p\left( X_i^T \mid \Theta_{A_{rij}}^{(t)} \right) \propto \exp\left\{ -\frac{ \left( [A_r]_{ij} - \mu_{A_{rij}}^{\text{likel}} \right)^2 }{ 2 [\sigma_{A_{rij}}^{\text{likel}}]^2 } \right\}$
(17)

where $\mu_{A_{rij}}^{\text{likel}} = [\sigma_{A_{rij}}^{\text{likel}}]^2 \, [\Sigma^{(t)}]_{ii}^{-1} \sum_{k=1}^{M} [S_r^{(t)}]_{jk} \, \varepsilon_{ik}^{(j)}$, with $\varepsilon_{ik}^{(j)} = X_{ik} - \left( \sum_{m=1}^{j-1} [A_r^{(t+1)}]_{im} [S_r^{(t)}]_{mk} + \sum_{m=j+1}^{D_r} [A_r^{(t)}]_{im} [S_r^{(t)}]_{mk} \right) - \sum_{m=1}^{D_h} [A_h^{(t)}]_{im} [S_h^{(t)}]_{mk}$, and $[\sigma_{A_{rij}}^{\text{likel}}]^2 = [\Sigma^{(t)}]_{ii} \left( \sum_{k=1}^{M} ([S_r^{(t)}]_{jk})^2 \right)^{-1}$. By combining the likelihood function of (17) and the gamma prior $p\left( [A_r]_{ij} \mid \Phi_{A_{rij}}^{(t)} \right)$ of (11), the conditional posterior distribution in (16) is derived in the form

$\propto [A_r]_{ij}^{\alpha_{rj}^{(t)} - 1} \exp\left\{ -\frac{ \left( [A_r]_{ij} - \mu_{A_{rij}}^{\text{post}} \right)^2 }{ 2 [\sigma_{A_{rij}}^{\text{post}}]^2 } \right\} \mathbb{I}_{[0, +\infty[}\left( [A_r]_{ij} \right)$
(18)

where $\mu_{A_{rij}}^{\text{post}} = \mu_{A_{rij}}^{\text{likel}} - \beta_{rj}^{(t)} [\sigma_{A_{rij}}^{\text{likel}}]^2$, $[\sigma_{A_{rij}}^{\text{post}}]^2 = [\sigma_{A_{rij}}^{\text{likel}}]^2$, and $\mathbb{I}_{[0,+\infty[}(z)$ denotes an indicator function which is 1 if $z \in [0, +\infty[$ and 0 otherwise. In (18), the posterior density for negative $[A_r]_{ij}$ is forced to be zero. Derivations of (17) and (18) are detailed in the Appendix. However, (18) is not a standard distribution, so its sampling requires a rejection-based method such as the Metropolis-Hastings algorithm [32]. Using this algorithm, an instrumental distribution $q([A_r]_{ij})$ is chosen to best fit the target distribution (18) so that a high rejection rate is avoided, or equivalently, rapid convergence toward the true parameter is achieved. In case of rejection, the previous parameter sample is kept, namely, $[A_r^{(t+1)}]_{ij} \leftarrow [A_r^{(t)}]_{ij}$. Generally, the shape of the target distribution is characterized by its mode and width. The instrumental distribution is constructed as a truncated Gaussian distribution

$q\left( [A_r]_{ij} \right) = \mathcal{N}_+\left( [A_r]_{ij} \mid \mu_{A_{rij}}^{\text{inst}}, [\sigma_{A_{rij}}^{\text{inst}}]^2 \right) .$
(19)

In (19), the mode $\mu_{A_{rij}}^{\text{inst}}$ is obtained by finding the roots of a quadratic equation in $[A_r]_{ij}$ which appears in the exponent of the posterior distribution (18). The derivation of the mode $\mu_{A_{rij}}^{\text{inst}}$ is detailed in the Appendix. In case of a complex-valued or negative-valued root, the mode is forced to $\mu_{A_{rij}}^{\text{inst}} = 0$. The width of the instrumental distribution is set to $[\sigma_{A_{rij}}^{\text{inst}}]^2 = [\sigma_{A_{rij}}^{\text{post}}]^2$.
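A minimal sketch of one such Metropolis-Hastings update is shown below. It uses an independence proposal given by the truncated Gaussian (19), with the mode computed as in (28) and (29) of the Appendix; the use of scipy's truncated normal and the tiny constant guarding the logarithm are our own implementation choices, and mu_post/sigma_post denote the posterior mean and standard deviation of (18).

```python
import numpy as np
from scipy.stats import truncnorm

def log_target(a, alpha, mu_post, sigma_post):
    """Log of the unnormalized conditional posterior (18), restricted to a >= 0."""
    if a < 0:
        return -np.inf
    return (alpha - 1.0) * np.log(a + 1e-300) - (a - mu_post) ** 2 / (2.0 * sigma_post ** 2)

def mh_step_basis(a_old, alpha, mu_post, sigma_post, rng):
    """One Metropolis-Hastings update of a basis entry with a truncated-Gaussian proposal."""
    # mode of the instrumental distribution, following (28)-(29)
    disc = mu_post ** 2 + 4.0 * (alpha - 1.0) * sigma_post ** 2
    mu_inst = 0.0 if disc < 0 else max(0.5 * (mu_post + np.sqrt(disc)), 0.0)
    lo = (0.0 - mu_inst) / sigma_post            # standardized lower bound at zero
    a_new = truncnorm.rvs(lo, np.inf, loc=mu_inst, scale=sigma_post, random_state=rng)
    # independence-sampler acceptance ratio: target ratio times reverse/forward proposal ratio
    log_q_new = truncnorm.logpdf(a_new, lo, np.inf, loc=mu_inst, scale=sigma_post)
    log_q_old = truncnorm.logpdf(a_old, lo, np.inf, loc=mu_inst, scale=sigma_post)
    log_ratio = (log_target(a_new, alpha, mu_post, sigma_post)
                 - log_target(a_old, alpha, mu_post, sigma_post)
                 + log_q_old - log_q_new)
    if np.log(rng.random()) < log_ratio:
        return a_new          # accept the proposal
    return a_old              # reject: keep the previous sample, as stated above
```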

2. Sampling of $[S_r]_{jk}$. The sampling of the reconstruction weight of a common basis $[S_r^{(t+1)}]_{jk}$ depends on the conditional posterior distribution

$p\left( [S_r]_{jk} \mid X_k, \Theta_{S_{rjk}}^{(t)}, \Phi_{S_{rjk}}^{(t)} \right) \propto p\left( X_k \mid [S_r]_{jk}, \Theta_{S_{rjk}}^{(t)} \right) p\left( [S_r]_{jk} \mid \Phi_{S_{rjk}}^{(t)} \right)$
(20)

where $\Theta_{S_{rjk}}^{(t)} = \{ A_r^{(t+1)}, [S_r^{(t+1)}]_{(1:j-1)k}, [S_r^{(t)}]_{(j+1:D_r)k}, A_h^{(t)}, S_h^{(t)}, \Sigma^{(t)} \}$ and $\Phi_{S_{rjk}}^{(t)} = \lambda_{rj}^{(t)}$. $X_k$ is the $k$th column of $X$. Again, the preceding weights $[S_r^{(t+1)}]_{(1:j-1)k}$ from the new iteration $t+1$ and the subsequent weights $[S_r^{(t)}]_{(j+1:D_r)k}$ from the current iteration $t$ are used. The likelihood function is rewritten as a Gaussian distribution of $[S_r]_{jk}$ given by

$p\left( X_k \mid [S_r]_{jk}, \Theta_{S_{rjk}}^{(t)} \right) \propto \exp\left\{ -\frac{ \left( [S_r]_{jk} - \mu_{S_{rjk}}^{\text{likel}} \right)^2 }{ 2 [\sigma_{S_{rjk}}^{\text{likel}}]^2 } \right\} .$
(21)

The Gaussian parameters are obtained by $\mu_{S_{rjk}}^{\text{likel}} = [\sigma_{S_{rjk}}^{\text{likel}}]^2 \sum_{i=1}^{N} [\Sigma^{(t)}]_{ii}^{-1} [A_r^{(t+1)}]_{ij} \, \varepsilon_{ik}^{(j)}$, with $\varepsilon_{ik}^{(j)} = X_{ik} - \left( \sum_{m=1}^{j-1} [A_r^{(t+1)}]_{im} [S_r^{(t+1)}]_{mk} + \sum_{m=j+1}^{D_r} [A_r^{(t+1)}]_{im} [S_r^{(t)}]_{mk} \right) - \sum_{m=1}^{D_h} [A_h^{(t)}]_{im} [S_h^{(t)}]_{mk}$, and $[\sigma_{S_{rjk}}^{\text{likel}}]^2 = \left( \sum_{i=1}^{N} [\Sigma^{(t)}]_{ii}^{-1} ([A_r^{(t+1)}]_{ij})^2 \right)^{-1}$. Given the Gaussian likelihood and the Laplacian prior, the conditional posterior distribution is calculated as

$\propto \lambda_{rj}^{(t)} \exp\left\{ -\frac{ \left( [S_r]_{jk} - \mu_{S_{rjk}}^{\text{post}} \right)^2 }{ 2 [\sigma_{S_{rjk}}^{\text{post}}]^2 } \right\} \mathbb{I}_{[0, +\infty[}\left( [S_r]_{jk} \right)$
(22)

where $\mu_{S_{rjk}}^{\text{post}} = \mu_{S_{rjk}}^{\text{likel}} - \lambda_{rj}^{(t)} [\sigma_{S_{rjk}}^{\text{likel}}]^2$ and $[\sigma_{S_{rjk}}^{\text{post}}]^2 = [\sigma_{S_{rjk}}^{\text{likel}}]^2$. Notably, the hyperparameters $\{\gamma_{rj}^{(t+1)}, \delta_{rj}^{(t+1)}\}$ of the LSM prior are also sampled and used to sample the LSM parameter $\lambda_{rj}^{(t+1)}$ from a gamma distribution. Here, the Metropolis-Hastings algorithm is applied again. The instrumental distribution $q([S_r]_{jk})$ that best fits (22) is selected. This distribution is derived as a truncated Gaussian distribution $\mathcal{N}_+\left( [S_r]_{jk} \mid \mu_{S_{rjk}}^{\text{inst}}, [\sigma_{S_{rjk}}^{\text{inst}}]^2 \right)$, where the mode $\mu_{S_{rjk}}^{\text{inst}}$ is derived by finding the root of a quadratic equation in $[S_r]_{jk}$ and the width is obtained by $[\sigma_{S_{rjk}}^{\text{inst}}]^2 = [\sigma_{S_{rjk}}^{\text{post}}]^2$. In addition, the conditional posterior distributions for sampling the individual basis parameter $[A_h^{(t+1)}]_{ij}$ and its reconstruction weight $[S_h^{(t+1)}]_{jk}$ are similar to those for sampling $[A_r^{(t+1)}]_{ij}$ and $[S_r^{(t+1)}]_{jk}$, respectively, and are omitted here.

3. Sampling of $[\Sigma]_{ii}^{-1}$. The sampling of the inverse noise variance $([\Sigma]_{ii}^{(t+1)})^{-1}$ is performed according to the conditional posterior distribution

$p\left( [\Sigma]_{ii}^{-1} \mid X_i^T, \Theta_{\Sigma_{ii}}^{(t)}, \Phi_{\Sigma_{ii}}^{(t)} \right) \propto p\left( X_i^T \mid [\Sigma]_{ii}^{-1}, \Theta_{\Sigma_{ii}}^{(t)} \right) p\left( [\Sigma]_{ii}^{-1} \mid \Phi_{\Sigma_{ii}}^{(t)} \right)$
(23)

where $\Theta_{\Sigma_{ii}}^{(t)} = \{ A_r^{(t+1)}, S_r^{(t+1)}, A_h^{(t+1)}, S_h^{(t+1)} \}$ and $p\left( [\Sigma]_{ii}^{-1} \mid \Phi_{\Sigma_{ii}}^{(t)} \right) = \mathcal{G}\left( [\Sigma]_{ii}^{-1} \mid \alpha_{\Sigma_{ii}}, \beta_{\Sigma_{ii}} \right)$. The resulting posterior distribution can be derived as a new gamma distribution with updated hyperparameters $\alpha_{\Sigma_{ii}}^{\text{post}} = \frac{M}{2} + \alpha_{\Sigma_{ii}}$ and $\beta_{\Sigma_{ii}}^{\text{post}} = \frac{1}{2} \sum_{k=1}^{M} \left( X_{ik} - \sum_{m=1}^{D_r} [A_r^{(t+1)}]_{im} [S_r^{(t+1)}]_{mk} - \sum_{m=1}^{D_h} [A_h^{(t+1)}]_{im} [S_h^{(t+1)}]_{mk} \right)^2 + \beta_{\Sigma_{ii}}$. In the experiments, we run the MCMC sampling procedure for $t_{\max}$ iterations. However, the first $t_{\min}$ iterations are not stable, and these burn-in samples are discarded. The marginal posterior estimates of the common bases $[\hat{A}_r]_{ij}$, individual bases $[\hat{A}_h]_{ij}$, and their reconstruction weights $[\hat{S}_r]_{jk}$ and $[\hat{S}_h]_{jk}$ are calculated as sample means, e.g.,

$[\hat{A}_r]_{ij} = \frac{1}{t_{\max} - t_{\min}} \sum_{t = t_{\min}+1}^{t_{\max}} [A_r]_{ij}^{(t)} .$
(24)

With these posterior estimates, the rhythmic source and the harmonic source are reconstructed by $\hat{A}_r \hat{S}_r$ and $\hat{A}_h \hat{S}_h$, respectively, which completes the BGS-NMF algorithm. Different from BNMF [15], the proposed BGS-NMF conducts group sparse learning based on the LSM distribution, and the common bases $A_r$ are shared across the data segments $l$. This group sparse learning performs well in our experiments.
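In code, the posterior estimate (24) and the source reconstruction reduce to averaging the stored samples after burn-in; the sketch below assumes the per-iteration samples are stacked along the first axis of a numpy array, and the names are illustrative.

```python
import numpy as np

def posterior_mean(samples, t_min):
    """Marginal posterior estimate (24): average the MCMC samples after discarding
    the first t_min burn-in iterations.  samples: array of shape (t_max, ...)."""
    return samples[t_min:].mean(axis=0)

# Usage sketch (variable names are hypothetical):
# A_r_hat = posterior_mean(A_r_samples, t_min=200)
# S_r_hat = posterior_mean(S_r_samples, t_min=200)
# rhythmic_spectrogram = A_r_hat @ S_r_hat   # demixed rhythmic source, as described above
```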

4 Experiments

In this study, BGS-NMF is implemented to estimate two audio source signals from a single-channel mixed signal. One source signal contains rhythmic pattern which is constructed by the bases shared for all audio segments while the other source contains harmonic information which is represented via bases from individual segments. Bayesian sparse learning is performed to conduct probabilistic reconstruction based on the relevant group bases. Some experiments are reported to evaluate the performance of model inference and signal reconstruction.

4.1 Experimental setup

In the experiments, we sampled six rhythmic signals and six harmonic signals from http://www.free-scores.com/index_uk.php3 and http://www.freesound.org/. Six mixed music signals were collected as follows: ‘music 1,’ bass+piano; ‘music 2,’ drum+guitar; ‘music 3,’ drum+violin; ‘music 4,’ cymbal+organ; ‘music 5,’ drum+saxophone; and ‘music 6,’ cymbal+singing, which contained combinations of different rhythmic and harmonic source signals. Three different drum signals and two different cymbal signals were included. For each set of experimental data, we applied a different mixing matrix, music 1 (1.2667, −1.9136), music 2 (1.1667, −1.9136), music 3 (−1.2667, 1.6136), music 4 (1.8667, 1.1136), music 5 (−1.1667, 2.8136), and music 6 (1.9617, 1.1510), to simulate the corresponding single-channel mixed signal. Each audio signal was 21 s long. Readers may access http://chien.cm.nctu.edu.tw/bgs-nmf to listen to the twelve source signals and the corresponding six mixed signals. The collected audio signals had a 44,100-Hz sampling rate and 16-bit resolution. In our implementation, the magnitude of the fast Fourier transform of the audio signal was extracted every 1,024 samples with a 512-sample frame overlap. Each mixed signal was equally chopped into $L$ segments of 3 s each for music source separation, so that sufficient rhythmic signal existed within a segment. The numbers of common bases and individual bases were empirically set to 15 and 10, respectively, i.e., $D_r = 15$ and $D_h = 10$. The common bases were allocated generously so as to capture the shared basis information across segments. The initial common bases $A_r^{(0)}$ and individual bases $A_h^{(0)}$ were estimated by applying k-means clustering to the automatically detected rhythmic and harmonic segments, respectively. The detection was based on a classifier using a Gaussian mixture model. We performed 1,000 Gibbs sampling iterations ($t_{\max} = 1{,}000$). The separation performance was evaluated according to the signal-to-interference ratio (SIR) in decibels

$\mathrm{SIR\ (dB)} = 10 \log_{10} \frac{ \sum_{l=1}^{L} \sum_{k=1}^{M} \| X_k^{(l)} \|^2 }{ \sum_{l=1}^{L} \sum_{k=1}^{M} \| \hat{X}_k^{(l)} - X_k^{(l)} \|^2 } .$
(25)

The interference was measured by the Euclidean distance between the original signal $\{X_k^{(l)}\}$ and the reconstructed signal $\{\hat{X}_k^{(l)}\}$ over samples $k$ in segments $l$. These signals include the rhythmic signals $\{[\hat{A}_r \hat{S}_r^{(l)}]_k\}$ and the harmonic signals $\{[\hat{A}_h^{(l)} \hat{S}_h^{(l)}]_k\}$.
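A direct sketch of the SIR measure (25) is given below, assuming the original and reconstructed magnitude-spectrogram segments are available as lists of arrays of matching shape.

```python
import numpy as np

def sir_db(X_true_segments, X_est_segments):
    """Signal-to-interference ratio (25) in dB, summed over segments and frames.

    Both arguments are lists of (N, M) magnitude-spectrogram segments."""
    num = sum(np.sum(X ** 2) for X in X_true_segments)
    den = sum(np.sum((Xh - X) ** 2) for X, Xh in zip(X_true_segments, X_est_segments))
    return 10.0 * np.log10(num / den)
```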

For system initialization at $t = 0$, we detected two short segments containing only the rhythmic signal and only the harmonic signal and applied them to find the initial rhythmic parameters $\{A_r^{(0)}, S_r^{(0)}\}$ and harmonic parameters $\{A_h^{(0)}, S_h^{(0)}\}$, respectively. This prior information was used to implement five NMF methods for single-channel source separation. We carried out the baseline NMF [2], Bayesian NMF (BNMF) [15], group-based NMF (GNMF) [11] (or NMPCF [22]), and the proposed BGS-NMF under consistent experimental conditions. To evaluate the effect of sparse priors in BGS-NMF for music source separation, we additionally realized BGS-NMF with a Laplacian distribution; for this realization, the sampling steps of the LSM parameters $\{\gamma_{rj}, \delta_{rj}, \gamma_{hj}, \delta_{hj}\}$ were skipped. The BGS-NMFs with the Laplacian distribution (denoted by BGS-NMF-LP) and the LSM distribution (BGS-NMF-LSM) were compared. All these NMFs were implemented for the different segments $l$. Basically, the NMF model [2] was realized by the multiplicative updating algorithm in (4). BNMF [15] conducted Bayesian learning of the NMF model, where MCMC sampling was performed and gamma distributions were assumed for the bases and reconstruction weights. No group sparse learning was considered in NMF and BNMF. Using NMPCF [22] or GNMF [11], the common bases and individual bases were constructed by the multiplicative updating algorithm; no probabilistic framework was involved. The $\ell_2$-norm regularization for the basis parameters $A_r$ and $A_h^{(l)}$ was considered, and no sparseness constraint was imposed on the reconstruction weight parameters $S_r^{(l)}$ and $S_h^{(l)}$. Only the result of the GNMF method is reported. Using GNMF, the regularization parameters in (5) were empirically determined as $\{\eta_a = 0.35, \eta_{a_r} = 0.2, \eta_{a_h} = 0.2\}$. In contrast, Bayesian group sparse learning is realized in the BGS-NMF-LP and BGS-NMF-LSM algorithms, where the uncertainties of the bases and reconstruction weights are represented by gamma distributions and LSM distributions, respectively. The MCMC algorithm is developed to sample the BGS-NMF parameters $\Theta^{(t+1)}$ and hyperparameters $\Phi^{(t+1)}$. The groups of common bases $A_r$ and individual bases $A_h$ are estimated to capture the between-segment repetitive patterns and within-segment residual information, respectively. The relevant bases are detected via the sparse priors based on the Laplacian or LSM distributions. Using BGS-NMF-LP, we sampled the parameters and hyperparameters using different frames from the six music signals and automatically calculated the averaged regularization parameters in (15) as $\{\eta_a = 0.41, \eta_{s_r} = 0.31, \eta_{s_h} = 0.26\}$. The regularization parameters in (5) and (15) reflect different physical meanings in the objective functions. The computational cost and the model size were also examined. The computation times of running MATLAB code were measured on a personal computer with an Intel Core 2 Duo 2.4-GHz CPU and 4 GB of RAM. In our investigation, the computation times for demixing a 21-s audio signal were 3.1, 12.1, 16.2, 20.9, and 21.2 min using NMF, BNMF, GNMF, and the proposed BGS-NMF-LP and BGS-NMF-LSM, respectively. In addition, the model sizes of BNMF, GNMF, BGS-NMF-LP, and BGS-NMF-LSM were measured to be 2.5, 4.5, 5.2, and 5.3 times that of the baseline NMF, respectively.
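For reproducibility, the sketch below shows one way to compute the magnitude spectrogram and segment it as described in this setup (1,024-sample frames, 512-sample overlap, 3-s segments). The Hann window and the real FFT are our own choices, since the paper does not specify a window function.

```python
import numpy as np

def magnitude_spectrogram(x, n_fft=1024, hop=512):
    """Magnitude spectrogram with 1,024-sample frames and a 512-sample overlap.
    Returns an array of shape (frequency bins, frames)."""
    win = np.hanning(n_fft)
    frames = [x[t:t + n_fft] * win for t in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def chop_segments(spec, frames_per_segment):
    """Chop the spectrogram into equal-length segments X^(l) for the NMF methods."""
    L = spec.shape[1] // frames_per_segment
    return [spec[:, l * frames_per_segment:(l + 1) * frames_per_segment] for l in range(L)]
```

At a 44,100-Hz sampling rate and a 512-sample hop, a 3-s segment corresponds to roughly 258 frames, which would be the `frames_per_segment` value under these assumptions.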

4.2 Evaluation for MCMC iterative procedure

In this set of experiments, the sampling process of the BGS-NMF algorithm is evaluated. The sparsity control parameter $\lambda_{rj}$ and its hyperparameters $\gamma_{rj}$ and $\delta_{rj}$ for a common basis are investigated. Figure 4 displays an example of the MCMC iterative sampling process for the LSM parameter $\lambda_{rj}^{(t+1)}$. The sample values converge after 200 iterations. Also, Figure 5 shows an example of the iterative sampling process for the LSM hyperparameters $\gamma_{rj}^{(t+1)}$ and $\delta_{rj}^{(t+1)}$. The convergence behavior is good in these examples; the MCMC samples converge after 200 iterations. Empirically, the parameter $t_{\min}$ is therefore set to 200 when calculating the posterior estimates of the BGS-NMF parameters as given in (24). In addition, Figure 6 shows an estimated distribution of the reconstruction weight of a common basis, $p([S_r]_{jk} \mid \gamma_{rj}, \delta_{rj})$, where only nonnegative $[S_r]_{jk}$ is valid. This distribution has the shape of an LSM distribution and was estimated from the second segment of ‘music 2’.

Figure 4. An example of the iterative sampling process for the LSM parameter $\lambda_{rj}^{(t+1)}$.

Figure 5. An example of the iterative sampling process for the LSM hyperparameters $\gamma_{rj}^{(t+1)}$ (green curve) and $\delta_{rj}^{(t+1)}$ (blue curve).

Figure 6. An estimated distribution of the reconstruction weight of a common basis, $p([S_r]_{jk} \mid \gamma_{rj}, \delta_{rj})$.

4.3 Evaluation for single-channel music source separation

A quantitative comparison of the different NMFs is conducted by measuring the SIRs of the reconstructed rhythmic signal and the reconstructed harmonic signal. Table 1 shows the experimental results on the six mixed music signals, which come from twelve different source signals. The averaged SIRs are reported in the last row. Comparing NMF and BNMF, we find that BNMF obtains higher SIRs for the reconstructed signals. Further, BNMF is more robust to the different combinations of rhythmic and harmonic signals, whereas the variation of SIRs using NMF is relatively high. Bayesian learning provides model regularization for NMF. On the other hand, GNMF (or NMPCF) performs better than BNMF in terms of the averaged SIR of the reconstructed signals. The key difference between BNMF and GNMF lies in the reconstruction of the rhythmic signal: BNMF estimates the rhythmic bases for individual segments while GNMF (or NMPCF) calculates rhythmic bases shared across segments. The prior information $\{A_r^{(0)}, S_r^{(0)}, A_h^{(0)}, S_h^{(0)}\}$ is applied in these methods. From these results, we confirm the importance of basis grouping in NMF-based signal reconstruction. In particular, BGS-NMF-LP and BGS-NMF-LSM perform better than the other NMF methods, and BGS-NMF-LSM even outperforms BGS-NMF-LP in terms of SIRs. Reconstruction weights modeled by LSM distributions are better than those modeled by Laplacian distributions; sparser reconstruction weights identify fewer but more relevant basis vectors for signal separation. Among these five NMFs, the highest SIRs of the reconstructed signals are achieved by BGS-NMF-LSM. The SIRs of the reconstructed rhythmic and harmonic signals are measured as 8.13 and 8.40 dB, which are higher than 3.71 and 3.38 dB using NMF, 4.87 and 4.61 dB using BNMF, 5.63 and 5.71 dB using GNMF, and 7.91 and 8.11 dB using BGS-NMF-LP, respectively. Basically, the superiority of BGS-NMF-LSM over the other NMFs is three-fold, i.e., Bayesian probabilistic modeling, group basis representation, and sparse reconstruction weights. Again, compared to GNMF, the proposed BGS-NMF-LP and BGS-NMF-LSM obtain more robust SIR performance across the different music source signals. Figure 7 shows the waveforms of a drum signal, a saxophone signal, and the resulting mixed signal of ‘music 5’. Figure 8 displays the spectrograms of these three signals. Figure 9 demonstrates the spectrograms of the reconstructed drum signal and saxophone signal using BGS-NMF-LSM. For the other five mixed signals, the performance of the reconstructed signals in single-channel music source separation is shown at http://chien.cm.nctu.edu.tw/bgs-nmf.

Figure 7. Waveforms of music 5 containing a drum signal, a saxophone signal, and their mixed signal.

Figure 8. Spectrograms of music 5 containing a drum signal, a saxophone signal, and their mixed signal.

Table 1 Comparison of SIR (in dB) of the reconstructed rhythmic signal and harmonic signal based on NMF, BNMF, GNMF, BGS-NMF-LP and BGS-NMF-LSM
Figure 9. Spectrograms of the demixed drum signal (upper) and the demixed saxophone signal (lower).

5 Conclusions

This paper has presented Bayesian group sparse learning and applied it to single-channel nonnegative source separation. The basis vectors in NMF were grouped into two partitions. The first group contained the common bases, which were used to explore the inter-segment repetitive characteristics, while the second contained the individual bases, which were applied to represent the intra-segment harmonic information. The LSM distribution was introduced to express sparse reconstruction weights for the two groups of basis vectors. Bayesian learning was incorporated into group basis representation for model regularization. An MCMC algorithm with Metropolis-Hastings steps was developed to conduct approximate inference of the model parameters and hyperparameters. The model parameters were used to find the decomposed rhythmic and harmonic signals; the hyperparameters were used to control the sparsity of the reconstruction weights and the generation of basis parameters. In the experiments, we implemented the proposed BGS-NMFs for underdetermined source separation. The convergence of the sampling procedure for approximate inference was investigated. The performance of BGS-NMF-LP and BGS-NMF-LSM was shown to be robust to different kinds of rhythmic and harmonic sources and mixing conditions. BGS-NMF-LSM outperformed the other NMFs in terms of SIRs, and the BGS-NMF controlled by the LSM distribution performed better than that controlled by the Laplacian distribution. In the future, the system performance of BGS-NMF may be further improved in several ways. For example, the numbers of common bases and individual bases could be selected automatically within the Bayesian framework by using the marginal likelihood. The group sparse learning could be extended to construct a hierarchical NMF where hierarchical grouping of basis vectors is examined. The underdetermined separation under different numbers of sources and sensors could be tackled. Also, online learning could be incorporated to update the segment-based parameters and hyperparameters [33, 34]; such evolutionary BGS-NMFs are expected to work for nonstationary single-channel blind source separation. In addition, more evaluations should be conducted using realistic data with larger amounts of mixed speech signals from different application domains, such as meetings and call centers.

Appendix

Derivations for inference of BGS-NMF parameters and hyperparameters

We address some derivations for the model inference of BGS-NMF parameters and hyperparameters. First, the exponent of the likelihood function $p\left( X_i^T \mid [A_r^{(t+1)}]_{i(1:j-1)}, [A_r^{(t)}]_{i(j+1:D_r)}, S_r^{(t)}, A_h^{(t)}, S_h^{(t)}, \Sigma^{(t)} \right)$ in (16) is expressed by

$-\frac{1}{2 [\Sigma^{(t)}]_{ii}} \sum_{k=1}^{M} \left( X_{ik} - \sum_{m=1}^{j-1} [A_r^{(t+1)}]_{im} [S_r^{(t)}]_{mk} - [A_r]_{ij} [S_r^{(t)}]_{jk} - \sum_{m=j+1}^{D_r} [A_r^{(t)}]_{im} [S_r^{(t)}]_{mk} - \sum_{m=1}^{D_h} [A_h^{(t)}]_{im} [S_h^{(t)}]_{mk} \right)^2$
(26)

which can be manipulated as a quadratic function of the parameter $[A_r]_{ij}$ and leads to (17). The conditional posterior distribution $p\left( [A_r]_{ij} \mid X_i^T, \Theta_{A_{rij}}^{(t)}, \Phi_{A_{rij}}^{(t)} \right)$ is then derived by combining (17) and (11) and turns out to be

$\propto [A_r]_{ij}^{\alpha_{rj}^{(t)} - 1} \exp\left\{ -\frac{ [A_r]_{ij}^2 - 2\left( \mu_{A_{rij}}^{\text{likel}} - \beta_{rj}^{(t)} [\sigma_{A_{rij}}^{\text{likel}}]^2 \right) [A_r]_{ij} + [\mu_{A_{rij}}^{\text{likel}}]^2 }{ 2 [\sigma_{A_{rij}}^{\text{likel}}]^2 } \right\} \mathbb{I}_{[0, +\infty[}\left( [A_r]_{ij} \right)$
(27)

which is proportional to (18). In addition, to find the mode of (18), we take the logarithm of (18), set its derivative with respect to $[A_r]_{ij}$ to zero, and solve the corresponding quadratic equation

$\frac{\partial}{\partial [A_r]_{ij}} \left[ (\alpha_{rj}^{(t)} - 1) \ln [A_r]_{ij} - \frac{ ([A_r]_{ij} - \mu_{A_{rij}}^{\text{post}})^2 }{ 2 [\sigma_{A_{rij}}^{\text{post}}]^2 } \right] = 0 \;\Longrightarrow\; [A_r]_{ij}^2 - \mu_{A_{rij}}^{\text{post}} [A_r]_{ij} - (\alpha_{rj}^{(t)} - 1) [\sigma_{A_{rij}}^{\text{post}}]^2 = 0 .$
(28)

By defining $\Delta = (\mu_{A_{rij}}^{\text{post}})^2 + 4(\alpha_{rj}^{(t)} - 1) [\sigma_{A_{rij}}^{\text{post}}]^2$, the mode is determined by

$\mu_{A_{rij}}^{\text{inst}} = \begin{cases} 0, & \text{if } \Delta < 0 \\ \max\left\{ \frac{1}{2}\left( \mu_{A_{rij}}^{\text{post}} + \sqrt{\Delta} \right), 0 \right\}, & \text{otherwise.} \end{cases}$
(29)

On the other hand, following the model inference in Section 3.3, we continue to describe the MCMC sampling algorithm and the calculation of the conditional posterior distributions for the remaining BGS-NMF hyperparameters $\{\alpha_r, \beta_r, \alpha_h, \beta_h, \lambda_r, \lambda_h, \gamma_r, \delta_r, \gamma_h, \delta_h\}$.

4. Sampling of $\alpha_{rj}$. The hyperparameter $\alpha_{rj}^{(t+1)}$ is sampled according to a conditional posterior distribution which is obtained by combining a likelihood function of $[A_r]_{ij}$ and an exponential prior density of $\alpha_{rj}$ with parameter $\lambda_{\alpha_{rj}}$. The resulting distribution is written as

$p\left( \alpha_{rj} \mid [A_r^{(t+1)}]_{ij}, \beta_{rj}^{(t)} \right) \propto \left[ \frac{1}{\Gamma(\alpha_{rj})} \exp\left\{ \lambda_{\alpha_{rj}}^{\text{post}} \alpha_{rj} \right\} \right]^{D_r} \mathbb{I}_{[0, +\infty[}(\alpha_{rj})$
(30)

where $\lambda_{\alpha_{rj}}^{\text{post}} = \ln \beta_{rj}^{(t)} + \frac{1}{D_r} \sum_{j=1}^{D_r} \ln [A_r^{(t+1)}]_{ij} - \frac{1}{D_r} \lambda_{\alpha_{rj}}$. This distribution does not belong to a known family, so the Metropolis-Hastings algorithm is applied. An instrumental distribution $q(\alpha_{rj})$ is obtained by fitting the term within the brackets of (30) with a gamma distribution, as detailed in [15].

5. Sampling of $\beta_{rj}$. The hyperparameter $\beta_{rj}^{(t+1)}$ is sampled according to a conditional posterior distribution which is obtained by combining a likelihood function of $[A_r]_{ij}$ and a gamma prior density of $\beta_{rj}$ with parameters $\{\alpha_{\beta_{rj}}, \beta_{\beta_{rj}}\}$, i.e.,

$p\left( \beta_{rj} \mid [A_r^{(t+1)}]_{ij}, \alpha_{rj}^{(t+1)} \right) \propto (\beta_{rj})^{D_r \alpha_{rj}^{(t+1)}} \exp\left\{ -\beta_{rj} \sum_{j=1}^{D_r} [A_r^{(t+1)}]_{ij} \right\} \mathcal{G}\left( \beta_{rj} \mid \alpha_{\beta_{rj}}, \beta_{\beta_{rj}} \right) .$
(31)

The resulting distribution is arranged as a new gamma distribution $\mathcal{G}\left( \beta_{rj} \mid \alpha_{\beta_{rj}}^{\text{post}}, \beta_{\beta_{rj}}^{\text{post}} \right)$ where $\alpha_{\beta_{rj}}^{\text{post}} = 1 + D_r \alpha_{rj}^{(t+1)} + \alpha_{\beta_{rj}}$ and $\beta_{\beta_{rj}}^{\text{post}} = \sum_{j=1}^{D_r} [A_r^{(t+1)}]_{ij} + \beta_{\beta_{rj}}$. Here, we do not describe the sampling of $\alpha_{hj}^{(t+1)}$ and $\beta_{hj}^{(t+1)}$ since the conditional posterior distributions for these two hyperparameters are similar to those for $\alpha_{rj}^{(t+1)}$ and $\beta_{rj}^{(t+1)}$.

6. Sampling of $\lambda_{rj}$ or $\lambda_{hj}$. For sampling the scale parameter $\lambda_{rj}^{(t+1)}$, the conditional posterior distribution is obtained by

$p\left( \lambda_{rj} \mid [S_r^{(t+1)}]_{j(k=1:M)}, \gamma_{rj}^{(t)}, \delta_{rj}^{(t)} \right) \propto \prod_{k=1}^{M} p\left( [S_r^{(t+1)}]_{jk} \mid \lambda_{rj} \right) p\left( \lambda_{rj} \mid \gamma_{rj}^{(t)}, \delta_{rj}^{(t)} \right) \propto (\lambda_{rj})^{M \gamma_{rj}^{(t)}} \exp\left\{ -\lambda_{rj} \left( M \delta_{rj}^{(t)} + \sum_{k=1}^{M} [S_r^{(t+1)}]_{jk} \right) \right\} .$
(32)
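Reading (32) as an unnormalized gamma density in $\lambda_{rj}$, with shape $M\gamma_{rj}^{(t)} + 1$ and rate $M\delta_{rj}^{(t)} + \sum_k [S_r^{(t+1)}]_{jk}$, its sampling can be sketched as follows; this shape/rate reading of (32) is our own interpretation, and the function name is illustrative.

```python
import numpy as np

def sample_lambda_r(S_r_row_j, gamma_r, delta_r, rng):
    """Draw the LSM scale lambda_rj from the gamma-shaped conditional (32).

    S_r_row_j : length-M array of weights [S_r]_{j,1:M} for basis j."""
    M = len(S_r_row_j)
    shape = M * gamma_r + 1.0                 # exponent M*gamma in (32) gives shape M*gamma + 1
    rate = M * delta_r + np.sum(S_r_row_j)    # coefficient of -lambda in the exponent
    return rng.gamma(shape=shape, scale=1.0 / rate)
```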

7. Sampling of $\gamma_{rj}$. The sampling of the LSM hyperparameter $\gamma_{rj}^{(t+1)}$ uses the conditional posterior distribution derived by combining a likelihood function of $\lambda_{rj}$ and an exponential prior density of $\gamma_{rj}$ with parameter $\lambda_{\gamma_{rj}}$. The resulting distribution is expressed as

$p\left( \gamma_{rj} \mid \lambda_{rj}^{(t+1)}, \delta_{rj}^{(t)} \right) \propto \frac{1}{\Gamma(\gamma_{rj})} \exp\left\{ \lambda_{\gamma_{rj}}^{\text{post}} \gamma_{rj} \right\} \mathbb{I}_{[0, +\infty[}(\gamma_{rj}) ,$
(33)

where $\lambda_{\gamma_{rj}}^{\text{post}} = \ln \delta_{rj}^{(t)} + \frac{\gamma_{rj} - 1}{\gamma_{rj}} \ln \lambda_{rj}^{(t+1)} - \lambda_{\gamma_{rj}}$. Again, we need to find an instrumental distribution $q(\gamma_{rj})$ which best fits the conditional posterior distribution $p\left( \gamma_{rj} \mid \lambda_{rj}^{(t+1)}, \delta_{rj}^{(t)} \right)$. An approximate gamma distribution is found accordingly, and the Metropolis-Hastings algorithm is then applied.

8. Sampling of $\delta_{rj}$. The sampling of the other LSM hyperparameter $\delta_{rj}^{(t+1)}$ uses the conditional posterior distribution derived from a likelihood function of $\lambda_{rj}$ and a gamma prior density of $\delta_{rj}$ with parameters $\{\alpha_{\delta_{rj}}, \beta_{\delta_{rj}}\}$,

$p\left( \delta_{rj} \mid \lambda_{rj}^{(t+1)}, \gamma_{rj}^{(t+1)} \right) \propto (\delta_{rj})^{\gamma_{rj}^{(t+1)}} \exp\left\{ -\delta_{rj} \lambda_{rj}^{(t+1)} \right\} \mathcal{G}\left( \delta_{rj} \mid \alpha_{\delta_{rj}}, \beta_{\delta_{rj}} \right) .$
(34)

This distribution can be arranged as a new gamma distribution $\mathcal{G}\left( \delta_{rj} \mid \alpha_{\delta_{rj}}^{\text{post}}, \beta_{\delta_{rj}}^{\text{post}} \right)$ where $\alpha_{\delta_{rj}}^{\text{post}} = D_r \gamma_{rj}^{(t+1)} + \alpha_{\delta_{rj}}$ and $\beta_{\delta_{rj}}^{\text{post}} = \lambda_{rj}^{(t+1)} + \beta_{\delta_{rj}}$. Similarly, the conditional posterior distributions for sampling $\gamma_{hj}^{(t+1)}$ and $\delta_{hj}^{(t+1)}$ can be formulated by referring to those for sampling $\gamma_{rj}^{(t+1)}$ and $\delta_{rj}^{(t+1)}$, respectively.

References

1. Cichocki A, Zdunek R, Amari S: New algorithms for non-negative matrix factorization in applications to blind source separation. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE; 2006:621-624.
2. Hoyer PO: Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 2004, 5:1457-1469.
3. Chien J-T, Hsieh H-L: Convex divergence ICA for blind source separation. IEEE Trans. Audio, Speech, Language Process. 2012, 20(1):290-301.
4. Kompass R: A generalized divergence measure for nonnegative matrix factorization. Neural Comput. 2007, 19:780-791.
5. Lee H, Yoo J, Choi S: Semi-supervised nonnegative matrix factorization. IEEE Signal Process. Lett. 2010, 17(1):4-7.
6. Plumbley MD: Algorithms for nonnegative independent component analysis. IEEE Trans. Neural Netw. 2003, 14(3):534-543.
7. Bishop CM: Pattern Recognition and Machine Learning. New York: Springer Science; 2006.
8. Saon G, Chien J-T: Bayesian sensing hidden Markov models. IEEE Trans. Audio, Speech, Language Process. 2012, 20(1):43-54.
9. Tipping ME: Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1:211-244.
10. Babacan SD, Molina R, Katsaggelos AK: Bayesian compressive sensing using Laplace priors. IEEE Trans. Image Process. 2010, 19(1):53-63.
11. Lee H, Choi S: Group nonnegative matrix factorization for EEG classification. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS). JMLR; 2009:320-327.
12. Lefevre A, Bach F, Fevotte C: Itakura-Saito nonnegative matrix factorization with group sparsity. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague; 22-27 May 2011:21-24.
13. Kim M, Yoo J, Kang K, Choi S: Blind rhythmic source separation: nonnegativity and repeatability. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE; 2010:2006-2009.
14. Cemgil AT: Bayesian inference for nonnegative matrix factorization models. University of Cambridge, Technical Report CUED/F-INFENG/TR.609; 2008.
15. Moussaoui S, Brie D, Mohammad-Djafari A, Carteret C: Separation of non-negative mixture of non-negative sources using a Bayesian approach and MCMC sampling. IEEE Trans. Signal Process. 2006, 54(11):4133-4145.
16. Schmidt MN, Winther O, Hansen LK: Bayesian non-negative matrix factorization. In Proceedings of the International Conference on Independent Component Analysis and Signal Separation, Paraty, March 2009. Lecture Notes in Computer Science. Heidelberg: Springer; 2009:540-547.
17. Fevotte C, Godsill SJ: A Bayesian approach for blind separation of sparse sources. IEEE Trans. Audio, Speech, Language Process. 2006, 14(6):2174-2188.
18. Duan Z, Zhang Y, Zhang C, Shi Z: Unsupervised single-channel music source separation by average harmonic structure modeling. IEEE Trans. Audio, Speech, Language Process. 2008, 16(4):766-778.
19. Schmidt MN, Olsson RK: Single-channel speech separation using sparse non-negative matrix factorization. In Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH). Pittsburgh; 17-21 September 2006:2614-2617.
20. Chien J-T, Hsieh H-L: Bayesian group sparse learning for nonnegative matrix factorization. In Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH). Portland; 9-13 September 2012:1552-1555.
21. Yoo J, Kim M, Kang K, Choi S: Nonnegative matrix partial co-factorization for drum source separation. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE; 2010:1942-1945.
22. Kim M, Yoo J, Kang K, Choi S: Nonnegative matrix partial co-factorization for spectral and temporal drum source separation. IEEE J. Sel. Top. Signal Process. 2011, 5(6):1192-1204.
23. Bengio S, Pereira F, Singer Y, Strelow D: Group sparse coding. In Advances in Neural Information Processing Systems (NIPS). La Jolla: NIPS; 2009:82-89.
24. Jenatton R, Mairal J, Obozinski G, Bach F: Proximal methods for sparse hierarchical dictionary learning. In Proceedings of the International Conference on Machine Learning (ICML). Haifa; 21-25 June 2010.
25. Garrigues PJ, Olshausen BA: Group sparse coding with a Laplacian scale mixture prior. In Advances in Neural Information Processing Systems (NIPS). La Jolla: NIPS; 2010:676-684.
26. Salakhutdinov R, Mnih A: Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the International Conference on Machine Learning (ICML). Helsinki; 5-9 July 2008:880-887.
27. Zhong M, Girolami M: Reversible jump MCMC for non-negative matrix factorization. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS). Clearwater Beach; 16-18 April 2009:663-670.
28. Hoffman MD, Blei DM, Cook PR: Bayesian nonparametric matrix factorization for recorded music. In Proceedings of the International Conference on Machine Learning (ICML). Haifa; 21-24 June 2010.
29. Marlin BM, Murphy KP: Group sparse priors for covariance estimation. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI). Montreal; 18-21 June 2009:383-392.
30. Chien J-T, Chiang C-C: Group sparse hidden Markov models for speech recognition. In Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH). Portland; 9-13 September 2012:2646-2649.
31. Chien J-T, Ting C-W: Factor analyzed subspace modeling and selection. IEEE Trans. Audio, Speech, Language Process. 2008, 16(1):239-248.
32. Chib S, Greenberg E: Understanding the Metropolis-Hastings algorithm. Am. Statistician 1995, 49(4):327-335.
33. Chien J-T, Hsieh H-L: Nonstationary source separation using sequential and variational Bayesian learning. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24(5):681-694.
34. Hsieh H-L, Chien J-T: Nonstationary and temporally correlated source separation using Gaussian process. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague; 22-27 May 2011:2120-2123.


Acknowledgments

The authors acknowledge anonymous reviewers for their constructive feedback and helpful suggestions. This work has been partially supported by the National Science Council, Taiwan, Republic of China, under contract NSC 100-2628-E-009-028-MY3.

Author information


Corresponding author

Correspondence to Jen-Tzung Chien.

Additional information

Competing interests

Both authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Chien, JT., Hsieh, HL. Bayesian group sparse learning for music source separation. J AUDIO SPEECH MUSIC PROC. 2013, 18 (2013). https://doi.org/10.1186/1687-4722-2013-18

DOI: https://doi.org/10.1186/1687-4722-2013-18
