
Comparative evaluation of interpolation methods for the directivity of musical instruments

Abstract

Measurements of the directivity of acoustic sound sources must be interpolated in almost all cases, either for spatial upsampling to higher-resolution representations of the data, for spatial resampling to another sampling grid, or for use in simulations of sound propagation. The performance of different interpolation techniques applied to sparsely sampled directivity measurements depends on the sampling grid used, but also on the radiation pattern of the sources themselves. Therefore, we evaluated three established approaches for interpolation from a low-resolution sampling grid using high-resolution measurements of a representative sample of musical instruments as a reference. The smallest global error on average occurs for thin plate pseudo-spline interpolation. For interpolation based on spherical harmonics (SH) decomposition, the SH order and the spatial sampling scheme applied have a strong and difficult-to-predict influence on the quality of the interpolation. The piece-wise linear, spherical triangular interpolation provides almost as good results as the first-order spline approach, albeit with on average 20 times higher computational effort. Therefore, for spatial interpolation of sparsely sampled directivity measurements of musical instruments, the thin plate pseudo-spline method applied to absolute-valued data is recommended, followed, if necessary, by a subsequent modeling of the phase.

1 Introduction

The first studies on the specific sound radiation characteristics of the human voice were conducted as early as the late 1930s [1], while systematic investigations of the directivity of musical instruments began 30 years later [2]. The radiation patterns of acoustic sound sources such as speakers, singers or musical instruments are commonly measured in anechoic environments with the source centered in an enclosing spherical microphone array.

For a comprehensive analysis of the directivity of 40 human speakers a nearly full spherical array was used, measured sequentially at 253 positions [3]. With respect to the singing voice, the radiation characteristics of 8 opera singers [4] and 15 trained singers [5] were determined in the horizontal plane, measured at 9 and 13 positions, respectively. A higher spatial resolution was used for measurements of a professional male singer using an adjustable semi-circular microphone array with 24 receivers [6]. For a recent review of research on the sound radiation of singing voices, see [7].

The directivities of eight musical instruments were measured using 64 microphones [8], while 22 microphones were used for a measurement of 22 instruments [9]. A recently generated database for 14 instruments and a speaker contains radiation patterns measured at 2522 positions on a sphere [10]. However, these data contain only (third-)octave band directivities, which limits their use for research purposes. The most comprehensive public database was collected for 41 modern and historic instruments measured with 32 microphones and contains single tones within the playable range of each instrument as well as directivities computed from the stationary parts of these tones [11, 12].

The spatial resolution of the available directivity measurements of acoustic sound sources thus depends on the technology used and differs greatly between studies. At the same time, many applications based on source directivities, such as room acoustical simulations, require either continuous or higher-resolution data. And even if the application uses a discrete spatial representation, the sampling grid required is usually different from that used in the measurement. The measured data must therefore be interpolated or resampled.

In the common polar representation, the measured values are usually linearly interpolated (cf. [9]) and occasionally also smoothed (cf. [5]). For 3D balloon plots showing the spherical radiation pattern for single frequencies or frequency bands, a common method is to decompose the sound pressure measurements into spherical harmonic (SH) basis functions followed by spatial oversampling on the surface of a sphere. The resampled grid is sometimes linearly interpolated at the end for visual display (cf. [8]).

The accuracy of room acoustical simulations was shown to strongly depend on the directivity of the sound source and thus also on the quality of the chosen interpolation method. The angular resolution of the directivity affects the simulated room impulse response (RIR) and several other room acoustic parameters up to an SH order of N=10 [13], even if the information contained in higher-order components may no longer be perceptually relevant, at least at larger distances from the source [14].

Not only the required resolution and the required sampling scheme, but also the required physical information inherent in the radiation pattern depends on the subsequent application. In wave-based simulations, such as the Boundary Element Method (BEM) and the Finite Element Method (FEM), it appears beneficial to have both a continuous magnitude and a continuous phase response of the source [15]. In simulations based on geometrical acoustics [16] that combine image sources and stochastic ray tracing to compute early reflections and the late reverberation [17, 18], a complex-valued description of the source directivity might be beneficial for the image source part, while the phase response is superfluous in an energy-histogram-based ray tracing approach.

Moreover, if directivities are calculated from the steady part of played tones, the phase spectrum may be subject to fluctuations, especially if the source in the center of the measurement system is not completely spatially fixed, causing a fluctuating excess phase that renders phase information practically useless (Fig. 1). To account for this, Zagala & Zotter [19] suggested iteratively optimizing the sign of the absolute magnitude response prior to SH interpolation to minimize the mean squared error (MSE) between the input and interpolated data. Ahrens & Bilbao [20] chose to make the magnitude response minimum phase to avoid excess phase and to obtain directivities that are more easily decomposed into SH impulse responses applicable to time-domain room acoustical simulations. However, neither study investigated the general suitability of SH for interpolating the magnitude response of the directivity.

Fig. 1

Excerpt of the spectrogram for the signal of a trumpet playing note A4 (440 Hz) at fortissimo, recorded with microphone 4 from [12], separated into amplitude in dB (top) and phase in degrees (bottom)

The question about whether and how to interpolate directivities with phase has been successfully addressed for head-related transfer functions. A variety of techniques either pre-align the entire impulse responses or a high-frequency portion thereof, or manipulate the corresponding phase to improve magnitude interpolation (cf. [21] and ([22], Chapter 4.11) for an overview). These methods either reconstruct the phase after interpolation or are justified by the irrelevance of inter-aural phase at high frequencies [23], and they rely on the relation of the frequency-domain directivity to a short, impulse-shaped time-domain representation. Hence, they do not apply to the directional spectra of musical instruments, as exemplified in Fig. 1.

The aim of this study is thus to evaluate the suitability of established methods for interpolating the magnitude response of sparsely sampled directivities of musical instruments. For this purpose, high-resolution measurements of four different musical instruments, whose technical construction and radiation characteristics cover a wide range of natural sound sources, were selected. The data were sub-sampled at 32 sparse grid points, interpolated to a high-resolution grid, and evaluated against the measured reference.

Note that in this paper we use the term “interpolation” for any kind of continuous approximation of discrete spatial radiation patterns, no matter whether the grid points are precisely reproduced by this approximation or not.

2 Background

A plethora of interpolation techniques for real-valued scattered data exist that make different assumptions about the distribution of the discrete set of known data points [24]. Because the quality of the interpolation depends on how well these assumptions are fulfilled, the performance of the interpolation methods considerably depends on the specific application. Simple techniques include discontinuous nearest-neighbor interpolation, as well as continuous linear and natural neighbor interpolation. More commonly used are advanced concepts such as deterministic inverse distance weighting or spline interpolation [25], as well as kriging [26], a stochastic technique from the field of geostatistics that minimizes the spatial variance between the value to be estimated and the ambient measurements. An essential tool for data fitting and interpolation in the field of computer-aided geometric design (CAGD) are barycentric coordinates defined on spherical triangles, which can be used to define the associated spherical Bernstein-Bézier polynomials for constructing piece-wise functional and parametric surfaces [27]. For acoustical sound sources, a decomposition into SH basis functions has become particularly popular [28–30], since it not only allows for a synthesis of the radiation pattern in virtual acoustic reality [31], but also for a decomposition of the room impulse response into SH-based spatial components [32]. In case of an order-limited directivity, SH interpolation is physically correct.

Based on the above review, we selected three interpolation approaches for the detailed evaluation. SH interpolation was included because of its widespread use in musical acoustics. Spline interpolation was chosen because it is superior to inverse distance weighting and kriging if only a small number of sample points are available [33, 34]. The spherical triangular interpolation technique corresponds to a piece-wise degree-1 barycentric spherical Bernstein-Bézier polynomial interpolation; in audio technology, it is commonly employed in three-dimensional vector-based amplitude panning (VBAP), as introduced by Pulkki [35], for robust virtual sound source positioning [22].

2.1 Spherical harmonics interpolation

If the sound pressure on the surface of a sphere is sampled with a finite number of microphones, spherical Fourier coefficients can be calculated from the measured values, which can then be used to estimate the sound pressure function on the entire measuring surface [36]. The limited number of sample points results in an order-limited sound pressure function on the measurement surface. Thus, the spherical function f(θ,ϕ) (θ = azimuth, ϕ = colatitude) is represented by a weighted sum of a finite set of orthogonal basis functions:

$$ f(\theta,\phi)=\sum^{N}_{n=0}\sum^{n}_{m=-n}f_{{nm}}Y^{m}_{n}(\theta,\phi), $$
(1)

where \(N\in \mathbb {N}\) indicates the spherical harmonics order and \(f_{nm}\) are the weights of the corresponding spherical harmonics

$$ Y^{m}_{n}(\theta,\phi)=\sqrt{\frac{2n+1}{4\pi} \frac{(n-m)!}{(n+m)!}}P^{m}_{n}(\cos\theta)e^{im\phi}, $$
(2)

where \(P^{m}_{n}(\cdot)\) are the associated Legendre functions, (·)! represents the factorial function, \(m\in \mathbb {Z}\) specifies the degree of the function, and \(n\in \mathbb {N}_{0}\) its order. Consequently, the Fourier coefficients \(f_{nm}\) completely describe the order-constrained function f(θ,ϕ) on the entire sphere, and their determination is thus sufficient for a correct SH interpolation.
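To make the notation concrete, the following Python sketch (a stand-in for the authors' Matlab tools; all names are chosen here for illustration) assembles the matrix of SH basis functions for a set of sampling directions with SciPy. Note that scipy.special.sph_harm places the azimuth in the complex exponential and the colatitude in the associated Legendre function, so the angles must be passed in that convention:

```python
# Minimal sketch: SH basis matrix Y of Eq. (4) for Q sampling directions.
# theta: azimuth in [0, 2*pi), phi: colatitude in [0, pi] (SciPy convention).
import numpy as np
from scipy.special import sph_harm

def sh_basis_matrix(theta, phi, N):
    """Return the Q x (N+1)^2 matrix of Y_n^m evaluated at the given angles."""
    theta, phi = np.atleast_1d(theta), np.atleast_1d(phi)
    cols = []
    for n in range(N + 1):
        for m in range(-n, n + 1):
            cols.append(sph_harm(m, n, theta, phi))  # complex SH of order n, degree m
    return np.column_stack(cols)
```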

By sampling the sound pressure function f(θ,ϕ) with a Q channel spherical microphone array, the samples pq=f(θq,ϕq) are given at the positions (θq,ϕq) of the respective microphones for \(q\in \{1,2,...,Q\}=\mathbb {N}_{Q}\). In matrix form Eq. 1 can be written as

$$ \mathbf{f} =\mathbf{Y} \mathbf{f}_{{nm}}, $$
(3)

where the matrix Y of dimensions Q×(N+1)2 is given by

$$ \mathbf{Y} = \left[\begin{array}{cccc} Y_{0}^{0}(\theta_{1},\phi_{1}) & Y_{1}^{-1}(\theta_{1},\phi_{1}) & \cdots & Y_{N}^{N}(\theta_{1},\phi_{1})\\ Y_{0}^{0}(\theta_{2},\phi_{2}) & Y_{1}^{-1}(\theta_{2},\phi_{2}) & \cdots & Y_{N}^{N}(\theta_{2},\phi_{2})\\ \vdots & \vdots & \ddots & \vdots\\ Y_{0}^{0}(\theta_{Q},\phi_{Q}) & Y_{1}^{-1}(\theta_{Q},\phi_{Q}) & \cdots & Y_{N}^{N}(\theta_{Q},\phi_{Q})\\ \end{array}\right] $$
(4)

and the vector \(\mathbf {f} = [p_{1},\dots,p_{Q}]^{T}\) contains the Q sound pressure measurements at position (θq,ϕq) for \(q \in \mathbb {N}_{Q}\).

For the rare scenario in which the number of microphones Q matches the number of spherical harmonics coefficients, i.e., Q=(N+1)², and under the assumption of perfectly distributed measuring points [37] and thus a well-conditioned full-rank matrix Y, Eq. 3 can be solved with the inverse of the matrix Y:

$$ \mathbf{f}_{{nm}} =\mathbf{Y}^{-1}\mathbf{f}. $$
(5)

For Q>(N+1)², an over-determined system of linear equations results, which can be solved through a best fit in the least-squares sense by taking the Moore-Penrose inverse of Y and thus seeking a solution \(\mathbf{f}_{nm}\) that minimizes the energy of the error:

$$ \min_{\mathbf{f}_{{nm}}} \|\mathbf{f} - \mathbf{Y} \mathbf{f}_{{nm}}\|^{2} \quad \Longrightarrow \quad \mathbf{f}_{{nm}} =\mathbf{Y}^{\dagger} \mathbf{f}, $$
(6)

with \(\mathbf{Y}^{\dagger}=(\mathbf{Y}^{H}\mathbf{Y})^{-1}\mathbf{Y}^{H}\) and \(\|\cdot\|\) denoting the Euclidean norm. For functions that are not order-limited, errors occur due to spatial aliasing, such that \(\mathbf{f}\neq\mathbf{Y}\mathbf{f}_{nm}\) and consequently f(θq,ϕq)≠pq [38].

For Q<(N+1)², the system of equations is under-determined and Eq. 3 provides infinitely many solutions. In this case, the Moore-Penrose inverse of the matrix Y seeks a solution \(\mathbf{f}_{nm}\) with minimum Euclidean norm, i.e., with minimal wave-spectral power \(\|\mathbf{f}_{nm}\|^{2}\) ([29], p. 79):

$$ \min_{\mathbf{f}_{{nm}}} \| \mathbf{f}_{{nm}} \|^{2} \quad \textrm{s.t.} \quad \mathbf{f} = \mathbf{Y} \mathbf{f}_{{nm}} \quad \Longrightarrow \quad \mathbf{f}_{{nm}} =\mathbf{Y}^{\dagger} \mathbf{f}. $$
(7)

To interpolate samples of the sound pressure measurements on a sphere, the calculated weights of the spherical harmonics can be used in the inverse spherical Fourier transform from Eq. 1 and arbitrary points between the samples can be estimated. The values at the sampling positions (θq,ϕq) for \(q \in \mathbb {N}_{Q}\) can be reproduced exactly if the order N is sufficiently high. In the case of under-determined systems, however, notches occur between the sample points due to the chosen constraint of minimum wave-spectral power and therefore even order-limited functions can no longer be represented accurately.
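As a sketch of this procedure (reusing the hypothetical sh_basis_matrix() helper from above; all other names are placeholders), the coefficients can be obtained with the Moore-Penrose pseudoinverse, which covers the determined, over-determined, and under-determined cases of Eqs. (5) to (7) alike, and then re-expanded on a dense target grid via Eq. (1):

```python
# Sketch: SH interpolation of real-valued magnitudes from a sparse grid
# (theta_q, phi_q, p_q) to a dense grid (theta_out, phi_out).
import numpy as np

def sh_interpolate(theta_q, phi_q, p_q, theta_out, phi_out, N):
    Y_sparse = sh_basis_matrix(theta_q, phi_q, N)     # Q x (N+1)^2, Eq. (4)
    f_nm = np.linalg.pinv(Y_sparse) @ p_q             # Eqs. (5)-(7) in one call
    Y_dense = sh_basis_matrix(theta_out, phi_out, N)  # R x (N+1)^2
    # for real-valued input the reconstruction is real up to numerical residue
    return np.real(Y_dense @ f_nm)
```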

An indication of the numerical accuracy of SH interpolation based on matrix inversion (Eq. 5) is the condition number κ of the matrix Y. A large condition number indicates that small changes in the measured sound pressures f could lead to large changes in the Fourier coefficient vector \(\mathbf{f}_{nm}\). The solution of the linear system of equations is thus highly sensitive to errors and noise in the input data. While κ=1 is ideal, a system with κ>3.5 is considered ill-conditioned [39]. The condition number depends on the chosen spatial sampling scheme and the SH order N.
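A quick check of this conditioning, again reusing the sh_basis_matrix() sketch from above, could look as follows:

```python
# Sketch: condition number of Y for a given sampling grid and SH order N.
# Values close to 1 indicate a well-conditioned sampling; large values signal
# that this order should not be used with this grid.
import numpy as np

def sh_condition_number(theta_q, phi_q, N):
    return np.linalg.cond(sh_basis_matrix(theta_q, phi_q, N))
```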

2.2 Thin plate pseudo-spline interpolation

The thin plate pseudo-spline solution [40, 41] allows the regularized interpolation of sparsely distributed measurements on the sphere with closed-form expressions that make this approach well suited for numerical computation. The aim is to find a smooth function f(θ,ϕ), where the values for f(θq,ϕq) should be as close as possible to the measured values pq while containing minimum bending energy on the surface of the sphere S. An interpolating (A) or smoothing (B) thin plate pseudo-spline can therefore be obtained by seeking the solution to one of the following problems:

$$ \min_{f} J_{k}(f) \quad \text{s.t.} \quad f(\theta_{q},\phi_{q}) = p_{q} $$
(8)

for (A) or with the option of regularization

$$ \min_{f} \frac{1}{Q}\sum_{q=1}^{Q}(p_{q}-f(\theta_{q},\phi_{q}))^{2} + \lambda J_{k}(f) $$
(9)

for (B), where λ≥0 denotes the tuning parameter and Jk(f) is defined by

$$ J_{k}(f) = \sum_{n=1 }^{\infty}\sum_{m=-n}^{n} \frac{\check f^{2}_{{nm}}}{\xi_{{nm}}}, $$
(10)

with

$$ \check f_{{nm}}=\int_{S} f(\theta,\phi)Y_{n}^{m}(\theta,\phi) d\theta d\phi $$
(11)

and

$$ \xi_{{nm}}=\left[ (n+\frac{1}{2})(n+1)(n+2)\dotsm(n+2k-1) \right]^{-1}. $$
(12)

A solution of the two problems given by Eqs. 8 and 9 is obtained with

$$ f_{Q,k,\lambda}(\theta,\phi)=\sum_{q=1}^{Q}c_{q} R(\theta,\phi;\theta_{q},\phi_{q})+d. $$
(13)

R(θ,ϕ;θq,ϕq) is the reproducing kernel for the Hilbert space \(\mathscr {H}_{k}^{0} (S)\) with norm \(J_{k}^{1/2} (\cdot)\):

$$ \begin{aligned} R(\theta,\phi;\theta_{q},\phi_{q})&=\sum_{n=1}^{\infty}\sum_{m=-n}^{n}\xi_{{nm}}Y_{n}^{m}(\theta,\phi)Y_{n}^{m}(\theta_{q},\phi_{q})\\ &=\frac{1}{2\pi}\sum_{n=1}^{\infty} \frac{1}{(n+1)(n+2) \dotsm (n+2k-1)}P_{n}(z), \end{aligned} $$
(14)

where \(P_{n}\) are the Legendre polynomials and z denotes the cosine of the spherical angle γ between the two arguments of the kernel function, with

$$ z = \cos\gamma = \sin(\phi) \sin(\phi_{q}) + \cos(\phi) \cos(\phi_{q}) \cos(\theta-\theta_{q}). $$
(15)

The spline order \(M \in \mathbb {N}\) determines the differentiability of the solution in Eq. 13. We define the spline order as M=2k−2; the corresponding splines are continuous up to the (M−1)th derivative and are therefore called \(C^{M-1}\) smooth.

A closed-form expression for the reproducing kernel R(θ,ϕ;θq,ϕq), suitable for numerical computation, is given by

$$ R(\theta,\phi;\theta_{q},\phi_{q}) = \frac{1}{2\pi}\left[ \frac{1}{(2k-2)!}q_{2k-2}(z)-\frac{1}{(2k-1)!} \right], $$
(16)

with

$$ q_{2k-2}(z)=\int_{0}^{1}(1-h)^{2k-2}(1-2hz+h^{2})^{-1/2} dh $$
(17)

and 2k−2=M.

A recursive evaluation of \(q_{2k-2}(z)\) for \(k = \left \{\frac {3}{2}, 2,\frac {5}{2},...,6\right \}\) can be found in ([40, 41], Tab. 1), as well as the determination of the coefficients c and d from Eq. 13 in matrix form (see Footnote 1):

$$ \left[\begin{array}{c} \mathbf{c} \\ d \end{array}\right] = \left[\begin{array}{cc} \mathbf{R}_{Q} + Q\lambda \mathbf{I}\ & \mathbf{T}\\ \mathbf{T}^{T}& 0 \end{array}\right]^{-1} \left[\begin{array}{c} \mathbf{f} \\ 0 \end{array}\right], $$
(18)

where \(\mathbf{R}_{Q}\) is the Q×Q matrix with element i,j defined as \((\mathbf{R}_{Q})_{i,j}=R(\theta_{i},\phi_{i};\theta_{j},\phi_{j})\), \(\mathbf{I}\) is the Q×Q identity matrix, the vector \(\mathbf {f} = [p_{1},\dots,p_{Q}]^{T}\) contains the Q sound pressure measurements at the positions (θq,ϕq) for \(q \in \mathbb {N}_{Q}\), and \(\mathbf {T} = [1,\dots,1]^{T}\).

If the measured values are noisy, it can be advantageous to regularize the interpolation in order to suppress outliers; a tuning parameter λ>0 will smooth the estimated function f on the surface of the sphere. Due to the low-noise measurement data used for this study, smoothing of the estimation function did not improve the quality of the interpolation (cf. Section 5); therefore, the thin plate pseudo-spline interpolation was performed without regularization (λ=0).
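The following Python sketch illustrates the interpolating case (λ=0) under the stated equations; it evaluates the kernel integral of Eq. (17) numerically rather than via the closed-form recursions of [40, 41], and all function names are chosen here for illustration only:

```python
# Sketch of the interpolating thin plate pseudo-spline, Eqs. (13) and (15)-(18).
import numpy as np
from scipy.integrate import quad
from math import factorial

def kernel_R(z, M):
    """Reproducing kernel of Eq. (16) for spline order M = 2k - 2."""
    z = float(np.clip(z, -1.0, 1.0))
    q, _ = quad(lambda h: (1 - h)**M / np.sqrt(1 - 2*h*z + h*h), 0.0, 1.0)  # Eq. (17)
    return (q / factorial(M) - 1.0 / factorial(M + 1)) / (2.0 * np.pi)

def cos_angle(theta1, phi1, theta2, phi2):
    """Cosine of the spherical angle between two directions, Eq. (15)."""
    return (np.sin(phi1) * np.sin(phi2)
            + np.cos(phi1) * np.cos(phi2) * np.cos(theta1 - theta2))

def spline_fit(theta_q, phi_q, p_q, M=1, lam=0.0):
    """Solve the linear system of Eq. (18) for the coefficients c and d."""
    Q = len(p_q)
    R = np.array([[kernel_R(cos_angle(theta_q[i], phi_q[i],
                                      theta_q[j], phi_q[j]), M)
                   for j in range(Q)] for i in range(Q)])
    A = np.block([[R + Q * lam * np.eye(Q), np.ones((Q, 1))],
                  [np.ones((1, Q)), np.zeros((1, 1))]])
    sol = np.linalg.solve(A, np.append(np.asarray(p_q, float), 0.0))
    return sol[:Q], sol[Q]                     # c (length Q) and d (scalar)

def spline_eval(theta, phi, theta_q, phi_q, c, d, M=1):
    """Evaluate the interpolant of Eq. (13) at one target direction."""
    r = np.array([kernel_R(cos_angle(theta, phi, tq, pq), M)
                  for tq, pq in zip(theta_q, phi_q)])
    return float(r @ c + d)
```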

2.3 Piece-wise linear, spherical triangular interpolation

The entire set of Q microphone positions (θq,ϕq) can be equivalently expressed as a 3×Q matrix containing its three-dimensional unit direction vectors

$$ \mathbf{U}= \left[\begin{array}{c} \mathbf{u}_{1},\dots,\mathbf{u}_{Q} \end{array}\right],\qquad \mathbf{u}_{q}= \left[\begin{array}{c} \cos\phi_{q}\,\sin\theta_{q}\\ \sin\phi_{q}\,\sin\theta_{q}\\ \cos\theta_{q} \end{array}\right], $$
(19)

Using the Quickhull algorithm [42], vertex index triplets \(\mathbf{v}_{l}=[v_{1l},v_{2l},v_{3l}]\) are obtained that describe a set of triangular facets spanning the convex hull of the vertices stored in U.

Any arbitrary unit direction vector u can be represented by the non-negative spherical barycentric/area coordinates \(\mathbf{g}=[g_{1},g_{2},g_{3}]^{T}\) of the vertices \(\mathbf{U}_{l}\) of the lth triangle,

$$\begin{array}{*{20}l} \mathbf{u}&=\mathbf{U}_{l}\;\mathbf{g}, & \mathbf{U}_{l}&= [\mathbf{u}_{v_{1l}},\mathbf{u}_{v_{2l}},\mathbf{u}_{v_{3l}}], \end{array} $$
(20)
$$\begin{array}{*{20}l} \mathbf{g}&=\mathbf{U}_{l}^{-1}\,\mathbf{u}, \end{array} $$
(21)

where \(g_{i}\geq 0\) and \(\sum _{i} g_{i}\geq 1\). Note that the required all-positive spherical barycentric coordinates are only found if a suitable spherical triangle l is selected from the convex hull, which will then contain u. While the spherical barycentric coordinates g reproduce the direction u, spherical triangular interpolation uses the corresponding planar barycentric coordinates \(\tilde g_{i}=\frac {g_{i}}{\sum _{j} g_{j}}\) [27] to linearly interpolate the values measured at the microphones of the triangle l by their weighted average,

$$\begin{array}{*{20}l} f(\mathbf{u})=\tilde g_{1}\,p_{v_{1l}}+\tilde g_{2}\, p_{v_{2l}}+\tilde g_{3}\,p_{v_{3l}}. \end{array} $$
(22)

At the boundaries, this interpolation exactly reproduces the values at the triangle vertices and linearly interpolates the value pairs along any edge of the lth triangle. Because neighboring triangles share edges and vertices, interpolation across triangles is continuous. There is no condition for the first-order derivatives, therefore this interpolation is C0 smooth.
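A compact sketch of this procedure, using SciPy's Quickhull implementation (names chosen here for illustration), could look as follows; in practice the hull would be computed once and reused for all target directions:

```python
# Sketch of piece-wise linear spherical triangular interpolation, Eqs. (19)-(22).
import numpy as np
from scipy.spatial import ConvexHull

def tri_interpolate(U, p, u):
    """U: 3 x Q matrix of unit direction vectors (Eq. 19), p: Q measured values,
    u: target unit direction vector. Returns the interpolated value, Eq. (22)."""
    p = np.asarray(p, float)
    hull = ConvexHull(U.T)                  # Quickhull triangulation of the sphere
    for tri in hull.simplices:              # vertex index triplets v_l
        U_l = U[:, tri]                     # 3 x 3 matrix of triangle vertices
        g = np.linalg.solve(U_l, u)         # spherical barycentric coordinates, Eq. (21)
        if np.all(g >= -1e-12):             # u lies inside this spherical triangle
            g_tilde = g / g.sum()           # planar barycentric weights
            return float(g_tilde @ p[tri])  # weighted average of the vertex values
    raise ValueError("no containing spherical triangle found")
```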

2.4 Robustness and bias

Robustness is often measured by observing the range of amplifications that stochastic perturbations linearly superimposed on the input data can undergo. Due to linearity, it is insightful and common practice to observe the changes that uncorrelated Gaussian noise, used as the only input \(\mathbf {f}=\mathcal {N}\), undergoes; we adopt this approach to analyze the robustness of the three above-mentioned interpolation methods. We consider the 32 nodes of a pentakis dodecahedron as directional sampling for the input data f, which is interpolated using the 2520 nodes of a Chebyshev-type quadrature [43], yielding the 2520 output values \(\tilde {\mathbf {f}}\).

Figure 2 shows a statistical analysis of the two ratios \(\frac {\text {RMS}\{\tilde {\mathbf {f}}\}}{\text {RMS}\{\mathbf {f}\}}\) and \(\frac {\max \{\tilde {\mathbf {f}}\}}{\max \{\mathbf {f}\}}\), analyzing these ratios for 1000 independent instances of a random input vector f.
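The test can be reproduced with a simple Monte Carlo loop; in the following sketch, interpolate stands for any of the three methods configured for the 32-node input grid and the 2520-node output grid, and all other names are placeholders:

```python
# Sketch: RMS and maximum ratios between interpolated output and Gaussian input.
import numpy as np

def robustness_ratios(interpolate, n_trials=1000, seed=1):
    """interpolate: callable mapping 32 input values to the dense output values."""
    rng = np.random.default_rng(seed)
    rms_ratio, max_ratio = [], []
    for _ in range(n_trials):
        f_in = rng.standard_normal(32)       # noise at the 32 sparse nodes
        f_out = interpolate(f_in)            # values at the 2520 output nodes
        rms_ratio.append(np.sqrt(np.mean(f_out**2) / np.mean(f_in**2)))
        max_ratio.append(np.max(np.abs(f_out)) / np.max(np.abs(f_in)))
    return 20 * np.log10(rms_ratio), 20 * np.log10(max_ratio)   # ratios in dB
```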

Fig. 2

Distribution of the relative changes in root mean square (RMS) in dB (top) and maximum values (bottom) when interpolating Gaussian input noise with the SH, spline, and triangular interpolation methods, for 1000 independent instances of a random input vector f. On each box, the center indicates the median, and the bottom and top edge of the box show the 25th and 75th percentiles, respectively

Regarding changes between RMS values from input to output, we observe that the SH methods for N={7,8}, Spline for all M={1,2,3}, and TrI produce a bias towards smaller output RMS values of 2 dB or more for stochastic input. For TrI, it is understandable that within any triangle, three uncorrelated inputs get averaged linearly, so the output RMS gets reduced by stochastic instead of additive interference. For SH interpolation with N=4, this reduction only happens sparsely. The implicit minimization of the Euclidean norm for N≥5 minimizes the output RMS value and therefore causes the observable bias towards lower RMS. This minimization might be optimistically regarded as an increase in robustness between the sampling nodes for N≤6, but it also implies a general decrease in magnitude there, a bias causing dips between the observed samples when interpolating omnidirectional directivities. Spline methods process constant inputs separately; therefore, it is reasonable to assume that the observed reduction in output RMS rather reflects increased robustness to stochastic perturbation. For the chosen spatial sampling scheme, all methods appear robust enough to avoid enlarged output RMS values.

As a more critical test, SH-based interpolation exhibits the largest differences between maxima in the interpolated output compared to those in the input, with around ±3 dB for N={4,5,6,7}, while the settings SH 8 and Spline 3 behave reasonably. By construction, TrI as a linear interpolation precisely avoids enlarged output maxima, and the same benefit is observed for Spline 1.

3 Method

3.1 High-resolution reference directivities

For an objective evaluation of the estimation accuracy of a spatial interpolation method based on finite samples on a measurement surface, a high-resolution reference is required. This could theoretically be an analytical function sampled at the evaluation points. However, since the quality of the interpolation also depends significantly on the properties of the pattern to be interpolated, a diverse sample of musical instruments was chosen for which high-resolution measurements were made. The sample included a trombone, a violin, a flue pipe and a bassoon, and thus different types of sound production, different physical principles of sound radiation, and different sizes and geometries of the radiator. To achieve high reliability, the excitation of the instruments was automated, the instruments were rotated by a computer-controlled 3D loudspeaker measurement system ELF, and the sound pressure measurements were obtained on a dense spatial sampling grid. The measurements were conducted in the anechoic chamber of the OWL University of Applied Sciences and Arts in Lemgo. A 1/2" free-field equalized BK4190 cartridge was used as measurement microphone, placed 2 m from the sound source.

The trombone, a member of the brass family, is a relatively small and straightforward sound source, with the bell being the only port from which sound energy is emitted. The directional dependence of sound radiation of brass instruments is largely rotationally symmetric with respect to the center axis of the bell, which is also the main radiation direction. With increasing frequency, the main lobe of the directivity constricts, resulting in a more directional sound radiation. To determine the directivity of the trombone, the shortened instrument (without slide) was artificially excited with a sine sweep signal of order 16 (2¹⁶ samples ≈ 1.4 s at 48 kHz sampling rate), emitted by a horn driver directly attached to the small end of the bell [44]. An equal angle sampling grid with an angular resolution of 5° in azimuth and colatitude was chosen, and thus 2522 unique positions were measured.

The sound radiation of a violin, a string instrument, is partially determined by the parallel plates of the instrument's body, which vibrate locally with different amplitudes and phases. Particularly at low frequencies, the sound is additionally radiated through the characteristic open f-holes, which form a Helmholtz resonator in connection with the air cavity of the body. With a source extension of about 40 cm, it is a medium-sized instrument. However, the two vibrating plates with a different local phasing cause interferences and therefore a distinct directional characteristic in the far field. The directivity of a violin was measured exemplarily for the open A string (f0=440 Hz), applying a repeatable bowing machine for excitation [45] and utilizing an equal angle grid with an angular resolution of 6° in azimuth and colatitude, yielding the sound pressure at 1742 positions on a sphere.

The sound of an organ flue pipe is radiated through the mouth as well as through the open end of the resonator. The two spatially separated partial sound sources thus produce frequency-dependent directional characteristics, which become more complex with increasing frequency, following the characteristics of a dipole and corresponding to the sound radiation behavior of a flute [2]. To measure the directivity of the flue pipe, a horn driver was attached directly to its toe hole and the pipe was artificially excited, again with a sine sweep of order 16. An equal angle sampling grid with an angular resolution of 5° was chosen, again yielding 2522 positions. The pipe used has a length of 51.3 cm, a diameter of 4.8 cm, and a fundamental frequency of f0=280 Hz [46].

The bassoon, a woodwind instrument of the double reed family, has a bell and numerous extended tone holes distributed irregularly across a long, bent corpus. The openings act as secondary sound sources depending on the fingering. The superposition of their radiated sound fields can cause a relatively complex directivity in the far field. The sound radiation of a bassoon, fingered for the note Eb3 (f0=156 Hz), was measured on an equal angle sampling grid with a horizontal and vertical angular resolution of 5°, applying a repeatable artificial excitation [47]. Accordingly, the data was acquired at 2522 positions on a sphere.

3.2 Interpolation of microphone array measurements

In the first step, the high-resolution reference data was sub-sampled at the 32 microphone positions used in the Berlin-Aachen database of musical instruments [12]. The 32 sampling points are located at the vertices of a pentakis dodecahedron (Fig. 3) and were chosen as one possible sampling scheme to evaluate the interpolation techniques under realistic conditions. Except at the two poles, the 32 positions of the sparse grid are not contained in the reference grids. To account for this, the final sparse grid was generated from the closest positions of the high-resolution reference grids. The resulting grid is called the sample grid in the following and diverges from the ideal sparse grid by 1.7°/1.0° on average, with a maximum deviation of 2.4°/2.6°.
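The matching step can be sketched as a nearest-neighbor search on the unit sphere (with hypothetical variable names; directions are assumed to be given as unit-vector columns):

```python
# Sketch: match each ideal sparse direction to the closest reference grid point
# and report the resulting angular deviation in degrees.
import numpy as np

def match_to_reference(ideal_dirs, reference_dirs):
    """ideal_dirs: 3 x 32 unit vectors, reference_dirs: 3 x R unit vectors."""
    cos_gamma = np.clip(ideal_dirs.T @ reference_dirs, -1.0, 1.0)  # great-circle cosines
    idx = np.argmax(cos_gamma, axis=1)                             # nearest reference point
    deviation_deg = np.degrees(np.arccos(cos_gamma[np.arange(len(idx)), idx]))
    return idx, deviation_deg           # sample grid indices and angular deviations
```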

Fig. 3

The high-resolution reference of the bassoon directivity for the second overtone at f2=468 Hz is shown with the 32 sampling points in the upper left. The triangular interpolation (TrI) and the spline interpolation estimates for M={1,2} are shown at the top, and the results of the spherical harmonics interpolation (SH) for N={4,5,...,8} at the bottom. The difference of sound pressure level in dB between reference and interpolation is shown over azimuth and colatitude; the gray crosses indicate the sampling points. The result for the spline interpolation with M=3 is not included in this figure, but is shown in detail in Fig. 6

For the SH orders examined in this paper, the condition numbers of the ideal sparse grid are small due to the equal distribution of the sampling points (κ=1.0 for N≤4; κ<2.0 for N≥6). The large condition number of κ=1.24×1016 for N=5, however, indicates that this SH order should not be used for the selected grid. The condition numbers increased only slightly due to using the nearest neighbors for the sample grid, i.e., κ<2.0 for N≤4 and N≥6 still holds.

In a second step, SH interpolation for orders N={1,2,...,8}, the closed-form spherical spline interpolation for orders M={1,2,3} (with M=2k−2, cf. Eq. 16), and the spherical triangular interpolation were realized with AKtools using the functions AKsht(), AKisht(), AKsphSplineInterp(), and AKsphTriInterp() [48].

The interpolation functions were sampled at the corresponding high-resolution reference grid points, allowing a direct comparison between the interpolation result and the reference, i.e., the measured directivity.

The interpolation was done on the magnitude responses, i.e., the phase information was neglected, for two reasons. First, interpolation of the phase spectrum is very susceptible to noisy data, and errors can occur particularly at high frequencies [49]. Second, natural sound sources, in contrast to artificial sound sources, do not have a stationary phase response, either for a certain frequency or for a certain radiation direction [20], which could be used for room acoustics simulation without further ado. Figure 1 shows a spectrogram for the note A4 (440 Hz) played by a trumpet (recording taken from [12]). While the amplitudes of the fundamental and the overtones are almost constant during the observed time window, the phase of the trumpet signal fluctuates strongly and cannot be determined unambiguously. A proposal on how absolute-valued interpolated directivity patterns can be used for wave-based simulation methods is presented in Section 5.

3.3 Global error measure Ψ

For the evaluation of the interpolation algorithms, a global single-number error measure is proposed. To describe the mathematical accuracy of the interpolation, the difference of the sound pressure levels of the interpolated directivity and the reference directivity, averaged over all directions, could be used, in the way Arend et al. have done to describe the accuracy of the interpolation of head-related transfer functions (HRTFs) [21]. However, to describe the acoustic effect of an erroneous excitation of the sound field caused by an incorrect directivity, it seems important to consider the sound power radiated (incorrectly or correctly) and not level differences, which correspond to larger differences in power at high levels than at low levels. As a physically meaningful measure, we therefore propose to calculate the sound power erroneously radiated with respect to the direction due to the interpolation error and to relate this to the total radiated sound power. To obtain such a measure, the relative error in radiated sound power Ψ can be calculated as the summed area-weighted relative differences of the squared sound pressures of the interpolation \({\hat {p}}^{2}\left (\theta _{r},\phi _{r}\right)\) and the reference p²(θr,ϕr) over the R directions (θr,ϕr) of the reference grid for \(r \in \{1,2,...,R\}\), related to the summed area-weighted squared sound pressure of the reference:

$$ \Psi=\frac{\sum_{r=1}^{R}\left|{\hat{p}}^{2}\left(\theta_{r},\phi_{r}\right)-p^{2}\left(\theta_{r},\phi_{r}\right)\right| w_{a}^{\prime}\left(\theta_{r},\phi_{r}\right)}{\sum_{r=1}^{R} p^{2}\left(\theta_{r},\phi_{r}\right) w_{a}^{\prime}\left(\theta_{r},\phi_{r}\right)} $$
(23)

where \(w_{a}^{\prime }\) are the normalized area weights of the reference grid with

$$ \sum_{r=1}^{R}{w_{a}^{\prime}\left(\theta_{r},\phi_{r}\right)}=1. $$
(24)

Note that an error of Ψ=0.5 states that the interpolated directivity emits 50% of the sound power in incorrect directions, whereas a value of Ψ=0 shows that the radiation pattern of the interpolated directivity fully matches the reference.
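A direct transcription of Eqs. (23) and (24) (with placeholder names) is:

```python
# Sketch of the global error measure Psi: relative amount of sound power
# radiated into wrong directions, using normalized area weights of the
# reference grid.
import numpy as np

def global_error_psi(p_interp, p_ref, area_weights):
    w = area_weights / np.sum(area_weights)           # enforce Eq. (24)
    num = np.sum(np.abs(p_interp**2 - p_ref**2) * w)  # erroneously radiated power
    den = np.sum(p_ref**2 * w)                        # total reference power
    return num / den                                  # Eq. (23)
```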

4 Results

Since we investigated tonal instruments, we restricted the analysis to fundamentals and overtones. In addition, we discarded tones whose spatially averaged energy was more than 40 dB below that of the tone with the maximum average energy, separately for each instrument and played note. Thus, only the radiation pattern of the fundamental at f0=280 Hz and the first five overtones f1,...,f5 of the flue pipe were examined. For the violin, the fundamental f0=440 Hz and the first nine overtones f1,...,f9 were evaluated, and for the bassoon, the fundamental at f0=156 Hz and the first five overtones f1,...,f5. The trombone was a special case exhibiting an unnatural overtone series due to the shortened instrument and the artificial excitation with a horn driver. Therefore, only the first five resonance frequencies at 966 Hz, 2280 Hz, 4882 Hz, 7212 Hz, and 9179 Hz are considered in the following.

The results of the spatial interpolations of the bassoon's directivity for the second overtone at f2=468 Hz are shown in Fig. 3, along with the related global error measures Ψ. Distinct differences emerge between the interpolation methods. The spherical triangular interpolation approach (TrI) shows the lowest reproduction error for this directivity with Ψ=0.36, with the two distinctive indentations in the θ=90°, ϕ=90° and θ=270°, ϕ=90° radiation directions estimated quite accurately despite the small number of sample points.

For the spline interpolation, the global error Ψ≈0.4 is almost constant across the orders M. Differences between the orders can be seen mainly in the θ=90°, ϕ=90° radiation direction, where the notch is well reproduced by the spline interpolation of order M=1, however at the expense of larger errors in the transition areas between high and low radiation, which are better interpolated with order M=2.

For SH interpolation of orders N={4,5}, the largest errors occur in the notch regions at θ=90°, ϕ=90° and θ=270°, ϕ=90°. As the SH order increases, these errors disappear, but indentations between the sample points become visible, most pronounced at N=8. The lowest global error is obtained with order N=7 and Ψ=0.37, whereas the maximum error of Ψ=0.49 occurs when interpolating with the theoretically optimal SH order of N=4, considering the 32 points of the sample grid.

For other radiation patterns, such as the bassoon's third overtone at f3=624 Hz (Fig. 4), the interpolation methods behave somewhat differently. At this slightly higher frequency, the spline interpolation of order M=1 is superior with Ψ=0.49. SH interpolations of orders N={6,7} again perform better than the reproduction with N={4,5} as well as N=8, where indentations between the sample points again become visible. While for the second overtone triangular interpolation was superior to all other methods investigated, it now performs slightly worse than spline interpolation.

Fig. 4

The high-resolution reference of the bassoon directivity for the third overtone at f3=624 Hz is shown with the 32 sampling points in the upper left. The triangular interpolation (TrI) and the spline interpolation estimates for M={1,2} are shown at the top, and the results of the spherical harmonics interpolation (SH) for N={4,5,...,8} at the bottom. The difference of sound pressure level in dB between reference and interpolation is shown over azimuth and colatitude; the gray crosses indicate the sampling points. The result for the spline interpolation with M=3 is not included in this figure, but is shown in Fig. 6

An overview of global errors for all examined orders and musical instruments is shown in Fig. 5, where the individual error distributions contain between 5 and 10 partials, depending on the instrument. To assess the benefit of using musical instrument directivities from small sample grids, Ψ was additionally calculated for the trivial assumption of an omnidirectional directivity using the mean radiated energy over all directions.

Fig. 5

Distribution of the global error measure Ψ for the examined interpolation methods triangular (T), spline (Spl, for M={1,2,3}), spherical harmonics (SH, for N={1,2,...,8}), and ordinary omnidirectional assumptions of the directivity (O) for each instrument individually and across all 27 analyzed partial tones. On each box, the central mark indicates the median, and the bottom and top edges of the box show the 25th and 75th percentiles, respectively. The whiskers extend from minimum to maximum

Taking the median of the distribution over all 27 analyzed partial tones of the examined instruments as a measure of the quality of the interpolation methods, the spline approach with order M=1 shows the best result, closely followed by the triangular interpolation and the spline interpolations with orders M={2,3}.

The largest reproduction errors occur at the lowest and the highest examined SH order, i.e., at N={1,8}, whereas the best SH interpolation is achieved with order N=7.

A detailed list of results is shown in Table 1.

Table 1 Distribution of the global error measure Ψ for the examined interpolation methods triangular (TrI), spline (for M={1,2,3}), spherical harmonics (SH, for N={1,2,...,8}), and ordinary omnidirectional assumptions of the directivity (Omni) across all 27 analyzed partial tones

5 Discussion

The directivities of four different tonal musical instruments were sub-sampled at 32 almost equally distributed points and interpolated using spherical triangulation, spherical splines and spherical harmonics. A global error measure was proposed to assess the quality of the various interpolation techniques.

5.1 Comparison of interpolation algorithms

It is obvious that the absolute quality of all interpolation methods strongly depends on the acoustic size of the sound source and the resulting complexity of the radiation pattern. Acoustically small sound sources like the trombone bell can be interpolated relatively precisely already with 32 measuring points. With a median value for the difference between interpolation and reference of Ψ<0.3 for the spline and the triangulation approach and for the SH interpolation with N={5,6,7}, more than 70% of the sound power is radiated in exactly the right direction on average when using the interpolated directivities.

For extended sources with more complex radiation patterns, however, the far-field directivities can be increasingly poorly estimated with a sparse sampling grid.

Since the spatial frequencies of acoustic sound sources can generally be assumed not to be limited, SH decomposition based on a finite number of microphones on the surface of a sphere allows a correct reproduction of this function only up to a cutoff frequency given by kr<N, where k denotes the wave number and r the radius of the microphone array. Above this frequency, errors occur due to spatial aliasing [50] (depending on the SH order of the function and the sampling grid) and series truncation [39]. The almost equally distributed Q=32 points of the applied sampling grid support a maximum SH order of N=4 to solve the linear system of equations through a best fit in the least-squares sense (Eq. 6). Taking into account the measuring distance of 2 m between the microphone and the sound source, the reference directivity can be correctly reconstructed only up to a cutoff frequency of fc≈108 Hz without aliasing and truncation errors. All partial tones investigated in this study, and all musical frequency components in general, are above this frequency. Closest to this frequency is the fundamental of the bassoon at f0=156 Hz, which can be reconstructed most accurately with SH order N=4 and a global error of Ψ=0.03. In this case, the interpolation error increases with the SH order because the system of equations becomes increasingly under-determined. As a consequence, the minimization of the wave-spectral power associated with the Moore-Penrose inversion of the SH transformation (Eq. 7) entails an increasingly poor interpolation between the sampling points.
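For reference, the quoted cutoff follows from kr < N with k = 2πf/c; a small sketch of this back-of-the-envelope calculation (assuming a speed of sound of roughly 340 m/s) is:

```python
# Sketch: aliasing-free cutoff estimate f_c = N * c / (2 * pi * r).
from math import pi

c = 340.0          # assumed speed of sound in m/s
r = 2.0            # measurement distance in m
N = 4              # maximum supported SH order for the 32-point grid
f_c = N * c / (2 * pi * r)
print(round(f_c))  # ~108 Hz
```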

Radiation patterns for frequencies well above the cutoff frequency of the microphone array, such as for the second harmonic of the bassoon at f2=468 Hz, not only show larger errors in general, but can provide smaller errors for orders N≥5. In these cases, the minimization of the wave-spectral power between the 32 sample points (c.f. Fig. 3), can lead to smaller errors due to a better trade-off between truncation and aliasing errors. Figure 6 shows the global error for all six investigated directivities of the bassoon across frequency.

Fig. 6

The global error measure Ψ of all six examined directivities of the bassoon plotted individually. The black dashed line represents the triangular interpolation (TrI) error. The solid lines show the reproduction error when interpolating the radiation patterns with the spline approach for M={1,2,3}; the gray dashed lines indicate the error of SH interpolations for N={4,5,...,8}

At this point it seems worthwhile to take a closer look at the chosen sampling grid. An analysis showed that an exact reproduction of the 32 magnitude values at the sampling points can only be achieved with SH orders of N≥6 for all radiation patterns investigated in this study and contained in [12]. Finding the best SH order for interpolating sparsely sampled directivities can thus also be interpreted as optimizing the trade-off between the desired exact reproduction of the magnitude values at the sampling points and the undesired minimization of the wave-spectral energy which causes notches between the sampling points and increases with increasing SH order. For the selected sampling grid, the optimum appears on average at an SH order of N=7.

The triangular and spline interpolation, however, both show not only smaller median errors but also smaller 25th and 75th percentiles than SH interpolation. Since the spline technique was applied without regularization (λ=0), the 32 sample points are always correctly reconstructed regardless of the order M. The same applies to triangular interpolation, where all sample points are reproduced correctly as well. Even at low frequencies, where SH interpolation with N=4 estimates the reference almost physically correctly, the triangular and spline interpolation perform comparably well (Fig. 6). Interestingly, the errors for the spline interpolation increase with increasing order. This may be explained by the fact that splines are piece-wise defined functions f that are continuous at the sampling points up to the (M−1)th derivative, i.e., f is \(C^{M-1}\) smooth. The smoothness of the splines thus increases with increasing order M, whereas the smoothness of SH increases with decreasing order. In both cases, some degree of non-smoothness increases the accuracy of the interpolation for the selected sample grid.

5.2 Generalization to different sample grids

In the first instance, the results of this study only hold for the selected sample grid with its 32 almost equally distributed points. Even though this is likely to be a typical design and close to designs used for other measurements (Section 1), we provide a way to check the error for other designs as well. For finding the best interpolation algorithm for a specific sample grid, we provide the Matlab tool SourceInterp.m [51]. It calculates the global error Ψ for all interpolation methods evaluated in this paper, based on the publicly available high-resolution bassoon radiation patterns [52] for the note F3 (f0=175 Hz), and will be extended to other instrument directivities once they are made publicly available.

5.3 Comparison to SH interpolation with iterative sign retrieval

As detailed earlier, only the absolute magnitude response was interpolated due to the stochastic nature of the phase information of measured natural sound sources. In case of SH interpolation, however, this approach increases the required SH order. To counteract this, iterative semidefinite relaxation methods to find a suitable real-valued sign for an absolute-valued radiation pattern prior to SH interpolation were proposed by Zagala and Zotter [19]. To assess the quality of the triangular and spline interpolation with respect to the proposed sign-retrieval algorithm, we replicated the benchmark from [19]. To this end, 50 radiation patterns were created with randomly generated, standard normally distributed, real-valued SH coefficients \( f_{nm} \in \mathbb {R}\) up to SH order N=3. The absolute (unsigned) values of these radiation patterns were evaluated at Q=64 extremal sampling points (cf. [37]) according to Eq. 3 and used as input for the triangular interpolation, the first-order spline interpolation, and third-order SH interpolation. Finally, the area-weighted mean square error (MSE) between the analytical reference f and the interpolation result \(\hat {f}\)

$$ \text{MSE} = \sum_{r=1}^{R}\ (f(\theta_{r},\phi_{r})-\hat{f}(\theta_{r},\phi_{r}))^{2} \ w'_{a}(\theta_{r},\phi_{r}) $$
(25)

was calculated for 2522 sampling points of the reference grid used above.
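The structure of this benchmark can be sketched as follows; interpolate stands for any of the compared methods, the grids and area weights are assumed to be given, and the real part of the complex-basis helper from Section 2.1 is used as a stand-in for the real-valued SH basis of [19]:

```python
# Sketch of the sign-retrieval benchmark: random order-3 patterns, unsigned
# sparse samples, interpolation to a dense grid, area-weighted MSE (Eq. 25).
import numpy as np

def benchmark_mse(interpolate, sparse_grid, dense_grid, dense_weights,
                  n_patterns=50, N=3, seed=1):
    """sparse_grid/dense_grid: (theta, phi) tuples; dense_weights: area weights."""
    rng = np.random.default_rng(seed)
    Y_dense = np.real(sh_basis_matrix(*dense_grid, N))   # stand-in real SH basis
    Y_sparse = np.real(sh_basis_matrix(*sparse_grid, N))
    w = dense_weights / np.sum(dense_weights)
    mse = []
    for _ in range(n_patterns):
        f_nm = rng.standard_normal((N + 1)**2)           # random real coefficients
        reference = Y_dense @ f_nm                       # analytical reference pattern
        p_abs = np.abs(Y_sparse @ f_nm)                  # unsigned input values, Eq. (3)
        estimate = interpolate(p_abs)                    # any of the methods above
        mse.append(np.sum((reference - estimate)**2 * w))  # Eq. (25)
    return np.array(mse)
```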

The distribution of the MSE across the 50 radiation patterns is shown in Fig. 7.

Fig. 7

Distribution of the Mean Square Error MSE (top) and reconstruction time in s (bottom) for interpolation with the (zero phase) SH approach (left), the sign-retrieval algorithm SDR rgn+dbl, the spline method and the triangular technique (right). On each box, the center indicates the median, and the bottom and top edges of the box show the 25th and 75th percentiles, respectively

On average, the spline interpolation and the triangular method reconstruct the random directivity patterns only slightly better than the SDR-based sign retrieval with common-sign regions algorithm (median of 0.05 vs. 0.07). However, the dispersion of the error is considerably lower for the spline approach, with a 75th percentile of 0.06 compared to 0.12 for SDR rgn+dbl. It should also be noted that the spline method reconstructs one radiation pattern on average 30 times faster than the SDR rgn+dbl algorithm and 26 times faster than triangular interpolation (using Matlab on a PC with an Intel Core i5-6400 CPU @ 2.70 GHz and 16 GB RAM, Fig. 7). Note that the triangular interpolation AKsphTriInterp() is already optimized by searching only sub-lists of triangles extending into the octant of any regarded interpolated coordinate. The potential speed-up factor has an upper limit of 8 and practically reached 6 with the required control flow and overlap of the sub-lists.

5.4 Combination with models for the phase response

Time-domain acoustical simulations and low-order SH decompositions may benefit from complex-valued directivities with both magnitude and phase information. In this regard, the interpolation methods discussed up to this point might not yet be optimal. The unsigned (zero-phase) SH interpolation causes acausal impulse responses that are symmetrical around t=0, and the sign-retrieval algorithm generates a non-continuous phase response that might also deteriorate the corresponding time signals. In addition, due to treating each frequency separately, it remains to be clarified based on which criteria to smooth the phase responses across frequencies. A direct integration of a numerically derived absolute-value directivity into time-domain simulations such as the finite difference time domain (FDTD) method was presented by Ahrens & Bilbao using an analytical point source phase response and SH interpolation [49]. However, our results suggest that spline interpolation is more suitable for interpolating sparsely sampled magnitude responses regardless of the phase. Therefore, for interpolating sparsely resolved directivity measurements of musical instruments for use in time-domain simulations, we recommend first interpolating the magnitude response to a high-resolution grid using first-order splines, followed by a subsequent SH expansion according to [49]. Note, however, that a physically correct extrapolation of the sound field within the near field of the sound source is not possible even with this approach, despite the subsequently added phase information.
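As a rough sketch of this recommendation (not the exact procedure of [49]; the distance, speed of sound, and all names are assumptions), the spline-interpolated magnitudes could be combined with an analytical point-source phase and then expanded into SH coefficients per frequency:

```python
# Sketch: attach a point-source phase to interpolated magnitudes and expand
# the resulting complex directivity into SH coefficients per frequency.
import numpy as np

def complex_directivity_sh(magnitudes, Y_dense_pinv, freqs, r=2.0, c=343.0):
    """magnitudes: R x F interpolated magnitudes on the dense grid,
    Y_dense_pinv: pseudoinverse of the SH matrix of the dense grid,
    freqs: F frequencies in Hz, r: assumed source-receiver distance in m."""
    k = 2 * np.pi * np.asarray(freqs) / c
    p_complex = magnitudes * np.exp(-1j * k * r)   # point-source phase per frequency
    return Y_dense_pinv @ p_complex                # SH coefficients per frequency
```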

5.5 Generalization to different musical instruments

The results of the current study strictly apply only to the four instruments studied and cannot simply be generalized to all musical instruments or acoustical sound sources in general. Considering the instruments of the classical orchestra, however, all fundamental radiation mechanisms such as the vibrating soundboard (violin), the piston-like air vibration in the bell of brass instruments (trombone), the complex multi-source radiation of woodwind instruments (bassoon), and the air jet excitation of flute instruments (flue pipe) are represented, at least those with stationary excitation, i.e., not including percussion instruments.

It is noticeable that piece-wise interpolation methods such as the spherical spline and spherical triangular interpolation seem to provide a better reconstruction of complex radiation patterns in cases where the sampling theorem (kr<N) is not met. This behavior is largely consistent for the 27 patterns analyzed, all of them at frequencies violating the spatial sampling theorem and representing different source types and different degrees of complexity.

We thus expect that a similar result would also be obtained for directional characteristics measured in the presence of a player. Players, as reflecting or diffracting objects next to the instrument, have an influence on the directivity of a musical instrument [11]. For largely omnidirectional sources, they make the radiation pattern more complex; in specific cases, one could imagine it being smoothed; for highly directional sources such as the trombone at high frequencies, the influence will be negligible. We thus see no reason why interpolation methods for radiation patterns with players should behave fundamentally differently from those without players observed in the current study.

6 Conclusion

The performance of different interpolation techniques applied to sparsely sampled directivity measurements of acoustic sound sources depends on the sampling grid used, but also on the radiation pattern of the sources themselves. Therefore, we evaluated three established approaches for interpolation from a low-resolution sampling grid using high-resolution measurements of a representative sample of musical instruments as a reference. The smallest global error on average occurs for thin plate pseudo-spline interpolation, with order 1 performing slightly better than orders 2 and 3. For interpolation based on spherical harmonics (SH) decomposition, the SH order and the spatial sampling scheme applied have a strong influence on the quality of the interpolation that is difficult to predict in individual cases. The piece-wise linear, spherical triangular interpolation provides almost as good results as the first-order spline approach, albeit with on average 20 times higher computational effort. Therefore, for spatial interpolation of sparsely sampled directivity measurements of musical instruments, the thin plate pseudo-spline method of order M=1 applied to absolute-valued data is recommended, followed, if necessary, by a subsequent modeling of the phase.

Availability of data and materials

The complete data sets used for the current study are available from the corresponding author on request. The raw data of the directivity measurements can also be provided in higher resolutions on request, please contact MK.

Notes

  1. In ([40], p. 14, THEOREM 2), the formula for determining c and d is printed with a sign error.

Abbreviations

SH:

Spherical harmonics

RIR:

Room impulse response

BEM:

Boundary element method

FEM:

Finite element method

MSE:

Mean squared error

CAGD:

Computer-aided geometric design

VBAP:

Vector-based amplitude panning

RMS:

Root mean square

HRTF:

Head-related transfer functions

TrI:

Triangular interpolation

SDR:

Semidefinite relaxation

FDTD:

Finite difference time domain

References

  1. H. K. Dunn, D. W. Farnsworth, Exploration of pressure field around the human head during speech. J. Acoust. Soc. Am.10(3), 184–199 (1939). https://doi.org/10.1121/1.1915975.


  2. J. Meyer, Acoustics and the Performance of Music (Springer, New York, 2009).


  3. W. T. Chu, A. C. C. Warnock, Detailed directivity of sound fields around human talkers. Tech. Rep. RR-104, National Research Council of Canada (2002). https://doi.org/10.4224/20378930.

  4. D. Cabrera, P. J. Davis, A. Connolly, in Proceedings of the 19th International Congress on Acoustics. Vocal directivity of eight opera singers in terms of spectro-spatial parameters (Madrid, 2007).

  5. B. B. Monson, E. J. Hunter, Horizontal directivity of low- and high-frequency energy in speech and singing. J. Acoust. Soc. Am.132(1), 433–441 (2012).


  6. B. Katz, C. d’Alessandro, in Proceedings of the 19th International Congress on Acoustics. Directivity measurements of the singing voice (Madrid, 2007).

  7. O. Abe, Sound radiation of singing voices. PhD thesis, Universität Hamburg (2019).

  8. F. Hohl, Kugelmikrofonarray zur Abstrahlungsvermessung von Musikinstrumenten. Master’s thesis, Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, Austria (2009).

  9. J. Pätynen, T. Lokki, Directivities of symphony orchestra instruments. Acta Acustica U. Acustica. 96(1), 138–167 (2010). https://doi.org/10.3813/AAA.918265.


  10. S. D. Bellows, K. J. Bodon, T. W. Leishman, Violin directivity (2020). https://scholarsarchive.byu.edu/directivity/15/. Accessed 12 Apr 2021.

  11. N. R. Shabtai, G. Behler, M. Vorländer, S. Weinzierl, Generation and analysis of an acoustic radiation pattern database for forty-one musical instruments. J. Acoust. Soc. Am.141(2), 1246–1256 (2017). https://doi.org/10.1121/1.4976071.


  12. S. Weinzierl, M. Vorländer, G. Behler, F. Brinkmann, H. v. Coler, E. Detzner, J. Krämer, A. Lindau, M. Pollow, F. Schulz, N. R. Shabtai, A database of anechoic microphone array measurements of musical instruments (2017). https://doi.org/10.14279/depositonce-5861.2.

  13. J. Klein, M. Vorländer, in Proceedings of EAA Spatial Audio Sig. Proc. Symp., Paris. Simulative investigation of required spatial source resolution in directional room impulse response measurements, (2019), pp. 37–42. https://doi.org/10.25836/SASP.2019.24.

  14. M. Frank, M. Brandner, in Proceedings of Fortschritte der Akustik – DAGA 2019. Perceptual evaluation of spatial resolution in directivity patterns (DEGA, Rostock, 2019), pp. 74–77.


  15. S. Bilbao, B. Hamilton, Directional sources in wave-based acoustic simulation. IEEE/ACM Trans. Audio Speech Lang. Process. 27(2), 415–428 (2019). https://doi.org/10.1109/TASLP.2018.2881336.

    Article  Google Scholar 

  16. L. Savioja, U. P. Svensson, Overview of geometrical room acoustic modeling techniques. J. Acoust. Soc Am.138(2), 708–730 (2015).

    Article  Google Scholar 

  17. D. Schröder, M. Vorländer, in Proceedings of Forum Acusticum. RAVEN: A real-time framework for the auralization of interactive virtual environments (Aalborg, 2011), pp. 1541–1546.

  18. F. Brinkmann, L. Aspöck, D. Ackermann, S. Lepa, M. Vorländer, S. Weinzierl, A round robin on room acoustical simulation and auralization. J. Acoust. Soc. Am.145(4), 2746–2760 (2019). https://doi.org/10.1121/1.5096178.

    Article  Google Scholar 

  19. F. Zagala, F. Zotter, in Proceedings of Fortschritte der Akustik – DAGA 2019. Idea for sign-change retrieval in magnitude directivity patterns (DEGARostock, 2019), pp. 1430–1433.

    Google Scholar 

  20. J. Ahrens, S. Bilbao, Computation of spherical harmonic representations of source directivity based on the finite-distance signature. IEEE/ACM Trans. Audio Speech Lang. Process.29:, 83–92 (2021). https://doi.org/10.1109/TASLP.2020.3037471.

    Article  Google Scholar 

  21. J. M. Arend, F. Brinkmann, C. Pörschmann, Assessing Spherical Harmonics Interpolation of Time-Aligned Head-Related Transfer Functions. J. Audio Eng. Soc.69(1/2), 104–117 (2021). https://doi.org/10.17743/jaes.2020.0070.

    Article  Google Scholar 

  22. F. Zotter, M. Frank, Ambisonics : A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality (Springer, Cham, Switzerland, 2019).

    Book  Google Scholar 

  23. C. Schörkhuber, M. Zaunschirm, R. Höldrich, in Proceedings of Fortschritte der Akustik – DAGA 2018. Binaural rendering of Ambisonics signals via magnitude least sqaures (DEGAMunich, 2018), pp. 339–342.

    Google Scholar 

  24. J. Li, A. D. Heap, A Review of Spatial Interpolation Methods for Environmental Scientists (Geoscience Australia, Canberra, 2008).

    Google Scholar 

  25. J. Duchon, Interpolation des fonctions de deux variables suivant le principe de la flexion des plaques minces. Rev. Fr. D’automatique Informatique Resour. Opérationnelle Anal. Numérique. 10(R3), 5–12 (1976).

    Article  MathSciNet  Google Scholar 

  26. D. G. Krige, A statistical approach to some basic mine valuation problems on the witwatersrand. J. South. Afr. Inst. Min. Metall.52(6), 119–139 (1951).

    Google Scholar 

  27. P. Alfeld, M. Neamtu, L. L. Schumaker, Bernstein-bézier polynomials on spheres and sphere-like surfaces. Comput. Aided Geom. Des.13(4), 333–349 (1996). https://doi.org/10.1016/0167-8396(95)00030-5.

    Article  Google Scholar 

  28. G. Weinreich, E. B. Arnold, Method for measuring acoustic radiation fields. J. Acoust. Soc. Am.68(2), 404–411 (1980). https://doi.org/10.1121/1.384751.

    Article  Google Scholar 

  29. F. Zotter, Analysis and synthesis of sound-radiation with spherical arrays. PhD thesis, University of Music and Performing Arts, Graz (2009).

  30. M. Pollow, Directivity patterns for room acoustical measurements and simulations. PhD thesis, RWTH, Aachen (2014).

  31. M. Noisternig, F. Zotter, B. F. Katz, in Principles and Applications of Spatial Hearing, ed. by Y. Suzuki, D. S. Brungart, and H. Kato. Reconstructing sound source directivity in virtual acoustic environments (World Scientific PublishingSingapore, 2011), pp. 357–372.

    Chapter  Google Scholar 

  32. S. Weinzierl, M. Vorländer, Room acoustical parameters as predictors of room acoustical impression: What do we know and what would we like to know?Acoust. Aust.43(1), 41–48 (2015).

    Article  Google Scholar 

  33. K. Hartung, J. Braasch, S. J. Steinberg, in Proceedings of AES 16th International Conference on Spatial Sound Reproduction. Comparison of different methods for the interpolation of head-related transfer functions (Audio Engineering SocietyRovaniemi, 1999), pp. 319–329.

    Google Scholar 

  34. G. Simpson, Y. H. Wu, Accuracy and effort of interpolation and sampling: Can gis help lower field costs?ISPRS Int. J. Geo-Inf. 3(4), 1317–1333 (2014). https://doi.org/10.3390/ijgi3041317.

    Article  Google Scholar 

  35. V. Pulkki, Virtual sound source positioning using vector base amplitude panning. J. Audio Eng. Soc.45(6), 456–466 (1997).

    Google Scholar 

  36. B. Rafaely, Fundamentals of Spherical Array Processing (Springer, Berlin, Heidelberg, 2015). https://doi.org/10.1007/978-3-662-45664-4.

    Book  Google Scholar 

  37. I. H. Sloan, R. S. Womersley, Extremal systems of points and numerical integration on the sphere. Adv. Comput. Math.21(1), 107–125 (2004). https://doi.org/10.1023/B:ACOM.0000016428.25905.da.

    Article  MathSciNet  Google Scholar 

  38. B. Rafaely, B. Weiss, E. Bachmat, Spatial aliasing in spherical microphone arrays. IEEE Trans. Signal Process.55(3), 1003–1010 (2007). https://doi.org/10.1109/TSP.2006.888896.

    Article  MathSciNet  Google Scholar 

  39. Z. Ben-Hur, D. L. Alon, B. Rafaely, R. Mehra, Loudness stability of binaural sound with spherical harmonic representation of sparse head-related transfer functions. EURASIP J. Audio Speech Music Process.2019(1) (2019). https://doi.org/10.1186/s13636-019-0148-x.

  40. G. Wahba, Spline interpolation and smoothing on the sphere. SIAM J. Sci. Stat. Comput.2(1), 5–16 (1981). https://doi.org/10.1137/0902002.

    Article  MathSciNet  Google Scholar 

  41. G. Wahba, Erratum: Spline interpolation and smoothing on the sphere. SIAM J. Sci. Stat. Comput.3(3), 385–386 (1982). https://doi.org/10.1137/0903024.

    Article  MathSciNet  Google Scholar 

  42. C. B. Barber, D. P. Dobkin, H. Huhdanpaa, The quickhull algorithm for convex hulls. Trans. Math. Softw.22:, 469–483 (1996). https://doi.org/10.1145/235815.235821.

    Article  MathSciNet  Google Scholar 

  43. M. Gräf, D. Potts, On the computation of spherical designs by a new optimization approach based on fast spherical fourier transforms. Numer. Math.119(Dec.), 699–724 (2011).

    Article  MathSciNet  Google Scholar 

  44. M. Kob, A. Baskind, in Proceedings of the International Symposium on Music Acoustics 2019 – ISMA 2019. Impact of free field inhomogenity on directivity measurements due to the measurement set-up (Detmold, 2019), pp. 62–67.

  45. N. Filipo, T. Grothe, M. Kob, in Proceedings of Fortschritte der Akustik – DAGA 2019. Investigation on the directivity of string instruments using a bowing machine (DEGARostock, 2019), pp. 334–337.

    Google Scholar 

  46. M. Kob, Influence of wall vibrations on the transient sound of a flue organ pipe. Acta Acustica U. Acustica. 86(4), 642–648 (2000).

    Google Scholar 

  47. T. Grothe, M. Kob, in Proceedings of the International Symposium on Music Acoustics 2019 – ISMA 2019. High resolution 3D radiation measurements on the bassoon (Detmold, 2019), pp. 139–145.

  48. F. Brinkmann, S. Weinzierl, in Proceedings of Audio Engineering Society Convention 142. AKtools – An Open Software Toolbox for Signal Acquisition, Processing, and Inspection in Acoustics (Audio Engineering SocietyBerlin, 2017).

    Google Scholar 

  49. J. Ahrens, S. Bilbao, in Proceedings of Forum Acusticum. Computation of spherical harmonics based sound source directivity models from sparse measurement data (Lyon, 2020).

  50. B. Rafaely, Analysis and design of spherical microphone arrays. IEEE Trans. Speech Audio Process.13(1), 135–143 (2005). https://doi.org/10.1109/TSA.2004.839244.

    Article  Google Scholar 

  51. D. Ackermann, F. Brinkmann, S. Weinzierl, SourceInterp - A Matlab tool for determining the quality of spatial interpolation methods for natural sound sources (2021). doi:10.14279/depositonce-12436.

  52. T. Grothe, M. Kob. Bassoon directivity data, (2020). http://nbn-resolving.org/urn:nbn:de:hbz:575-opus4-971. Accessed 12 Apr 2021.

Acknowledgements

The authors would like to thank Timo Grothe for providing the data set of the bassoon directivities.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information

Contributions

DA and FB, conceptualization, methodology, formal analysis, data curation, writing - original draft, visualization. FZ, methodology, formal analysis, data curation, writing - original draft. MK, methodology, data curation. SW, conceptualization, writing - review and editing, supervision. The authors read and approved the final manuscript.

Corresponding author

Correspondence to David Ackermann.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ackermann, D., Brinkmann, F., Zotter, F. et al. Comparative evaluation of interpolation methods for the directivity of musical instruments. J AUDIO SPEECH MUSIC PROC. 2021, 36 (2021). https://doi.org/10.1186/s13636-021-00223-6

Keywords