Comparative evaluation of interpolation methods for the directivity of musical instruments
EURASIP Journal on Audio, Speech, and Music Processing volume 2021, Article number: 36 (2021)
Abstract
Measurements of the directivity of acoustic sound sources must be interpolated in almost all cases, either for spatial upsampling to higher-resolution representations of the data, for spatial resampling to another sampling grid, or for use in simulations of sound propagation. The performance of different interpolation techniques applied to sparsely sampled directivity measurements depends on the sampling grid used but also on the radiation pattern of the sources themselves. Therefore, we evaluated three established approaches for interpolation from a low-resolution sampling grid using high-resolution measurements of a representative sample of musical instruments as a reference. The smallest global error on average occurs for thin plate pseudo-spline interpolation. For interpolation based on spherical harmonics (SH) decomposition, the SH order and the spatial sampling scheme applied have a strong and difficult-to-predict influence on the quality of the interpolation. The piecewise linear, spherical triangular interpolation provides almost as good results as the first-order spline approach, albeit with on average 20 times higher computational effort. Therefore, for spatial interpolation of sparsely sampled directivity measurements of musical instruments, the thin plate pseudo-spline method applied to absolute-valued data is recommended, followed by a subsequent modeling of the phase if necessary.
Introduction
The first studies on the specific sound radiation characteristics of the human voice were conducted as early as the late 1930s [1], while systematic investigations of the directivity of musical instruments began 30 years later [2]. The radiation patterns of acoustic sound sources such as speakers, singers or musical instruments are commonly measured in anechoic environments with the source centered in an enclosing spherical microphone array.
For a comprehensive analysis of the directivity of 40 human speakers, a nearly full-spherical array was used, measured sequentially at 253 positions [3]. With respect to the singing voice, the radiation characteristics of 8 opera singers [4] and 15 trained singers [5] were determined in the horizontal plane, measured at 9 and 13 positions, respectively. A higher spatial resolution was used for measurements of a professional male singer using an adjustable semicircular microphone array with 24 receivers [6]. For a recent review of research on the sound radiation of singing voices, see [7].
The directivities of eight musical instruments were measured using 64 microphones [8], while 22 microphones were used for a measurement of 22 instruments [9]. A recently generated database for 14 instruments and a speaker contains radiation patterns measured at 2522 positions on a sphere [10]. However, these data contain only (third-)octave band directivities, which limits their use for research purposes. The most comprehensive public database was collected for 41 modern and historic instruments measured with 32 microphones and contains single tones within the playable range of each instrument and directivities computed from the stationary parts of these tones [11, 12].
The spatial resolutions of the available directivity measurements of acoustic sound sources thus depend on the technology used and differ greatly from each other. At the same time, many applications based on source directivity, such as room acoustical simulations, require either continuous or higher-resolution data. And even if the application uses a discrete spatial representation, the sampling grid required is usually different from that used in the measurement. The measured data must therefore be interpolated or resampled.
In the common polar representation, the measured values are usually linearly interpolated (cf. [9]) and occasionally also smoothed in addition (cf. [5]). For 3D balloon plots showing the spherical radiation pattern for single frequencies or frequency bands, a common method is to decompose the sound pressure measurements into spherical harmonic (SH) basis functions followed by spatial oversampling on the surface of a sphere. The resampled grid is sometimes linearly interpolated at the end for visual display (cf. [8]).
The accuracy of room acoustical simulations was shown to strongly depend on the directivity of the sound source and thus also on the quality of the chosen interpolation method. The angular resolution of the directivity affects the simulated room impulse response (RIR) and several other room acoustic parameters up to an SH order of N=10 [13], even if the information incorporated in higher-order components may no longer be perceptually relevant, at least at larger distances from the source [14].
Not only the required resolution and sampling scheme, but also the required physical information inherent in the radiation pattern depends on the subsequent application. In wave-based simulations, such as the Boundary Element Method (BEM) and the Finite Element Method (FEM), it appears beneficial to have both a continuous magnitude and phase response of the source [15]. In simulations based on geometrical acoustics [16] that combine image sources and stochastic ray tracing to compute early reflections and the late reverberation [17, 18], a complex-valued description of the source directivity might be beneficial for the image source part, while the phase response is spurious in an energy-histogram-based ray tracing approach.
Moreover, if directivities are calculated from the steady part of played tones, the phase spectrum may be subject to fluctuations, especially if the source in the center of the measurement system is not completely spatially fixed, causing a fluctuating excess phase that renders phase information practically useless (Fig. 1). To account for this, Zagala & Zotter [19] suggested iteratively optimizing the sign of the absolute magnitude response prior to SH interpolation to minimize the mean squared error (MSE) between the input and interpolated data. Ahrens & Bilbao [20] chose to make the magnitude response minimum phase to avoid excess phase and to obtain a directivity that is more easily decomposed into SH impulse responses applicable to time-domain room acoustical simulations. However, neither study investigated the general suitability of SH for interpolating the magnitude response of the directivity.
The question of whether and how to interpolate directivities with phase has been successfully addressed for head-related transfer functions. A variety of techniques either pre-align the entire impulse responses or a high-frequency portion of them, or manipulate the corresponding phase to improve magnitude interpolation (cf. [21] and [22], Chapter 4.11, for an overview). These methods either reconstruct the phase after interpolation or are justified by the irrelevance of interaural phase at high frequencies [23], and they rely on the relation of the frequency-domain directivity to a short, impulse-shaped time-domain representation. Hence, they do not apply to the directional spectra of musical instruments, as exemplified in Fig. 1.
The aim of this study is thus to evaluate the suitability of established methods for interpolating the magnitude response of sparsely sampled directivities of musical instruments. For this purpose, high-resolution measurements of four different musical instruments, whose technical construction and radiation characteristics cover a wide range of natural sound sources, were selected. The data were subsampled at 32 sparse grid points, interpolated to a high-resolution grid, and evaluated against the measured reference.
Note that in this paper we use the term “interpolation” for any kind of continuous approximation of discrete spatial radiation patterns, no matter whether the grid points are precisely reproduced by this approximation or not.
Background
A plethora of interpolation techniques for real-valued scattered data exist that make different assumptions about the distribution of the discrete set of known data points [24]. Because the quality of the interpolation depends on how well these assumptions are fulfilled, the performance of the interpolation methods considerably depends on the specific application. Simple techniques include discontinuous nearest-neighbor interpolation, as well as continuous linear and natural neighbor interpolation. More commonly used are advanced concepts such as deterministic inverse distance weighting or spline interpolation [25], as well as kriging [26], a stochastic technique from the field of geostatistics that minimizes the spatial variance between the value to be estimated and the ambient measurements. An essential tool for data fitting and interpolation in the field of computer-aided geometric design (CAGD) are barycentric coordinates defined on spherical triangles, which can be used to define the associated spherical Bernstein-Bézier polynomials for constructing piecewise functional and parametric surfaces [27]. For acoustical sound sources, a decomposition into SH basis functions has become particularly popular [28–30], since it not only allows for a synthesis of the radiation pattern in virtual acoustic reality [31], but also for a decomposition of the room impulse response into SH-based spatial components [32]. In the case of an order-limited directivity, SH interpolation is physically correct.
Based on the above review, we selected three interpolation approaches for the detailed evaluation. SH interpolation was included because of its widespread use in musical acoustics. Spline interpolation was chosen because it is superior to inverse distance weighting and kriging if only a small number of sample points is available [33, 34]. The spherical triangular interpolation technique corresponds to a piecewise degree-1 barycentric spherical Bernstein-Bézier polynomial interpolation; in audio technology, it is commonly employed in three-dimensional vector base amplitude panning (VBAP), as introduced by Pulkki [35] for robust virtual sound source positioning [22].
Spherical harmonics interpolation
If the sound pressure on the surface of a sphere is sampled with a finite number of microphones, spherical Fourier coefficients can be calculated from the measured values, which can then be used to estimate the sound pressure function on the entire measuring surface [36]. The limited number of sample points results in an order-limited sound pressure function on the measurement surface. Thus, the spherical function f(θ,ϕ) (θ = azimuth, ϕ = colatitude) is represented by a weighted sum of a finite set of orthogonal basis functions:
\[ f(\theta,\phi) = \sum_{n=0}^{N}\sum_{m=-n}^{n} f_{nm}\, Y_{n}^{m}(\theta,\phi), \qquad (1) \]
where \(N\in \mathbb {N}\) indicates the spherical harmonics order and f_{nm} are the considered weights of the corresponding spherical harmonics
\[ Y_{n}^{m}(\theta,\phi) = \sqrt{\frac{2n+1}{4\pi}\,\frac{(n-m)!}{(n+m)!}}\; P^{m}_{n}(\cos\phi)\, e^{\mathrm{i} m\theta}, \qquad (2) \]
where \(P^{m}_{n}(\cdot)\) are the associated Legendre functions, (·)! represents the factorial function, \(m\in \mathbb {Z}\) specifies the function degree, and \(n\in \mathbb {N}_{0}\) the order of the function. Consequently, the Fourier coefficients f_{nm} completely describe the order-constrained function f(θ,ϕ) on the entire sphere, and their determination is thus sufficient for a correct SH interpolation.
By sampling the sound pressure function f(θ,ϕ) with a Q-channel spherical microphone array, the samples p_{q}=f(θ_{q},ϕ_{q}) are given at the positions (θ_{q},ϕ_{q}) of the respective microphones for \(q\in \{1,2,...,Q\}=\mathbb {N}_{Q}\). In matrix form, Eq. 1 can be written as
\[ \mathbf{f} = \mathbf{Y}\,\mathbf{f}_{nm}, \qquad (3) \]
where the matrix Y of dimensions Q×(N+1)^{2} is given by
\[ \mathbf{Y} = \begin{bmatrix} Y_{0}^{0}(\theta_{1},\phi_{1}) & Y_{1}^{-1}(\theta_{1},\phi_{1}) & \cdots & Y_{N}^{N}(\theta_{1},\phi_{1}) \\ \vdots & \vdots & \ddots & \vdots \\ Y_{0}^{0}(\theta_{Q},\phi_{Q}) & Y_{1}^{-1}(\theta_{Q},\phi_{Q}) & \cdots & Y_{N}^{N}(\theta_{Q},\phi_{Q}) \end{bmatrix} \qquad (4) \]
and the vector \(\mathbf {f} = [p_{1},\dots,p_{Q}]^{T}\) contains the Q sound pressure measurements at the positions (θ_{q},ϕ_{q}) for \(q \in \mathbb {N}_{Q}\).
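As a concrete illustration, the matrix Y can be assembled numerically. The sketch below (not the paper's implementation, which uses AKtools in MATLAB) uses scipy's complex spherical harmonics, whose argument convention matches the one above (θ = azimuth, ϕ = colatitude); the 32 random directions merely stand in for a real sampling grid.

```python
# Sketch: assemble the SH matrix Y (Q x (N+1)^2) for a set of sampling directions.
import numpy as np
try:
    from scipy.special import sph_harm              # scipy < 1.17
except ImportError:                                 # newer scipy: sph_harm_y(n, m, polar, azimuth)
    from scipy.special import sph_harm_y
    sph_harm = lambda m, n, az, col: sph_harm_y(n, m, col, az)

def sh_matrix(N, theta, phi):
    """theta = azimuth, phi = colatitude (radians); columns ordered (n,m) = (0,0), (1,-1), ..."""
    return np.stack([sph_harm(m, n, theta, phi)
                     for n in range(N + 1) for m in range(-n, n + 1)], axis=-1)

# 32 sampling directions (random here, standing in for an actual measurement grid)
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 32)
phi = np.arccos(rng.uniform(-1.0, 1.0, 32))
Y = sh_matrix(4, theta, phi)
print(Y.shape)                                      # (32, 25) for N = 4
```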
For the rare scenario in which the number of microphones Q matches the number of SH coefficients, i.e., Q=(N+1)^{2}, under consideration of perfectly distributed measuring points [37] and thus a well-conditioned full-rank matrix Y, Eq. 3 can be solved with the inverse of matrix Y:
\[ \mathbf{f}_{nm} = \mathbf{Y}^{-1}\,\mathbf{f}. \qquad (5) \]
For Q>(N+1)^{2}, an overdetermined system of linear equations results, which can be solved through a best fit in the least-squares sense by taking the Moore-Penrose inverse of Y and thus seeking a solution f_{nm} that minimizes the energy of the error:
\[ \mathbf{f}_{nm} = \mathbf{Y}^{\dagger}\,\mathbf{f} = \arg\min_{\mathbf{f}_{nm}} \left\| \mathbf{Y}\,\mathbf{f}_{nm} - \mathbf{f} \right\|^{2} \qquad (6) \]
with Y^{†}=(Y^{H}Y)^{−1}Y^{H} and ∥·∥ denoting the Euclidean norm. For functions that are not order-limited, errors occur due to spatial aliasing, so that f≠Yf_{nm} and consequently f(θ_{q},ϕ_{q})≠p_{q} [38].
For Q<(N+1)^{2}, the system of equations is underdetermined and Eq. 3 provides infinitely many solutions. In this case, the Moore-Penrose inverse of the matrix Y seeks a solution f_{nm} with minimum Euclidean norm, i.e., with minimal wave-spectral power ∥f_{nm}∥^{2} ([29], p. 79):
\[ \mathbf{f}_{nm} = \mathbf{Y}^{\dagger}\,\mathbf{f} \quad \text{with} \quad \mathbf{Y}^{\dagger} = \mathbf{Y}^{H}\left(\mathbf{Y}\,\mathbf{Y}^{H}\right)^{-1}. \qquad (7) \]
To interpolate samples of the sound pressure measurements on a sphere, the calculated weights of the spherical harmonics can be used in the inverse spherical Fourier transform from Eq. 1, and arbitrary points between the samples can be estimated. The values at the sampling positions (θ_{q},ϕ_{q}) for \(q \in \mathbb {N}_{Q}\) can be reproduced exactly if the order N is sufficiently high. In the case of underdetermined systems, however, notches occur between the sample points due to the chosen constraint of minimum wave-spectral power, and therefore even order-limited functions can no longer be represented accurately.
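The analysis and resynthesis steps above can be sketched in a few lines; np.linalg.pinv computes the Moore-Penrose inverse and therefore covers the least-squares and the minimum-norm case alike. The grid and test function below are illustrative assumptions, not the paper's setup.

```python
# Sketch: SH interpolation = pseudo-inverse analysis followed by resynthesis (Eq. 1).
import numpy as np
try:
    from scipy.special import sph_harm
except ImportError:
    from scipy.special import sph_harm_y
    sph_harm = lambda m, n, az, col: sph_harm_y(n, m, col, az)

def sh_matrix(N, theta, phi):
    return np.stack([sph_harm(m, n, theta, phi)
                     for n in range(N + 1) for m in range(-n, n + 1)], axis=-1)

def sh_interpolate(N, theta_q, phi_q, p, theta_out, phi_out):
    f_nm = np.linalg.pinv(sh_matrix(N, theta_q, phi_q)) @ p   # analysis
    return sh_matrix(N, theta_out, phi_out) @ f_nm            # resynthesis

# An order-limited function (here Y_2^0) sampled at Q = 16 > (2+1)^2 points
# is reproduced exactly at arbitrary directions on the sphere.
rng = np.random.default_rng(0)
th, ph = rng.uniform(0, 2 * np.pi, 16), np.arccos(rng.uniform(-1, 1, 16))
p = sph_harm(0, 2, th, ph).real
th_o, ph_o = rng.uniform(0, 2 * np.pi, 50), np.arccos(rng.uniform(-1, 1, 50))
est = sh_interpolate(2, th, ph, p, th_o, ph_o).real
err = np.max(np.abs(est - sph_harm(0, 2, th_o, ph_o).real))
print(err)                                        # close to machine precision
```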
An indication of the numerical accuracy of SH interpolation based on matrix inversion (Eq. 5) is the condition number κ of Y. A large condition number indicates that small changes in the measured sound pressures f can lead to large changes in the coefficient vector f_{nm}. The solution of the linear system of equations is thus highly sensitive to errors and noise in the input data. While κ=1 is ideal, a system with κ>3.5 is considered ill-conditioned [39]. The condition number depends on the chosen spatial sampling scheme and the SH order N.
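The condition number for a given grid/order combination is readily checked numerically. In the sketch below, the grid is a Fibonacci point set, an assumption standing in for the grids used in the paper, so the resulting κ values are only illustrative.

```python
# Sketch: condition number of Y as a diagnostic for a grid/order combination.
import numpy as np
try:
    from scipy.special import sph_harm
except ImportError:
    from scipy.special import sph_harm_y
    sph_harm = lambda m, n, az, col: sph_harm_y(n, m, col, az)

def sh_matrix(N, theta, phi):
    return np.stack([sph_harm(m, n, theta, phi)
                     for n in range(N + 1) for m in range(-n, n + 1)], axis=-1)

def fibonacci_sphere(Q):
    """Q near-uniform directions; returns (azimuth, colatitude) in radians."""
    i = np.arange(Q)
    col = np.arccos(1.0 - 2.0 * (i + 0.5) / Q)
    az = (np.pi * (1.0 + 5.0 ** 0.5) * i) % (2.0 * np.pi)
    return az, col

az, col = fibonacci_sphere(32)
kappa = {N: np.linalg.cond(sh_matrix(N, az, col)) for N in range(1, 9)}
for N, k in kappa.items():
    print(N, f"{k:.2e}")
```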
Thin plate pseudo-spline interpolation
The thin plate pseudo-spline solution [40, 41] allows the regularized interpolation of sparsely distributed measurements on the sphere with closed-form expressions that make this approach well suited for numerical computation. The aim is to find a smooth function f(θ,ϕ) whose values f(θ_{q},ϕ_{q}) are as close as possible to the measured values p_{q} while containing minimum bending energy on the surface of the sphere S. An interpolating (A) or smoothing (B) thin plate pseudo-spline can therefore be obtained by seeking the solution to one of the following problems:
\[ \min_{f}\; J_{k}(f) \quad \text{subject to} \quad f(\theta_{q},\phi_{q}) = p_{q} \;\; \forall\, q \in \mathbb{N}_{Q} \qquad (8) \]
for (A) or with the option of regularization
\[ \min_{f}\; \frac{1}{Q}\sum_{q=1}^{Q}\bigl(f(\theta_{q},\phi_{q}) - p_{q}\bigr)^{2} + \lambda\, J_{k}(f) \qquad (9) \]
for (B), where λ≥0 denotes the tuning parameter and J_{k}(f) is defined by
with
and
A solution of the two problems given by Eqs. 8 and 9 is obtained with
\[ f(\theta,\phi) = \sum_{q=1}^{Q} c_{q}\, R(\theta,\phi;\theta_{q},\phi_{q}) + d. \qquad (13) \]
R(θ,ϕ;θ_{q},ϕ_{q}) is the reproducing kernel for the Hilbert space \(\mathscr {H}_{k}^{0} (S)\) with norm \(J_{k}^{1/2} (\cdot)\):
where P_{n} are the Legendre polynomials and z denotes the cosine of the spherical angle γ between the two arguments of the kernel function with
\[ z = \cos\gamma = \cos\phi\,\cos\phi_{q} + \sin\phi\,\sin\phi_{q}\,\cos(\theta-\theta_{q}). \qquad (15) \]
The spline order \(M \in \mathbb {N}\) determines the differentiability of the solution from Eq. 13. We define the spline order as M=2k−2; the corresponding splines are continuous up to the (M−1)th derivative and are therefore called C^{M−1} smooth.
A closedform expression for the reproducing kernel R(θ,ϕ;θ_{q},ϕ_{q}), suitable for numerical computation, is given by
with
and 2k−2=M.
A recursive evaluation of q_{2k−2}(z) for \(k = \left \{\frac {3}{2}, 2,\frac {5}{2},...,6\right \}\) can be found in ([40, 41], Tab. 1), as well as the determination of the coefficients c and d from Eq. 13 in matrix form, where R_{Q} is the Q×Q matrix with element i,j defined as (R_{Q})_{i,j}=R(θ_{i},ϕ_{i};θ_{j},ϕ_{j}), I is the Q×Q identity matrix, the vector \(\mathbf {f} = [p_{1},\dots,p_{Q}]^{T}\) contains the Q sound pressure measurements at the positions (θ_{q},ϕ_{q}) for \(q \in \mathbb {N}_{Q}\), and \(\mathbf {T} = [1,\dots,1]^{T}\).
If the measured values are noisy, it can be advantageous to regularize the interpolation in order to suppress outliers; a tuning parameter λ>0 smooths the estimated function f on the surface of the sphere. Due to the low-noise measurement data used for this study, smoothing of the estimation function did not improve the quality of the interpolation (cf. Section 5); therefore, the thin plate pseudo-spline interpolation was performed without regularization (λ=0).
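A numerical sketch of the interpolating case (λ=0): the coefficients c and d of Eq. 13 are assumed to follow from the linear system R_Q c + T d = f with the side condition T^T c = 0. Since the paper's closed-form kernel is not reproduced here, the sketch substitutes a truncated Legendre series R(z) ∝ Σ_n (2n+1) P_n(z)/(n(n+1))^k as a stand-in kernel; both the system form and the kernel are assumptions, not the authors' exact expressions.

```python
# Sketch: thin plate pseudo-spline interpolation on the sphere (lambda = 0).
import numpy as np
from scipy.special import eval_legendre

def kernel(z, k=1.5, n_max=50):
    """Truncated-series stand-in for the reproducing kernel R; z = cos(gamma)."""
    total = np.zeros_like(np.asarray(z, dtype=float))
    for n in range(1, n_max + 1):
        total += (2 * n + 1) / (4 * np.pi) / (n * (n + 1)) ** k * eval_legendre(n, z)
    return total

def spline_fit(u_q, p, k=1.5):
    """u_q: (Q, 3) unit direction vectors, p: (Q,) samples -> coefficients c, d."""
    Q = len(p)
    A = np.zeros((Q + 1, Q + 1))
    A[:Q, :Q] = kernel(np.clip(u_q @ u_q.T, -1.0, 1.0), k)   # R_Q
    A[:Q, Q] = 1.0                                           # ... + T d = f
    A[Q, :Q] = 1.0                                           # side condition T^T c = 0
    cd = np.linalg.solve(A, np.r_[p, 0.0])
    return cd[:Q], cd[Q]

def spline_eval(u, u_q, c, d, k=1.5):
    return kernel(np.clip(u @ u_q.T, -1.0, 1.0), k) @ c + d

rng = np.random.default_rng(0)
u_q = rng.standard_normal((20, 3))
u_q /= np.linalg.norm(u_q, axis=1, keepdims=True)
p = rng.standard_normal(20)
c, d = spline_fit(u_q, p)
rec = spline_eval(u_q, u_q, c, d)
print(np.max(np.abs(rec - p)))        # interpolation reproduces the samples
```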
Piecewise linear, spherical triangular interpolation
The entire set of Q microphone positions (θ_{q},ϕ_{q}) can be equivalently expressed as a 3×Q matrix containing their three-dimensional unit direction vectors
\[ \mathbf{U} = [\mathbf{u}_{1},\dots,\mathbf{u}_{Q}] \quad \text{with} \quad \mathbf{u}_{q} = \left[\sin\phi_{q}\cos\theta_{q},\; \sin\phi_{q}\sin\theta_{q},\; \cos\phi_{q}\right]^{T}. \]
Using the Quickhull algorithm [42] vertex index triplets v_{l}=[v_{1l},v_{2l},v_{3l}] are obtained to describe a set of triangular facets that span the convex hull of the vertices stored in U.
Any arbitrary unit direction vector u can be represented by the non-negative spherical barycentric/area coordinates g=[g_{1},g_{2},g_{3}]^{T} of the vertices U_{l} of the lth triangle,
\[ \mathbf{u} = \mathbf{U}_{l}\,\mathbf{g}, \]
where g_{i}≥0 and \(\sum _{i} g_{i}\geq 1\). Note that the required all-positive spherical barycentric coordinates are only found if a suitable spherical triangle l is selected from the convex hull, which will then contain u. While the spherical barycentric coordinates g reproduce the direction u, spherical triangular interpolation uses the corresponding planar barycentric coordinates \(\tilde g_{i}=\frac {g_{i}}{\sum _{j} g_{j}}\) [27] to linearly interpolate the values measured at the microphones of the triangle l by their weighted average,
\[ \hat{f}(\mathbf{u}) = \tilde g_{1}\, p_{v_{1l}} + \tilde g_{2}\, p_{v_{2l}} + \tilde g_{3}\, p_{v_{3l}}. \]
At the boundaries, this interpolation exactly reproduces the values at the triangle vertices and linearly interpolates the value pairs along any edge of the lth triangle. Because neighboring triangles share edges and vertices, interpolation across triangles is continuous. There is no condition for the firstorder derivatives, therefore this interpolation is C^{0} smooth.
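The procedure can be sketched with scipy's Quickhull implementation; the grid below is a stand-in Fibonacci set, and the containing triangle is found by brute force over all facets (the paper's implementation, AKsphTriInterp(), will differ in detail).

```python
# Sketch: piecewise-linear spherical triangular interpolation.
import numpy as np
from scipy.spatial import ConvexHull

def tri_interp(u_q, p, u_out, tol=1e-12):
    """u_q: (Q,3) unit vectors of the grid, p: (Q,) samples, u_out: (M,3) targets."""
    hull = ConvexHull(u_q)                      # facets are the spherical triangles
    out = np.full(len(u_out), np.nan)
    for i, u in enumerate(u_out):
        for tri in hull.simplices:
            g = np.linalg.solve(u_q[tri].T, u)  # spherical barycentric coordinates: U_l g = u
            if np.all(g >= -tol):               # u lies inside this triangle
                g = g / g.sum()                 # planar weights g~
                out[i] = g @ p[tri]
                break
    return out

def fibonacci_sphere(Q):
    i = np.arange(Q)
    col = np.arccos(1.0 - 2.0 * (i + 0.5) / Q)
    az = (np.pi * (1.0 + 5.0 ** 0.5) * i) % (2.0 * np.pi)
    return np.stack([np.sin(col) * np.cos(az),
                     np.sin(col) * np.sin(az),
                     np.cos(col)], axis=-1)

u_q = fibonacci_sphere(32)
rng = np.random.default_rng(2)
u_out = rng.standard_normal((25, 3))
u_out /= np.linalg.norm(u_out, axis=1, keepdims=True)
const = tri_interp(u_q, np.ones(32), u_out)
print(const)                                    # a constant pattern is reproduced exactly
```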
Robustness and bias
Robustness is often measured by observing the range of amplifications that stochastic perturbations linearly superimposed with the input data can undergo. Due to linearity, it is insightful and common practice to observe the changes that uncorrelated Gaussian noise as the only input \(\mathbf {f}=\mathcal {N}\) undergoes, which we adopt to analyze the robustness of the three above-mentioned interpolation methods. We consider the 32 nodes of a pentakis dodecahedron as directional sampling for the input data f, which is interpolated using the 2520 nodes of a Chebyshev-type quadrature [43], yielding the 2520 output values \(\tilde {\mathbf {f}}\).
Figure 2 shows a statistical analysis of the two ratios \(\frac {\text {RMS}\{\tilde {\mathbf {f}}\}}{\text {RMS}\{\mathbf {f}\}}\) and \(\frac {\max \{\tilde {\mathbf {f}}\}}{\max \{\mathbf {f}\}}\), analyzing these ratios for 1000 independent instances of a random input vector f.
Regarding changes between RMS values from input to output, we observe that the SH methods for N={7,8}, Spline for all M={1,2,3}, and TrI produce a bias towards smaller output RMS values of 2 dB or more for stochastic input. For TrI, it is understandable that within any triangle, three uncorrelated inputs get averaged linearly; therefore, the output RMS gets reduced by stochastic instead of additive interference. For SH interpolation with N=4, this reduction only happens sparsely. The implicit minimization of the Euclidean norm for N≥5 minimizes the output RMS value and therefore causes the observable bias towards lower RMS. This minimization might be optimistically regarded as an increase in robustness between the sampling nodes for N≤6, but it also implies a general decrease in magnitude there, a bias causing dips between the observed samples when interpolating omnidirectional directivities. Spline methods process constant inputs separately; therefore, it is reasonable to assume that the observed reduction in output RMS rather reflects increased robustness to stochastic perturbation. For the chosen spatial sampling scheme, all methods appear robust enough to avoid enlarged output RMS values.
As a more critical test, SHbased interpolation exhibits the largest differences between maxima in the interpolated output compared to those in the input, with around ±3 dB for N={4,5,6,7}, while the settings SH 8 and Spline 3 behave reasonably. Rigorously, TrI as a linear interpolation is capable of precisely avoiding enlarged output maxima, and the same benefit is observed for Spline 1.
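The noise-gain experiment can be reproduced in outline as follows. The operator here is SH interpolation from 32 nodes to a dense grid, with Fibonacci point sets standing in for the pentakis dodecahedron and the Chebyshev-type quadrature, so the exact numbers will differ from those in Fig. 2.

```python
# Sketch: Monte Carlo estimate of RMS(output)/RMS(input) for Gaussian noise input.
import numpy as np
try:
    from scipy.special import sph_harm
except ImportError:
    from scipy.special import sph_harm_y
    sph_harm = lambda m, n, az, col: sph_harm_y(n, m, col, az)

def sh_matrix(N, theta, phi):
    return np.stack([sph_harm(m, n, theta, phi)
                     for n in range(N + 1) for m in range(-n, n + 1)], axis=-1)

def fibonacci_sphere(Q):
    i = np.arange(Q)
    col = np.arccos(1.0 - 2.0 * (i + 0.5) / Q)
    az = (np.pi * (1.0 + 5.0 ** 0.5) * i) % (2.0 * np.pi)
    return az, col

az_s, col_s = fibonacci_sphere(32)       # sparse input grid
az_d, col_d = fibonacci_sphere(2520)     # dense output grid
N = 4                                    # interpolation operator: Y_dense @ pinv(Y_sparse)
A = sh_matrix(N, az_d, col_d) @ np.linalg.pinv(sh_matrix(N, az_s, col_s))

rng = np.random.default_rng(0)
rms = lambda x: np.sqrt(np.mean(np.abs(x) ** 2))
ratios = []
for _ in range(200):                     # independent noise instances
    f = rng.standard_normal(32)
    ratios.append(rms(A @ f) / rms(f))
print(np.median(ratios))
```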
Method
Highresolution reference directivities
For an objective evaluation of the estimation accuracy of a spatial interpolation method based on finite samples on a measurement surface, a high-resolution reference is required. This could theoretically be an analytical function sampled at the evaluation points. However, since the quality of the interpolation also depends significantly on the properties of the pattern to be interpolated, a diverse sample of musical instruments was chosen for which high-resolution measurements were made. The sample included a trombone, a violin, a flue pipe, and a bassoon, and thus different types of sound production, different physical principles of sound radiation, and different sizes and geometries of the radiator. To achieve high reliability, the excitation of the instruments was automated, the instruments were rotated by a computer-controlled 3D loudspeaker measurement system (ELF), and the sound pressure measurements were obtained on a dense spatial sampling grid. The measurements were conducted in the anechoic chamber of the OWL University of Applied Sciences and Arts in Lemgo. A 1/2" free-field equalized BK 4190 cartridge was used as the measurement microphone, placed 2 m from the sound source.
The trombone, a member of the brass family, is a relatively small and straightforward sound source, with the bell being the only port from which sound energy is emitted. The directional dependence of sound radiation of brass instruments is rotationally largely symmetric with respect to the center axis of the bell, which is also the main radiation direction. With increasing frequency, the main lobe of the directivity constricts, resulting in more directional sound radiation. To determine the directivity of the trombone, the shortened instrument (without slide) was artificially excited with a sine sweep signal of order 16 (2^{16} samples ≈ 1.4 s at 48 kHz sampling rate), emitted by a horn driver directly attached to the small end of the bell [44]. An equal-angle sampling grid with an angular resolution of 5^{∘} in azimuth and colatitude was chosen, and thus 2522 unique positions were measured.
The sound radiation of a violin, a string instrument, is partially determined by the parallel plates of the instrument's body, which vibrate locally with different amplitudes and phases. Particularly at low frequencies, the sound is additionally radiated through the characteristic open f-holes, which form a Helmholtz resonator in connection with the air cavity of the body. With a source extension of about 40 cm, it is a medium-sized instrument. However, the two vibrating plates with different local phasing cause interferences and therefore a distinct directional characteristic in the far-field. The directivity of a violin was measured exemplarily for the open A string (f_{0}=440 Hz), applying a repeatable bowing machine for excitation [45] and utilizing an equal-angle grid with an angular resolution of 6^{∘} in azimuth and colatitude, yielding the sound pressure at 1742 positions on a sphere.
The sound of an organ flue pipe is radiated through the mouth as well as through the open end of the resonator. The two spatially separated partial sound sources thus produce frequency-dependent directional characteristics, which become more complex with increasing frequency, following the characteristics of a dipole and corresponding to the sound radiation behavior of a flute [2]. To measure the directivity of the flue pipe, a horn driver was attached directly to its toe hole and the pipe was artificially excited, again with a sine sweep of order 16. An equal-angle sampling grid with an angular resolution of 5^{∘} was chosen, again yielding 2522 positions. The pipe used has a length of 51.3 cm, a diameter of 4.8 cm, and a fundamental frequency of f_{0}=280 Hz [46].
The bassoon, a woodwind instrument of the double-reed family, has a bell and numerous extended tone holes distributed irregularly across a long, bent corpus. The openings act as secondary sound sources depending on the fingering. The superposition of their radiated sound fields can cause a relatively complex directivity in the far-field. The sound radiation of a bassoon, fingered for the note Eb3 (f_{0}=156 Hz), was measured on an equal-angle sampling grid with a horizontal and vertical angular resolution of 5^{∘}, applying a repeatable artificial excitation [47]. Accordingly, the data was acquired at 2522 positions on a sphere.
Interpolation of microphone array measurements
In the first step, the high-resolution reference data was subsampled at the 32 microphone positions used in the Berlin-Aachen database of musical instruments [12]. The 32 sampling points are located at the vertices of a pentakis dodecahedron (Fig. 3) and were chosen as one possible sampling scheme to evaluate the interpolation techniques under realistic conditions. Except at the two poles, the 32 positions of the sparse grid are not contained in the reference grids. To account for this, the final sparse grid was generated from the closest positions of the high-resolution reference grids. The resulting grid is called the sample grid in the following and diverges from the ideal sparse grid by 1.7^{∘}/1.0^{∘} on average, with a maximum deviation of 2.4^{∘}/2.6^{∘}.
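The snapping of the ideal sparse grid to the nearest reference-grid nodes can be sketched as follows; the sparse directions here are random stand-ins for the pentakis dodecahedron vertices, and the reference is a 5° equal-angle grid as used for three of the four instruments.

```python
# Sketch: snap sparse directions to the closest nodes of a dense equal-angle grid.
import numpy as np

def sph2cart(az, col):
    return np.stack([np.sin(col) * np.cos(az),
                     np.sin(col) * np.sin(az),
                     np.cos(col)], axis=-1)

az, col = np.meshgrid(np.deg2rad(np.arange(0, 360, 5)),
                      np.deg2rad(np.arange(0, 181, 5)))
u_ref = sph2cart(az.ravel(), col.ravel())       # 5-degree equal-angle reference grid

rng = np.random.default_rng(3)                  # hypothetical sparse directions
u_sparse = sph2cart(rng.uniform(0, 2 * np.pi, 32),
                    np.arccos(rng.uniform(-1, 1, 32)))

idx = np.argmax(u_sparse @ u_ref.T, axis=1)     # max dot product = min great-circle distance
dev = np.degrees(np.arccos(np.clip(np.sum(u_sparse * u_ref[idx], axis=1), -1, 1)))
print(dev.max())                                # worst-case angular deviation (degrees)
```

For a 5° grid, the worst-case deviation is bounded by the half-diagonal of a grid cell near the equator, about 3.5°.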
For the SH orders examined in this paper, the condition numbers of the ideal sparse grid are small due to the equal distribution of the sampling points (κ=1.0 for N≤4; κ<2.0 for N≥6). The large condition number of κ=1.24×10^{16} for N=5, however, indicates that this SH order should not be used for the selected grid. The condition numbers increased only slightly due to using the nearest neighbors for the sample grid, i.e., κ<2.0 for N≤4 and N≥6 still holds.
In a second step, SH interpolation for orders N={1,2,...,8}, the closed-form spherical spline interpolation for orders M={1,2,3} (with M=2k−2, cf. Eq. 16), and the spherical triangular interpolation were realized with AKtools using the functions AKsht(), AKisht(), AKsphSplineInterp(), and AKsphTriInterp() [48].
The interpolation functions were sampled at the corresponding highresolution reference grid points, allowing a direct comparison between the interpolation result and the reference, i.e., the measured directivity.
The interpolation was done on the magnitude responses, i.e., the phase information was neglected, for two reasons. First, interpolation of the phase spectrum is very susceptible to noisy data, and errors can occur particularly at high frequencies [49]. Second, natural sound sources, in contrast to artificial sound sources, do not have a stationary phase response, neither for a certain frequency nor for a certain radiation direction [20], that could be used for room acoustical simulation without further processing. Figure 1 shows a spectrogram for the note A4 (440 Hz) played by a trumpet (recording taken from [12]). While the amplitudes of the fundamental and the overtones are almost constant during the observed time window, the phase of the trumpet signal fluctuates strongly and cannot be determined unambiguously. A proposal on how absolute-valued interpolated directivity patterns can be used for wave-based simulation methods is presented in Section 5.
Global error measure Ψ
For the evaluation of the interpolation algorithms, a global single-number error measure is proposed. To describe the mathematical accuracy of the interpolation, the difference of the sound pressure levels of the interpolated and the reference directivity averaged over all directions could be used, in the way Arend et al. have done to describe the accuracy of interpolated head-related transfer functions (HRTFs) [21]. However, to describe the acoustic effect of an erroneous excitation of the sound field caused by an incorrect directivity, it seems important to consider the radiated sound power and not level differences, which correspond to larger differences in power at high levels than at low levels. As a physically meaningful measure, we therefore propose to calculate the sound power erroneously radiated with respect to the direction due to the interpolation error and to relate this to the total radiated sound power. To obtain such a measure, the relative error in radiated sound power Ψ can be calculated as the summed area-weighted relative differences of the squared sound pressures of the interpolation \({\hat {p}}^{2}\left (\theta _{r},\phi _{r}\right)\) and the reference p^{2}(θ_{r},ϕ_{r}) over the R directions (θ_{r},ϕ_{r}) of the reference grid for r∈{1,2,...,R}, related to the summed area-weighted squared sound pressure of the reference:
\[ \Psi = \frac{\sum_{r=1}^{R} w_{r}^{\prime}\,\bigl|\hat{p}^{2}(\theta_{r},\phi_{r}) - p^{2}(\theta_{r},\phi_{r})\bigr|}{\sum_{r=1}^{R} w_{r}^{\prime}\, p^{2}(\theta_{r},\phi_{r})}, \]
where \(w_{r}^{\prime }\) are the normalized area weights of the reference grid with
\[ w_{r}^{\prime} = \frac{w_{r}}{\sum_{r=1}^{R} w_{r}}. \]
Note that an error of Ψ=0.5 states that the interpolated directivity emits 50% of the sound power in incorrect directions, whereas a value of Ψ=0 shows that the radiation pattern of the interpolated directivity fully matches the reference.
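A direct transcription of Ψ as a sketch: the absolute value in the numerator follows the verbal definition of "erroneously radiated" power, and the area weights are assumed to be normalized to sum to one.

```python
# Sketch: global error measure Psi from interpolated and reference magnitudes.
import numpy as np

def psi(p_interp, p_ref, area_weights):
    """Relative error in radiated sound power (0 = perfect match)."""
    w = np.asarray(area_weights, dtype=float)
    w = w / w.sum()                                   # normalized area weights w'
    p_i = np.asarray(p_interp, dtype=float)
    p_r = np.asarray(p_ref, dtype=float)
    return np.sum(w * np.abs(p_i ** 2 - p_r ** 2)) / np.sum(w * p_r ** 2)

p_ref = np.array([1.0, 2.0, 0.5, 1.5])
w = np.array([0.2, 0.4, 0.1, 0.3])                    # e.g., sin(colatitude)-based weights
print(psi(p_ref, p_ref, w))                           # identical patterns -> 0.0
print(psi(np.sqrt(2.0) * p_ref, p_ref, w))            # doubled power everywhere -> ~1
```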
Results
Since we investigated tonal instruments, we restricted the analysis to fundamentals and overtones. In addition, separately for each instrument and played note, we discarded tones whose spatially averaged energy was more than 40 dB below that of the tone with the maximum average energy. Thus, only the radiation pattern of the fundamental at f_{0}=280 Hz and the first five overtones f_{1},...,f_{5} of the flue pipe were examined. For the violin, the fundamental f_{0}=440 Hz and the first nine overtones f_{1},...,f_{9} were evaluated, and for the bassoon, the fundamental at f_{0}=156 Hz and the first five overtones f_{1},...,f_{5}. The trombone was a special case, exhibiting an unnatural overtone series due to the shortened instrument and the artificial excitation with a horn driver. Therefore, only the first five resonance frequencies at 966 Hz, 2280 Hz, 4882 Hz, 7212 Hz, and 9179 Hz are considered in the following.
The results of the spatial interpolations of the bassoon’s directivity for the second overtone at f_{2}=468 Hz are shown in Fig. 3, along with the related global error measures Ψ. Distinct differences emerge between the interpolation methods. The spherical triangular interpolation approach (TrI) shows the lowest reproduction error for this directivity with Ψ=0.36, with the two distinctive indentations in the θ=90^{∘};ϕ=90^{∘} and 270^{∘};90^{∘} radiation directions estimated quite accurately despite the small number of sample points.
For the spline interpolation, the global error Ψ≈0.4 is almost constant across the order M. Differences between the orders can be seen mainly in the 90^{∘};90^{∘} radiation direction, where the notch is well reproduced by the spline interpolation of order M=1, however at the expense of larger errors in the transition areas between high and low radiation, which are better interpolated with order M=2.
For SH interpolation of orders N={4,5}, the largest errors occur in the notch regions at 90^{∘};90^{∘} and 270^{∘};90^{∘}. As the SH order increases, these errors disappear, but indentations between the sample points become visible, most pronounced at N=8. The lowest global error is obtained with order N=7 and Ψ=0.37, whereas the maximum error of Ψ=0.49 occurs when interpolating with the theoretically optimal SH order of N=4, considering the 32 points of the sample grid.
For other radiation patterns such as the bassoon's third overtone at f_{3}=624 Hz (Fig. 4), the interpolation methods behave somewhat differently. At this slightly higher frequency, the spline interpolation of order M=1 is superior with Ψ=0.49. SH interpolations of orders N={6,7} again perform better than the reproduction with N={4,5} as well as N=8, where indentations between the sample points again become visible. While for the second overtone the triangular interpolation was superior to all other methods investigated, it now performs slightly worse than spline interpolation.
An overview of global errors for all examined orders and musical instruments is shown in Fig. 5, where the individual error distributions contain between 5 and 10 partials, depending on the instrument. To assess the benefit of using musical instrument directivities from small sample grids, Ψ was additionally calculated for the trivial assumption of an omnidirectional directivity using the mean radiated energy over all directions.
Taking the median of the distribution over all 27 partial tones of the instruments analyzed as a measure of quality of the interpolation methods, the spline approach with order M=1 shows the best result, closely followed by the triangular interpolation and the spline interpolations with orders M={2,3}.
The largest reproduction errors occur at the lowest and the highest examined SH order, i.e., at N={1,8}, whereas the best SH interpolation is achieved with order N=7.
A detailed list of results is shown in Table 1.
Discussion
The directivities of four different tonal musical instruments were subsampled at 32 almost equally distributed points and interpolated using spherical triangulation, spherical splines and spherical harmonics. A global error measure was proposed to assess the quality of the various interpolation techniques.
Comparison of interpolation algorithms
It is obvious that the absolute quality of all interpolation methods strongly depends on the acoustic size of the sound source and the resulting complexity of the radiation pattern. Acoustically small sound sources like the trombone bell can already be interpolated relatively precisely with 32 measuring points. With a median difference between interpolation and reference of <0.3 for the spline and the triangulation approaches as well as for the SH interpolation with N={5,6,7}, more than 70% of the sound power is, on average, radiated in exactly the right direction when using the interpolated directivities.
For extended sources with more complex radiation patterns, however, the farfield directivities can be increasingly poorly estimated with a sparse sampling grid.
Since the spatial frequencies of acoustic sound sources can generally be assumed not to be band-limited, an SH decomposition based on a finite number of microphones on the surface of a sphere allows a correct reproduction of this function only up to a cutoff frequency given by kr<N, where k denotes the wave number and r the radius of the microphone array. Above this frequency, errors occur due to spatial aliasing [50] (depending on the SH order of the function and the sampling grid) and series truncation [39]. The almost equally distributed Q=32 points of the applied sampling grid support a maximum SH order of N=4 when solving the linear system of equations through a best fit in the least-squares sense (Eq. 6). Taking into account the measuring distance of 2 m between the microphone and the sound source, the reference directivity can be correctly reconstructed without aliasing and truncation errors only up to a cutoff frequency of f_{c}≈108 Hz. All partial tones investigated in this study, and all musical frequency components in general, lie above this frequency. Closest to it is the fundamental of the bassoon at f_{0}=156 Hz, which can be reconstructed most accurately with SH order N=4 and a global error of Ψ=0.03. In this case, the interpolation error increases with the SH order because the system of equations becomes increasingly underdetermined. As a consequence, the minimization of the wave-spectral power associated with the Moore-Penrose inversion of the SH transformation (Eq. 7) entails an increasingly poor interpolation between the sampling points.
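The cutoff frequency quoted here follows directly from the condition kr<N with k=2πf/c; a minimal sketch, assuming a speed of sound of c = 343 m/s:

```python
import math

def sh_cutoff_frequency(order_n, radius_m, c=343.0):
    """Largest frequency for which kr < N still holds, i.e. up to which
    an order-N SH representation at radius r is free of aliasing and
    truncation errors (k = 2*pi*f / c)."""
    return order_n * c / (2.0 * math.pi * radius_m)

# N = 4 supported by the 32-point grid, 2 m measuring distance:
f_c = sh_cutoff_frequency(4, 2.0)  # ~109 Hz, consistent with f_c ≈ 108 Hz
```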
Radiation patterns for frequencies well above the cutoff frequency of the microphone array, such as the second overtone of the bassoon at f_{2}=468 Hz, not only show larger errors in general, but can also yield smaller errors for orders N≥5. In these cases, the minimization of the wave-spectral power between the 32 sample points (cf. Fig. 3) can lead to smaller errors due to a better trade-off between truncation and aliasing errors. Figure 6 shows the global error for all six investigated directivities of the bassoon across frequency.
At this point it seems worthwhile to take a closer look at the chosen sampling grid. An analysis showed that an exact reproduction of the 32 magnitude values at the sampling points can only be achieved with SH orders of N≥6 for all radiation patterns investigated in this study and contained in [12]. Finding the best SH order for interpolating sparsely sampled directivities can thus also be interpreted as optimizing the trade-off between the desired exact reproduction of the magnitude values at the sampling points and the undesired minimization of the wave-spectral energy, which causes notches between the sampling points and increases with increasing SH order. For the selected sampling grid, the optimum appears on average at an SH order of N=7.
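The order trade-off discussed above can be reproduced with a least-squares SH fit followed by evaluation on a dense grid. A minimal illustration, assuming the complex SH basis and SciPy's `sph_harm` convention (azimuth first, then colatitude); this is a sketch, not the paper's actual implementation of Eqs. 6 and 7:

```python
import numpy as np
from scipy.special import sph_harm

def sh_interp(azi, col, values, azi_out, col_out, order_n):
    """Least-squares SH fit of sampled magnitudes and evaluation on a
    target grid. The Moore-Penrose pseudoinverse yields the minimum-norm
    coefficient vector when the system is underdetermined, i.e. the
    minimization of wave-spectral power discussed in the text.
    All angles in radians; col denotes colatitude."""
    def basis(a, c):
        return np.stack([sph_harm(m, n, a, c)
                         for n in range(order_n + 1)
                         for m in range(-n, n + 1)], axis=-1)
    coeffs = np.linalg.pinv(basis(np.asarray(azi), np.asarray(col))) @ values
    return np.real(basis(np.asarray(azi_out), np.asarray(col_out)) @ coeffs)
```

Sweeping `order_n` for a fixed sparse grid reproduces the qualitative behavior described in the text: low orders truncate, high orders leave the system underdetermined.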
The triangular and spline interpolations, however, both show not only smaller median errors but also smaller 25th and 75th percentiles than the SH interpolation. Since the spline technique was applied without regularization (λ=0), the 32 sample points are always reconstructed exactly regardless of the order M. The same applies to the triangular interpolation, where all sample points are reproduced exactly as well. Even at low frequencies, where the SH interpolation with N=4 estimates the reference almost physically correctly, the triangular and spline interpolations perform comparably well (Fig. 6). Interestingly, the errors for the spline interpolation increase with increasing order. This may be explained by the fact that splines are piecewise defined functions f that are continuous at the sampling points up to the (M−1)th derivative, i.e., f is C^{(M−1)} smooth. The smoothness of the splines thus increases with increasing order M, whereas the smoothness of the SH representation increases with decreasing order. In both cases, some degree of non-smoothness increases the accuracy of the interpolation for the selected sample grid.
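The exact reproduction at the sample points holds for triangular interpolation because each interpolated value is a convex combination of the three enclosing vertex values. A minimal sketch of one common barycentric variant (the weighting used by the actual AKsphTriInterp() may differ), assuming the query direction lies inside the triangle:

```python
import numpy as np

def tri_interp(p, v0, v1, v2, f):
    """Piecewise linear interpolation of the values f = (f0, f1, f2)
    given at the unit vertex directions v0, v1, v2, for a query
    direction p inside the triangle: solve p = w0*v0 + w1*v1 + w2*v2
    and normalize the weights to sum to one (generalized barycentric
    coordinates). At a vertex, the result equals the vertex value."""
    w = np.linalg.solve(np.column_stack([v0, v1, v2]), p)
    w = w / w.sum()
    return float(w @ np.asarray(f))
```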
Generalization to different sample grids
In the first instance, the results of this study only hold for the selected sample grid with 32 almost equally distributed points. Even though this is likely to be a typical design and close to designs used for other measurements (Section 1), we provide a way to check the error for other designs as well. To find the best interpolation algorithm for a specific sample grid, we provide the Matlab tool SourceInterp.m [51]. It calculates the global error Ψ for all interpolation methods evaluated in this paper, based on the publicly available high-resolution bassoon radiation patterns [52] for the note F3 (f_{0}=175 Hz), and will be extended to other instrument directivities once they are made publicly available.
Comparison to SH interpolation with iterative sign retrieval
As detailed earlier, only the absolute magnitude response was interpolated due to the stochastic nature of the phase information of measured natural sound sources. In the case of SH interpolation, however, this approach increases the required SH order. To counteract this, iterative semidefinite relaxation methods that find a suitable real-valued sign for an absolute-valued radiation pattern prior to SH interpolation were proposed by Zagala and Zotter [19]. To assess the quality of the triangular and spline interpolation with respect to the proposed sign-retrieval algorithm, we replicated the benchmark from [19]. To this end, 50 radiation patterns were created with randomly generated, standard normally distributed real-valued SH coefficients \( f_{{nm}} \in \mathbb {R}\) up to SH order N=3. The absolute (unsigned) values of these radiation patterns were evaluated at Q=64 extremal sampling points (cf. [37]) according to Eq. 3 and used as input for the triangular interpolation, the first-order spline interpolation, and the third-order SH interpolation. Finally, the area-weighted mean square error (MSE) between the analytical reference f and the interpolation result \(\hat {f}\) was calculated for 2522 sampling points of the reference grid used above.
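Assuming the area weights of the evaluation grid are given, the area-weighted MSE can be sketched as follows (the exact normalization used in [19] and in our replication may differ):

```python
import numpy as np

def area_weighted_mse(f_ref, f_hat, area_weights):
    """Mean square error between a reference and an interpolated
    directivity, weighted by the surface area associated with each
    point of the evaluation grid; weights are normalized to sum to
    one so that the result is a weighted average."""
    w = np.asarray(area_weights, dtype=float)
    w = w / w.sum()
    err = np.asarray(f_ref) - np.asarray(f_hat)
    return float(np.sum(w * np.abs(err) ** 2))
```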
The distribution of the MSE across the 50 radiation patterns is shown in Fig. 7.
On average, the spline interpolation and the triangular method reconstruct the random directivity patterns only slightly better than the SDR-based sign retrieval with the common-sign-regions algorithm (median of 0.05 vs. 0.07). However, the dispersion of the error is considerably lower for the spline approach, with a 75th percentile of 0.06 compared to 0.12 for SDR rgn+dbl. It should also be noted that the spline method reconstructs one radiation pattern on average 30 times faster than the SDR rgn+dbl algorithm and 26 times faster than the triangular interpolation (using Matlab on a PC with an Intel Core i5-6400 CPU @ 2.70 GHz and 16 GB RAM, Fig. 7). Note that the triangular interpolation AKsphTriInterp() is already optimized by searching only sublists of triangles extending into the octant of each interpolated coordinate. The potential speed-up factor has an upper limit of 8; in practice, a factor of 6 was reached due to the required control flow and the overlap of the sublists.
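The octant-based search acceleration described for AKsphTriInterp() can be illustrated as follows. This is a simplified sketch: here a triangle is assigned to every octant that one of its vertices touches, so the sublists overlap, which is one reason the speed-up stays below the theoretical factor of 8 (a robust implementation must additionally handle triangles crossing an octant without a vertex inside it):

```python
from collections import defaultdict

def octant(v):
    """Octant index 0..7 from the signs of a direction vector's
    components (x sign contributes 4, y sign 2, z sign 1)."""
    return (v[0] < 0) * 4 + (v[1] < 0) * 2 + (v[2] < 0)

def build_octant_sublists(triangles):
    """Map each octant to the indices of all triangles that have at
    least one vertex in it; a point-in-triangle search then only
    scans the sublist belonging to the query direction's octant."""
    sublists = defaultdict(list)
    for i, (a, b, c) in enumerate(triangles):
        for o in {octant(a), octant(b), octant(c)}:
            sublists[o].append(i)
    return sublists
```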
Combination with models for the phase response
Time-domain acoustical simulations and low-order SH decompositions may benefit from complex-valued directivities with both magnitude and phase information. In this regard, the interpolation methods discussed up to this point might not yet be optimal. The unsigned (zero-phase) SH interpolation causes acausal impulse responses that are symmetrical around t=0, and the sign-retrieval algorithm generates a discontinuous phase response that might also deteriorate the corresponding time signals. In addition, because each frequency is treated separately, it remains to be clarified based on which criteria the phase responses should be smoothed across frequencies. A direct integration of a numerically derived absolute-valued directivity into time-domain simulations such as the finite difference time domain (FDTD) method was presented by Ahrens and Bilbao using an analytical point-source phase response and SH interpolation [49]. However, our results suggest that spline interpolation is more suitable for interpolating sparsely sampled magnitude responses regardless of the phase. Therefore, for interpolating sparsely resolved directivity measurements of musical instruments for use in time-domain simulations, we recommend first interpolating the magnitude response to a high-resolution grid using first-order splines, followed by a subsequent SH expansion according to [49]. Note, however, that a physically correct extrapolation of the sound field within the near field of the sound source is not possible even with this approach, despite the subsequently added phase information.
Generalization to different musical instruments
The results of the current study strictly apply only to the four instruments studied and cannot simply be generalized to all musical instruments or acoustic sound sources in general. Considering the instruments of the classical orchestra, however, all fundamental radiation mechanisms with stationary excitation are represented: the vibrating soundboard (violin), the piston-like air vibration in the bell of brass instruments (trombone), the complex multi-source radiation of woodwind instruments (bassoon), and the air-jet excitation of flute instruments (flue pipe); percussion instruments are not included.
It is noticeable that piecewise interpolation methods such as the spherical spline and the spherical triangular interpolation seem to provide a better reconstruction of complex radiation patterns in cases where the sampling theorem (kr<N) is not met. This behavior is largely consistent for the 27 patterns analyzed, all of which lie at frequencies violating the spatial sampling theorem and represent different source types and different degrees of complexity.
We thus expect that a similar result would also be obtained for directional characteristics measured in the presence of a player. Players, as reflecting or diffracting objects next to the instrument, have an influence on the directivity of a musical instrument [11]. For largely omnidirectional sources, they make the radiation pattern more complex (although in specific cases one could imagine it being smoothed); for highly directional sources such as the trombone at high frequencies, the influence will be negligible. We thus see no reason why interpolation methods for radiation patterns with players should behave fundamentally differently from those without players observed in the current study.
Conclusion
The performance of different interpolation techniques applied to sparsely sampled directivity measurements of acoustic sound sources depends on the sampling grid used but also on the radiation pattern of the sources themselves. Therefore, we evaluated three established approaches for interpolation from a low-resolution sampling grid using high-resolution measurements of a representative sample of musical instruments as a reference. The smallest global error on average occurs for thin plate pseudo-spline interpolation, with order 1 performing slightly better than orders 2 and 3. For interpolation based on spherical harmonics (SH) decomposition, the SH order and the spatial sampling scheme applied have a strong influence on the quality of the interpolation that is difficult to predict in individual cases. The piecewise linear, spherical triangular interpolation provides almost as good results as the first-order spline approach, albeit with on average 20 times higher computational effort. Therefore, for the spatial interpolation of sparsely sampled directivity measurements of musical instruments, the thin plate pseudo-spline method with order M=1 applied to absolute-valued data is recommended, followed, if necessary, by a subsequent modeling of the phase.
Availability of data and materials
The complete data sets used for the current study are available from the corresponding author on request. The raw data of the directivity measurements can also be provided in higher resolutions on request, please contact MK.
Notes
1. In ([40], p. 14, Theorem 2), the formula for determining c and d is printed with a sign error.
Abbreviations
SH: Spherical harmonics
RIR: Room impulse response
BEM: Boundary element method
FEM: Finite element method
MSE: Mean squared error
CAGD: Computer-aided geometric design
VBAP: Vector-based amplitude panning
RMS: Root mean square
HRTF: Head-related transfer function
TrI: Triangular interpolation
SDR: Semidefinite relaxation
FDTD: Finite difference time domain
References
1. H. K. Dunn, D. W. Farnsworth, Exploration of pressure field around the human head during speech. J. Acoust. Soc. Am. 10(3), 184–199 (1939). https://doi.org/10.1121/1.1915975
2. J. Meyer, Acoustics and the Performance of Music (Springer, New York, 2009)
3. W. T. Chu, A. C. C. Warnock, Detailed directivity of sound fields around human talkers. Tech. Rep. RR-104, National Research Council of Canada (2002). https://doi.org/10.4224/20378930
4. D. Cabrera, P. J. Davis, A. Connolly, in Proceedings of the 19th International Congress on Acoustics. Vocal directivity of eight opera singers in terms of spectro-spatial parameters (Madrid, 2007)
5. B. B. Monson, E. J. Hunter, Horizontal directivity of low- and high-frequency energy in speech and singing. J. Acoust. Soc. Am. 132(1), 433–441 (2012)
6. B. Katz, C. d’Alessandro, in Proceedings of the 19th International Congress on Acoustics. Directivity measurements of the singing voice (Madrid, 2007)
7. O. Abe, Sound radiation of singing voices. PhD thesis, Universität Hamburg (2019)
8. F. Hohl, Kugelmikrofonarray zur Abstrahlungsvermessung von Musikinstrumenten. Master’s thesis, Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, Austria (2009)
9. J. Pätynen, T. Lokki, Directivities of symphony orchestra instruments. Acta Acustica u. Acustica 96(1), 138–167 (2010). https://doi.org/10.3813/AAA.918265
10. S. D. Bellows, K. J. Bodon, T. W. Leishman, Violin directivity (2020). https://scholarsarchive.byu.edu/directivity/15/. Accessed 12 Apr 2021
11. N. R. Shabtai, G. Behler, M. Vorländer, S. Weinzierl, Generation and analysis of an acoustic radiation pattern database for forty-one musical instruments. J. Acoust. Soc. Am. 141(2), 1246–1256 (2017). https://doi.org/10.1121/1.4976071
12. S. Weinzierl, M. Vorländer, G. Behler, F. Brinkmann, H. v. Coler, E. Detzner, J. Krämer, A. Lindau, M. Pollow, F. Schulz, N. R. Shabtai, A database of anechoic microphone array measurements of musical instruments (2017). https://doi.org/10.14279/depositonce-5861.2
13. J. Klein, M. Vorländer, in Proceedings of the EAA Spatial Audio Signal Processing Symposium, Paris. Simulative investigation of required spatial source resolution in directional room impulse response measurements (2019), pp. 37–42. https://doi.org/10.25836/SASP.2019.24
14. M. Frank, M. Brandner, in Proceedings of Fortschritte der Akustik – DAGA 2019. Perceptual evaluation of spatial resolution in directivity patterns (DEGA, Rostock, 2019), pp. 74–77
15. S. Bilbao, B. Hamilton, Directional sources in wave-based acoustic simulation. IEEE/ACM Trans. Audio Speech Lang. Process. 27(2), 415–428 (2019). https://doi.org/10.1109/TASLP.2018.2881336
16. L. Savioja, U. P. Svensson, Overview of geometrical room acoustic modeling techniques. J. Acoust. Soc. Am. 138(2), 708–730 (2015)
17. D. Schröder, M. Vorländer, in Proceedings of Forum Acusticum. RAVEN: A real-time framework for the auralization of interactive virtual environments (Aalborg, 2011), pp. 1541–1546
18. F. Brinkmann, L. Aspöck, D. Ackermann, S. Lepa, M. Vorländer, S. Weinzierl, A round robin on room acoustical simulation and auralization. J. Acoust. Soc. Am. 145(4), 2746–2760 (2019). https://doi.org/10.1121/1.5096178
19. F. Zagala, F. Zotter, in Proceedings of Fortschritte der Akustik – DAGA 2019. Idea for sign-change retrieval in magnitude directivity patterns (DEGA, Rostock, 2019), pp. 1430–1433
20. J. Ahrens, S. Bilbao, Computation of spherical harmonic representations of source directivity based on the finite-distance signature. IEEE/ACM Trans. Audio Speech Lang. Process. 29, 83–92 (2021). https://doi.org/10.1109/TASLP.2020.3037471
21. J. M. Arend, F. Brinkmann, C. Pörschmann, Assessing Spherical Harmonics Interpolation of Time-Aligned Head-Related Transfer Functions. J. Audio Eng. Soc. 69(1/2), 104–117 (2021). https://doi.org/10.17743/jaes.2020.0070
22. F. Zotter, M. Frank, Ambisonics: A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality (Springer, Cham, Switzerland, 2019)
23. C. Schörkhuber, M. Zaunschirm, R. Höldrich, in Proceedings of Fortschritte der Akustik – DAGA 2018. Binaural rendering of Ambisonics signals via magnitude least squares (DEGA, Munich, 2018), pp. 339–342
24. J. Li, A. D. Heap, A Review of Spatial Interpolation Methods for Environmental Scientists (Geoscience Australia, Canberra, 2008)
25. J. Duchon, Interpolation des fonctions de deux variables suivant le principe de la flexion des plaques minces. Rev. Fr. d’Automatique, Informatique, Recherche Opérationnelle. Anal. Numérique 10(R3), 5–12 (1976)
26. D. G. Krige, A statistical approach to some basic mine valuation problems on the Witwatersrand. J. South. Afr. Inst. Min. Metall. 52(6), 119–139 (1951)
27. P. Alfeld, M. Neamtu, L. L. Schumaker, Bernstein-Bézier polynomials on spheres and sphere-like surfaces. Comput. Aided Geom. Des. 13(4), 333–349 (1996). https://doi.org/10.1016/0167-8396(95)00030-5
28. G. Weinreich, E. B. Arnold, Method for measuring acoustic radiation fields. J. Acoust. Soc. Am. 68(2), 404–411 (1980). https://doi.org/10.1121/1.384751
29. F. Zotter, Analysis and synthesis of sound-radiation with spherical arrays. PhD thesis, University of Music and Performing Arts, Graz (2009)
30. M. Pollow, Directivity patterns for room acoustical measurements and simulations. PhD thesis, RWTH Aachen (2014)
31. M. Noisternig, F. Zotter, B. F. Katz, in Principles and Applications of Spatial Hearing, ed. by Y. Suzuki, D. S. Brungart, H. Kato. Reconstructing sound source directivity in virtual acoustic environments (World Scientific Publishing, Singapore, 2011), pp. 357–372
32. S. Weinzierl, M. Vorländer, Room acoustical parameters as predictors of room acoustical impression: What do we know and what would we like to know? Acoust. Aust. 43(1), 41–48 (2015)
33. K. Hartung, J. Braasch, S. J. Steinberg, in Proceedings of the AES 16th International Conference on Spatial Sound Reproduction. Comparison of different methods for the interpolation of head-related transfer functions (Audio Engineering Society, Rovaniemi, 1999), pp. 319–329
34. G. Simpson, Y. H. Wu, Accuracy and effort of interpolation and sampling: Can GIS help lower field costs? ISPRS Int. J. Geo-Inf. 3(4), 1317–1333 (2014). https://doi.org/10.3390/ijgi3041317
35. V. Pulkki, Virtual sound source positioning using vector base amplitude panning. J. Audio Eng. Soc. 45(6), 456–466 (1997)
36. B. Rafaely, Fundamentals of Spherical Array Processing (Springer, Berlin, Heidelberg, 2015). https://doi.org/10.1007/978-3-662-45664-4
37. I. H. Sloan, R. S. Womersley, Extremal systems of points and numerical integration on the sphere. Adv. Comput. Math. 21(1), 107–125 (2004). https://doi.org/10.1023/B:ACOM.0000016428.25905.da
38. B. Rafaely, B. Weiss, E. Bachmat, Spatial aliasing in spherical microphone arrays. IEEE Trans. Signal Process. 55(3), 1003–1010 (2007). https://doi.org/10.1109/TSP.2006.888896
39. Z. Ben-Hur, D. L. Alon, B. Rafaely, R. Mehra, Loudness stability of binaural sound with spherical harmonic representation of sparse head-related transfer functions. EURASIP J. Audio Speech Music Process. 2019(1) (2019). https://doi.org/10.1186/s13636-019-0148-x
40. G. Wahba, Spline interpolation and smoothing on the sphere. SIAM J. Sci. Stat. Comput. 2(1), 5–16 (1981). https://doi.org/10.1137/0902002
41. G. Wahba, Erratum: Spline interpolation and smoothing on the sphere. SIAM J. Sci. Stat. Comput. 3(3), 385–386 (1982). https://doi.org/10.1137/0903024
42. C. B. Barber, D. P. Dobkin, H. Huhdanpaa, The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22, 469–483 (1996). https://doi.org/10.1145/235815.235821
43. M. Gräf, D. Potts, On the computation of spherical designs by a new optimization approach based on fast spherical Fourier transforms. Numer. Math. 119(Dec.), 699–724 (2011)
44. M. Kob, A. Baskind, in Proceedings of the International Symposium on Music Acoustics 2019 – ISMA 2019. Impact of free field inhomogeneity on directivity measurements due to the measurement setup (Detmold, 2019), pp. 62–67
45. N. Filipo, T. Grothe, M. Kob, in Proceedings of Fortschritte der Akustik – DAGA 2019. Investigation on the directivity of string instruments using a bowing machine (DEGA, Rostock, 2019), pp. 334–337
46. M. Kob, Influence of wall vibrations on the transient sound of a flue organ pipe. Acta Acustica u. Acustica 86(4), 642–648 (2000)
47. T. Grothe, M. Kob, in Proceedings of the International Symposium on Music Acoustics 2019 – ISMA 2019. High resolution 3D radiation measurements on the bassoon (Detmold, 2019), pp. 139–145
48. F. Brinkmann, S. Weinzierl, in Proceedings of the Audio Engineering Society Convention 142. AKtools – an open software toolbox for signal acquisition, processing, and inspection in acoustics (Audio Engineering Society, Berlin, 2017)
49. J. Ahrens, S. Bilbao, in Proceedings of Forum Acusticum. Computation of spherical harmonics based sound source directivity models from sparse measurement data (Lyon, 2020)
50. B. Rafaely, Analysis and design of spherical microphone arrays. IEEE Trans. Speech Audio Process. 13(1), 135–143 (2005). https://doi.org/10.1109/TSA.2004.839244
51. D. Ackermann, F. Brinkmann, S. Weinzierl, SourceInterp – a Matlab tool for determining the quality of spatial interpolation methods for natural sound sources (2021). https://doi.org/10.14279/depositonce-12436
52. T. Grothe, M. Kob, Bassoon directivity data (2020). http://nbn-resolving.org/urn:nbn:de:hbz:575-opus4-971. Accessed 12 Apr 2021
Acknowledgements
The authors would like to thank Timo Grothe for providing the data set of the bassoon directivities.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Affiliations
Contributions
DA and FB: conceptualization, methodology, formal analysis, data curation, writing – original draft, visualization. FZ: methodology, formal analysis, data curation, writing – original draft. MK: methodology, data curation. SW: conceptualization, writing – review and editing, supervision. The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ackermann, D., Brinkmann, F., Zotter, F. et al. Comparative evaluation of interpolation methods for the directivity of musical instruments. J AUDIO SPEECH MUSIC PROC. 2021, 36 (2021). https://doi.org/10.1186/s13636-021-00223-6
Received:
Accepted:
Published:
Keywords
 Source directivity
 Spatial interpolation
 Musical instruments