Combined perception and control for timing in robotic music performances
EURASIP Journal on Audio, Speech, and Music Processing volume 2012, Article number: 8 (2012)
Abstract
Interaction with human musicians is a challenging task for robots as it involves online perception and precise synchronization. In this paper, we present a consistent and theoretically sound framework for combining perception and control for accurate musical timing. For the perception, we develop a hierarchical hidden Markov model that combines event detection and tempo tracking. The robot performance is formulated as a linear quadratic control problem that is able to generate a surprisingly complex timing behavior in adapting the tempo. We provide results with both simulated and real data. In our experiments, a simple Lego robot percussionist accompanied the music by detecting the tempo and position of clave patterns in the polyphonic music. The robot successfully synchronized itself with the music by quickly adapting to the changes in the tempo.
1 Introduction
With the advances in computing power and accurate sensor technologies, increasingly challenging tasks in human-machine interaction can be addressed, often with impressive results. In this context, programming robots that engage in music performance via real-time interaction has remained one of the challenging problems in the field. Yet, robotic performance is often criticized for being too mechanical and robotic [1]. In this paper, we therefore focus on a methodology that would enable robots to participate in natural musical performances by mimicking what humans do.
Human-like musical interaction has roughly two main components: a perception module that senses what other musicians do and a control module that generates the necessary commands to steer the actuators. Yet, in contrast to many robotic tasks in the real world, musical performance has very tight real-time requirements. The robot needs to be able to adapt and synchronize well with the tempo, dynamics, and rhythmic feel of the performer, and this needs to be achieved within hard real-time constraints. Unlike repetitive and dull tasks, such expressive aspects of musical performance are hard to formalize and realize on real robots. The existence of humans in the loop makes the task more challenging, as a human performer can often be surprisingly unpredictable, even on seemingly simple musical material. In such scenarios, highly adaptive solutions that combine perception and control in an effective manner are needed.
Our goal in this paper is to illustrate the coupling of perception and control modules in music accompaniment systems and to reveal that even with the most basic hardware, it is possible to carry out this complex task in real time.
In the past, several impressive demonstrations of robotic performers have been displayed; see Kapur [2] for a recent survey. Improvements in the fields of human-computer interaction and interactive computer music systems have pushed robotic performers to listen and respond to human musicians in a realistic manner. The main requirement for such an interaction is a tempo/beat tracker, which should run in real time and enable the robot to synchronize well with the music.
As a pioneering work, Goto and Muraoka [3] presented real-time beat tracking for audio signals without drums. Influenced by the idea that an untrained listener can track the musical beats without knowing the names of the chords or the notes being played, they based their method on detecting chord changes. The method performed well on popular music; however, it is hard to improve or adapt the algorithm for a specific domain since it was built on top of many heuristics. Another interesting work on beat tracking was presented by Kim et al. [4], where the proposed method estimates the tempo of rhythmic motions (like dancing or marching) from visual input. They first capture the 'motion beats' from sample motions in order to capture the transition structure of the movements. Then, a new rhythmic motion, synchronized with the background music, is synthesized using this movement transition information.
An example of an interactive robot musician was presented by Kim et al. [5], where a humanoid robot accompanied the playing music. In the proposed method, both audio and visual information are used to track the tempo of the music. In the audio processing part, an autocorrelation method is employed to determine the periodicity in the audio signal, and a corresponding tempo value is then estimated. Simultaneously, the robot visually tracks the movements of a conductor and makes another tempo estimate [6]. Finally, the results of these two modules are merged according to their confidences and supplied to the robot musician. However, this approach lacks an explicit feedback mechanism to handle the synchronization between the robot and the music.
In this paper, rather than focusing on a particular piece of custom-built hardware, we will focus on a deliberately simple design, namely a Lego robot percussionist. The goal of our percussionist is to follow the tempo of a human performer and generate a pattern to play in sync with the performer. A generic solution to this task, while obviously simpler than that for an acoustic instrument, captures some of the central aspects of robotic performance, namely:

Uncertainties in human expressive performance

Superposition: sounds generated by the human performer and the robot are mixed

Imperfect perception

Delays due to the communication and processing of sensory data

Unreliable actuators and hardware: noise in robot controls often causes the actual output to differ from the desired one.
Our ultimate aim is to achieve an acceptable level of synchronization between the robot and a human performer, as can be measured via objective criteria that correlate well with human perception. Our novel contribution here is the combination of perception and control in a consistent and theoretically sound framework.
For the perception module, we develop a hierarchical hidden Markov model (a change-point model) that combines event detection and tempo tracking. This module combines the template matching model proposed by Şimşekli and Cemgil [7] and the tempo tracking model by Whiteley et al. [8] for event detection in sound mixtures. This approach is attractive as it enables separating sounds generated by the robot or a specific instrument of the human performer (clave, hi-hat) in a supervised and online manner.
The control model assumes that the perception module provides information about the human performer in terms of an observation vector (a bar position/tempo pair) and an associated uncertainty, possibly specified by a covariance matrix. The controller combines the observation with the robot's state vector (here, an angular-position/angular-velocity pair) and generates an optimal control signal in terms of minimizing a cost function that penalizes a mismatch between the "positions" of the robot and the human performer. Here, the term position refers to the score position, to be defined later. While arguably more realistic and musically more meaningful cost functions could be contemplated, in this paper we constrain the cost to be quadratic to keep the controller linear.
A conceptually similar approach to ours was presented by Yoshii et al. [9], where a robot synchronizes its steps with the music using real-time beat tracking and a simple control algorithm. The authors use a multi-agent strategy for real-time beat tracking, where several agents monitor chord changes and drum patterns and propose their hypotheses; the most reliable hypothesis is selected. While the robot keeps stepping, the step intervals are sent as control signals from a motion controller. The controller calculates the step intervals in order to adjust and synchronize the robot's stepping tempo with the beat timing. Similarly to this work, Murata et al. [10] use the same robotic platform and controller with an improved beat-tracking algorithm that uses a spectro-temporal pattern matching technique and echo cancelation. Their tracking algorithm deals better with environmental noise and responds faster to tempo changes. However, the proposed controller only synchronizes the beat times without considering which beat it is. This is the major limitation of these systems, since it may allow phase shifts in beats when synchronizing a whole musical piece with the robot.
Our approach to tempo tracking is also similar to the musical accompaniment systems developed by Dannenberg [11], Orio [12], Cemgil and Kappen [13], Raphael [14], yet it has two notable novelties. The first one is a novel hierarchical model for accurate online tempo estimation that can be tuned to specific events, while not assuming the presence of a particular score. This enables us to use the system in a natural setting where the sounds generated by the robot and the other performers are mixed. This is in contrast to existing approaches where the accompaniment only tracks a target performer while not listening to what it plays. The second novelty is the controller component, where we formulate the robot performance as a linear quadratic control problem. This approach requires only a handful of parameters and seems to be particularly effective for generating realistic and humanlike expressive musical performances, while being fairly straightforward to implement.
The paper is organized as follows. In the sequel, we elaborate on the perception module for robustly inferring the tempo and the beat from polyphonic audio. Here, we describe a hierarchical hidden Markov model. Section 3 briefly introduces the theory of optimal linear quadratic control and describes the robot performance in this framework. Section 4 describes simulation results. Section 5 describes experiments with our simple Lego robot system. Finally, Section 6 presents the conclusions, along with some directions for further research.
2 The perception model
In this study, the aim of the perception model is to jointly infer the tempo and the beat position (score position) of a human performer from streaming polyphonic audio data in an online fashion. Here, we assume that the observed audio includes a certain instrument that carries the tempo information, such as a hi-hat or a bass drum. We assume that this particular instrument is known beforehand. The audio can include other instrument sounds, including the sound of the percussion instrument that the robot plays.
As the scenario in this paper, we assume that the performer is playing a clave pattern. The claves is the name for both a wooden percussive instrument and a rhythmic pattern that organizes the temporal structure and forms the rhythmic backbone in Afro-Cuban music. Note that this is just an example, and our framework can easily be used to track other instruments and/or rhythmic patterns in a polyphonic mixture.
In the sequel, we will construct a probabilistic generative model which relates latent quantities, such as acoustic event labels, tempi, and beat positions, to the actual audio recording. This model is an extension that combines ideas from existing probabilistic models: the bar pointer model proposed by Whiteley et al. [8] for tempo and beat position tracking and an acoustic event detection and tracking model proposed by Şimşekli and Cemgil [7].
In the following subsections, we explain the probabilistic generative model and the associated training algorithm. The main novelty of the current model is that it integrates tempo tracking with minimum delay online event detection in polyphonic textures.
2.1 Tempo and acoustic event model
In [8], Whiteley et al. presented a probabilistic "bar pointer model", which models one period of a hidden rhythmical pattern in music. In this model, one period of a rhythmical pattern (i.e., one bar) is uniformly divided into M discrete points, the so-called "position" variables, and a "velocity" variable is defined on a state space of N elements, which describes the temporal evolution of these position variables. In the bar pointer model, we have the following property:

m_{τ+1} = (m_{τ} + ⌊f(n_{τ})⌋) mod M    (1)

Here, m_{τ} ∈ {0, . . . , M − 1} are the position variables, n_{τ} ∈ {1, . . . , N} are the velocity variables, f(·) is a mapping between the velocity variables n_{τ} and some real numbers, ⌊·⌋ is the floor operator, and τ denotes the time frame index. To be more precise, m_{τ} indicates the position of the music in a bar and n_{τ} determines how fast m_{τ} evolves in time. This evolution is deterministic, or can be seen as probabilistic with a degenerate probability distribution. The velocity variables n_{τ} are directly proportional to the tempo of the music and have the following Markovian prior, for 1 < n_{τ} < N:

p(n_{τ+1} | n_{τ}) = 1 − p_{n}   if n_{τ+1} = n_{τ},
p(n_{τ+1} | n_{τ}) = p_{n}/2     if n_{τ+1} = n_{τ} ± 1,    (2)

where p_{n} is the probability of a change in velocity. When the velocity is at the boundaries, in other words if n_{τ} = 1 or n_{τ} = N, the velocity stays the same with probability 1 − p_{n}, or transitions respectively to n_{τ+1} = 2 or n_{τ+1} = N − 1 with probability p_{n}. The modulo operator reflects the periodic nature of the model and ensures that the position variables stay in the set {0, . . . , M − 1}.
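As a minimal sketch, the dynamics of Equations 1 and 2 can be simulated as follows; the linear tempo mapping f(n) = n and all names are our illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def bar_pointer_step(m, n, M, N, p_n, rng):
    """One step of the bar pointer dynamics: deterministic position
    update (Equation 1) and a random-walk velocity prior (Equation 2)."""
    # Position: advance by f(n) and wrap around the bar (f(n) = n assumed).
    m_next = (m + int(np.floor(n))) % M
    # Velocity: change with probability p_n, respecting the boundaries.
    if rng.random() < p_n:
        if n == 1:
            n_next = 2
        elif n == N:
            n_next = N - 1
        else:
            n_next = n + rng.choice([-1, 1])
    else:
        n_next = n
    return m_next, n_next
```

With M = 640 and N = 35, as in the experiments below, a simulated trajectory stays on the {0, . . . , M − 1} grid while the velocity performs a slow random walk.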
In order to track a clave pattern from a sound mixture, we extend the bar pointer model by adding a new acoustic event variable. For each time frame τ, we define an indicator variable r_{τ} on a discrete state space of R elements, which determines the acoustic event label we are interested in. In our case, this state space may consist of event labels such as {claves hit, bongo hit, . . . , silence}. Since we are dealing with clave patterns, we can assume that the rhythmic structure of the percussive sound is constant, as the clave is usually repeated over the whole musical piece [15]. With this assumption, we arrive at a transition model for r_{τ} in which, for simplicity, r_{τ} = 1 denotes a claves hit: the claves-hit state can be entered only at bar positions where the pattern contains an onset, and a claves hit cannot immediately follow another claves hit. Here, m_{τ} is defined as in Equation 1, and μ(·) is a Boolean function with μ(m) = 1 if the bar position m coincides with an onset of the clave pattern and μ(m) = 0 otherwise.
Essentially, this transition model assumes that the claves hits can only occur on the beat positions, which are defined by the clave pattern. A similar idea for clave modeling was also proposed in Wright et al. [16].
By eliminating the self-transition of the claves hits, we prevent the "double detection" of a claves hit (i.e., detecting multiple claves hits in a very short amount of time). Figure 1 shows the son clave pattern, and Figure 2 illustrates the state transitions of the tempo and acoustic event model for the son clave. In the figure, the shaded nodes indicate the positions where the claves hits can happen.
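For concreteness, the onset indicator μ(·) for the son clave can be sketched as follows; the 16-step grid and the onset positions {0, 3, 6, 10, 12} are our reading of the pattern, not values taken from the paper:

```python
# Son (3-2) clave onsets on a 16-step grid spanning one pattern period.
SON_CLAVE_STEPS = {0, 3, 6, 10, 12}

def mu(m, M, n_grid=16, steps=SON_CLAVE_STEPS):
    """Boolean function: True iff bar position m (in {0, ..., M-1})
    falls exactly on a son clave onset."""
    step_size = M // n_grid            # e.g., 640 // 16 = 40
    if m % step_size != 0:             # not on the sixteenth-note grid
        return False
    return (m // step_size) in steps
```

With M = 640, the claves-hit state would then be reachable only at m ∈ {0, 120, 240, 400, 480}.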
Note that, in the original bar pointer model definition, there are also other variables such as the meter indicator and the rhythmic pattern indicator variables, which we do not use in our generative model.
2.2 Signal model
Şimşekli and Cemgil presented two probabilistic models for acoustic event tracking in [7] and demonstrated that these models are sufficiently powerful to track different kinds of acoustic events, such as pitch labels [7, 17, 18] and percussive sound events [19]. In our signal model, we use the same idea that was presented in the acoustic event tracking model [7]. Here, the audio signal is subdivided into frames and represented by their magnitude spectra, which are calculated with the discrete Fourier transform. We define x_{ν,τ} as the magnitude spectrum of the audio data with frequency index ν and time frame index τ, where ν ∈ {1, 2, . . . , F} and τ ∈ {1, 2, . . . , T}.
The main idea of the signal model is that each acoustic event (indicated by r_{τ}) has a certain characteristic spectral shape, which is rendered by a specific hidden volume variable, v_{τ}. The spectral shapes, so-called spectral templates, are denoted by t_{ν,i}. The index ν is again the frequency index, and the index i indicates the event label. Here, i takes values between 1 and R, where R has been defined as the number of different acoustic events. The volume variable v_{τ} defines the overall amplitude factor by which the whole template is multiplied.
By combining the tempo and acoustic event model and the signal model, we define our hybrid perception model as follows:

m_{τ} | m_{τ−1}, n_{τ−1} : deterministic, as in Equation 1
n_{τ} | n_{τ−1} ~ p(n_{τ} | n_{τ−1})
r_{τ} | r_{τ−1}, m_{τ} ~ p(r_{τ} | r_{τ−1}, m_{τ})
v_{τ} | r_{τ} ~ \mathcal{G}(v_{τ}; a, b)
x_{ν,τ} | r_{τ}, v_{τ} ~ \mathcal{P}\mathcal{O}(x_{ν,τ}; t_{ν,r_{τ}} v_{τ})

where, again, m_{τ} indicates the position in a bar, n_{τ} indicates the velocity, r_{τ} are the event labels (i.e., r_{τ} = 1 indicates a claves hit), v_{τ} is the volume of the played template, t_{ν,i} are the spectral templates, and finally, x_{ν,τ} are the observed audio spectra. Besides, the prior distributions p(n_{τ} | ·) and p(r_{τ} | ·) are defined in Equations 2 and 3, respectively. [x] is the indicator function, where [x] = 1 if x is true and [x] = 0 otherwise, and the symbols \mathcal{G} and \mathcal{P}\mathcal{O} represent the Gamma and the Poisson distributions, respectively, where

\mathcal{G}(v; a, b) = (b^{a} / Γ(a)) v^{a−1} exp(−b v),
\mathcal{P}\mathcal{O}(x; λ) = λ^{x} exp(−λ) / x!,

where Γ is the Gamma function. Figure 3 shows the graphical model of the perception model. In the graphical model, the nodes correspond to probability distributions of model variables and the edges to their conditional dependencies. The joint distribution can be rewritten by making use of the directed acyclic graph:

p(x_{1:F,1:T}, v_{1:T}, r_{1:T}, n_{1:T}, m_{1:T}) = ∏_{χ} p(χ | pa(χ)),

where pa(χ) denotes the parent nodes of χ.
The Poisson model is chosen to mimic the behavior of popular NMF models that use the KL divergence as the error metric when fitting a model to a spectrogram [20, 21]. We also choose Gamma prior on v_{ τ } to preserve conjugacy and make use of the scaling property of the Gamma distribution.
An attractive property of the current model is that we can analytically integrate out the volume variables, v_{τ}. Hence, given that the templates t_{ν,i} are already known, the model reduces to a standard hidden Markov model with a Compound Poisson observation model and a latent state space of D_{n} × D_{m} × D_{r}, where × denotes the Cartesian product and D_{n}, D_{m}, and D_{r} are the state spaces of the discrete variables n_{τ}, m_{τ}, and r_{τ}, respectively. The Compound Poisson model is defined as follows (see Şimşekli [17] for details):

p(x_{1:F,τ} | m_{τ}, n_{τ}, r_{τ}) = ∫ \mathcal{G}(v_{τ}; a, b) ∏_{ν} \mathcal{P}\mathcal{O}(x_{ν,τ}; t_{ν,r_{τ}} v_{τ}) dv_{τ}.
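Assuming a Gamma prior with shape a and rate b (our parameterization), this integral has a closed form that can be evaluated in the log domain; a sketch:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(x, t, a, b):
    """log p(x_{1:F} | r = i) with the volume v integrated out.

    x: observed magnitude spectrum frame, shape (F,)
    t: spectral template of event i, shape (F,)
    Assumes v ~ Gamma(shape=a, rate=b) and x_v | v ~ Poisson(t_v * v).
    """
    X, T = x.sum(), t.sum()
    return (np.sum(x * np.log(t) - gammaln(x + 1))   # Poisson terms
            + a * np.log(b) - gammaln(a)             # Gamma normalizer
            + gammaln(a + X) - (a + X) * np.log(b + T))
```

This is exactly the quantity the HMM needs as the per-state observation log-likelihood, so the filter never has to represent v_{τ} explicitly.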
Since we now have a standard HMM, we can run the forward-backward algorithm in order to compute the filtering or smoothing densities. Also, we can estimate the most probable state sequence by running the Viterbi algorithm. A benefit of having a standard HMM is that the inference algorithm can be made to run very fast. This allows the inference scheme to be implemented in real time without any approximation [22]. Detailed information about the forward-backward algorithm can be found in "Appendix A".
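The forward (filtering) recursion on the resulting HMM is standard; a dense toy sketch follows (the paper's 67,200-state space would use sparse matrices, and the names are ours):

```python
import numpy as np

def forward_filter(log_obs, trans, prior):
    """Normalized HMM forward recursion.

    log_obs: (T, S) per-frame observation log-likelihoods
    trans:   (S, S) transition matrix, trans[i, j] = p(z_t = j | z_{t-1} = i)
    prior:   (S,) initial state distribution
    Returns alpha: (T, S), where alpha[t] = p(z_t | x_{1:t}).
    """
    T, S = log_obs.shape
    alpha = np.zeros((T, S))
    a = prior * np.exp(log_obs[0])
    alpha[0] = a / a.sum()
    for t in range(1, T):
        a = (alpha[t - 1] @ trans) * np.exp(log_obs[t])
        alpha[t] = a / a.sum()       # normalize each step to avoid underflow
    return alpha
```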
One point here deserves attention. The Poisson observation model described in this section is not scale invariant; i.e., turning up the volume can affect the performance. The Poisson model can be replaced by an alternative that would achieve scale invariance. For example, instead of modeling the intensity of a Poisson distribution, we could assume conditionally Gaussian observations and model the variance. This approach corresponds to using an Itakura-Saito divergence rather than the Kullback-Leibler divergence [23]. However, in practice, scaling the input volume to a specific level is sufficient for acceptable tempo tracking performance.
2.3 Training
As we have constructed our inference algorithm under the assumption that the spectral templates t_{ν,i} are known, they have to be learned at the beginning. In order to learn the spectral templates of the acoustic events, we do not need the tempo and the bar position information of the training data. Therefore, we reduce our model to the one proposed in Şimşekli et al. [19], so that we only care about the label and the volume of the spectral templates. The reduced model is as follows:

r_{τ} | r_{τ−1} ~ p(r_{τ} | r_{τ−1})
v_{τ} | r_{τ} ~ \mathcal{G}(v_{τ}; a, b)
x_{ν,τ} | r_{τ}, v_{τ} ~ \mathcal{P}\mathcal{O}(x_{ν,τ}; t_{ν,r_{τ}} v_{τ})
In order to learn the spectral templates, in this study we utilize the expectation-maximization (EM) algorithm. This algorithm iteratively maximizes the log-likelihood via two steps:

E-step:

q^{(n)}(r_{1:T}, v_{1:T}) = p(r_{1:T}, v_{1:T} | x_{1:F,1:T}, t^{(n)})

M-step:

t^{(n+1)} = argmax_{t} 〈log p(x_{1:F,1:T}, r_{1:T}, v_{1:T} | t)〉_{q^{(n)}}
where 〈 f (x)〉_{p(x)}= ∫ p (x) f (x) dx is the expectation of the function f(x) with respect to p(x).
In the E-step, we compute the posterior distributions of r_{τ} and v_{τ}. These quantities can be computed via the forward-backward algorithm (see "Appendix A"). In the M-step, we aim to find the t_{ν,i} that maximize the likelihood. Maximization over t_{ν,i} yields the following fixed-point equation:

t_{ν,i} = ( Σ_{τ} 〈[r_{τ} = i]〉 x_{ν,τ} ) / ( Σ_{τ} 〈[r_{τ} = i] v_{τ}〉 )
Intuitively, we can interpret this result as the weighted average of the normalized audio spectra with respect to v_{ τ } .
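A vectorized sketch of this update (array layout and names are our assumptions; the posterior expectations come from the E-step):

```python
import numpy as np

def update_templates(X, post_r, post_rv):
    """Fixed-point M-step update for the spectral templates.

    X:       (F, T) observed magnitude spectra x_{v,t}
    post_r:  (R, T) E-step expectations <[r_t = i]>
    post_rv: (R, T) E-step expectations <[r_t = i] v_t>
    Returns the updated templates t, shape (F, R).
    """
    num = X @ post_r.T              # sum_t <[r_t = i]> x_{v,t}
    den = post_rv.sum(axis=1)       # sum_t <[r_t = i] v_t>
    return num / den[None, :]
```

Each template is thus a posterior-weighted average of the spectra assigned to its event, divided by the total expected volume.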
3 The control model
The goal of the control module is to generate the necessary control signals to accelerate and decelerate the robot such that the performed rhythm matches the performance in tempo and relative position. As observations, the control model makes use of the bar position and velocity (tempo) estimates m_{τ} and n_{τ} inferred by the perception module, and possibly their associated uncertainties. In addition, the robot uses additional sensor readings to determine its own state, such as the angular velocity and angular position of its rotating motor axis, which is connected directly to the drum sticks.
3.1 Dynamic linear system formulation
Formally, at each discrete time step τ, we represent the robot state by the motor's angular position {\widehat{m}}_{\tau}\in \left[0,2\pi \right) and angular velocity {\widehat{n}}_{\tau}>0. In our case, we assume these quantities are observed exactly, without noise. Then, the robot has to determine the control action u_{τ}, which corresponds to an angular acceleration/deceleration value of its motor.
For correctly following the music, our main goal is to keep the relative distance between the observed performer state, as in Figure 4a, and the robot state, as in Figure 4b, small. Here, the states of the robot and the music correspond to points in a two-dimensional space of velocity and bar position values. We can symbolically visualize the difference between these states in this state space as in Figure 4c.
Hence, we can model the problem as a tracking problem that aims to keep the differences between the perceived tempo and the sensor values close to zero. Therefore, we define a new control state s_{τ} as

s_{τ} = [Δm_{τ}, Δn_{τ}]^{Τ},

where Δm_{τ} and Δn_{τ} denote the bar position and velocity differences between the robot and the performer.
Intuitively, the control state represents the drift of the robot relative to the performer; the goal of the controller will be to force the control state toward zero.
At each time step τ, the new bar position difference between the robot and the music, Δm_{τ}, is the sum of the previous bar position difference Δm_{τ−1} and the previous velocity difference Δn_{τ−1}. Additionally, the velocity difference Δn_{τ} can only be affected by the acceleration of the robot motor u_{τ}. Hence, the transition model is explicitly formulated as follows,

Δm_{τ+1} = Δm_{τ} + Δn_{τ},
Δn_{τ+1} = Δn_{τ} + u_{τ},

up to additive noise, where u_{τ} ∈ ℝ is the control signal to accelerate the motor and ε_{τ} is the zero-mean transition noise with covariance Σ_{A}. Here, the first coordinate of s_{τ} gives the amount of difference in the score position of the performer and the robot.
For example, consider a case where the robot is lagging behind, so Δm_{ τ } < 0. If the velocity difference Δn_{ τ } is also negative, i.e., the robot is "slower", then in subsequent time steps, the difference will grow in magnitude and the robot would lag further behind.
We write the model as a general linear dynamic system, where we define the transition matrix

A = [1 1; 0 1]

and the control matrix B = [0, 1]^{Τ} to get

s_{τ+1} = A s_{τ} + B u_{τ} + ε_{τ}.
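The resulting system is a discrete double integrator in the drift coordinates. A small sketch (names are ours) shows how an uncorrected velocity mismatch makes the position drift grow linearly:

```python
import numpy as np

A = np.array([[1.0, 1.0],    # dm_{tau+1} = dm_tau + dn_tau
              [0.0, 1.0]])   # dn_{tau+1} = dn_tau + u_tau
B = np.array([0.0, 1.0])     # control acts on the velocity difference only

def step(s, u):
    """Noise-free step of the linear dynamic system s' = A s + B u."""
    return A @ s + B * u

s = np.array([0.0, -0.1])    # in phase, but the robot is slightly slower
for _ in range(5):
    s = step(s, 0.0)         # no control applied
# After 5 steps the position drift has accumulated to -0.5.
```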
To complete our control model, we need to specify an appropriate cost function. While one can contemplate various attractive choices, due to computational issues, we constrain ourselves to the quadratic case. The cost function should capture two aspects. The first one is the amount of difference in the score position. Explicitly, we do not care too much if the tempo is off as long as the robot can reproduce the correct timing of the beats. Hence, in the cost function, we only take the position difference into account. The second aspect is the smoothness of velocity changes. If abrupt changes in velocity are allowed, the resulting performance would not sound realistic. Therefore, we also introduce a penalty on large control changes.
The following cost function represents both aspects described in the previous paragraph:

c(s_{τ}, u_{τ}) = (Δm_{τ})^{2} + κ u_{τ}^{2},

where κ ∈ ℝ^{+} is a penalty parameter to penalize large-magnitude control signals.
In order to keep the representation standard, the quadratic cost function can also be written in matrix form as

c(s_{τ}, u_{τ}) = s_{τ}^{Τ} Q s_{τ} + u_{τ}^{Τ} R u_{τ},

with explicit values R = κ and

Q = [1 0; 0 0].
Hence, after defining the corresponding linear dynamic system, the aim of the controller is to determine the optimal control signal, namely the acceleration of the robot motor u_{ τ } given the transition and the control matrices and the cost function.
3.2 Linearquadratic optimal control
In contrast to the general stochastic optimal control problems defined for general Markov decision processes (MDPs), linear systems with quadratic costs have an analytical solution.
When the transition model is written as in Equation 17 and the cost function is defined as

J = Σ_{τ} ( s_{τ}^{Τ} Q s_{τ} + u_{τ}^{Τ} R u_{τ} ),

the optimal control {u}_{\tau}^{*} can be explicitly calculated for each state s_{τ} in the form of Bertsekas [24],

{u}_{\tau}^{*} = −L* s_{τ},

where the gain matrix L* is defined as

L* = (R + B^{Τ} K* B)^{−1} B^{Τ} K* A.

Here, K* is the converged value of the recursively defined discrete-time Riccati equations,

K_{t+1} = A^{Τ} ( K_{t} − K_{t} B (R + B^{Τ} K_{t} B)^{−1} B^{Τ} K_{t} ) A + Q,

for stationary transition matrix A, control matrix B, and state cost matrix Q.
Thus, in order to calculate the gain matrix L*, a fixed-point iteration with the initial point K_{0} = Q is used to find the converged value K* = lim_{t → ∞} K_{t}.
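The fixed-point iteration and the resulting gain can be sketched as follows, using the A, B, Q, and R = κ defined above (the numerical tolerance is our choice):

```python
import numpy as np

def lqr_gain(A, B, Q, R, tol=1e-12, max_iter=100000):
    """Iterate the discrete-time Riccati equation from K_0 = Q
    until convergence and return the stationary gain L*."""
    K = Q.copy()
    for _ in range(max_iter):
        S = np.linalg.inv(R + B.T @ K @ B)
        K_new = A.T @ (K - K @ B @ S @ B.T @ K) @ A + Q
        if np.max(np.abs(K_new - K)) < tol:
            K = K_new
            break
        K = K_new
    L = np.linalg.inv(R + B.T @ K @ B) @ (B.T @ K @ A)
    return L, K

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.array([[1.0, 0.0], [0.0, 0.0]])   # penalize position drift only
R = np.array([[0.1]])                    # kappa = 0.1
L, K = lqr_gain(A, B, Q, R)
# The closed loop s' = (A - B L) s is then stable.
```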
Finally, the optimal control action {u}_{\tau}^{*} can be determined in real time, simply by a vector multiplication at each time step τ. Choosing the control action {u}_{\tau}={u}_{\tau}^{*}, Figure 5 shows an example of a simulated system.
3.3 Imperfect knowledge case
In the previous section, both perceived and sensor values are assumed to be true and noise free. However, possible errors of the perception module and noise of the sensors can be modeled as an uncertainty over the states. Actually, the perception module already infers a probability density over possible tempi and score positions. So, instead of a single point value, we can have a probability distribution as our belief state. However, this would bring us out of the framework of the linearquadratic control into the more complicated general case of partially observed Markov decision processes (POMDPs) [24].
Fortunately, in the linear-quadratic Gaussian case, i.e., where the system is linear and the errors of the sensors and perception model are assumed to be Gaussian, the optimal control can still be calculated very similarly to the previous case as in Equation 22, by merely replacing s_{τ} with its expected value,

{u}_{\tau}^{*} = −L* E[s_{τ}].
This expectation is with respect to the filtering density of s_{ τ } . Since the system still behaves as a linear dynamical system due to the linearquadratic Gaussian case assumption, this filtering density can be calculated in closed form using the Kalman filter [24].
In the sequel, we will denote this expectation as E[s_{τ}] = μ_{τ}. In order to calculate the mean μ_{τ}, the perceived values m_{τ}, n_{τ} and the sensor values {\widehat{m}}_{\tau}, {\widehat{n}}_{\tau} are considered as the observations. Explicitly, we define the observation vector

o_{τ} = [{\widehat{m}}_{\tau} − m_{τ}, {\widehat{n}}_{\tau} − n_{τ}]^{Τ}.

Here, we assume the observation model

o_{τ} = s_{τ} + ε_{O},

where ε_{O} is a zero-mean Gaussian noise with observation covariance matrix Σ_{O}, which can be explicitly calculated as the sum of the covariances of the perception model and the sensor noise,

Σ_{O} = Σ_{perception} + Σ_{robot},
where Σ_{perception} is the estimated covariance of the tempo and position values inferred by the perception module by moment matching and Σ_{robot} is the covariance of the sensor noises specific to the actuators.
Given the model parameters, the expectation μ_{τ} is calculated at each time step by the Kalman filter,

μ_{τ} = A μ_{τ−1} + B u_{τ−1} + G_{τ} ( o_{τ} − (A μ_{τ−1} + B u_{τ−1}) ),

with suitable initial values μ_{0} and Σ_{0}. Here, Σ_{A} is the covariance of the transition noise and A is the transition matrix defined in Equation 17; G_{τ} is the Kalman gain matrix and P_{τ} is the prediction covariance, defined as

G_{τ} = P_{τ} (P_{τ} + Σ_{O})^{−1},
P_{τ} = A Σ_{τ−1} A^{Τ} + Σ_{A},

where Σ_{τ} = (I − G_{τ}) P_{τ} is the posterior covariance.
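One predict/update cycle of this filter can be sketched as follows (identity observation matrix, as in the observation model above; names are ours):

```python
import numpy as np

def kalman_step(mu, Sigma, u, o, A, B, Sigma_A, Sigma_O):
    """Kalman filter step for the control state s_tau.

    mu, Sigma: previous posterior mean/covariance of s
    u: previous control input, o: current observation vector
    """
    # Predict through the linear dynamics.
    mu_pred = A @ mu + B * u                 # B assumed shape (2,)
    P = A @ Sigma @ A.T + Sigma_A            # prediction covariance P_tau
    # Correct with the observation o = s + noise.
    G = P @ np.linalg.inv(P + Sigma_O)       # Kalman gain G_tau
    mu_new = mu_pred + G @ (o - mu_pred)
    Sigma_new = (np.eye(len(mu)) - G) @ P
    return mu_new, Sigma_new
```

The controller then applies u*_{τ} = −L* μ_{τ} with the gain computed in Section 3.2.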
4 Simulation results
Before implementing the whole system, we evaluated our perception and control models in several simulation scenarios. We first evaluated the perception model under different parameter and problem settings, and then simulated the robot itself in order to evaluate the performance of both models and the level of synchronization between them. Finally, we combined the Lego robot with the perception module and evaluated their joint performance.
4.1 Simulation of the perception model
In order to understand the effectiveness and the limitations of the perception model, we have conducted several experiments by simulating realistic scenarios. In our experiments, we generated the training and the testing data by using a MIDI synthesizer. We first trained the templates offline, and then, we tested our model by utilizing the previously learned templates.
At the training step, we run the EM algorithm described in Section 2.3 in order to estimate the spectral templates. For each acoustic event, we use a short isolated recording, where the acoustic events consist of the claves hit, the conga hit (which is supposed to be produced by the robot itself), and silence. We also use templates in order to handle the polyphony in the music.
In the first experiment, we tested the model with a monophonic claves sound, where the son clave is played. At the beginning of the test file, the clave is played in a medium tempo, and the tempo is then increased rapidly over a couple of bars. In this particular example, we set M = 640, N = 35, R = 3, F = 513, p_{n} = 0.01, and the window length to 1,024 samples at a 44.1 kHz sampling rate. With this parameter setting, the size of the transition matrix (see "Appendix A") becomes 67,200 × 67,200; however, only 0.87% of this matrix is nonzero. Therefore, by using sparse matrices, exact inference is still viable. As shown in Figure 6, the model captures the slight tempo change in the test file.
The smoothing distribution, which is defined as p(n_{τ}, m_{τ}, r_{τ} | x_{1:F,1:T}), requires all the audio data to be accumulated. Since we are interested in online inference, we cannot use this quantity. Instead, we need to compute the filtering distribution p(n_{τ}, m_{τ}, r_{τ} | x_{1:F,1:τ}), or we can compute the fixed-lag smoothing distribution p(n_{τ}, m_{τ}, r_{τ} | x_{1:F,1:τ+1}) in order to have smoother estimates by introducing a fixed amount of latency (see "Appendix A" for details). Figure 7 shows the filtering, smoothing, and fixed-lag smoothing distributions of the bar position and the velocity variables, given the same audio data as in Figure 6.
In our second experiment, we evaluated the perception model on a polyphonic texture, where the sounds of the conga and other instruments (brass section, synths, bass, etc.) are introduced. In order to deal with the polyphony, we trained spectral templates by using a polyphonic recording which does not include the claves and conga sounds. In this experiment, apart from the spectral templates used in the previous experiment, we trained two more spectral templates by using the polyphonic recording that is going to be played during the robotic performance. Figure 8 visualizes the performance of the perception model on polyphonic audio. The parameter setting is the same as in the first experiment described above, except that in this example we set N = 40 and R = 5. It can be observed that the model performs sufficiently well in polyphonic cases. Moreover, despite the fact that the model cannot detect some of the claves hits, it can still successfully track the tempo and the bar position.
4.2 Simulation of the robot
In this section, we wish to evaluate the convergence properties of the model under different parameter settings. In particular, we want to evaluate the effect of the perception estimates on the control model. Therefore, we have simulated a synthetic system where the robot follows the model described in Equation 17. Moreover, we simulate a conga hit whenever the state reaches a predefined position, as in Figure 9, and both the clave and conga signals are mixed and fed back into the perception module to simulate a realistic scenario. Before describing the results, we identify and propose solutions to some technicalities.
4.2.1 Practical issues
Due to the modulo operation in the bar position representation, a simple subtraction operation causes irregularities at bar boundaries. For instance, when the robot senses a bar position close to the end of a bar while the perception module infers a bar position at the beginning of the next bar, the bar difference Δm_τ would be computed as being close to 1 and the robot would decelerate heavily. As soon as the robot advances to the next bar, the difference becomes close to 0; by then, however, the robot would have already slowed down considerably and would need to accelerate in order to get back on track. In order to circumvent this problem, a modular difference operation is defined that returns the difference of smallest magnitude,
where b_τ, the bar difference between the robot and the perception module, is defined as,
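A minimal sketch of such a wrap-around difference, assuming bar positions are normalized to [0, 1) (the function name is ours, not the paper's notation):

```python
def bar_difference(m_robot, m_music):
    """Wrap-around difference between two bar positions in [0, 1).

    Returns the signed difference of smallest magnitude, so a robot at
    0.95 with a perceived music position of 0.05 yields -0.10 (a tenth
    of a bar behind) rather than +0.90.
    """
    d = (m_robot - m_music) % 1.0   # raw difference folded into [0, 1)
    if d > 0.5:                     # it is shorter to go the other way round
        d -= 1.0
    return d
```

With this operation, the boundary case described above no longer triggers a heavy deceleration.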
Additionally, even though the optimal control u_τ could take any value in ℝ^{+}, due to the physical properties of the robot it is actually restricted to a bounded set [0, u_max] during the experiments with the robot. Hence, its value is truncated when working with the robot in order to keep it in the constrained set. Although this truncation violates our theoretical assumptions, the simulations are not affected by this nonlinearity.
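Putting these practicalities together, a toy closed-loop run can be sketched as follows. This is our own simplified proportional-derivative stand-in for the optimal controller, with illustrative gains, noise levels, and a symmetric control clip, not the system of Equation 17:

```python
import numpy as np

def simulate(T=400, dt=0.023, kappa=0.1, u_max=5.0, seed=0):
    """Toy closed loop: the robot's bar position m and velocity n chase
    noisy perception estimates of the music's state."""
    rng = np.random.default_rng(seed)
    m, n = 0.0, 0.0                       # robot starts at rest
    m_mus, n_mus = 0.0, 0.5               # music moves at half a bar per second
    err = []
    for _ in range(T):
        m_mus = (m_mus + n_mus * dt) % 1.0
        # noisy perception estimates of the music's bar position and tempo
        m_hat = (m_mus + 0.01 * rng.standard_normal()) % 1.0
        n_hat = n_mus + 0.02 * rng.standard_normal()
        # wrap-around position error and tempo error drive the control
        dm = (m - m_hat + 0.5) % 1.0 - 0.5
        u = -(dm + 0.2 * (n - n_hat)) / (kappa + 0.01)
        u = np.clip(u, -u_max, u_max)     # bounded actuation (simplification)
        n += u * dt
        m = (m + n * dt) % 1.0
        err.append(abs(dm))
    return np.mean(err[-100:])            # steady-state position error
```

With a small κ the robot locks onto the bar position within a few simulated bars, mirroring the qualitative behavior reported below.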
4.2.2 Results
In the first experiment, we illustrate the effect of the action costs on convergence by testing different values of κ. First, κ is chosen as 0.1 in order to see the behavior of the system with low action costs. During the simulation, the robot managed to track the bar position as expected, as shown in Figure 10a. However, while doing so, it did not track the velocity but instead fluctuated around its actual value, as shown in Figure 10b.
In the following experiment, while keeping κ = 0.1, the cost function is chosen as,
in order to make the robot explicitly track the velocity in addition to the bar position. However, as shown in Figures 10c and 10d, it was easily affected by the errors of the perception module and fluctuated considerably before converging. This behavior occurs mainly because the initial velocity of the robot is zero and the robot tends to accelerate quickly in order to catch the tempo of the music. With this rapid increase in velocity, however, its bar position gets ahead of the bar position of the music. In response, the controller decelerates, and this causes the fluctuating behavior until the robot reaches a stable tracking position.
In order to obtain smooth changes in velocity, κ is chosen larger (κ = 150) to penalize large-magnitude controls. In this setting, in addition to explicitly tracking the bar position, the robot also implicitly tracked the velocity without making big jumps, as shown in Figure 11. In addition to these good tracking results, the control module was also more robust against possible errors of the perception module. As seen in Figure 12, even though the perception module made a significant estimation error at the beginning of the experiment, the control module was only slightly affected by this error and kept following the correct track with a small error.
As a general conclusion about the control module, it could not track the performer in the first bar of the songs, because the estimates of the perception module are not yet accurate and the initial position of the robot is arbitrary. However, as soon as the second bar starts, the control state, i.e., the expected normalized difference between the robot state and the music state, starts to converge to the origin.
Also note that, when κ is chosen close to 0, the velocity values of the robot tend to oscillate considerably, sometimes even reaching 0 as in Figures 10a and 10c. This means that the robot has to stop and wait for the performer because of its previous high-magnitude actions.
In the experiments, we observe that the simulated system converges quickly under a variety of parameter settings, as can be seen from the control state diagrams. We omit quantitative results for the synthetic model at this stage and provide them for the Lego robot instead. In this final experiment, we combine the Lego robot with the perception module and run an experiment on a monophonic claves example at a steady tempo. Here, we estimate the tempo and score position and try to synchronize the robot via optimal control signals. We also compare the effects of different cost functions, provided that the clave is played at a steady tempo and the other parameters are selected to be similar to those described in the synthetic data experiments. While perceptually more relevant measures exist, for simplicity we just monitor and report the mean squared error.
Figures 13a and 13b show the average difference between the position and the velocity, respectively, of the music and those of the robot. In these experiments, we tried two different cost matrices
Here, Q penalizes both the position and the velocity error, whereas Q^{pos} penalizes only the position. The results confirm our intuition: the control cost parameter κ needs to be chosen carefully to trade off elasticity against rigidity. The figures visualize the corresponding control behaviors for three different parameter regimes: converging with early fluctuations, close-to-optimal convergence, and slow convergence, respectively.
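The effect of κ on the controller can be sketched with a generic discrete-time linear-quadratic regulator. The constant-velocity dynamics below are an illustrative stand-in, and `lqr_gain` is our own helper, not the paper's exact system:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=5000):
    """Infinite-horizon discrete-time LQR gain via Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.023                              # one perception frame
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position error, tempo error]
B = np.array([[0.0], [dt]])             # control acts on the tempo error
Q_pos = np.diag([1.0, 0.0])             # penalize the position error only
K_agile = lqr_gain(A, B, Q_pos, np.array([[0.1]]))     # small kappa
K_smooth = lqr_gain(A, B, Q_pos, np.array([[150.0]]))  # large kappa
```

A larger κ shrinks the feedback gain, which corresponds exactly to the smoother, more robust regime observed in Figure 11.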
We also observe that the cost function taking into account only the score position difference is generally competitive. Considering the tempo estimate Δn_τ does not significantly improve the tracking performance, except for extremely small values κ < 1, which are not appropriate choices for κ in any case.
5 Experiments with a Lego robot
In this section, we describe a prototype system for musical interaction. The system is composed of a human claves player, a robot conga player, and a central computer, as shown in Figure 14. The central computer listens to the polyphonic music played by all parties and jointly infers the tempo, the bar position, and the acoustic events. We describe these quantities in the following section. The main goal of the system is to illustrate the feasibility of coupling listening (probabilistic inference) with taking actions (optimal control).
Since the microcontroller used on the robot is not powerful enough to run the perception module, the perception module runs on the central computer and sends the tempo and bar position information to the robot through a Bluetooth connection. The control module, on the other hand, runs on the robot, taking into account its internal motor speed and position sensors as well as the received tempo and bar position information. The central computer also controls a MIDI synthesizer that plays the other instrumental parts along with the rhythm.
5.1 The robot
The conga player robot is built with the Lego Mindstorms NXT programmable robotics kit. The kit includes a 48-MHz, 32-bit microcontroller with 64 KB of memory. The controller is capable of driving three servo motors and four sensors of different kinds, and it provides USB and Bluetooth communication interfaces.
The robot plays the congas by hitting them with sticks attached to rotating disks, as shown in Figure 15. The disks are rotated by a single servo motor, attached to another motor which adjusts the distance between the congas and the sticks at the beginning of the experiment. Once this distance calibration is done (with the help of the human supervisor), the motor locks in its final position, and the disks start to rotate to catch the tempo of the music. Although it would look more natural, we chose not to build a robot with arms hitting the congas with drum sticks, because the Lego kits are not suitable for building robust and precisely controllable robotic arms.
The rhythm to be played by the robot is given in Figure 16. The robot is supposed to hit the left conga at the 3rd and 11th, and the right conga at the 7th, 8th, 15th, and 16th sixteenth beats of the bar. In order to play this rhythm with constantly rotating disks, the rhythm must be hardcoded on the disks. For each conga, we designed a disk with sticks attached in appropriate positions such that each stick corresponds to a conga hit, as shown in Figure 9. As the disks rotate, the sticks hit the congas at the time instances specified in the sheet music.
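Assuming each disk completes one revolution per bar (our reading of the design; the helper below is our own), the angular stick positions follow directly from the hit pattern:

```python
def stick_angles(hit_beats, beats_per_bar=16):
    """Angular stick positions in degrees on a disk that completes one
    revolution per bar, for hits on the given 1-indexed sixteenth beats."""
    return [(b - 1) * 360.0 / beats_per_bar for b in hit_beats]

left = stick_angles([3, 11])          # -> [45.0, 225.0]
right = stick_angles([7, 8, 15, 16])  # -> [135.0, 157.5, 315.0, 337.5]
```

Each sixteenth beat thus maps to a 22.5° sector of the disk, and the sticks are mounted at the sector boundaries corresponding to the notated hits.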
5.2 Evaluation of the system
We evaluated the real-time performance of our robot controller by feeding the tempo and score position estimates directly from the listening module. In the first experiment, we generated synthetic data that simulate a rhythm starting at a tempo of 60 bpm, initially accelerating and then followed by a ritardando. These data, without any observation noise, are sent to the robot in real time; i.e., the bar position and velocity values are sent every 23 ms. The control algorithm runs on the robot. While the robot rotates, we monitor its tachometer as an accurate estimate of its position and compare it with the target bar position.
We observe that the robot successfully followed the rhythm, as shown in Figure 17. In the second experiment we used the same setup, but this time the output of the tempo tracker is sent to the robot as input. The response of the robot is given in Figure 18. The errors in tempo at the beginning of the sequence come from the tracker's error in detecting the actual bar position.
The mean-squared errors for the bar position and velocity for the experiments are given in Table 1. We see that the robot is able to follow the score position very accurately, while there are relatively large fluctuations in the instantaneous tempo. Recall that in our cost function (21) we penalize only errors in the score position, not the tempo discrepancy. We believe that such controlled fluctuations make the timing more realistic and human-like.
6 Conclusions
In this paper, we have described a system for robotic musical interaction, especially useful for percussion performance, that consists of a perception module and a control module. The perception model is a hierarchical HMM that performs online event detection and separation, while the control module is based on linear-quadratic control. The combined system is able to track the tempo quite robustly and respond in real time in a flexible manner.
One important aspect of the approach is that it can be trained to distinguish between the performance sounds and the sounds generated by the robot itself. The validity of the approach is illustrated in both synthetic and real experiments. Moreover, the model incorporates domain-specific knowledge and contributes to the area of computational ethnomusicology [25].
We will also investigate another platform for such demonstrations and evaluations as future work.
While our approach to tempo tracking is conceptually similar to the musical accompaniment systems reviewed earlier, our approach here has a notable novelty: we formulate the robot performance as a linear quadratic control problem. This approach requires only a handful of parameters and seems to be particularly effective for generating realistic and human-like expressive musical performances, while being straightforward to implement. In some sense, we circumvent a precise statistical characterization of expressive timing deviations and are still able to generate a variety of rhythmic "feels", such as rushing or lagging, quite easily. Such aspects of musical performance are hard to quantify objectively, but the reader is invited to visit our web page for audio examples and a video demonstration at http://www.cmpe.boun.edu.tr/~umut/orumbata/. As such, the approach also has the potential to be useful in generating MIDI accompaniments that mimic a real human musician's behavior, in controlling complicated physical sound synthesis models, or in controlling animated visual avatars.
Clearly, a Lego system is not solid enough to create convincing performances (including articulation and dynamics); however, our robot is a proof of concept rather than a complete robotic performance system, and one can anticipate several improvements in the hardware design. One possible improvement for the perception model is to introduce different kinds of rhythmic patterns, e.g., different clave patterns. This can be done by utilizing the rhythm indicator variable presented in Whiteley et al. [8]. Another possible improvement is to introduce a continuous state space for the bar position and velocity variables in order to obtain more accurate estimates and eliminate the computational cost of the large state space of the perception model. However, in that case exact inference is no longer tractable; therefore, one should resort to approximate inference schemes, as discussed, for example, in Whiteley et al. [26]. As for the control system, it is also possible to investigate POMDP techniques to deal with more diverse cost functions, or to extend the set of actions for controlling, besides timing, other aspects of expressive performance such as articulation, intensity, or volume.
Appendix
A Inference in the perception model
Inference is a fundamental issue in probabilistic modeling, where we ask the question "what are the likely values of the hidden variables given some observations?" [27]. For online processing, we are interested in the computation of the so-called filtering density p(n_τ, m_τ, r_τ | x_{1:F,1:τ}), which reflects the information about the current state {n_τ, m_τ, r_τ} given all the observations so far, x_{1:F,1:τ}. The filtering density can be computed online; however, the estimates obtained from it are not necessarily very accurate, as future observations are not taken into account.
An inherently better estimate can be obtained from the so-called fixed-lag smoothing density, if we can afford to wait a few steps more. In other words, in order to estimate {n_τ, m_τ, r_τ}, if we accumulate L more observations, then at time τ + L we can compute the distribution p(n_τ, m_τ, r_τ | x_{1:F,1:τ+L}) and estimate {n_τ, m_τ, r_τ} via:
Here, L is a specified lag that determines the trade-off between accuracy and latency.
As a reference to compare against, we compute an inherently batch quantity: the most likely state trajectory given all the observations, the so-called Viterbi path
This quantity requires that we accumulate all the data before estimation; it should give high accuracy at the cost of a very long latency.
Briefly, the goal of inference in the HMM is to compute the filtering and (fixed-lag) smoothing distributions and the Viterbi path. These quantities can be computed with the well-known forward-backward and Viterbi algorithms.
Before going into details, we define the variable Ψ_τ ≡ [n_τ, m_τ, r_τ], which encapsulates the state of the system at time frame τ. By introducing this variable, we reduce the number of latent variables to one, and we can write the transition model as follows:
Here, p(m_τ | ·) is the degenerate probability distribution defined in Equation 1. For practical purposes, the set of all possible states (in D_n × D_m × D_r) can be listed in a vector Ω, and the state of the system at time slice τ can be represented as Ψ_τ = Ω(j), where j ∈ {1, 2, . . . , NMR}. The transition matrix A of the HMM can be constructed by using Equation 38, where
For large values of N, M, and R this matrix becomes extremely large, but it is sufficiently sparse that exact inference remains viable.
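As a sketch of this construction (with illustrative dynamics: the bar position advances deterministically by the velocity, which performs a small random walk; the indicator r and the paper's exact probabilities are omitted), such a sparse transition matrix can be assembled column by column:

```python
import numpy as np
from scipy import sparse

N, M = 5, 64                  # illustrative sizes; the paper uses larger ones
rows, cols, vals = [], [], []
for n in range(N):
    for m in range(M):
        i = n * M + m                     # flattened state index for (n, m)
        m_next = (m + n + 1) % M          # position advances by the velocity
        for dn, p in ((-1, 0.05), (0, 0.9), (1, 0.05)):
            n_next = min(max(n + dn, 0), N - 1)   # clamp at the tempo limits
            rows.append(n_next * M + m_next)
            cols.append(i)
            vals.append(p)
# duplicate (row, col) pairs at the clamped boundaries are summed automatically
A = sparse.coo_matrix((vals, (rows, cols)), shape=(N * M, N * M)).tocsr()
```

Each column has at most three nonzeros, so A holds O(NM) entries rather than O((NM)²), which is what keeps exact inference affordable.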
Now, we can define the forward (α) and the backward (β) messages as follows:
We can compute these messages via the following recursions:
Here, α_0(Ψ_0) = p(Ψ_0), β_T(Ψ_T) = 1 [28], and p(x_{1:F,τ} | Ψ_τ) ≡ p(x_{1:F,τ} | r_τ). Once these messages are computed, the smoothing distribution can be obtained simply by multiplying the forward and backward messages:
where ∝ denotes proportionality up to a multiplicative constant. The Viterbi path is obtained by replacing the summations in the forward recursion by maximizations.
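These recursions can be sketched with generic HMM code (our own helper, with per-frame normalization for numerical stability; columns of the transition matrix index the previous state, matching the column-wise construction above):

```python
import numpy as np

def forward_backward(A, pi, obs_lik):
    """alpha/beta recursions for an HMM with transition matrix A
    (columns index the previous state), prior pi, and per-frame
    observation likelihoods obs_lik of shape (T, K)."""
    T, K = obs_lik.shape
    alpha = np.zeros((T, K)); beta = np.ones((T, K))
    alpha[0] = pi * obs_lik[0]
    alpha[0] /= alpha[0].sum()           # normalize each frame for stability
    for t in range(1, T):
        alpha[t] = obs_lik[t] * (A @ alpha[t - 1])
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A.T @ (obs_lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                 # smoothing density, up to a constant
    return gamma / gamma.sum(axis=1, keepdims=True)
```

The per-frame normalization constants cancel in gamma, so the returned smoothing distributions are exact; the Viterbi variant would replace the sums in the alpha recursion by maximizations and keep back-pointers.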
References
Otsuka T, Nakadai K, Takahashi T, Ogata T, Okuno HG: Real-time audio-to-score alignment using particle filter for co-player music robots. EURASIP J Adv Signal Process 2011, 2011:2:1-2:13.
Kapur A: A history of robotic musical instruments. International Computer Music Conference (ICMC) 2005.
Goto M, Muraoka Y: Real-time beat tracking for drumless audio signals: chord change detection for musical decisions. Speech Commun 1999, 27(3-4):311-335. 10.1016/S0167-6393(98)00076-4
Kim T-h, Park SI, Shin SY: Rhythmic-motion synthesis based on motion-beat analysis. ACM SIGGRAPH 2003 Papers 2003, 392-401.
Kim YE, Batula AM, Grunberg D, Lofaro DM, Oh J, Oh P: Developing humanoids for musical interaction. International Conference on Intelligent Robots and Systems 2010.
Lofaro DM, Oh P, Oh J, Kim Y: Interactive musical participation with humanoid robots through the use of novel musical tempo and beat tracking techniques in the absence of auditory cues. 2010 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids) 2010, 436-441.
Şimşekli U, Cemgil AT: Probabilistic models for real-time acoustic event detection with application to pitch tracking. J New Music Res 2011, 40:175-185. 10.1080/09298215.2011.573561
Whiteley N, Cemgil AT, Godsill SJ: Bayesian modelling of temporal structure in musical audio. Proceedings of International Conference on Music Information Retrieval 2006.
Yoshii K, Nakadai K, Torii T, Hasegawa Y, Tsujino H, Komatani K, Ogata T, Okuno HG: IROS 2007, 1743-1750.
Murata K, Nakadai K, Yoshii K, Takeda R, Torii T, Okuno HG, Hasegawa Y, Tsujino H: A robot uses its own microphone to synchronize its steps to musical beats while scatting and singing. IROS 2008, 2459-2464.
Dannenberg R: An online algorithm for real-time accompaniment. International Computer Music Conference 1984, 193-198.
Orio N: An automatic accompanist based on hidden Markov models. Proceedings of the 7th Congress of the Italian Association for Artificial Intelligence on Advances in Artificial Intelligence, ser. AI*IA 01. London: Springer; 2001, 64-69. [http://portal.acm.org/citation.cfm?id=648152.751104]
Cemgil AT, Kappen HJ: Monte Carlo methods for tempo tracking and rhythm quantization. J Artif Intell Res 2003, 18:45-81.
Raphael C: Music plus one and machine learning. International Conference on Machine Learning 2010, 21-28.
Völkel T, Abeßer J, Dittmar C, Großmann H: Automatic genre classification of Latin American music using characteristic rhythmic patterns. Proceedings of the 5th Audio Mostly Conference: A Conference on Interaction with Sound, ser. AM '10. New York, NY, USA: ACM; 2010, 16:1-16:7. [http://doi.acm.org/10.1145/1859799.1859815]
Wright M, Schloss WA, Tzanetakis G: Analyzing Afro-Cuban rhythms using rotation-aware clave template matching with dynamic programming. ISMIR 2008, 647-652.
Şimşekli U: Bayesian methods for realtime pitch tracking. Master's thesis, Boğaziçi University; 2010.
Şimşekli U, Cemgil AT: A comparison of probabilistic models for online pitch tracking. Proceedings of the 7th Sound and Music Computing Conference (SMC) 2010.
Şimşekli U, Jylhä A, Erkut C, Cemgil AT: Realtime recognition of percussive sounds by a modelbased method. EURASIP J Adv Signal Process 2011, in press.
Févotte C, Bertin N, Durrieu JL: Nonnegative matrix factorization with the Itakura-Saito divergence: with application to music analysis. Neural Comput 2009, 21(3):793-830. 10.1162/neco.2008.04-08-771
Vincent E, Bertin N, Badeau R: Harmonic and inharmonic nonnegative matrix factorization for polyphonic pitch transcription. ICASSP 2008.
Alpaydin E: Introduction to Machine Learning (Adaptive Computation and Machine Learning). Cambridge: The MIT Press; 2004.
Févotte C, Cemgil AT: Nonnegative matrix factorisations as probabilistic inference in composite models. Proceedings of the 17th European Signal Processing Conference (EUSIPCO'09) 2009.
Bertsekas D: Dynamic Programming and Optimal Control. Belmont: Athena Scientific; 1995.
Tzanetakis G, Kapur A, Schloss WA, Wright M: Computational ethnomusicology. J Interdiscip Music Stud 2007, 1(2):1-24.
Whiteley N, Cemgil AT, Godsill S: Sequential inference of rhythmic structure in musical audio. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007) 2007, 4:IV-1321-IV-1324.
Cappé O, Moulines E, Ryden T: Inference in Hidden Markov Models (Springer Series in Statistics). Secaucus: Springer; 2005.
Barber D, Cemgil AT: Graphical models for time series. IEEE Signal Process Mag, Special Issue on Graphical Models 2010, 27(6):18-28.
Acknowledgements
We are grateful to Prof. Levent Akin and the members of the AI lab for letting us use their resources (lab space and Lego^{©} robots) during this study. We also thank Antti Jylhä and Cumhur Erkut of the acoustics lab at Aalto University, Finland, for the fruitful discussions. We also want to thank the Sabancı University Music Club (Müzikus) for providing the percussion instruments. We would also like to thank Ömer Temel and Alper Güngörmüşler for their contributions to program development. We thank the reviewers for their constructive feedback. This work is partially funded by The Scientific and Technical Research Council of Turkey (TÜBİTAK) grant number 110E292, project "Bayesian matrix and tensor factorisations (BAYTEN)", and Boğaziçi University research fund BAP 5723. The work of Umut Şimşekli and Orhan Sönmez is supported by a Ph.D. scholarship (2211) from TÜBİTAK.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
UŞ and ATC conceived and designed the perception model. OS and ATC carried out the control model. UŞ and BK implemented the Lego^{©} robot. UŞ, OS, BK, and ATC wrote the paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Şimşekli, U., Sönmez, O., Kurt, B.K. et al. Combined perception and control for timing in robotic music performances. J AUDIO SPEECH MUSIC PROC. 2012, 8 (2012). https://doi.org/10.1186/1687-4722-2012-8