The proposed simulation reproduced the botanical garden of Prague, whose visual content is shown in Figure 1. As seen in Figure 1, the environment has a concrete floor on which subjects are allowed to walk. This is an important observation when sonically simulating the act of walking in the environment.
The auditory feedback had two goals: to reproduce the soundscape of the botanical garden of Prague and to allow subjects to hear the sound of their own footsteps while walking in the environment. The implementation of both is described in the following.
4.1. Simulating the Act of Walking
We are interested in combining sound synthesis based on physical models with soundscape design, in order to simulate the act of walking on different surfaces and to place those sounds in context. Specifically, we developed real-time sound synthesis algorithms that simulate the act of walking on different surfaces. Impacts on hard surfaces were simulated using a synthesis technique called modal synthesis [20].
Every vibrating object can be described as an exciter interacting with a resonator. In our case, the exciters are the subjects' shoes, and the resonators are the different walking surfaces. In modal synthesis, every mode (i.e., every resonance) of a complex object is identified and simulated with a resonant filter. The resonances of the object are connected in parallel and excited by contact models that depend on the interaction between the shoes and the surfaces. Modal synthesis was used to simulate the impact of a shoe on a hard surface.
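To make the technique concrete, the sketch below renders a single impact as a parallel bank of exponentially decaying modes. It is a minimal Python illustration, not the Max/MSP implementation described later; the modal frequencies, decay rates, and gains are illustrative assumptions rather than values measured from the recordings.

```python
import numpy as np

FS = 44100  # sampling rate (Hz)

def modal_impact(freqs, decays, gains, force=1.0, dur=0.5):
    """Modal synthesis of one impact: a parallel bank of
    exponentially decaying sinusoidal modes (resonators),
    each scaled by the impact force."""
    t = np.arange(int(dur * FS)) / FS
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, decays, gains):
        # each mode: a damped sinusoid with its own gain and decay
        out += force * g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Hypothetical modal data for a hard, concrete-like surface.
concrete_step = modal_impact(freqs=[90.0, 210.0, 480.0],
                             decays=[40.0, 60.0, 90.0],
                             gains=[1.0, 0.5, 0.25],
                             force=0.8)
```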
In the case of stochastic surfaces, such as a shoe impacting gravel, we implemented physically informed stochastic models (PhISM) [21].
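Again for illustration only (the grain rate, resonance range, and decay constants below are assumptions, not the PhISM parameters of our system), a gravel step can be sketched as a decaying Poisson train of micro-impacts, each driving a small randomized resonator:

```python
def phism_gravel(force=1.0, rate=400.0, dur=0.4, rng=None):
    """Physically informed stochastic sketch of a gravel footstep:
    a Poisson train of micro-impacts whose density decays over the
    step, each grain exciting one randomized mode."""
    rng = rng or np.random.default_rng()
    n = int(dur * FS)
    out = np.zeros(n)
    for i in range(n):
        # probability of a grain collision per sample, tapering off
        if rng.random() < (rate / FS) * np.exp(-5.0 * i / n):
            grain = modal_impact(freqs=[rng.uniform(800.0, 4000.0)],
                                 decays=[300.0],
                                 gains=[rng.uniform(0.1, 0.4)],
                                 force=force, dur=0.03)
            end = min(n, i + grain.size)
            out[i:end] += grain[:end - i]
    return out
```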
The footstep synthesizer was built by analyzing footsteps recorded on different surfaces, obtained from the Hollywood Edge Sound Effects library (http://www.hollywoodedge.com). For each recorded set of sounds, single steps were isolated and analyzed. The main goals of the analysis were to identify an average amplitude envelope for the different footsteps, to extract the main resonances, and to isolate the excitation.
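The text does not fix a particular analysis method; one common way to obtain such an envelope, sketched below under the assumption that each isolated step is available as a mono array at the sampling rate FS, is to low-pass filter the magnitude of the analytic signal and average across time-normalized steps.

```python
from scipy.signal import butter, filtfilt, hilbert

def amplitude_envelope(step, cutoff=50.0):
    """Envelope of one isolated footstep: magnitude of the
    analytic signal, smoothed by a 2nd-order low-pass filter."""
    env = np.abs(hilbert(step))
    b, a = butter(2, cutoff / (FS / 2))
    return filtfilt(b, a, env)

def average_envelope(steps, length=2048):
    """Average envelope over several steps, each resampled to a
    common (time-normalized) length."""
    grid = np.linspace(0.0, 1.0, length)
    envs = [np.interp(grid, np.linspace(0.0, 1.0, s.size),
                      amplitude_envelope(s)) for s in steps]
    return np.mean(envs, axis=0)
```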
A real-time footstep synthesizer was designed, controlled by the subjects through a pair of sandals fitted with force sensors. The sandals are shown in Figure 2. By navigating the environment, the user controlled the synthetic footstep sounds.
Despite its simplicity, the shoe controller was effective in enhancing the user's experience, as will be described later. While subjects navigated the environment, the sandals came into contact with the floor, activating the pressure sensors. A microcontroller converted the corresponding pressure values into input parameters read by the real-time sound synthesizer, implemented in Max/MSP (http://www.cycling74.com). The sensors were wirelessly connected to the microcontroller, as shown in Figure 2, and the microcontroller was connected to a laptop PC.
The continuous pressure value controlled the impact force of each foot on the floor, varying the temporal evolution of the synthesized sounds. The use of physically based synthesis enhanced the realism and variety of the feedback compared to sampled sounds, since the footstep sounds depended on the subjects' impact forces and therefore varied dynamically. In the simulation of the botanical garden we used two surfaces: concrete and gravel. The concrete surface was used most of the time and corresponded to walking on the visitors' floor; the gravel surface was used when subjects stepped outside the visitors' floor.
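A minimal sketch of this control mapping is given below; the 10-bit ADC range, the noise threshold, and the helper inside_visitors_floor() are hypothetical stand-ins for details not specified in the text.

```python
def on_sensor_reading(pressure, position):
    """Map one raw pressure reading (assumed 0-1023 from the
    microcontroller's ADC) to an impact force, and select the
    surface model from the subject's position in the garden."""
    force = pressure / 1023.0            # normalized impact force
    if force < 0.05:                     # ignore sensor noise
        return None
    if inside_visitors_floor(position):  # hypothetical helper
        return modal_impact(freqs=[90.0, 210.0, 480.0],
                            decays=[40.0, 60.0, 90.0],
                            gains=[1.0, 0.5, 0.25], force=force)
    return phism_gravel(force=force)
```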
Both surfaces were rendered through an 8-channel surround sound system.
4.2. Simulating Soundscapes
In order to reproduce the characteristic soundmarks of a botanical garden, a dynamic soundscape was built. The soundscape was designed by creating an 8-channel soundtrack in which subjects could control the position of different sound sources.
In the laboratory shown in Figure 4, eight loudspeakers were positioned in a parallelepipedal configuration. Current commercially available sound delivery methods are based on sound reproduction in the horizontal plane only; we instead delivered sound through eight speakers, thereby implementing full 3D capabilities. This method allowed us to position static sound elements as well as dynamic sound sources linked to the position of the subject. Moreover, it kept the configuration similar to that of virtual reality facilities such as CAVEs [22], where eight-channel surround is presently implemented, so that experiments with higher-quality visual feedback can be performed in the future. For these reasons, 8-channel sound rendering was chosen over, for example, binaural rendering [23].
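The exact loudspeaker coordinates are not specified here; for the panning sketch given later, we can assume the eight speakers sit at the corners of a box centered on the listening position:

```python
import itertools

# Assumed layout: eight loudspeakers at the corners of a
# 4 m x 4 m x 3 m box centered on the listener (coordinates in m).
SPEAKERS = np.array(list(itertools.product((-2.0, 2.0),
                                           (-2.0, 2.0),
                                           (-1.5, 1.5))))
```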
Three kinds of auditory feedback were implemented:
(1) a "static" soundscape, reproduced at a maximum peak level of 58 dB (C-weighted, slow response) and delivered through the 8-channel system;
(2) a dynamic soundscape with moving sound sources, implemented using the VBAP algorithm and reproduced at a maximum peak level of 58 dB (C-weighted, slow response); a sketch of the VBAP gain computation is given after this list;
(3) an auditory simulation of ego-motion, reproduced at 54 dB (a level recognised as the proper output level, as described in [24]).
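For condition (2), the sketch below illustrates the gain computation at the heart of 3D VBAP, using the assumed speaker layout introduced above: the source direction is expressed as a nonnegative combination of a triplet of loudspeaker unit vectors. This is a simplified illustration, not our rendering code; a full implementation would triangulate the speaker setup once rather than scan all triplets, and would recompute the gains as the tracked source position changes.

```python
def vbap_gains(source_dir, triplet):
    """Vector base amplitude panning for one loudspeaker triplet:
    solve g @ L = p, where the rows of L are the speakers' unit
    vectors and p is the unit source direction."""
    L = SPEAKERS[list(triplet)]
    L = L / np.linalg.norm(L, axis=1, keepdims=True)
    if abs(np.linalg.det(L)) < 1e-6:
        return None                    # degenerate (coplanar) triplet
    p = np.asarray(source_dir, float)
    p = p / np.linalg.norm(p)
    g = p @ np.linalg.inv(L)
    if np.all(g >= 0.0):               # source lies inside the triplet
        return g / np.linalg.norm(g)   # power normalization
    return None

# Pan a source toward the front upper left: use the first triplet
# whose active region contains the source direction.
for tri in itertools.combinations(range(8), 3):
    g = vbap_gains([-1.0, 2.0, 1.0], tri)
    if g is not None:
        break
```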
The content of the soundscape was the same in the first two conditions. It contained typical environmental sounds of a garden, such as birds singing and insects flying. The soundscape was designed by making recordings in the real botanical garden in Prague and reproducing similar content using sound effects from the Hollywood Edge Sound Effects library.
The first and second conditions differed only in the way the soundscape was rendered: in the second condition, the positions of the sound sources were dynamic, controlled by the motion of the user, who wore a head tracker as described below. In the third condition, the dynamic soundscape was augmented with the auditory simulation of ego-motion, obtained by having subjects generate, in real time, the sounds of their own footsteps while walking in the garden.