How would it sound if your body were an instrument – would you be more of a soft, melodious harp or more of a harsh, aggressive, detuned electric guitar? To me, that’s a very interesting matter to speculate about… however, it is also a question that is in principle impossible to answer. Motion is essentially a silent process, with the general spectrum of movement acoustics restricted to a few environmental sources of sound. These are mostly associated with interactions of or with equipment, as well as with physical reaction and exhaustion. One concept that extends this spectrum to originally silent phases of action is movement sonification. Here, the basic principle is to transform motor properties into an auditory display of the underlying motion data, which comes close to exploring the question above.

When set in meaningful correlation with motor actions, a movement sonification can serve as a valuable additional source of motion feedback information. Movement sonification is known to enhance motor control and motor learning, and it allows for a wide variety of technical implementations. The main question to answer in this context is therefore how auditory feedback should be designed so that it conveys motion information with the largest possible effect on the motor learning task. Sonification strategies depend on the underlying input motion data, the purpose of the sonification and the intended sound mapping. In the following, let us take a closer look at how to implement a system that sonifies body kinematics via MIDI.

Basic Requirements of Movement Sonification

Auditory feedback has been shown to be subject to the same internal biological processes of motion perception as visual feedback. This means that auditory motion representations stimulate certain areas of the human brain (for example the mirror neurons), which can then result in a better biological representation of a motor task. To achieve highly efficient motor skill acquisition, auditory feedback should be perceived simultaneously with the (always present) kinesthetic motor perception. This is the case when the intermodal delay lies within an interval of approximately 100 ms. To ensure a reliable merging of the senses, I recommend choosing a smaller benchmark as the maximal delay, such as 30-40 ms.
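To illustrate that delay budget, here is a minimal Python sketch (the function names and the frame format are hypothetical, not part of any particular system) that only triggers a sonification event when the end-to-end delay of a motion frame stays below the stricter 40 ms benchmark:

```python
import time

MAX_DELAY_S = 0.040  # stricter 40 ms budget instead of the ~100 ms perceptual limit

def handle_frame(capture_time, play_event):
    """Sonify a motion frame only if the intermodal delay stays within budget.

    capture_time: monotonic timestamp taken when the sensor frame was captured.
    play_event:   callable that actually triggers the sound output.
    """
    delay = time.monotonic() - capture_time
    if delay <= MAX_DELAY_S:
        play_event()
    # otherwise the frame is dropped: late feedback would no longer merge
    # with the kinesthetic perception of the movement
```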

As a general rule, a sonification should be intuitively understandable and reflect specific motion structures and parameters. This means that the most relevant aspects of a motion – which are often the kinematic properties determined with the implemented processing framework – have to be described. Transforming these properties (for example segment orientations, joint positions, accelerations or the motion velocity of certain joints) then yields an acoustic version of the numeric data streams.
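As a rough sketch of such a processing step, the following Python snippet (the array layout and sampling rate are assumptions made for illustration) derives motion velocity and acceleration from a stream of 3D joint positions:

```python
import numpy as np

def joint_kinematics(positions, dt):
    """Derive scalar speed and acceleration from one joint's 3D position stream.

    positions: array of shape (n_frames, 3) with x/y/z coordinates per frame.
    dt:        sampling interval in seconds.
    """
    velocity = np.diff(positions, axis=0) / dt      # frame-to-frame velocity vectors
    speed = np.linalg.norm(velocity, axis=1)        # motion velocity magnitude per frame
    acceleration = np.diff(speed) / dt              # change of speed between frames
    return speed, acceleration

# example: a wrist marker rising by 1 cm per frame, sampled at 30 Hz
wrist = np.cumsum(np.tile([0.0, 0.0, 0.01], (90, 1)), axis=0)
speed, acc = joint_kinematics(wrist, dt=1 / 30)
```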

Figure: The movement sonification process.

Acoustic Data Transformation

Different acoustic kinematics differ in their level of complexity and may hence be more or less intuitive and useful for a given sonification task. In general, it is not known in advance which sound design works well, or whether it has a positive effect on learning the motor task at all. Once you decide on a certain sound design, it is therefore necessary to investigate the effects of the selected sound mappings in an empirical study. But that’s another question we don’t have time to talk about now…

The standardized MIDI specification offers a consistent and well-defined way to control and generate sound, which makes it convenient to map motion data onto sound. Tones in MIDI are generated, for example, by sending a corresponding control command, and sound properties are easily influenced and changed by varying sound control messages. MIDI offers a broad range of pre-defined standard control commands, which leaves various possibilities to map motion data onto auditory features. Among others, these include attack and release time, timbre, tone frequency, velocity, and sound effects such as reverb and echo. Furthermore, the timbre or constitution of a sound can easily be changed before or even during a performance with so-called ‘sound programs’ simulating different sounds and instruments (similar to an electronic keyboard).
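To make this concrete, here is a minimal sketch of such MIDI commands in Python, using the third-party mido library purely as an example (the choice of library, port and program number are assumptions, not part of the MIDI specification itself):

```python
import mido

out = mido.open_output()  # default system MIDI output port

# select a sound program ("instrument") before or even during the performance
out.send(mido.Message('program_change', program=40, channel=0))

# generate a tone: note-on starts it, note-off releases it
out.send(mido.Message('note_on', note=60, velocity=100, channel=0))
out.send(mido.Message('note_off', note=60, channel=0))

# change sound properties via standard control messages,
# e.g. channel volume (CC 7), pan (CC 10), brightness (CC 74), reverb send (CC 91)
out.send(mido.Message('control_change', control=7, value=110, channel=0))
out.send(mido.Message('control_change', control=91, value=64, channel=0))
```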

Sample Sound Design

For demonstration, let us consider a sample sonification design that was developed in collaboration with the movement science research group at Hanover University. It was based on the transformation of position and velocity data and was shown to be intuitive and effective in user studies.
Concretely, the chosen sound mapping represented vertical position by pitch (tone frequency), velocity by volume, radial distance to the center of mass by brightness (spectral composition) and lateral position by stereo panning. The maximal and minimal values of every kinematic property defined the gradation of the corresponding mapping parameter. Whenever the motion velocity within a performance reached the set maximal velocity, for example, the corresponding MIDI volume controller was at its maximum (value 127); for static phases, it was accordingly at its minimum (value 0).
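A possible implementation of this mapping scheme could look like the following Python sketch (again using mido for illustration; the dictionary keys and calibration format are hypothetical and not the original code):

```python
import mido

def to_midi_range(value, vmin, vmax):
    """Rescale a kinematic value linearly into the 0-127 MIDI range, with clamping."""
    value = min(max(value, vmin), vmax)
    return int(round(127 * (value - vmin) / (vmax - vmin)))

def sonify_frame(out, frame, calib):
    """Map one frame of kinematic data onto the sample sound design described above.

    frame: current kinematic values, e.g. {'height': ..., 'speed': ..., 'radius': ..., 'lateral': ...}
    calib: per-property (min, max) tuples recorded during calibration.
    """
    pitch  = to_midi_range(frame['height'],  *calib['height'])   # vertical position -> pitch
    volume = to_midi_range(frame['speed'],   *calib['speed'])    # velocity -> volume (CC 7)
    bright = to_midi_range(frame['radius'],  *calib['radius'])   # distance to CoM -> brightness (CC 74)
    pan    = to_midi_range(frame['lateral'], *calib['lateral'])  # lateral position -> stereo pan (CC 10)

    out.send(mido.Message('control_change', control=7,  value=volume))
    out.send(mido.Message('control_change', control=74, value=bright))
    out.send(mido.Message('control_change', control=10, value=pan))
    # velocity 0 would act as note-off, so keep at least 1 when a tone is wanted
    out.send(mido.Message('note_on', note=pitch, velocity=max(volume, 1)))
```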

To have access to a broad variety of simulated electronic sounds, the sonification output was connected to the synth module ‘SonicCell’ (Roland Germany GmbH, Nauheim, Germany). This module offers more than 300 pre-defined sound sets, ranging from instruments like classical viola and flutes to percussion instruments and artificial sound creations. The resulting system then displayed the internally triggered sound settings and the selected acoustic kinematics in real time, restricted only by hardware limitations such as sensor battery lifetime.