We wanted to give the car a voice, a sonic “soul”, that would make it easier for people to relate to it and, ultimately, to trust it through the various stages of an autonomous journey.
We approached the sound design from several directions:
- Sounds from the real world, such as environmental backgrounds, tyre and wind noise to illustrate movement, and bells, footsteps, bicycles and cell phones to illustrate objects that the car will notice along the way.
- Synthetic sounds that mimic real world sounds, but in a simplified, focused way, much like icons in the visual realm.
- Synthetic sounds that are abstract and tonal, describing an event rather than portraying it.
- Organic sounds such as human voices.
We were looking for a sonic theme that would feel intuitive, solid and informative while maintaining a pleasant and non-intrusive character. This theme should also be able to adapt, not only to various use cases but also to various personality types. People who find joy in mastering and optimising the driving experience would, for example, prefer a slightly different set of sounds than people who use the car as a place to focus on work.
During the project it progressively became clear that the synthetic tonal route was generating promising results.
One reason for this is probably that another layer of information can be added to the sonification by choosing the rhythmic and tonal content in a musical way.
These sounds proved particularly successful for describing the movement of the car, such as turning or accelerating.
For scenarios such as the car welcoming its passengers to drive them to work in the morning or home at night, a hybrid of real-world and synthetic sounds became a good solution. The real world layer would add detail to the sonification while the tonal part would invoke an emotion of being welcomed to a smooth and safe journey.
In this communication between machine and human, the emotional response plays a big part and makes it easier for the user to trust the car instinctively.
The hybrid approach also became a good way of notifying the passengers that the car is aware of things like bikes or pedestrians along the way that may have to be avoided.
To actually create these sounds we used a combination of analog and digital sound generators, particularly the Reaktor modular software. Total parameter recall was necessary for iteration, so a very structured setup was used. The sounds were then layered and further processed in a digital audio workstation.
An important part of the final result was to adapt these sounds to what the car was doing at any given point. Passenger-perspective videos were shot at test tracks and in live traffic and those were used to prototype the sound behaviour back in the studio.
Our objective was to make the sounds emerge from the background when required and then fade back at rates determined by the nature of the manoeuvre. They should be heard clearly through the road noise while driving, but not stand out too much or become intrusive.
They would also have to move across the spatial sound-field for manoeuvres like turning corners. All those behaviours – amplitude, spatial location, pitch, timbre modulation – were prototyped to picture in the DAW and then exported to video and VR for evaluation.
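The behaviours described above can be sketched in code. The following is a minimal illustration, not the actual implementation: the steering signal, noise estimate and dB references are hypothetical, and it uses standard equal-power panning to move a cue across the stereo field while a fade envelope and a noise-relative target level control its loudness.

```python
import math

def equal_power_pan(pan: float) -> tuple[float, float]:
    """Equal-power stereo panning: pan in [-1, 1], -1 = hard left."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

def cue_levels(steering: float, road_noise_db: float, fade: float) -> tuple[float, float]:
    """Left/right gains for a manoeuvre cue (all inputs are illustrative).

    steering:      normalised steering input in [-1, 1] (hypothetical signal)
    road_noise_db: estimated cabin noise, used to keep the cue audible
    fade:          envelope position in [0, 1], ramped at a rate set by
                   the nature of the manoeuvre
    """
    # Sit a few dB above the estimated noise floor, but never intrusive.
    target_db = min(road_noise_db + 6.0, 72.0)
    gain = fade * 10 ** ((target_db - 70.0) / 20.0)  # 70 dB reference level
    left, right = equal_power_pan(steering)
    return gain * left, gain * right
```

With equal-power panning the combined energy stays constant as the cue sweeps from one side to the other, which is why it is the usual choice for moving sources across a sound field.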
Finally, the sound assets had to be prepared for export with these properties neutralised, so they could be re-performed in real time by the audio engine in the car itself.