Q&A: Sound Particles founder Nuno Fonseca

We speak to Nuno Fonseca, the Portuguese professor and founder of Sound Particles, a new piece of sound design software that has already been used by some of Hollywood’s biggest film studios for a number of blockbuster movies.

How did Sound Particles come about? Could you tell us about the software?

The idea started around ten years ago, when I realised that the most interesting visual effects I was seeing in movies were made with particle systems, a technique that has been widely used in computer graphics for many years. Instead of individually animating each raindrop or grain of dust, particle-system software generates thousands or even millions of small points that together form the image of fire, smoke or something similar.

At the time I thought ‘wouldn’t it be nice if an approach like this could also be used for sound, to create thousands of small sounds that come together to make something great?’ At first it was just an idea and time passed by.

Then, four years ago, I finished my PhD, and since no one was yet using particle systems for sound, I thought 'let's start developing some software, a simulator, that can do just that.' It uses a lot of CGI concepts – similar to Maya [computer animation and modelling software] but with sounds that you can position in your virtual space as desired. Instead of a virtual camera, we use a virtual microphone, which can be placed anywhere and lets you capture the sound from a specific perspective.
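To make the particle-system idea concrete, here is a rough sketch of our own – not Sound Particles' actual code, and every function and parameter name is invented. It scatters a thousand randomised copies of a short source sound through a virtual space and captures them at a single virtual mono microphone, using simple inverse-distance gain and propagation delay.

```python
import numpy as np

SR = 48_000             # sample rate (Hz)
SPEED_OF_SOUND = 343.0  # metres per second
rng = np.random.default_rng(0)

def render_particles(grain, n_particles=1000, area=20.0, duration=4.0):
    """Sum many randomised copies of a short 'grain' sound, each emitted
    from a random 3D point and captured by a virtual mono microphone at
    the origin, with inverse-distance gain and propagation delay."""
    out = np.zeros(int(duration * SR) + len(grain) + SR)  # headroom for delays
    for _ in range(n_particles):
        pos = rng.uniform(-area, area, size=3)         # random position (m)
        dist = max(float(np.linalg.norm(pos)), 1.0)    # distance to the mic
        start = rng.uniform(0.0, duration)             # random trigger time (s)
        delay = int((start + dist / SPEED_OF_SOUND) * SR)
        out[delay:delay + len(grain)] += grain / dist  # inverse-distance law
    return out / np.max(np.abs(out))                   # normalise

# Example: 1000 copies of a 30 ms noise burst become a diffuse texture.
n = int(0.03 * SR)
grain = rng.standard_normal(n) * np.hanning(n)
mix = render_particles(grain)
```

Scaled up to far more particles, each with its own position, timing, gain and pitch variation, this is, in spirit, the kind of texture the software generates.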

How did its adoption by these major studios happen so soon after release?

A year and a half ago I went to LA to present a paper at an AES convention. A few weeks beforehand I sent emails to some of the Hollywood studios explaining who I am and what kind of work I do. One of the first studios to reply was Skywalker Sound, who invited me to give a presentation to their team. In the following months I ended up going three times to both LA and San Francisco to talk with major Hollywood studios like Universal, Warner Brothers, Fox, Paramount Pictures and Sony.

What I gather from my interactions with these sound designers is that they feel it is something completely new – it is truly native 3D software.

Particle systems can be used to create a soundscape made up of thousands of sounds spread over a certain area – Sound Particles particularly shines in big-budget productions with epic battle scenes, for example – and then render everything in 5.1 or Dolby Atmos.

In what specific ways are these studios using the software?

The feedback I get from users is that once they understand the concept, they start getting ideas for creating immersive, grand-scale audio experiences. People are using Sound Particles for things they never thought possible before. Most of the time it is used for its ability to bring thousands of sounds together, but there are some, like David Farmer (sound designer for The Lord of the Rings and The Hobbit), who believe it is the best software available for Doppler effects, and who have shown a lot of support for it. Another field that is using the software a lot right now is virtual reality, because it needs sounds to be positioned very accurately.
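An aside for readers unfamiliar with the effect Farmer mentions: Doppler is the pitch shift you hear when a sound source moves relative to the listener. The core calculation fits in a few lines; this is our own illustration, not Sound Particles' implementation.

```python
import numpy as np

def doppler_factor(source_pos, source_vel, listener_pos, c=343.0):
    """Pitch ratio heard by a stationary listener for a moving source:
    f_heard / f_emitted = c / (c - v_radial), where v_radial is the
    source's speed component toward the listener."""
    to_listener = np.asarray(listener_pos, float) - np.asarray(source_pos, float)
    direction = to_listener / np.linalg.norm(to_listener)
    v_radial = float(np.dot(source_vel, direction))  # > 0 when approaching
    return c / (c - v_radial)

# A source approaching at 30 m/s sounds almost 10% sharp...
print(doppler_factor([0, 0, 0], [30, 0, 0], [100, 0, 0]))    # ~1.096
# ...and flat once it has passed the listener and is receding.
print(doppler_factor([200, 0, 0], [30, 0, 0], [100, 0, 0]))  # ~0.920
```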

The software allows you to import a 360° video and simply drag and drop a sound on top of the image. This is perfect for VR, because positioning sounds manually can be a nightmare, especially when the sound sources are moving around. Keyframe animation then lets users animate them perfectly in sync with the image. I am seeing users take several different approaches to the software, adapting it to their own purposes.
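The keyframe workflow described here boils down to interpolating a source's direction between user-set points in time. A minimal sketch of our own, with made-up keyframe values and ignoring details a real tool must handle, such as azimuth wrap-around:

```python
import numpy as np

def interpolate_direction(keyframes, t):
    """Linearly interpolate a source's (azimuth, elevation) in degrees at
    time t from (time, azimuth, elevation) keyframes, the same idea as
    keyframe animation in video software. Azimuth wrap-around at ±180°
    is ignored here for simplicity."""
    times = [k[0] for k in keyframes]
    az = np.interp(t, times, [k[1] for k in keyframes])
    el = np.interp(t, times, [k[2] for k in keyframes])
    return float(az), float(el)

# A source panning from straight ahead to 90° right while rising 10°
# over two seconds, sampled at the one-second mark.
keys = [(0.0, 0.0, 0.0), (2.0, 90.0, 10.0)]
print(interpolate_direction(keys, 1.0))  # (45.0, 5.0)
```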

Are there any other applications that the software is aimed at?

Currently, the main target audience for the software is sound designers working on big Hollywood productions, but it can also be used in other areas, like theatre, where a virtual microphone with a custom speaker configuration lets you render perfectly for a particular room. There are even uses for it in theme parks, where Disney is testing it to create sounds for their unconventional speaker arrangements. Game audio designers will also find it useful and, of course, people working in virtual reality.

The idea is to create, within several months, a version that can work in real time. Hopefully we will then start to see other people using it, such as re-recording mixers in cinema, who will be able to use it just like sound designers do.

And there is support for numerous multichannel formats?

One of the interesting concepts of this 3D software is the use of virtual microphones, something you don't have when working in a digital audio workstation. Here we can use virtual microphones ranging from a single monophonic microphone to ambisonics, and the software renders the audio accordingly. It supports a wide range of configurations and formats, including immersive formats like Dolby Atmos 9.1, Auro 11.1 and NHK 22.2.
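Ambisonics is what makes a virtual microphone independent of any particular speaker layout: a mono source is encoded into a set of directional components that can be decoded later for whatever room you have. A first-order (FuMa B-format) encoder is only a few lines; the sketch below is our own illustration, not code from the software.

```python
import numpy as np

def encode_fuma(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambisonics (FuMa B-format).
    The four channels W/X/Y/Z describe the sound field independently of
    any speaker layout and can be decoded later for 5.1, an Atmos bed,
    headphones and so on."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)             # omnidirectional component (-3 dB)
    x = signal * np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = signal * np.sin(el)               # up-down figure-of-eight
    return np.stack([w, x, y, z])

# Example: a one-second 1 kHz tone placed 45° to the left and 30° up.
sr = 48_000
tone = np.sin(2 * np.pi * 1000 * np.arange(sr) / sr)
bformat = encode_fuma(tone, azimuth_deg=45, elevation_deg=30)  # shape (4, 48000)
```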

Where do you plan to take the software in the next five years?

I see the software as just the first step, because there are a number of fields I want to explore. One of those is integration with CGI information: at the moment, audio people often disregard it and simply use the final video as a reference when working on visual effects or animated features. What I want to do is import all of this CGI information to make it easier to coordinate image and sound.

Another thing I want to look at is native support for Dolby Atmos and interoperability with other software. I want to create a Windows version, because it is currently only available for Mac, and I also want to develop more features and support for virtual reality. There is a huge gap in the market here, because there are currently no purpose-built post-production tools for VR; people mostly use digital audio workstations.

It’s important to start creating new solutions, new software and new options with regards to these 3D tools.