Resonance Audio: Multi Platform Spatial Audio at Scale

Human beings depend on sound from our surroundings to find our way through an environment, communicate with others and stay connected with what is happening around us. Whether walking along a busy city street or attending a packed music concert, we can hear hundreds of sounds coming from different directions. In AR, VR, games and 360 video, therefore, high-quality sound is essential to creating an immersive, interactive experience that makes you feel physically present. Resonance Audio, a new spatial audio software development kit (SDK) built on technology from Google's VR Audio SDK, has recently been released and works at scale across mobile and desktop platforms.

Resonance Audio: Performance that scales on desktop and mobile

Delivering rich, dynamic audio environments in VR, AR, gaming or video experiences without hurting performance is a challenge in itself. Often only limited CPU resources are allocated to audio, particularly on mobile, which can cap the number of simultaneous high-quality 3D sound sources in complex scenes. The SDK uses highly optimised digital signal processing algorithms based on higher-order Ambisonics to spatialise hundreds of simultaneous 3D sound sources without compromising audio quality, even on mobile.
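To give a rough idea of what that spatialisation looks like from a developer's point of view, the sketch below uses the Resonance Audio Web SDK to route an ordinary audio file through a first-order Ambisonic scene and place it in 3D. It is a minimal sketch based on the Web SDK's documented API; the file path, coordinates and the named import are illustrative assumptions and should be checked against the SDK reference.

```typescript
// Assumes the npm package exposes ResonanceAudio as a named export;
// with a plain <script> include it is available as a global instead.
import { ResonanceAudio } from 'resonance-audio';

// Web Audio context plus a Resonance Audio scene that renders an
// Ambisonic soundfield binaurally to the context's stereo output.
const audioContext = new AudioContext();
const scene = new ResonanceAudio(audioContext);
scene.output.connect(audioContext.destination);

// Load an ordinary mono asset (placeholder path) as a media element source.
const audioElement = new Audio('assets/footsteps.wav');
const elementSource = audioContext.createMediaElementSource(audioElement);

// Create a spatialised source in the scene and feed the audio into it.
const source = scene.createSource();
elementSource.connect(source.input);

// Position the source two metres to the listener's left and one metre ahead;
// many such sources can be created and moved independently each frame.
source.setPosition(-2.0, 0.0, 1.0);

audioElement.play();
```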

A new Unity feature has also been introduced for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, considerably reducing CPU usage during playback. Because CPU resources are usually devoted primarily to the visuals, these constraints can cause real difficulties and often lead to products shipping with only basic audio. Resonance Audio tackles this with techniques such as pre-rendering how sounds reverberate in different environments, so those calculations do not have to be performed on the fly.
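The Unity precompute workflow itself is not shown here, but as a rough analogue the Web SDK lets you describe a room's dimensions and surface materials and have matching reverb rendered for you. The sketch below, continuing from the scene created above, assumes the setRoomProperties call and the material names shown in the Web SDK's getting-started documentation; the specific dimensions are made up.

```typescript
// Describe the room so the SDK can render reverb that matches its
// geometry and surface materials (dimensions in metres).
const roomDimensions = { width: 4.0, height: 2.7, depth: 5.0 };

// Each of the six surfaces gets a material that controls how much
// acoustic energy it reflects or absorbs.
const roomMaterials = {
  left: 'brick-bare',
  right: 'curtain-heavy',
  front: 'marble',
  back: 'glass-thin',
  down: 'grass',        // floor
  up: 'transparent',    // ceiling
};

scene.setRoomProperties(roomDimensions, roomMaterials);
```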

Resonance Audio: Support for sound designers and developers on various platforms

Resonance Audio works with popular game engines and offers plugins for a range of editing suites, so it fits seamlessly into existing workflows. Because an audio solution has to integrate smoothly with the audio and sound-design tools teams already use, Resonance Audio ships cross-platform SDKs for the most popular game engines, audio engines and digital audio workstations (DAWs), letting users concentrate on creating more realistic audio. The SDKs run on Android, iOS, Windows, MacOS and Linux and integrate with Unity, Unreal Engine, FMOD, Wwise and DAWs. Native APIs are also provided for Java, C/C++, Objective-C and the web. This multi-platform support lets developers implement their sound design once and then ship their project with consistent-sounding results across all the top desktop and mobile platforms. Sound designers also save time by using the new DAW plugin to accurately monitor spatial audio destined for YouTube videos or for apps built with the Resonance Audio SDKs.

Resonance Audio: Modelling complex sound environments

Resonance Audio goes well beyond basic 3D spatialisation by providing powerful tools for accurately modelling complex sound environments. The SDK lets developers control how sound waves propagate from each sound source. For example, a guitar sounds quieter to someone standing behind the player than to someone standing in front; and even when you are standing in front of the guitar, it sounds louder when it is facing you than when your back is turned to it.
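For source directivity like the guitar example, the Web SDK exposes a simple two-parameter model. The sketch below assumes the setDirectivityPattern(alpha, sharpness) method described in the SDK documentation, where alpha blends from an omnidirectional response towards a figure-of-eight pattern and sharpness narrows the resulting lobe; the numbers chosen are illustrative.

```typescript
// Give the guitar source a roughly cardioid pattern: alpha of 0.5 mixes
// omnidirectional and dipole responses equally, and a sharpness above 1
// narrows the lobe so listeners behind the source hear noticeably less.
const guitar = scene.createSource();
guitar.setPosition(0.0, 0.0, 2.0);
guitar.setDirectivityPattern(0.5, 2.0);

// Point the source back towards the listener
// (forward vector first, then up vector).
guitar.setOrientation(0, 0, -1, 0, 1, 0);
```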

Another SDK feature automatically renders near-field effects when a sound source comes into close proximity to the listener, giving an accurate sense of distance even when sources are right next to the ear. The SDK also supports sound-source spread by letting you specify the width of a source, so a sound can be rendered as anything from a tiny point in space up to a wall of sound. An Ambisonic recording tool has also been released that captures spatial sound design directly inside Unity, saves it to a file and lets it be used anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.
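Source spread has a similarly small API surface. The sketch below assumes the Web SDK's setSourceWidth method and assumes its argument is the apparent angular width of the source in degrees, with 0 behaving like a point source and larger values smearing the sound across the soundfield; both the method's unit and the values used are assumptions worth checking against the SDK reference.

```typescript
// A point-like source: footsteps localise sharply in one direction.
const footsteps = scene.createSource();
footsteps.setSourceWidth(0);

// A very wide source: ambience that reads as a "wall of sound" rather
// than a point (width given as an assumed angular spread in degrees).
const oceanAmbience = scene.createSource();
oceanAmbience.setSourceWidth(180);
```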

Anyone interested in creating rich, engaging soundscapes with cutting-edge spatial audio technology can take a look at the Resonance Audio documentation on the developer site. Spatial audio can also be experienced on Daydream and SteamVR in the Audio Factory VR app.
