[sdiy] Ray tracing hardware for audio simulation
cheater cheater
cheater00social at gmail.com
Sat Jul 30 17:47:42 CEST 2022
Hi all,
there was recently an AMA with ray tracing experts at NVIDIA, and I
asked about uses for audio. I thought the possibilities could be
interesting to some people here. Below is the link to the original
thread, as well as a copy of the question and the answers.
Cheers
https://forums.developer.nvidia.com/t/meet-the-ray-tracing-gems-team-live-ama-july-28-2022/217920
-------------------------------------------------------------------
Welcome to our first Connect with Experts - AMA, an exclusive benefit
of the NVIDIA Developer Program.
The team behind the popular Ray Tracing Gems will be here live on
July 28, 2022, from 10am to 11am (PDT):
Eric Haines, Adam Marrs, Peter Shirley and Ingo Wald.
Post questions - watch the discussion - ask the team anything.
-------------------------------------------------------------------
Could you please talk about how ray tracing and RTX could help
achieve realistic sound in sound-oriented games, in the context of
the gameplay of the original Thief game, to give a concrete example?
Effects such as realistic reverberation, sound occlusion, the
carrying of sound by continuous surfaces, and others would all be of
benefit here. Are there current approaches to such problems that
still allow ray tracing the graphics at the same time?
Here is an example of gameplay: https://youtu.be/4rWfb7ZtSPc?t=8855
(Thief Gold | 1080p60 | Longplay Full Game Walkthrough No Commentary -
YouTube)
-------------------------------------------------------------------
Answer from Ingo Wald (iwald):
First off: “IRL”, sound is important for judging direction - you can
absolutely hear which direction something is coming from - so
simulating that better in a game should help make the game “feel”
more realistic. On the technical side, sound transport and light
transport are - conceptually - actually very similar; though there
are differences in “how” things reflect, you still need frequent
“line of sight” computations, which are exactly what ray tracing
does - so yes, having fast ray tracing should help make sound
simulation better and more accurate.
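
[My note, not part of the AMA: the “line of sight” computation Ingo
describes is easy to prototype on the CPU. Here is a minimal sketch -
the scene, the single sphere occluder and the 0.25 occlusion gain are
all invented for illustration - that casts one shadow-style ray from
the listener to a sound source and attenuates the source when the
direct path is blocked. A real engine would issue many such queries
per frame against the GPU's acceleration structure.

#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Sphere { Vec3 c; float r; };   // crude occluder (pillar, crate...)

// Does the segment from 'from' to 'to' intersect the sphere?
static bool segmentHitsSphere(Vec3 from, Vec3 to, Sphere s) {
    Vec3 d = sub(to, from);           // segment direction (unnormalized)
    Vec3 m = sub(from, s.c);
    float a = dot(d, d);
    float b = 2.0f * dot(m, d);
    float c = dot(m, m) - s.r * s.r;
    float disc = b*b - 4.0f*a*c;
    if (disc < 0.0f) return false;    // supporting line misses entirely
    float sq = std::sqrt(disc);
    float t0 = (-b - sq) / (2.0f*a);  // near and far hit parameters
    float t1 = (-b + sq) / (2.0f*a);
    return (t0 > 0.0f && t0 < 1.0f) || (t1 > 0.0f && t1 < 1.0f);
}

int main() {
    std::vector<Sphere> occluders = {{{0, 0, 5}, 1.0f}};
    Vec3 listener = {0, 0, 0}, source = {0, 0, 10};

    // One shadow-style ray: if any occluder blocks the direct path,
    // attenuate the source (a real simulation would also low-pass it,
    // and trace indirect paths around the obstacle).
    bool blocked = false;
    for (const Sphere& s : occluders)
        if (segmentHitsSphere(listener, source, s)) blocked = true;

    float gain = blocked ? 0.25f : 1.0f;   // made-up occlusion gain
    std::printf("direct path %s, gain = %.2f\n",
                blocked ? "blocked" : "clear", gain);
}

This is exactly the kind of query RTX hardware accelerates; the audio
“shader” run on a hit is nearly trivial.]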
Eric notes two things: VRWorks Audio (NVIDIA Developer) is from
NVIDIA and may be just what the poster wants. For more on research in
the area, a good place to start might be “Guided Multiview Ray
Tracing for Fast Auralization” by Micah Taylor, Anish Chandak, Qi Mo,
Christian Lauterbach, Carl Schissler, and Dinesh Manocha (2012).
Response from Tony Scudiero:
There’s a good history of ray tracing in audio: there are a number of
commercial products that use ray methods for generating synthetic room
impulse response filters. RTX technology is actually very good for
acoustic simulations, as the material interactions of sound are
usually modeled at a coarser granularity than the interactions of
light with materials. Acoustic simulations tend to have simple
shaders, making their performance fundamentally a function of
ray-scene queries, which RTX accelerates quite well!
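
[My note, not part of the AMA: the synthetic room impulse response
method Tony mentions can be sketched in a few dozen lines. The toy
below - all constants (room size, absorption, detection radius) are
invented, and the result is an uncalibrated energy histogram rather
than a usable filter - fires random rays from a source, bounces them
specularly around a shoebox room, and whenever a ray passes near the
listener it deposits its remaining energy into a time bin. Note how
trivial the per-bounce “shading” is; the cost is all in the
ray-scene queries.

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const float room[3] = {8.0f, 6.0f, 3.0f};   // shoebox size, meters
    const float absorb  = 0.3f;                 // energy lost per bounce
    const float c       = 343.0f;               // speed of sound, m/s
    const float lis[3]  = {2.0f, 3.0f, 1.5f};   // listener position
    const float detectR = 0.5f;                 // detection sphere radius
    const int   numRays = 100000, numBins = 100;
    const float binDt   = 0.01f;                // 10 ms per histogram bin

    std::vector<double> hist(numBins, 0.0);
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);

    for (int i = 0; i < numRays; ++i) {
        float p[3] = {6.0f, 3.0f, 1.5f};        // source position
        float d[3], len2;
        do {                                    // random unit direction
            d[0] = uni(rng); d[1] = uni(rng); d[2] = uni(rng);
            len2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        } while (len2 < 1e-4f || len2 > 1.0f);
        float inv = 1.0f / std::sqrt(len2);
        for (int k = 0; k < 3; ++k) d[k] *= inv;

        float energy = 1.0f, traveled = 0.0f;
        for (int bounce = 0; bounce < 50; ++bounce) {
            // Distance to the nearest wall along d.
            float tWall = 1e9f; int axis = 0;
            for (int k = 0; k < 3; ++k) {
                float t = d[k] > 0.0f ? (room[k] - p[k]) / d[k]
                        : d[k] < 0.0f ? -p[k] / d[k] : 1e9f;
                if (t > 1e-5f && t < tWall) { tWall = t; axis = k; }
            }
            if (tWall >= 1e9f) break;
            // Does this segment pass within detectR of the listener?
            float m[3] = {lis[0]-p[0], lis[1]-p[1], lis[2]-p[2]};
            float tc = m[0]*d[0] + m[1]*d[1] + m[2]*d[2];
            if (tc > 0.0f && tc < tWall) {
                float d2 = m[0]*m[0] + m[1]*m[1] + m[2]*m[2] - tc*tc;
                if (d2 < detectR*detectR) {
                    int bin = (int)((traveled + tc) / c / binDt);
                    if (bin < numBins) hist[bin] += energy;
                }
            }
            // Advance to the wall, reflect, absorb.
            for (int k = 0; k < 3; ++k) p[k] += tWall * d[k];
            traveled += tWall;
            d[axis] = -d[axis];
            energy *= (1.0f - absorb);
            if (energy < 1e-3f) break;
        }
    }
    for (int b = 0; b < 10; ++b)     // print the first 100 ms
        std::printf("%3d ms: %g\n", b * 10, hist[b] / numRays);
}

A production method would weight by detector solid angle and split
the absorption into frequency bands; this is just the skeleton.]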
One of the fundamental challenges of ray tracing acoustic energy is
that the wavelengths in question are about a million times longer
than those of visible light. Wavelengths can be on the order of a
meter, which is the same order of magnitude as many objects. The
consequence is that many effects must be treated over a
cross-sectional area of the wavefront: the interaction of sound
energy with a surface cannot be accurately modeled only at an
infinitesimal point. That said, there has been some research on how
these effects can be treated using ray tracing techniques. The
‘right’ approach usually depends on your goals: accuracy or speed.
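
[My note, not part of the AMA: for a feel for the numbers, with the
speed of sound in air c ≈ 343 m/s and λ = c/f:

  20 Hz   ->  λ ≈ 17 m
  1 kHz   ->  λ ≈ 34 cm
  20 kHz  ->  λ ≈ 1.7 cm

versus roughly 0.4-0.7 µm for visible light, which is where the
factor of about a million comes from.]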
From a technological perspective, there's absolutely nothing standing
in the way of writing a real-time acoustics simulation using ray
tracing graphics APIs like DXR or VkRay to do sound propagation
simulation in tandem with ray tracing graphics. The available
ray-tracing power of current-generation GPUs should be able to handle
a moderately complex acoustic simulation in tandem with graphics.
Depending on how the graphics rendering engine is designed, primary
rays could be used for both purposes, further economizing the
simulation. While this is perfectly possible, I'm not aware of anyone
who has actually done this in one of the graphics APIs.
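
[My note, not part of the AMA: structurally, “in tandem” just means
both passes query the same acceleration structure. The sketch below
is only a shape - the Scene class is a stand-in for a DXR/VkRay
top-level acceleration structure, and every name in it is invented -
but it shows the economization: one scene build per frame, two ray
consumers, and the audio “shader” is almost trivial.

#include <cstdio>

// Stand-in for a GPU acceleration structure shared by both passes.
// In DXR/VkRay this would be a single TLAS queried from different
// shader stages; here it is a placeholder that always reports a hit.
struct Scene {
    bool trace(float* hitDistance) const {   // hypothetical query
        *hitDistance = 12.5f;                // placeholder result
        return true;
    }
};

// Graphics consumer: primary visibility for one pixel.
void shadePixel(const Scene& scene) {
    float t;
    if (scene.trace(&t)) { /* shade the surface at distance t */ }
}

// Audio consumer: an occlusion ray for one sound source. Its
// "shading" is just a gain factor, so its cost is dominated by the
// ray-scene query itself.
float audioOcclusion(const Scene& scene) {
    float t;
    return scene.trace(&t) ? 0.25f : 1.0f;   // made-up occlusion gain
}

int main() {
    Scene scene;                             // built once per frame
    shadePixel(scene);                       // graphics rays
    float gain = audioOcclusion(scene);      // audio rays, same scene
    std::printf("audio gain this frame: %.2f\n", gain);
}

]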
NVIDIA’s VRWorks Audio, which is a relatively simple acoustics
simulation intended for interactive experiences, uses OptiX. Version
2.0 of that SDK can make use of RTX hardware when available.