convolution tool and reverbs
Martin Czech
martin.czech at intermetall.de
Mon Jan 17 16:01:11 CET 2000
Many list members assured me that this is not off-topic.
So be it.
My fast convolution tool has made great progress during
the last few weekends; it is now very stable.
I had lots of fun this weekend computing all kinds of effects.
I used some of the impulse responses from a link given by a friendly
list member (thanks), namely "shower" and "Drotningholm castle",
the latter having a very long reverb of ~10 s.
This is about as much as I can compute on my IBM compatible
with Win95 and 32 MB of DRAM.
Yes, fast convolution needs lots of memory, but it is about 1000 times
faster than direct convolution with impulse responses this long.
And memory is cheap these days; I'll plug in another 64 MB asap.
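To put rough numbers on that (my own back-of-the-envelope estimate,
not a measurement): at 44.1 kHz a 10 s impulse response is about
441,000 taps, so direct convolution needs about 441,000 multiply-adds
per output sample. Sectioned FFT convolution with blocks of roughly
a million points needs only a few hundred operations per output
sample, so a speedup of a few thousand is plausible.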
And I listened to examples with my very own impulse files,
"very small glass room 1.5x1.5x2 m", "swimming pool",
"staircase" and "fireworks", or vocal filters "iiii"
and "aaaaaa". Very impressive.
The results are very encouraging. The richness of these ambient
reverbs is especially striking if you compare them to the usual
IIR reverbs you get from effect boxes like the LXP1/LXP5 (which offer
medium quality, I'd say). This is simply because an affordable
machine cannot compute as many uncorrelated reflections as
a natural recording contains. And rooms have strong resonances
which are not found in the artificial algorithms either.
In some respects the artificial reverbs sound nicer,
but reality isn't that way.
The microphones I used have an omnidirectional (circular)
characteristic, not cardioid.
So convolution will yield much too much "room"; some direct
signal will always be needed to make it really realistic.
I know that several RADAR antennas can be combined to obtain
variable, special spatial characteristics; we have two ears, and I guess
we can focus on nearby sources in a similar way, so
an omnidirectional microphone at the same place and time will
apparently give much too much reverb.
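The practical fix is simply to mix some of the dry signal back in
with the convolved one. A minimal sketch of such a wet/dry mix, just
to illustrate the idea (the 0..1 "wet" ratio is a made-up parameter,
this is not code from my tool):

    /* blend the convolved ("wet") signal with the direct ("dry") one */
    void mix_wet_dry(const float *dry, const float *wet_sig, float *out,
                     long n, float wet)
    {
        long i;
        for (i = 0; i < n; i++)
            out[i] = (1.0f - wet) * dry[i] + wet * wet_sig[i];
    }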
The attempt to deconvolve the impulse responses was a total failure,
however. The resulting filter is unstable and zaps off to infinity
after the first 80 samples or so.
But this is no big problem; I know the rooms where the samples were
taken, and the results are very "real" even without deconvolution.
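That behaviour is to be expected: a measured room response is usually
not minimum phase, so its exact causal inverse is unstable. A tiny
sketch of the naive time-domain inverse (toy numbers, not my real
measurements) shows the blow-up:

    #include <stdio.h>

    #define N 200

    int main(void)
    {
        double h[N] = {0.0}, g[N];
        int n, k;

        /* made-up toy response: direct sound plus one echo that is
           louder than the direct sound, i.e. not minimum phase */
        h[0]  = 0.5;
        h[10] = 0.9;

        /* exact inverse: (h * g)[n] = delta[n], solved recursively */
        g[0] = 1.0 / h[0];
        for (n = 1; n < N; n++) {
            double acc = 0.0;
            for (k = 1; k <= n; k++)
                acc += h[k] * g[n - k];
            g[n] = -acc / h[0];
        }

        for (n = 0; n < N; n += 20)
            printf("g[%3d] = %g\n", n, g[n]);  /* magnitude explodes */
        return 0;
    }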
The free air sample revealed a 300 Hz peak and a high-frequency
rolloff up to 2000 Hz (-20 dB down) in its spectrum,
so I used an "inverse" filter (CoolEdit) to flatten the reverb samples
a bit, and this really was a great improvement.
I'd say that after this frequency correction it would be very hard
to decide which is a recording made in situ and which is "only" convolved.
Very often a theoretical concept yields absolutely uninteresting
results after implementation, but this time things are different.
The algorithm works offline; I know that most people won't
like this idea. But if you consider that the filters can be of
very high order, you'll realize that setting the right recording level
would be a big problem. Offline this is no problem: the result is stored
in an intermediate 32-bit file and normalized afterwards in a few seconds,
so if the input files are properly made the output will always automatically
have the maximum dynamic range you can get from 16 bits. I believe this is faster
than doing a live recording twice, or even three times, just to determine
the headroom.
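The normalization step itself is trivial; a sketch of what it amounts
to (function name and buffer handling are my own, the real tool works
on files):

    #include <math.h>

    /* find the peak of the 32-bit float result and scale it to the
       full 16-bit range */
    void normalize_to_16bit(const float *in, short *out, long n)
    {
        double peak = 0.0;
        long i;

        for (i = 0; i < n; i++)
            if (fabs(in[i]) > peak)
                peak = fabs(in[i]);

        if (peak == 0.0)
            peak = 1.0;               /* silence: avoid dividing by zero */

        for (i = 0; i < n; i++)
            out[i] = (short)floor(in[i] / peak * 32767.0 + 0.5);
    }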
The sectioned convolution algorithm cannot operate in real time,
but maybe one could combine it with a direct convolution to get the early
samples needed for instant response. I don't know; it wasn't intended to
work like this, since I don't need it.
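Just to illustrate what I mean (this is only the idea, not part of my
tool): the first few hundred taps of the impulse response would be
convolved directly, sample by sample, so output is available at once,
while the remaining taps would be processed in FFT-sized blocks and
mixed in that many samples later. Only the direct "head" part is
sketched here:

    #define HEAD 256   /* made-up split point */

    /* one zero-latency output sample: y[n] = sum_k h[k]*x[n-k], k < HEAD */
    float head_convolve(const float h[HEAD], const float *x, long n)
    {
        float acc = 0.0f;
        long k;
        for (k = 0; k < HEAD && k <= n; k++)
            acc += h[k] * x[n - k];
        return acc;
    }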
It will take some additional time until it is made publicly available.
I'm thinking about a Tcl/Tk GUI frontend.
Again, thanks for all the good input: C programming hints, balloons,
convolution theory.
m.c.