[sdiy] Walsh Generators

ASSI Stromeko at compuserve.de
Sat Mar 19 16:02:47 CET 2005


On Friday, 18 March 2005 17:38, Scott Gravenhorst wrote:
> I think it was ASSI who posted that high computing power is required
> and that general purpose CPU chips aren't fast enough for a
> significant number of Walsh functions.

On a typical PC it is much easier to just pre-compute a set of waves and 
play back an interpolation of them. The trade-off here is computing 
cycles vs. memory.

Walsh functions and the Walsh Transform have interesting and quite 
surprising links to Wavelets, especially the Haar Wavelet, so-called 
anallagmatic pavements, Hadamard's maximum determinant problem and 
error correcting codes. The Walsh Transform builds on Walsh functions 
as their orthonormal basis and does roughly the same thing as the 
Fourier transform, except that frequency is now called sequency. In his 
article Harmuth even went so far as to call the even and odd sequency 
functions Cal_n and Sal_n, to underscore the similarity to cos(n*x) and 
sin(n*x) in terms of their symmetry properties.

Walsh functions are polynomials of Rademacher functions (which are 
square waves related by successive doubling of frequency). They take on 
only values -1 and +1 and the set of coefficients for the generator 
polynomials consists of 0 and 1 only. If you concatenate those 
coefficients to a binary bitstring, you get the Gray code of the 
sequency number of the Walsh function generated by those coefficients. 
If you also map -1 to 0 and +1 to 1, then you can compute Walsh 
functions in a number space that has the interesting property that both 
addition and multiplication can be accomplished by a binary XOR. This 
is the reason that the Walsh function generator is so exceptionally 
easy to implement in digital logic. A general purpose computer is not 
designed to operate on that number space, so it can't use these 
properties efficiently. The Walsh functions can also be defined 
recursively. Depending on the ordering, the Walsh matrix takes on 
different forms: the Hadamard (natural) and Paley (dyadic) orderings are 
created by recursive or nested procedures and are related to the 
sequency ordering by Gray coding and/or bit-reversal of the generator 
bitstring.
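As a sketch of that construction (the function names and sampling grid are my own, not from any particular implementation): Gray-code the sequency number to get the generator bitstring, bit-reverse it to get the row index in the natural (Hadamard) ordering, and evaluate the function with nothing but ANDs and parity checks.

```python
def gray(n):
    """Gray code of n -- the generator bitstring for sequency n."""
    return n ^ (n >> 1)

def bitrev(x, m):
    """Reverse the low m bits of x."""
    r = 0
    for _ in range(m):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def walsh(n, m):
    """One period of the sequency-ordered Walsh function wal(n, .),
    sampled at N = 2**m points, taking only the values -1 and +1."""
    h = bitrev(gray(n), m)  # row index in the natural (Hadamard) ordering
    # sample j has sign (-1)**popcount(h & j) -- pure bit logic, no arithmetic
    return [1 - 2 * (bin(h & j).count("1") & 1) for j in range(1 << m)]
```

With this indexing, wal(n, .) has exactly n sign changes per period, which is what "sequency n" means: wal(0, .) is the constant +1 and wal(N-1, .) is the fastest square wave.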

That takes care of generating the Walsh functions, but you can't really 
do anything with the functions themselves. What you need is a Walsh 
transformer, which is either a matrix multiplication between the Walsh 
matrix (created by stacking up all Walsh functions row by row) and the 
set of coefficients for one cycle of the waveform _or_ a vector 
multiplication between one column (or row, as the Walsh matrix is 
symmetric) of the Walsh matrix and the set of Walsh coefficients to 
produce a single sample. What makes these operations efficient is that 
the Walsh matrix contains only -1 and +1 values, which means you never 
really have to multiply the coefficients - you either add or subtract 
depending on the sign. There are many, many possible 
implementations of that operation depending on where you put the 
boundary between digital and analog.
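As an illustrative sketch of that multiply-free transform (not anyone's actual implementation), here is the classic fast Walsh-Hadamard butterfly in Python: log2(N) passes over the data, each butterfly a pure add/subtract pair.

```python
def fwht(samples):
    """Fast Walsh-Hadamard transform, natural (Hadamard) ordering.
    Every butterfly is a pure add/subtract pair -- no multiplications."""
    a = list(samples)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # add/subtract butterfly
        h *= 2
    return a
```

Running it twice returns N times the input, so the inverse transform is the same butterfly followed by a division by N - which is why the same add/subtract hardware can both analyze and resynthesize.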

The approach that Neil Johnson took is to do it mostly in analog and 
just generate control signals for switching the control voltages that 
make up the set of Walsh coefficients. The analog part could be 
implemented more efficiently with a differential pair or analog flip 
switch per coefficient (tail current represents the coefficient value) 
and doing the summation in current mode for each side, with the final 
output created by the same current mirror arrangement that an OTA uses. 
To keep out the switching noise, you'll probably want to have an S&H at 
that end as well, which may lend itself to a switched capacitor 
realization (that is voltage mode instead of current mode summation). 
If you do the Walsh generator in an FPGA anyway, a bit of redundant 
arithmetic transforms what looks like a painful succession of 
add/subtract into one or two adder tree(s) of roughly the complexity 
of a multiplier. That saves generating voltages or currents for each 
coefficient and can probably be done at several tens of MHz, so you 
could either get more than one voice from a single generator or achieve 
very high update rates on the DAC (trading off for reduced resolution 
and easing the requirements on the anti-aliasing filter). You could 
sigma-delta modulate the coefficients instead of adding the binary 
numbers representing them and then compress the resulting bitstreams 
(the compressor tree gets simpler than the adder tree, but it has to 
accommodate the oversampling ratio of the sigma-delta modulation). Or 
you don't compress and build a simple 1.5-bit DAC for each function and 
do the summation in the analog domain again.

The appeal of Walsh functions to generate waveforms is not that it's a 
completely new form of synthesis. What makes it interesting (to me at 
least) is that it can be implemented in many ways that don't require a 
computer or DSP and the hardware I'd implement it with (FPGA) can be 
adapted to almost any other ideas I may come up with and provides 
enough power to make serious instruments from (FWIW the concept of a 
mono synth is completely lost on me - I want polyphony, and lots of 
it). The other thing it has going for it is that, at least for "correctly done" 
realizations, the oscillator can be modulated both in phase and 
frequency at extremely high rates and with essentially no latency while 
at the same time the waveform might be changed at the same rate. This 
is almost impossible to do with any CPU-based form of synthesis or 
analog oscillators. If there are any new sounds left to synthesize, I 
would expect extended modulation capabilities to bring them to us. I'd really 
like to hear an oscillator that produces the same animation that for 
instance a single string inside my piano exhibits.

> He also mentioned that there is no simple or perhaps even any
> discovered way to manipulate Walsh coefficients based on standard FFT
> style input information, i.e., to have a slider for the fundamental
> _sine_, another for a 2F _sine_, etc.

You can do that easily. The Walsh transform is linear, just like the 
Fourier transform, so you can compute the coefficients for sin(n*x) and 
then just scale each set of coefficients with the Fourier magnitude and 
add them all together. My comment was that I have an inkling that there 
ought to be a more efficient way of doing that (I will explain why in a 
second).
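A hedged sketch of that scheme (the harmonic set and "drawbar" magnitudes below are invented for illustration, and `walsh_matrix` and `coeffs` are my own helper names): precompute the sequency-domain coefficients of each harmonic sine once, scale each set by its slider value, sum by linearity, and resynthesize one sample as an inner product with a Walsh matrix column.

```python
import math

def walsh_matrix(m):
    """Sequency-ordered Walsh matrix of size N x N, N = 2**m, entries +/-1."""
    def bitrev(x):
        r = 0
        for _ in range(m):
            r = (r << 1) | (x & 1)
            x >>= 1
        return r
    N = 1 << m
    return [[1 - 2 * (bin(bitrev(n ^ (n >> 1)) & j).count("1") & 1)
             for j in range(N)]
            for n in range(N)]

m = 5
N = 1 << m
W = walsh_matrix(m)

def coeffs(samples):
    """Walsh coefficients of one waveform cycle (matrix product, scaled 1/N)."""
    return [sum(w * s for w, s in zip(row, samples)) / N for row in W]

# Walsh coefficients of each harmonic sine, computed once up front
harmonics = {n: coeffs([math.sin(2 * math.pi * n * (i + 0.5) / N)
                        for i in range(N)])
             for n in (1, 2, 3)}

drawbars = {1: 1.0, 2: 0.5, 3: 0.25}  # hypothetical slider settings

# linearity: scale each coefficient set by its magnitude and sum
c = [sum(drawbars[n] * harmonics[n][k] for n in drawbars) for k in range(N)]

# resynthesis: one sample is the inner product of c with a matrix column
wave = [sum(c[k] * W[k][j] for k in range(N)) for j in range(N)]
```

Because the transform is linear and the Walsh matrix orthogonal, `wave` lands exactly on the sum of the scaled sines, up to floating-point error.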

Many people don't believe this until they've seen it, but for any set of 
basis functions that define a linear transform you can move smoothly 
from one waveform to another in the time domain by interpolating 
between the two spectra in the transform domain. It doesn't have to be 
a Fourier spectrum, Walsh or any other works just as well, even though 
the Walsh functions don't look at all like they'd lend themselves to 
smooth interpolation. In the case of the Fourier transform, the spectrum is 
a frequency spectrum which we can relate to relatively easily. In the 
case of the Walsh transform we look at a sequency spectrum, which is 
harder to understand - but it works just the same way in all other 
respects. So to restate my comment once more, I think that there ought 
to be a way to transform between the sequency spectrum and the frequency 
spectrum without going through the time domain. But that is more of an 
academic interest, not a hindrance in practice.
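A small numerical demonstration of the interpolation claim (illustrative code; the two waveforms and the morph position are arbitrary choices of mine): interpolating the two Walsh spectra and transforming back lands exactly on the time-domain crossfade, because the transform is linear.

```python
def fwht(samples):
    """Fast Walsh-Hadamard transform (natural ordering); self-inverse up to N."""
    a = list(samples)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

N = 8
saw = [2 * i / (N - 1) - 1 for i in range(N)]
square = [1.0] * (N // 2) + [-1.0] * (N // 2)

spec_a, spec_b = fwht(saw), fwht(square)
t = 0.3                                       # arbitrary morph position
mixed = [(1 - t) * a + t * b for a, b in zip(spec_a, spec_b)]
morph = [v / N for v in fwht(mixed)]          # inverse transform = forward / N
```

The same code with a Fourier transform in place of `fwht` would give the identical `morph` - which is the point: the smooth morph is a property of linearity, not of any particular basis.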

> Without this, the human 
> interface will lack the easy intuitiveness that is present in
> sinewave additive synthesis.  Basically, the interface will suck.

No. You can for instance do drawbar style synthesis with any set of 
basis functions. You can do the same with wavetable synthesis or wave 
sequencing. Whether the UI is intuitive or sucks is really not a 
function of the method of synthesis.

Regarding drawbar-style synthesis with Walsh functions: the coefficient 
tables for saw and triangle waves are very sparse and the necessary CV 
is easily produced with a simple ladder network to divide the original 
CV successively by 2. For triangle the coefficients are further apart 
than for saw, so you need even fewer CVs. If you insist on sine waves, 
you'll find that for symmetry reasons three out of four coefficients 
should be zero. Neil's table doesn't show this because he has chosen a 
form of sampling for the transform that effectively phase-shifts the 
sine and thus destroys the quarter-wave symmetry, which means only 
every second coefficient is zero.
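This sparsity is easy to check numerically (a sketch under my own assumptions: midpoint sampling at (i + 0.5)/N, which preserves the quarter-wave symmetry, and my own `walsh_matrix`/`coeffs` helpers). The sequency-ordered coefficients come out as claimed: the saw has only log2(N) nonzero entries, at sequencies 1, 3, 7, ..., each half the previous, and the symmetric sine has only N/4 nonzero entries.

```python
import math

def walsh_matrix(m):
    """Sequency-ordered Walsh matrix, N = 2**m, entries +/-1."""
    def bitrev(x):
        r = 0
        for _ in range(m):
            r = (r << 1) | (x & 1)
            x >>= 1
        return r
    N = 1 << m
    return [[1 - 2 * (bin(bitrev(n ^ (n >> 1)) & j).count("1") & 1)
             for j in range(N)]
            for n in range(N)]

m = 5
N = 1 << m
W = walsh_matrix(m)

def coeffs(samples):
    return [sum(w * s for w, s in zip(row, samples)) / N for row in W]

# midpoint sampling keeps the odd and quarter-wave symmetries intact
saw = [2 * (i + 0.5) / N - 1 for i in range(N)]
sine = [math.sin(2 * math.pi * (i + 0.5) / N) for i in range(N)]

saw_nz = [(k, c) for k, c in enumerate(coeffs(saw)) if abs(c) > 1e-12]
sine_nz = [k for k, c in enumerate(coeffs(sine)) if abs(c) > 1e-12]
```

The successive halving of the saw coefficients is exactly what makes the ladder network above work: each tap of the divide-by-2 ladder feeds the next nonzero coefficient.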


Achim.
-- 
+<[Q+ Matrix-12 WAVE#46 Neuron microQkb Andromeda XTk sonic heaven]>+

DIY Stuff:
http://Stromeko.Synth.net/#DIY
