[sdiy] Voice assignment algorithms
ASSI
Stromeko at nexgo.de
Thu Mar 25 20:12:06 CET 2010
Hi Tom,
On Thursday 25 March 2010, Tom Wiltshire wrote:
> The system I've got currently stores two stacks of voices; the free
> stack and the busy stack.
[...]
I wouldn't call this a stack, but rather a FIFO or a linked list,
depending on the implementation. Unless I misunderstood, there will
be a good deal of pointer chasing to implement a least recently used
voice allocation strategy. A most recently used strategy would be
possible with minor modifications, but some other strategies (like
lowest voice number first or round-robin) would need extra effort or
are a bit awkward to implement. This linked list stuff is very
flexible in that regard, but it costs memory and cycles, and god help
you if you make an implementation error in some fringe case...
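For completeness, the FIFO variant doesn't need any pointers at all;
a plain ring buffer over the voice indices does the same job. A
minimal untested sketch in C, assuming 8 voices (the names are made
up):

  #include <stdint.h>

  #define NUM_VOICES 8

  /* Ring buffer of free voice indices: voices come off the head and
     go back on the tail, so the voice released longest ago is handed
     out first -- least recently used, without pointer chasing. */
  static uint8_t free_fifo[NUM_VOICES];
  static uint8_t head, tail, count;

  static void init_fifo(void)
  {
      for (uint8_t v = 0; v < NUM_VOICES; v++)
          free_fifo[v] = v;
      head = tail = 0;
      count = NUM_VOICES;
  }

  static int8_t take_voice(void)
  {
      if (count == 0)
          return -1;                       /* nothing free: steal */
      uint8_t v = free_fifo[head];
      head = (uint8_t)((head + 1) % NUM_VOICES);
      count--;
      return (int8_t)v;
  }

  static void return_voice(uint8_t v)
  {
      free_fifo[tail] = v;
      tail = (uint8_t)((tail + 1) % NUM_VOICES);
      count++;
  }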
> When a Note Off event arrives, we can look up which voice is
> playing that note in the 'notes' array.
[...]
The note array is all you need, IMHO - the index is the voice number
and the size tells you how many voices you have. If a valid MIDI note
number is in the array (if multitimbral: channel|note, with the
channel or sound part pre-allocated if necessary or wanted), that
voice is playing it. If the voice is freed, set the sign bit or
something else that is easily tested. Leave the note information
untouched and you can later base some decisions on which sound/note
the voice was playing before. If you have lots of voices, add a
bitmap that saves you scanning the array (I'd think that up to 8-16
voices, simply scanning the array is faster on most uCs), and so on.
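A minimal sketch of what I mean, untested, assuming 8 voices and
single-timbral operation (all names made up):

  #include <stdint.h>

  #define NUM_VOICES 8
  #define VOICE_FREE 0x80   /* sign bit: voice released, note kept */

  /* notes[v] holds the MIDI note voice v is (or was last) playing. */
  static uint8_t notes[NUM_VOICES];

  /* Voice currently playing 'note' (free flag clear), or -1. */
  static int8_t find_voice(uint8_t note)
  {
      for (uint8_t v = 0; v < NUM_VOICES; v++)
          if (notes[v] == note)
              return (int8_t)v;
      return -1;
  }

  /* Free the voice but keep the note for later decisions. */
  static void free_voice(uint8_t v)
  {
      notes[v] |= VOICE_FREE;
  }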
Keep in mind that a note off does not immediately free that voice; it
only puts it into the release phase, so you probably need some flag
to tell you when the release has really finished (or, for some voice
stealing strategies, how far along the release phase already is).
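For instance (again untested; env_level[] is an assumed hook into
whatever your envelope code looks like):

  #include <stdint.h>

  extern uint8_t env_level[8];   /* per-voice envelope, 0 = silent */
  static uint8_t voice_silent;   /* bit v set: release finished    */

  /* Called periodically, or from the envelope code itself. */
  static void update_silent_bitmap(void)
  {
      for (uint8_t v = 0; v < 8; v++) {
          if (env_level[v] == 0)
              voice_silent |= (uint8_t)(1u << v);
          else
              voice_silent &= (uint8_t)~(1u << v);
      }
  }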
Things get a bit more hairy when you take sostenuto and sustain into
account. If you work with the raw MIDI events, all voices have to
process these controllers themselves; if you don't, you'll have to
generate synthetic gate events in the controller and the voices just
have to deal with those, which is preferable IMHO.
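The sustain part of that could look like this (untested sketch;
gate_off() stands in for whatever your voice interface provides;
sostenuto works the same way but latches the set of held voices at
the moment the pedal goes down):

  #include <stdint.h>
  #include <stdbool.h>

  extern void gate_off(uint8_t v);   /* the only thing voices see */

  static bool    sustain_down;       /* CC64 state                */
  static uint8_t deferred_off;       /* voices held by the pedal  */

  /* Note off arrived for voice v: defer the gate if sustained. */
  static void handle_note_off(uint8_t v)
  {
      if (sustain_down)
          deferred_off |= (uint8_t)(1u << v);
      else
          gate_off(v);
  }

  /* CC64: value >= 64 means pedal down. */
  static void handle_sustain(uint8_t value)
  {
      sustain_down = (value >= 64);
      if (!sustain_down) {             /* pedal up: flush the offs */
          for (uint8_t v = 0; v < 8; v++)
              if (deferred_off & (1u << v))
                  gate_off(v);
          deferred_off = 0;
      }
  }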
Another grey area is what should happen if you re-trigger the same
note while it is still playing (with sustain, for instance). Should
you implement an arpeggiator that can trigger more than one voice,
things get even more interesting. OTOH, if you don't need to
implement any of that, a voice assigner that could deal with these
situations would carry a lot of unnecessary baggage.
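If you do want to handle it, one possible policy is to reuse the
voice that already has that note and just re-gate it (untested;
notes[] is from the note array sketch above, allocate_voice() and
gate_on() stand in for your free/steal path and voice interface):

  #include <stdint.h>

  extern uint8_t notes[8];           /* from the note array sketch */
  extern int8_t  allocate_voice(void);
  extern void    gate_on(uint8_t v);

  static void handle_note_on(uint8_t note)
  {
      int8_t v = -1;
      /* Same note still sounding (even if only sustained)? */
      for (uint8_t i = 0; i < 8; i++)
          if ((notes[i] & 0x7F) == note) { v = (int8_t)i; break; }
      if (v < 0)
          v = allocate_voice();
      if (v >= 0) {
          notes[(uint8_t)v] = note;  /* also clears the free flag */
          gate_on((uint8_t)v);       /* re-trigger or fresh start */
      }
  }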
Depending on what (re-)allocation strategy you want to implement, you
will need some registers to keep the indices of certain voices, but
probably not linked lists. Things get trickier if you need to steal
voices, as there are many things you can do to make stealing less
noticeable, at some cost in the assigner code.
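One stealing order that tends to be fairly unobtrusive, as an
untested sketch (age[] is assumed to be bumped on every note on;
notes[] and env_level[] as in the sketches above):

  #include <stdint.h>

  extern uint8_t notes[8];       /* bit 7 set: voice releasing */
  extern uint8_t env_level[8];   /* 0 = silent                 */
  extern uint8_t age[8];         /* bigger = older note        */

  static uint8_t pick_victim(void)
  {
      uint8_t best = 0, best_env = 0, best_age = 0;
      uint8_t found = 0;
      /* Pass 1: the quietest releasing voice, if there is one. */
      for (uint8_t v = 0; v < 8; v++)
          if ((notes[v] & 0x80) && (!found || env_level[v] < best_env)) {
              found = 1;
              best_env = env_level[v];
              best = v;
          }
      if (found)
          return best;
      /* Pass 2: otherwise the oldest still-playing voice. */
      for (uint8_t v = 0; v < 8; v++)
          if (age[v] >= best_age) {
              best_age = age[v];
              best = v;
          }
      return best;
  }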
> This seems like a reasonable scheme from many points of view (and
> seems to be fast), but there are things it doesn't do. For example,
> it doesn't cope particularly well when set up with only a single
> voice (e.g. monophonic). And specifically, if you play and release
> a note whilst holding down a previous note, you'll get silence
> after the release, whereas most monophonic synths store the held
> keys and fall back to the previously-held key.
This is where the synthetic gate comes in handy. As long as the gate
is open, the voice is reused even if it gets new pitch data. The
other nice thing is that you get to decide whether to re-trigger or
not without burdening the voice code with those details. You will
need to keep a stack to track which keys Mr. Fumblefinger keeps
pressed if you want to do it LIFO style. Other note priorities will
require a quick way to determine the highest or lowest note (still)
pressed. You could implement that with linked lists, but bitmaps are
often faster depending on which uC you use, and they certainly use
less memory. All that said, I think it's conceptually still easier
to simply block off that mono voice from use by the poly voice
assigner and have a separate pitch&trigger assigner for mono mode.
This does not necessarily mean you can't dynamically assign this
voice to mono, just that once it is used as a mono voice, some extra
strand of code gets to deal with it until de-allocation.
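For reference, the bitmap version of note priority (untested;
low-note priority shown, scan from the top instead for high-note
priority, and keep a separate small stack of pressed keys if you
want last-note priority):

  #include <stdint.h>

  static uint8_t held[16];   /* 128 bits, one per MIDI key */

  static void key_down(uint8_t n) { held[n >> 3] |= (uint8_t)(1u << (n & 7)); }
  static void key_up(uint8_t n)   { held[n >> 3] &= (uint8_t)~(1u << (n & 7)); }

  /* Lowest key still held, or -1: 16 bytes to scan at worst. */
  static int16_t lowest_held(void)
  {
      for (uint8_t i = 0; i < 16; i++)
          if (held[i])
              for (uint8_t b = 0; b < 8; b++)
                  if (held[i] & (1u << b))
                      return (int16_t)(i * 8 + b);
      return -1;
  }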
> Does anyone know of any papers that discuss this sort of stuff, or
> where I could find more information? Has anyone else tried writing
> polyphonic/monophonic voice assignment schemes? Any experience to
> offer?
Sorry, I can't help you much there, although I think Mr. Chamberlin
would have something to say about this. A hardware voice assigner
can always be rewritten in software form, however, so if you like a
particular implementation, a schematic isn't the worst thing to look
at. For voice allocation on monosynths, Gordon Reid has spent
several installments of his Synth Secrets series in SOS (now fully
available for free online viewing) explaining their different note
priorities and how they are useful. Duophonic voice modes, for
instance, are to the best of my knowledge sadly neglected in
software voice assigners.
Achim.
--
+<[Q+ Matrix-12 WAVE#46+305 Neuron microQkb Andromeda XTk Blofeld]>+
SD adaptation for Waldorf rackAttack V1.04R1:
http://Synth.Stromeko.net/Downloads.html#WaldorfSDada