[sdiy] Communications with voices in a polyphonic synth
cheater cheater
cheater00 at gmail.com
Thu Mar 11 16:08:55 CET 2010
On Thu, Mar 11, 2010 at 13:16, Tom Wiltshire <tom at electricdruid.net> wrote:
>
> On 11 Mar 2010, at 11:38, Neil Johnson wrote:
>>
>> Ok, so on that chip you have a choice of SPI, I2C, UART and CAN:
>>
>> * I2C is limited to 400 kHz if going off-board, but has HW support for
>> in-band addressing.
>>
>> * SPI as discussed looks like a no-no (too many pins).
>>
>> * CAN goes up to 1 Mbit/s, with lots of HW support for addressing,
>> minimising SW overhead.
>>
>> * UART goes up to 10 Mbaud, and in 9-bit mode can do in-band addressing to
>> minimise SW overheads.
>>
>> I'd choose either CAN or UART, with differential signalling. But - as
>> has been asked already - what speed do you *NEED*..?
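The 9-bit in-band addressing Neil mentions can be sketched like this. The
message layout here (one address word with the 9th bit set, followed by plain
data words) is my assumption for illustration, not a spec:

```python
# Sketch of 9-bit UART "address mark" framing. Assumption: the 9th bit
# is set on address words and clear on data words, so voice boards can
# filter out other voices' data in hardware and sleep through it.
ADDR_FLAG = 0x100  # the 9th bit


def frame_message(voice_addr, data_bytes):
    """Return the 9-bit words for one addressed message."""
    words = [ADDR_FLAG | (voice_addr & 0xFF)]   # address word, bit 9 set
    words += [b & 0xFF for b in data_bytes]     # data words, bit 9 clear
    return words


msg = frame_message(3, [0x45, 0x64])  # e.g. note number, then velocity
print([hex(w) for w in msg])
```

A voice uP only has to compare incoming address words against its own address;
everything else can be ignored with near-zero software cost.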
One thing that makes RS a good standard is the fact that... it's a
standard. Your cut-down CAN is not going to be standard; in 10 years
it might be difficult for you to recreate the protocol. RS is well
documented.
> Well, "as fast as possible", obviously! Isn't that always the answer?!
>
> Practically, though, I have been thinking about this, as John Speth
> suggested - quantify the problem and you might solve it.
>
> I want the time taken from pressing a key to the note starting in the voice
> processor to be <2 ms. There, that's the specification.
> There's obviously a lot of software in that path, so it depends on (for
> example) how fast the keyboard is scanned. In many ways the voice comms is
> one of the smallest sources of delay, but there's no reason why it should be
> a larger delay than it needs to be.
> Let's assume that we have <0.25 ms to get the message from the main uP to the
> voice. The other 7/8ths of the time will be eaten up by parsing at both ends
> and such like. If I was trying to get an address byte and a double-byte of
> data into that time, that'd imply a data rate of 100 kbit/s or better. Not
> very demanding.
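As a sanity check on that figure (my arithmetic, following the quoted
assumptions of a three-byte message inside a 0.25 ms budget):

```python
# Required bit rate for one address byte plus a double-byte of data
# delivered within the 0.25 ms comms slice of the 2 ms key-to-note spec.
budget_s = 0.25e-3              # comms budget in seconds
payload_bits = 3 * 8            # address byte + two data bytes
raw_rate = payload_bits / budget_s       # ignoring framing: 96 kbit/s
framed_rate = (3 * 10) / budget_s        # with UART start/stop bits: 120 kbit/s
print(raw_rate, framed_rate)
```

So even with ordinary 10-bit UART framing the requirement stays near the
100 kbit/s mark quoted above.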
Think about the worst-case scenario, where a packet can only go out
after all the other packets ahead of it. As I see it, what could happen is:
1. user presses 64 keys
2. lfo/noise packets happen (say, 10 multicast packets)
3. for each key you get a packet which sets up every matrix
destination based on velocity and pitch (say, 10 packets each)
4. for each key you get a packet which starts the note (say, 2 packets)
so this is 10 + 640 + 128 packets for 64 voices.
For 16 voices that's 10 + 160 + 32 packets.
For 8 voices that's 10 + 80 + 16 packets, i.e. 106 packets. In 9-bit
mode that's 954 bits, so round it up to 1 kilobit, which has to happen
in 0.25 ms. That means 4 kbit per ms, i.e. 4 Mbit/s.
For 64 voices it's 778 packets, roughly 7 times more, so on the order of 28 Mbit/s.
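The packet totals above can be reproduced with a quick script (the per-step
packet counts are the estimates from the numbered list, not measured figures):

```python
# Worst-case packet count per key-press burst, using the estimates above:
# 10 multicast lfo/noise packets, 10 matrix-setup packets per key,
# 2 note-start packets per key.
def worst_case_packets(voices, lfo=10, matrix_per_key=10, note_per_key=2):
    return lfo + voices * matrix_per_key + voices * note_per_key


for v in (8, 16, 64):
    pkts = worst_case_packets(v)
    bits = pkts * 9                    # one 9-bit word per packet, as above
    rate_mbit = bits / 0.25e-3 / 1e6   # must fit in the 0.25 ms budget
    print(f"{v} voices: {pkts} packets, {bits} bits, {rate_mbit:.1f} Mbit/s")
```

The exact 64-voice figure comes out just under 28 Mbit/s; the absolute
worst case scales linearly with voice count because the per-key packets
dominate the fixed multicast traffic.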
This is the absolute worst case scenario. Usually, keys won't be
pressed at the same time, and for this sort of chording to happen you
would need all keys to be pressed within the time of transfer of this
many packets:
10 + 10 + 2
which is 22 packets, or 198 bits in 9-bit mode, which takes about 0.2 ms
on a 1 Mbit/s bus with zero latency. This is not very likely
to happen at all; but bear in mind that catastrophes can happen.
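That chording window works out as follows (my restatement of the numbers
just above):

```python
# Window within which all keys must land for the true worst case:
# the 10 lfo/noise + 10 matrix + 2 note packets already in flight.
packets = 10 + 10 + 2
bits = packets * 9            # 9-bit words, one word per packet
window_ms = bits / 1e6 * 1e3  # on a 1 Mbit/s bus, zero latency assumed
print(window_ms)
```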
What's more important is that if you want to support OSC one day, you
will face this situation, because sequencers will send all the notes at
once: the notes will be lined up neatly on a grid, and therefore can
and will be grouped into one packet or expressed as a multicast
message.
But that's completely removed from the reality of today; just bear it in mind.
What's less removed from reality is that at 1 Mbit/s, if those 10 LFO
events and 10 mod matrix events are happening, you only have a 0.05 ms
margin, which is very, very little, so you might consider hardware
with lower latency than that.
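Spelling that margin out (assuming the 22 worst-case packets from above
against the 0.25 ms comms budget):

```python
# Headroom left in the 0.25 ms comms budget at 1 Mbit/s once the
# 22 worst-case 9-bit packets have been clocked out.
budget_ms = 0.25
transfer_ms = 22 * 9 / 1e6 * 1e3   # 198 bits at 1 Mbit/s = 0.198 ms
margin_ms = budget_ms - transfer_ms
print(round(margin_ms, 3))
```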
D.
> I'd like to thank everyone for their help and ideas on this topic. All
> really good stuff.
>
> Having read through the relevant datasheet sections for the various
> interfaces I have available (the 4 Neil mentions above) I think I'm leaning
> towards CAN. This is because (1) it seems to be designed for this sort of
> thing (2) it is plenty fast enough (3) it has lots of hardware options for
> addressing and filtering. Any software I can eliminate is a good thing,
> because my poor voice processors have enough to do already, what with audio
> and modulation sources.
>
> As far as the protocol goes, one or two people suggested using MIDI
> internally. This seems like a sensible idea to me. Like Veronica said, why
> reinvent the wheel? It has all the messages I'm going to need, there's a
> specification already done that I can work to, and I'll have data coming in
> from the outside world in that format already. I can always tweak it a bit
> if I really need to.
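For what it's worth, a MIDI note-on is exactly a three-byte message (status
byte carrying the channel, then note and velocity), which matches the address
byte plus double-byte of data sized earlier. A minimal builder:

```python
# Standard MIDI note-on message: status 0x90 | channel, note, velocity.
def note_on(channel, note, velocity):
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])


msg = note_on(0, 60, 100)  # middle C at moderate velocity
print(msg.hex())
```

So the voice channel number can double as the voice address with no extra
framing to invent.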
>
> I think the next step is probably a multi-voice test setup, with one main uP
> talking to a group of voice uPs. For the test, I can use just the raw voice
> uPs and mix their audio outputs together. This is/will be a basic dsPIC
> digital synth. After that, I take the tested comms code back to the voice
> board hardware I have and see if I can't get the whole lot running at once.
> I'll keep you posted if I come up with anything good.
>
> Thanks everyone.
>
> Tom
>
>
>
>
> _______________________________________________
> Synth-diy mailing list
> Synth-diy at dropmix.xs4all.nl
> http://dropmix.xs4all.nl/mailman/listinfo/synth-diy
>