[sdiy] Getting dsPICs talking

Tom Wiltshire tom at electricdruid.net
Thu May 28 18:38:28 CEST 2009


Hi All,

I'm finally into the construction stage of my "Protowave" monosynth  
project. The synth uses two dsPIC 33FJ128GP802 processors for its  
voice, followed by analog VCF/VCA.
One of these uPs is dedicated to production of audio and can generate  
two oscillators at 62.5KHz (the "Sources uP"). The second uP is  
dedicated to modulation and deals with all the envelopes, LFOs etc.  
The final sample rate for this hasn't been determined, but is likely  
to be around 10KHz. Both processors are controlled via an SPI link  
from a third processor which deals with MIDI, scans the keyboard and  
reads the panel controls.

The modulation processor has a 2-channel DAC on chip. The output from  
this DAC is multiplexed out to sample-and-hold buffers in the usual  
way. These analogue control voltages control the VCF and VCA.

Now my query:

I have to get modulation information (pitch and waveshape  
modulations) from the Mod uP to the Sources uP. What's the best way  
to do it?

I had thought to feed analogue control voltages from the Mod uP to  
the ADC inputs on the Sources uP as well. Going from the digital  
domain to the analogue domain and back again conveniently sidesteps  
the fact that the two processors run at different sample rates (off  
different clocks, actually), since the anti-alias filtering acts as  
interpolation between sample values.

However, this digital->analogue->digital round trip does seem a bit  
crazy, especially when you consider that the chips have a codec  
interface on-chip. Having realised this, I wondered whether I could  
send the modulation signals to the Sources uP over the codec  
interface instead. Presumably I'd be able to send multichannel data  
(I've got 6 channels to transmit) over the digital connection. But  
how would I handle the different sample rates? Would I need to  
synchronise the two uPs? Do I need to digitally filter the  
outgoing/incoming modulation data to deal with the different sample  
rates? Am I crazy having different rates? Would I be better off  
making one rate a multiple of the other?
Does anyone have any experience making two uPs talk to each other  
like this? One is essentially pretending to be an audio codec, and  
the other is processing that signal and outputting it. The difference  
from the usual situation is that the codec rate and the output sample  
rate wouldn't be the same.

If I can make this work, it'll be really great. But there's still a  
fair mountain of work to do. I'd hoped to finish before this year's  
SDIY UK, but it'll have to be next year!

Any suggestions appreciated.

Thanks,
Tom




