[sdiy] dsPIC Stuff
Scott Gravenhorst
music.maker at gte.net
Mon Nov 12 13:44:42 CET 2012
rburnett at richieburnett.co.uk wrote:
>Hi Scott,
>
>Nice demo sounds and thanks for sharing with us! If this is your first
>venture into dsPIC territory then you've done a blinding job :-)
It is my first DSP project using a microprocessor, but I've done work with other
micros, so I'm not a stranger to them. It's also my first attempt at getting multiple
microprocessors to talk to each other.
>How are you exciting the K-S model? Have you got a PRBS noise
>generator running in each dsPIC and preload the buffer with a burst of
>noise at each "pluck" instant, or using an impulse or something?
I use a rectangular pulse. At first there was a noticeable click, so I tried an LFSR
to fill the waveguide with noise. I found that rather underwhelming and went back to
the rectangular pulse, but I now pass the pulse through a single-pole lowpass IIR
filter (4 dsPIC instructions), which removed the click. The pulse length is 1/4 the
length of the waveguide. That provides some nice harmonics and also preserves any old
data in the RAM. Part of the MIDI controller's job is voice assignment. The
assignment algorithm first checks for any waveguide that has already been used at the
pitch of the newly requested note. This yields two things: 1) the voice in question is
reused (as it would be on a real harp-like instrument), which ensures that no voice is
stolen from another pitch, and 2) the old data likely still in the RAM randomizes the
initial timbre, very much like a real struck string (think hammer dulcimer). One of the
features of #1 is that one can hammer away on several strings (bass notes, for example)
and they sound more natural than if their voicing were stolen at random by another
incoming note. This tends to extend the usefulness of the fairly small number of
voices (8) available.
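As a rough illustration only (this is not the actual dsPIC code; the waveguide length,
fixed-point format and filter coefficient are placeholders), the excitation might look
something like this in C:

#include <stdint.h>

#define WG_LEN 256                  /* example waveguide length, placeholder value */

static int16_t waveguide[WG_LEN];   /* delay line; stale contents are kept on purpose */

/* Excite the string: a rectangular pulse, smoothed by a one-pole lowpass,
   written over only the first quarter of the delay line.  The remaining
   three quarters keep whatever old data they held, which varies the
   initial timbre from pluck to pluck. */
void excite_string(int16_t amplitude, int16_t lp_coeff /* Q15, e.g. 0x2000 */)
{
    int32_t lp_state = 0;
    int i;

    for (i = 0; i < WG_LEN / 4; i++) {
        /* one-pole lowpass: y += a * (x - y), in Q15 fixed point */
        lp_state += (lp_coeff * ((int32_t)amplitude - lp_state)) >> 15;
        waveguide[i] = (int16_t)lp_state;
    }
    /* waveguide[WG_LEN/4 .. WG_LEN-1] is deliberately left untouched */
}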
>I bet this would sound really nice with multiple K-S models for each
>note to give a slightly detuned sound as each note decays away. It
>sounds great as it is, but would be a bit less static with voices
>doubled up or chorus applied to the voice mix afterwards.
In fact, my work with K-S started several years ago with FPGAs. One of the instruments
I designed was a 64-string harp running on a 500K-gate FPGA. I also have an 1800K-gate
device (Spartan-3A DSP 1800), and I instantiated the 64-string harp in that FPGA three
times. This gives me a 192-string harp, and indeed I can tune each string of the three
harps as I wish. It can sound quite rich.
>Are you using block based processing for the dsPICs?
No, I used a more pedestrian approach. Since it was my first shot at a synth, I just
wanted the damn thing to work at all... In my design, I use the DAC's interrupt as a
"create-new-sample" trigger: the new sample is computed, and then system maintenance
tasks are checked and performed. Each voice engine, supporting 4 strings and
oversampling 4 times, uses the vast majority of the clocks; of the 704 clocks
available, all but about 80 are used. I also prioritized RAM use for the delay lines,
but as it turns out there are some 600 bytes of leftover RAM, so now that the
instrument works, I may change it to do block processing with a DMA buffer to possibly
squeeze in another feature or two. I do, however, use DMA to capture two analog
inputs, and DMA is used to receive the 32-bit SPI message packets. The SPI DMA channel
does not interrupt; rather, it is polled once per sample time, which is plenty fast
since the source of the SPI messages is complete note-on or note-off messages from a
MIDI UART. The ADC DMA channel also does not interrupt. I use the DAC interrupt to
first read the values acquired from the two pots and then tell the ADC to go again.
The ADC easily finishes its work within the 704 clocks before its data is needed again.
So while I don't take advantage of all of the DMA features, I use enough to seriously
offload work to the DMA controller that would otherwise have to be performed by the
CPU. The way I configured it, the ADC starts with a single instruction and no status
register reads or ready-bit resets are required; the DMA controller handles that
automatically.
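To make that flow concrete, here is a sketch of the per-sample structure in C. The
function names are placeholders I'm using for illustration, not real dsPIC33F register
names and not the code as actually written:

#include <stdint.h>
#include <stdbool.h>

/* Placeholder hardware hooks -- the names are illustrative, not real SFRs.
   On the real part these would be the DAC, ADC and DMA registers. */
extern void     dac_write(int16_t sample);       /* push one sample to the DAC           */
extern int16_t  adc_dma_result(int channel);     /* pot value already captured by DMA    */
extern void     adc_start_conversion(void);      /* kick the ADC off again               */
extern bool     spi_dma_packet_ready(void);      /* 32-bit note packet arrived?          */
extern uint32_t spi_dma_read_packet(void);       /* fetch it (no CPU interrupt involved) */

extern int16_t  compute_voices(void);            /* run the 4 K-S strings, 4x oversampled */
extern void     apply_note_packet(uint32_t pkt); /* note on/off from the MIDI controller  */
extern void     set_reflection_fc(int16_t pot);  /* K-S reflection filter Fc              */
extern void     set_damping(int16_t pot);        /* K-S linear damping                    */

/* Called once per output sample -- the "create-new-sample" trigger that the
   DAC interrupt provides in the real design. */
void sample_tick(void)
{
    /* 1. produce and output the new sample */
    dac_write(compute_voices());

    /* 2. housekeeping in whatever clocks remain before the next DAC request */
    set_reflection_fc(adc_dma_result(0));
    set_damping(adc_dma_result(1));
    adc_start_conversion();                      /* a single instruction on the real part */

    if (spi_dma_packet_ready())                  /* SPI DMA is polled, not interrupt-driven */
        apply_note_packet(spi_dma_read_packet());
}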
>You can squeeze a
>lot more out of them if you process a block of say 16 samples for one
>K-S voice into a buffer, then process 16 samples for the next voice into
>another buffer, etc. before mixing them all together and playing them
>out. It just minimises the time spent pulling variables from RAM to set
>up for a calculation, and stashing the results back in RAM so you can do
>something else. It's a lot more efficient if you can set up a voice
>algorithm, produce a run of say 16 samples for that voice, then tear it
>down and set up for the next voice etc, than set up for one voice,
>produce a single sample, tear down and set up for the next voice only to
>produce another single audio sample. The downside is increased latency.
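For anyone following along, a minimal sketch of the block-processing scheme Richie
describes (block size, voice count and function names are only illustrative):

#include <stdint.h>

#define BLOCK       16   /* samples rendered per voice before switching voices */
#define NUM_VOICES   8

/* Per-voice render: load the voice's state once, produce BLOCK samples,
   store the state once -- instead of a load/store round trip per sample. */
extern void render_voice_block(int voice, int16_t *out, int n);

void render_block(int16_t *mix_out)
{
    int16_t voice_buf[BLOCK];
    int32_t acc[BLOCK] = {0};
    int v, i;

    for (v = 0; v < NUM_VOICES; v++) {
        render_voice_block(v, voice_buf, BLOCK);  /* set up once, run, tear down once */
        for (i = 0; i < BLOCK; i++)
            acc[i] += voice_buf[i];
    }
    for (i = 0; i < BLOCK; i++)
        mix_out[i] = (int16_t)(acc[i] / NUM_VOICES);  /* simple mix; latency grows by BLOCK samples */
}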
>
>My apologies if I'm explaining something that you already know, but I
>think this is a valuable concept to getting the most out of general
>purpose DSPs.
>
>I look forward to reading and hearing your further developments,
>
>Best regards,
>
>-Richie,
>
>PS. Adding some peaks and dips to the overall frequency response might
>also sound nice to mimic body resonances in a real stringed instrument.
>Does anyone know how close you can get to an acoustic guitar sound
>using a bunch of Karplus-Strong algorithms and a few IIR filters to
>model the key body resonances?
I've not done so, but have wanted to for some time, especially in the FPGA world where
I have a good deal more freedom.
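One common way to try it (not necessarily what Richie has in mind) is to run the string
mix through one or two peaking biquads tuned near the instrument's main body
resonances. A sketch using the standard RBJ audio-EQ-cookbook peaking filter; the
frequencies, gains and Qs mentioned below are example values only:

#include <math.h>

/* Peaking-EQ biquad (RBJ audio EQ cookbook).  One or two of these with
   modest gain and Q, tuned near the body resonances (say ~100 Hz and
   ~200 Hz for a guitar-like box), can add the peaks and dips mentioned
   above.  All numbers here are examples, not measured resonances. */
typedef struct { double b0, b1, b2, a1, a2, z1, z2; } biquad;

void peak_init(biquad *f, double fs, double f0, double gain_db, double Q)
{
    double A     = pow(10.0, gain_db / 40.0);
    double w     = 2.0 * M_PI * f0 / fs;
    double alpha = sin(w) / (2.0 * Q);
    double a0    = 1.0 + alpha / A;

    f->b0 = (1.0 + alpha * A) / a0;
    f->b1 = -2.0 * cos(w) / a0;
    f->b2 = (1.0 - alpha * A) / a0;
    f->a1 = -2.0 * cos(w) / a0;
    f->a2 = (1.0 - alpha / A) / a0;
    f->z1 = f->z2 = 0.0;
}

double peak_process(biquad *f, double x)   /* transposed direct form II */
{
    double y = f->b0 * x + f->z1;
    f->z1 = f->b1 * x - f->a1 * y + f->z2;
    f->z2 = f->b2 * x - f->a2 * y;
    return y;
}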
>On 2012-11-12 02:08, Scott Gravenhorst wrote:
>> My Brain Is An Electron:
>> http://electro-music.com/forum/download.php?id=32641
>>
>> Sometimes I Repeat Myself:
>> http://electro-music.com/forum/download.php?id=32653
>>
>> These two pieces were done using an 8 voice polysynth implemented on
>> dsPICs
>> (a big thank you goes out to Tom Wiltshire for his help).
>>
>> The "Harpie" instrument consists of 3 dsPIC33F ICs, one serves as a
>> MIDI
>> controller and voice assigner which communicates via a one way SPI
>> channel
>> to the other two "voice engines". Each voice engine manifests 4
>> Karplus-Strong string models. The Karplus-Strong model parameters
>> for
>> reflection filter Fc and linear damping are controlled by pots
>> feeding ADC
>> input pins.
>>
>> The second piece was done by recording the same MIDI sequence twice -
>> one
>> time an octave higher than the other and using Audacity, the two
>> recordings
>> were offset some small number of milliseconds in time and then mixed
>> into a
>> stereo track.
>>
>> The synthesizer is built on 2 stripboards and all 3 dsPICs run at 40
>> MIPS
>> with a 20 MHz xtal.
>>
>> -- ScottG
>>
-- ScottG
________________________________________________________________________
-- Scott Gravenhorst
-- FPGA MIDI Synth Info: jovianpyx.dyndns.org:8080/public/FPGA_synth/
-- FatMan Mods Etc.: jovianpyx.dyndns.org:8080/public/fatman/
-- Some Random Electronics Bits: jovianpyx.dyndns.org:8080/public/electronics/
-- When the going gets tough, the tough use the command line.