[sdiy] New to list - and DSP development
ASSI
Stromeko at Compuserve.DE
Sun Dec 12 23:27:54 CET 2004
On Sunday, 12 December 2004 20:33, Kenneth Elhardt wrote:
> I did notice the divide down problem wouldn't allow for it. I've
> never understood why the MIDI people didn't use 38.4 kBaud which is a
> standard rate.
One of the early drafts had a rationale on that, and IIRC it was so that
they'd be able to use a single crystal oscillator and divide down the
various clocks in the system. 31.25 kBaud is exactly 1 MHz divided by
32, a simple binary division, while 38.4 kBaud doesn't divide evenly
out of a 1 MHz clock.
> A couple of things about what you say. First, what I'm programming
> now is a form of additive synthesis that only uses sine waves. As
> pitches rise I'll drop the higher harmonics.
You will probably not be able to hear it, but you are still forgetting
about the quantization effects you are creating, both in time and in
amplitude. Depending on exactly how you produce the audio and whether
or not it's modulated, you'd want to oversample the rendering process
and filter/decimate just before the DAC.
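To make that concrete, here's a minimal sketch in Java (the 4x factor
and the crude moving-average filter are just placeholders; a real
design would use a properly designed FIR): render at a multiple of the
target rate, lowpass, then keep every Nth sample.

// Sketch only: render a test tone at FACTOR times the target rate,
// then lowpass (crude moving average) and decimate back down.
public class Oversample {
    static final int TARGET_RATE = 44100;
    static final int FACTOR = 4; // oversampling factor, an assumption

    public static void main(String[] args) {
        int n = TARGET_RATE;                  // one second of output
        double[] hi = new double[n * FACTOR]; // oversampled buffer
        double f = 1000.0;                    // 1 kHz test tone
        for (int i = 0; i < hi.length; i++)
            hi[i] = Math.sin(2 * Math.PI * f * i / (TARGET_RATE * FACTOR));
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double acc = 0;
            for (int k = 0; k < FACTOR; k++)  // average FACTOR input samples
                acc += hi[i * FACTOR + k];
            out[i] = acc / FACTOR;            // filtered, decimated sample
        }
    }
}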
> I also thought audio DACs or external circuitry provided some form of
> lowpass filtering to help eliminate the problem.
If you are talking about the reconstruction filter at the DAC output,
yes - it's there to remove some of the unavoidable artifacts of going
from digital to analog. But it is not normally designed to remove any
artifacts you may have woven into the digital signal itself.
On Sunday, 12 December 2004 06:20, Kenneth Elhardt wrote:
> On the Mac or PC I'll have plenty of free time where the processor is
> just waiting, so I don't want to have to resort to interrupts.
You may have plenty of time overall, but you don't know that you'll
have it at exactly the moment the next sample has to be sent. That's
why you will need to produce samples in advance and leave it to some OS
routine to use the hardware to its fullest and keep the timing.
> I just plan on polling
> whatever I need until a buffer is done playing. I guess that will
> limit the speed of my code to the audio throughput rate needed. I
> just hope the OS will let me know when one buffer is finished so it
> can be swapped before the next stream of samples is needed, or
> whether it can start the queued sound on its own, or whether I
> specifically need a callback function to do it. Since nobody really
> knows for sure I guess I'll have to experiment on my own.
There are basically two schools of thought for userland audio streaming
on modern hardware: one is to register the producer with the OS, which
will then call it to produce a given number of samples whenever it
needs more of them. The other is to produce samples and send them to
the OS via a special call that blocks until the OS needs more samples.
In both cases the OS takes care of the timing and the application
automatically follows. It's easy to see that you can write wrappers to
emulate one method with the other, and some libraries may offer both.
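As a sketch of the two models (all the names here are made up for
illustration, not any real API):

// Callback model: the audio layer pulls samples from the application.
interface SampleProducer {
    // Called by the audio layer whenever it needs 'count' more samples.
    void produce(float[] buffer, int count);
}

// Blocking-write model: the application pushes samples; the write call
// blocks while the device-side buffer is full, which is what keeps the
// loop in time without any explicit clock handling.
interface AudioSink {
    void write(float[] buffer); // blocks until the OS can take more
}

class PushLoop {
    static void run(AudioSink sink, SampleProducer producer) {
        float[] buffer = new float[512];
        while (true) {
            producer.produce(buffer, buffer.length); // render next block
            sink.write(buffer);                      // blocks as needed
        }
    }
}

Note that the push loop above already is the wrapper that emulates one
model with the other: it drives a callback-style producer from a
blocking write.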
As a concrete example, the Java sound API uses blocking writes and has
functions to ask how much data could be written without blocking and to
choose an appropriate buffer size. With these functions it's also
possible to write a wrapper class that uses callbacks instead. It
doesn't get any simpler than that, and it isn't specific to any OS at
all, so before delving into more complicated libraries you may want to
check it out.
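A minimal blocking-write sketch with javax.sound.sampled (one second of
a 440 Hz sine; buffer sizes are arbitrary and error handling is
omitted):

import javax.sound.sampled.*;

public class BeepSketch {
    public static void main(String[] args) throws Exception {
        float rate = 44100f;
        // 16-bit signed mono, little-endian
        AudioFormat fmt = new AudioFormat(rate, 16, 1, true, false);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(
                new DataLine.Info(SourceDataLine.class, fmt));
        line.open(fmt, 4096); // device-side buffer, in bytes
        line.start();

        byte[] block = new byte[512];
        double phase = 0, step = 2 * Math.PI * 440 / rate;
        int written = 0;
        while (written < (int) rate * 2) {  // 1 s = rate frames * 2 bytes
            for (int i = 0; i < block.length; i += 2) {
                short s = (short) (Math.sin(phase) * 32000);
                block[i]     = (byte) (s & 0xff);        // LSB first
                block[i + 1] = (byte) ((s >> 8) & 0xff); // then MSB
                phase += step;
            }
            // write() blocks until the device can take the data; that's
            // all the timing the application ever has to do.
            written += line.write(block, 0, block.length);
        }
        line.drain(); // let the buffer play out before closing
        line.close();
    }
}

line.available() is the function that reports how much can be written
without blocking; a callback-style wrapper would poll it and only then
ask its producer for more samples.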
Achim.
--
+<[Q+ Matrix-12 WAVE#46 Neuron microQkb Andromeda XTk sonic heaven]>+
Factory and User Sound Singles for Waldorf Q, microQ and rackAttack:
http://Stromeko.Synth.net/#WaldorfSounds