[sdiy] Techniques for Multiplying MIDI Clock frequency?
Brian Willoughby
brianw at audiobanshee.com
Sat Dec 18 22:00:39 CET 2021
Your hunch is right: any function like delayMicroseconds() is bad design for a high-performance system, unless it's used in the right places for the right reasons.
The biggest disadvantage of such software delays is that they force you to design firmware that only handles one thing at a time. In a complex system that keeps rock-solid timing while also handling multiple features, you need a design that runs multiple operations in the background via interrupts and hardware peripherals. Those background operations can make delayMicroseconds() take longer than it should, depending upon how it's implemented. There are ways to maintain accuracy, but not without limiting the ability to handle lots of things at the same time.
Most processors that are appropriate for handling MIDI have several hardware timer peripherals on the chip. If you need to generate pulses, you'll get the best performance by selecting a timer peripheral that can directly drive an external pin, and then simply altering the period when needed. A software pin output can't compete with direct hardware timing of the pin state changes.
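As a rough illustration (assuming a 16 MHz ATmega328P-based Arduino such as the Uno; the register names are AVR-specific), Timer1 in CTC mode can toggle the OC1A pin entirely in hardware, and changing the rate is just a matter of writing a new compare value:

    void setup()
    {
        DDRB  |= (1 << DDB1);                 /* OC1A = Arduino pin 9, set as output */
        TCCR1A = (1 << COM1A0);               /* toggle OC1A on each compare match   */
        TCCR1B = (1 << WGM12) | (1 << CS11);  /* CTC mode, prescaler /8              */
        OCR1A  = 9999;                        /* 16 MHz / 8 / 10000 = 200 Hz matches,
                                                 i.e. a 100 Hz square wave on the pin */
    }

    void loop()
    {
        /* Nothing to do here: the pin keeps toggling in hardware.
           To change the rate, just write a new period, e.g. OCR1A = 4999; */
    }

The CPU never touches the pin, so jitter from other code or interrupts can't affect the output edges.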
For situations where you're not manipulating an external pin, but have some other timing requirement, a hardware timer peripheral can generate an interrupt. Then you write an interrupt handler that immediately performs the required operation and, if the timer mode requires a manual restart, sets up the timer to trigger again (some timer modes repeat automatically).
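A sketch of that pattern (again assuming a 16 MHz ATmega328P Arduino; the pin choice and period are arbitrary) - the compare-match interrupt does its work right away, and because CTC mode reloads the timer itself there's nothing to re-arm:

    #include <avr/interrupt.h>

    void setup()
    {
        DDRB  |= (1 << DDB0);                 /* PB0 = Arduino pin 8, driven from the ISR */
        TCCR1A = 0;
        TCCR1B = (1 << WGM12) | (1 << CS11);  /* CTC mode, prescaler /8                   */
        OCR1A  = 4999;                        /* compare match every 2.5 ms at 16 MHz     */
        TIMSK1 = (1 << OCIE1A);               /* enable the compare-match interrupt       */
        sei();
    }

    ISR(TIMER1_COMPA_vect)
    {
        PORTB ^= (1 << PORTB0);               /* perform the required operation immediately */
        /* CTC mode restarts automatically; a one-shot timer mode would be
           re-armed here before returning. */
    }

    void loop()
    {
        /* Foreground stays free for non-critical work. */
    }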
That said, there are cases where the CPU clock is slow enough that the needed delay is only a clock cycle or two, or where handling an interrupt takes more clock cycles than the delay itself. In those cases you might need a "busy" wait, but that seems unlikely here. What's more likely is that the Arduino interrupt handlers carry far more overhead than a custom-designed piece of code would.
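For completeness, a minimal busy wait on an AVR is nothing more than a few NOPs (assuming avr-gcc and a 16 MHz clock) - useful only when the pause is shorter than the dozens of cycles an interrupt would cost:

    static inline void tinyPause(void)
    {
        __asm__ __volatile__("nop\n\tnop\n\tnop\n\tnop");  /* 4 cycles = 250 ns at 16 MHz */
    }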
delayMicroseconds() might be useful on a platform where all of the critical timing is handled directly by hardware peripherals or interrupts, leaving the slow user interface as the "top level" code that runs whenever the interrupts aren't busy and there's time left over. If you want to blink a non-timing LED, or display something and pause before displaying something else, then delayMicroseconds() would be perfectly fine. Users won't notice whether the LCD shows text for 0.9 seconds or 1.1 seconds - that's not critical timing. This only works when 100% of the critical timing operations are handled without a delay, leaving only the non-critical operations for the leftover CPU cycles.
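For example (a hypothetical status LED on pin 13, with all of the critical timing assumed to live in interrupts elsewhere), plain delays in the foreground are harmless:

    const int statusLedPin = 13;        /* hypothetical, non-timing-critical LED */

    void setup()
    {
        pinMode(statusLedPin, OUTPUT);
        /* ...critical timing would be set up on hardware timers/interrupts here... */
    }

    void loop()
    {
        digitalWrite(statusLedPin, HIGH);
        delay(500);                     /* nobody notices +/- a few milliseconds here */
        digitalWrite(statusLedPin, LOW);
        delay(500);
    }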
I haven't used Arduino, but there are a few ways to write firmware. There are big Linux systems that have lots of overhead, lots of threads, and more latency. I doubt any of the Arduino-sized platforms support Linux, but it's important to point out that Linux doesn't have the best timing performance (unless you write drivers for everything).

The next option is an RTOS (Real Time Operating System), where there are still multiple threads and performance can be a lot better than Linux, but there is still a lot of overhead in the code - even the interrupts take more cycles to process.

The best performance comes from a bare metal design where there aren't really any threads. Instead of threads, the code uses interrupt handlers to run separate operations at the same time, without the typical overhead of switching the processor between threads. Once the critical pieces are handled with well-timed interrupts, the remainder of the code (user interface, non-timing-critical features) can be handled in the single foreground thread - you just have to realize that all of the interrupts are taking time away from that foreground code.
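A very rough sketch of that bare-metal shape, in Arduino-style C++ (the timer configuration is omitted, and the helper names are made up for illustration): the interrupt does only the time-critical minimum, and the single foreground loop mops up everything else in whatever cycles are left.

    #include <avr/interrupt.h>

    volatile bool midiClockDue = false;   /* set in the ISR, consumed in the foreground */

    ISR(TIMER1_COMPA_vect)                /* assumes Timer1 was configured as shown earlier */
    {
        midiClockDue = true;              /* minimal work: just note that the event happened */
    }

    void updateDisplay() { /* hypothetical, non-critical UI work */ }
    void scanButtons()   { /* hypothetical, non-critical UI work */ }

    void setup()
    {
        Serial.begin(31250);              /* standard MIDI baud rate */
        /* ...timer configuration as in the earlier sketch... */
    }

    void loop()
    {
        if (midiClockDue) {               /* time-critical event handled first */
            midiClockDue = false;
            Serial.write(0xF8);           /* e.g. send a MIDI Clock byte */
        }
        updateDisplay();
        scanButtons();                    /* runs in whatever time the interrupts leave over */
    }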
Brian
On Dec 18, 2021, at 12:15, Benjamin Tremblay <btremblay at me.com> wrote:
> One more thing. I see a few examples online where the arduino is told to delayMicroseconds(n) to simulate the extra pulses. This seems inherently buggy because it makes the whole program only stable within a certain frequency range.