[sdiy] Quick C query

cheater cheater cheater00 at gmail.com
Mon Nov 22 09:17:39 CET 2010


Hi guys,
Phillip, thanks for a very interesting post. Regarding optimizations,
isn't Tom's MCU supported by some sort of gcc-based toolchain?

Veronica,

On Sun, Nov 21, 2010 at 21:24, Veronica Merryfield
<veronica.merryfield at shaw.ca> wrote:
>
> On 2010-11-21, at 11:56 AM, cheater cheater wrote:
>
>> higher level languages and you have to break out to assembler (be it
>> use of SIMD, or atomic changes to data)
>>
>> There is a real demand for this kind of work being performed.
>
> You'd be surprised how much you can hand assemble in C if you understand what the compiler is doing. Most embedded application projects do not change the tools during the life of the project so tuning the C to the output is not the heinous sin it would first appear.

Yes, but if you want a piece of code to fit exactly into cache, it's
much easier to do in assembler. This discussion is drifting quite far
from Tom Wiltshire's original questions, though.

>> With MCUs you're dealing with something that's slower than a 386.
>> Most applications for the 386 were written in pure assembler.
>
> I doubt that somehow, even if only by virtue of the compiler wars between Borland and Microsoft.

I think you're more right than I am, given that you're probably
looking at the computer in terms of where it was meant to end up -
i.e. companies and scientific use. Where I come from, the
computerization of businesses didn't happen until around the Windows
95 days, and that was pretty much the end of the 386. So the 386 was
mostly used for things like computer games, "having a computer to
learn how to program it", and heavyweight scientific calculations,
especially since many 386s were second-hand by that point.

> Going back to the OP, single logic flags are normally instantiated as ints, being the most natural size for the underlying architecture and therefore normally the most efficient. One should check the flag against 0, i.e. equal to or not equal to. Some define FALSE as 0 and TRUE as 1, but one should stick to checking against 0. The underlying assembler for this is similar on most machines.

That's also a good design pattern for hardware that is used for
extended periods of time (decades), or operated in radioactive
environments: bits can flip on their own.
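
To make that concrete, here is a minimal sketch of the pitfall
Veronica is warning about. The flag name and the FALSE/TRUE macros are
made up for illustration:

/* Flags as plain ints - test against zero, not against TRUE. */
#include <stdio.h>

#define FALSE 0
#define TRUE  1

int my_flag = FALSE;   /* hypothetical flag, the natural word size */

int main(void)
{
    my_flag = 5;              /* any non-zero value means "set" */

    if (my_flag != 0)         /* robust: zero / non-zero test */
        printf("set\n");

    if (my_flag == TRUE)      /* fragile: only matches exactly 1 */
        printf("never reached, since the flag holds 5\n");

    return 0;
}

On most compilers the != 0 test (or just "if (my_flag)") comes out as
a single compare-against-zero and branch, which I take to be the
"similar underlying assembler" mentioned above.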

> Whilst some compilers support bitfields, it is not universal - check the toolset documentation for this. Those that do support it are often expensive in code size and execution speed, although some architectures offer bitfield support that the compiler writers can use. In PIC land, the register definition headers often use unions of structures that include a bitfield definition and a non-bitfield overlay.
>
> It is common to find the bits defined as binary powers of two with defined meanings, and to use the | and & operators to combine and test them. This is great for packing but can lead to lengthy chains of if/else if/else code, which can get cumbersome and expensive depending on the compiler.
>
> Which gets me back to knowing what the compiler does with code. Outputting to a listing file and having a look can get you a very long way before having to go to assembly.

Same reason why I suggested a disassembler!
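
For anyone who hasn't seen the PIC-style headers mentioned above, here
is a hand-rolled sketch of the union-of-bitfield idiom plus the
power-of-two masks combined and tested with | and &. The register and
bit names are invented for illustration, not taken from any real
device header:

/* A register with both a bitfield view and a whole-byte overlay,
 * in the style of the PIC register definition headers.
 */
#include <stdint.h>

typedef union {
    struct {
        unsigned RX_READY : 1;
        unsigned TX_EMPTY : 1;
        unsigned OVERRUN  : 1;
        unsigned          : 5;   /* unused bits */
    } bits;
    uint8_t byte;                /* non-bitfield overlay */
} status_reg_t;

/* The powers-of-two / mask alternative */
#define RX_READY_MASK  0x01u
#define TX_EMPTY_MASK  0x02u
#define OVERRUN_MASK   0x04u

int main(void)
{
    status_reg_t status;

    status.byte = 0;                    /* clear the whole register */
    status.bits.TX_EMPTY = 1;           /* bitfield access */

    status.byte |= OVERRUN_MASK;        /* set a bit with | */
    if (status.byte & RX_READY_MASK) {  /* test a bit with & */
        /* handle received data here */
    }

    return 0;
}

Exactly how the bitfield is laid out is up to the compiler, which is
why the vendor headers pin it down per toolchain - another reason to
look at the listing file or the disassembly.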

>>> Obviously even if it's not strictly necessary, there's nothing wrong with
>>> enthusiasts doing it to squeeze the last possible drop of performance out of
>>> the hardware (which e.g. demo scene coders still do in the PC world).
>
> Many professionals do it all the time although an equal number seem to think that buying a faster machine is a reasonable solution too.

Premature optimization is one thing, but writing dumb idiot-code is
another. You don't want to do either; unfortunately, the latter group
you mentioned (the "buy a faster machine" crowd) seems to correlate
well with the latter group I mention (the idiot-code writers). It's
much easier for newbies to say "buy a faster computer" than to learn
how to do things right.

Cheers,
D.


