[sdiy] Additive Synthesis - Wendy would say it works...

Batz Goodfortune batzman-nr at all-electric.com
Wed Mar 23 15:09:50 CET 2005


Y-ellow Harry 'n' all.

At 09:50 PM 3/22/05 -0800, harrybissell wrote:
>Yknow Batz I get some great ideas from you :^P

Well I'm glad it's you stealing them. Usually it's some corporate ass-clown 
or some already rich dildo in a grotesquely commercial geetar band. Go 
for it. Though I should warn you: sometimes I'm full of good ideas, other 
times I'm full of shit. As Australians, we acquire a taste for eating turd 
at an early age.

>while reading your post, It made me think of a graphics program I use
>called "the Gimp"  (which is a sort of photoshop clone).  People have made 
>some
>
>amazing image processing and transforming software in the form of macros or
>functions.  Our music waveforms could probably have the same kinds of things,
>like 'gaussian blur' or 'unsharp mask' etc...

Yeah. I know the Gimp. Oh, and the graphics program as well. Imagine how 
much better such a program would be if only it could access the processing 
power of some of these you-beaut graphics engines they have on video cards 
these days. But alas... Therefore, I guess the idea here would be for 
someone (you know who you are) to put some kind of universal (hopefully 
multi-voice) additive synth engine on a chunk of Xilinx and make access to 
it open source.

The idea here would be to make the hardware do all the hard grind. All the 
software has to do is be a front end: a way to down-translate a synthesis 
concept into the re-synth engine.
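
Just to make that concrete, here's a rough Python/numpy doodle of what such a 
render engine would be grinding on at its core: a bank of sine partials, each 
with its own amplitude envelope, summed per sample. The function name, data 
layout and numbers are mine, purely illustrative, not anything from the sound 
card project.

import numpy as np

def render_additive(f0, amp_envs, sample_rate=48000, duration=1.0):
    """Sum harmonic partials of f0; amp_envs is (n_partials, n_steps)."""
    n = int(sample_rate * duration)
    t = np.arange(n) / sample_rate
    env_x = np.linspace(0.0, duration, amp_envs.shape[1])
    out = np.zeros(n)
    for k, env in enumerate(amp_envs, start=1):
        freq = k * f0
        if freq >= sample_rate / 2:            # skip partials that would alias
            break
        amp = np.interp(t, env_x, env)         # coarse envelope -> audio rate
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / len(amp_envs)                 # crude normalisation

# 16 harmonics of 110 Hz, each with a simple 256-step decaying transient.
envs = np.array([np.linspace(1.0 / k, 0.0, 256) for k in range(1, 17)])
signal = render_additive(110.0, envs)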

The irony is that when I worked for the aforementioned (and failed) sound 
card company, we would have used the Altera Flex 10K, which is very similar 
to Xilinx, though Xilinx is probably leaps and bounds ahead now. We did some 
initial research and found that making such a render engine would 
require something like six Motorola DSP96Ks just to get off the ground. They 
were the bee's knees back then. This is because a DSP farm needs an 
operating system, and although we were working with a company that 
specialized in fast, real-time DSP OSes, we ended up spending about half the 
available computing power on housekeeping.

Our format looked rather like MP4 audio, long before MP4 was even imagined. 
In fact I think they were still using MP2 audio at the time; IUMA was in a 
fairly early phase then, as I recall. It was based in part on a 
re-synthesis engine but could actually achieve any kind of synthesis, and 
processing for that matter. I had vocoding in mind at the time. I'm still 
amused, however, that the MP4/SAOL community is still scratching its 
head over how to do this in hardware when I'd already figured it out way back 
then. I should imagine that given that the DSP/FPGA hardware is 
significantly better these days, this stuff would actually be a snap. Since 
the idea was never patented I'll leave it at that, although some of you 
might be able to make the conceptual leap already. But if anyone seriously 
wants to do this, I have no problem with spilling the beans to the right 
people. Once again, you know who you are. :)

OK. I just read Ed's other comment about programming the K5K. ADSRs for 
partials are just not good enough. I'm sorry, but they're not. But before 
you freak out at how hard they'd be to program if they were a 256-step wave 
table, let's apply some lateral thinking here. Apart from the top level, no-one 
in their right mind would program one of these things from the front 
panel. Although I could whizz through a DX7 from the front panel fairly 
quickly, I usually do it all on-screen, with the aid of graphic helpers if 
you like. And although I CAN program a DX sound from scratch, more often 
than not the procedure is to find a sound that's sort of close to what I'm 
looking for and massage it till it has the properties I need.

With a 100- or 128-partial additive resynthesizer, you would draw the 
envelope transient (waveform) on a screen and assign it properties such as 
[key-on], [hold], [hold loop]/[key-off point]. Then copy that master 
transient to all 128 envelopes. At this point you could employ various 
algorithms to vary the copies across the harmonic series, so that instead of 
all harmonics having the same transient, they follow a progression that 
emulates some real-world harmonic behaviour, while still being based on the 
original transient shape. The overall transient will be the same. You could 
then go back to your master transient panel and tweeze further, using a +/- 
transient to kick harmonics up or down at certain points over the transient 
time scale, rather like an EQ but over time. In this way, most of the 
painstaking work of setting up 128 transients is already done for you and 
you can hear it immediately.
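
A rough sketch of that workflow in Python/numpy, just to make the data flow 
concrete. The master transient, the decay bias along the harmonic series and 
the "+/-" correction curve are all made-up example shapes of mine, not 
anything prescribed here:

import numpy as np

STEPS, PARTIALS = 256, 128
x = np.linspace(0.0, 1.0, STEPS)                 # normalised transient time

# The master transient you would draw on screen: fast attack, long decay.
master = np.minimum(x / 0.05, 1.0) * np.exp(-3.0 * x)

# Copy it to all 128 envelopes, then apply a harmonic-series progression:
# higher partials start quieter and die away sooner.
envelopes = np.empty((PARTIALS, STEPS))
for k in range(PARTIALS):
    envelopes[k] = master * np.exp(-0.03 * k * x) / (k + 1)

# A "+/- transient", rather like an EQ over time: kick the upper harmonics
# up during the first 20% of the note, then leave them alone.
boost = np.where(x < 0.2, 1.0 + 2.0 * (0.2 - x) / 0.2, 1.0)
envelopes[32:] *= boost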

In fact you could end the process right there, but this is only the base 
level of what you could do. Let's take the humble filter sweep, for example. 
Create a transient that pushes amplitude up across the range of partials 
over time, so a peak of harmonic content sweeps up and back over the 
transient. Or assign an LFO to do this. This is pretty much how you do a 
filter sweep on a DX7. (Yes kiddies, it is possible.) OK, so it was hard to 
make it sound like the filter was in screech mode on a DX, but if this synth 
engine was capable of reproducing any sound then there's no reason why it 
couldn't do that as well.
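
As a sketch of how that "filter sweep without a filter" could be expressed in 
partial-envelope terms. The Gaussian bump, the roll-off and the numbers are 
just assumptions of mine:

import numpy as np

STEPS, PARTIALS = 256, 128
t = np.linspace(0.0, 1.0, STEPS)                   # normalised note time
partial = np.arange(1, PARTIALS + 1)[:, None]      # column of partial numbers

# A peak of harmonic content that sweeps up and back over the transient.
centre = 4 + 60 * np.sin(np.pi * t)                # partial 4 -> 64 -> 4
resonance = 3.0 * np.exp(-((partial - centre) ** 2) / (2 * 8.0 ** 2))
tilt = np.where(partial <= centre, 1.0, 0.2)       # crude low-pass roll-off

envelopes = tilt * (1.0 + resonance)               # (128, 256) amplitude table
# Drive 'centre' from an LFO instead of a one-shot curve and you get the
# classic wobble rather than a single sweep.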

The engine itself is programmable at the base level, but the tools you use 
to attack it can be modular. The beauty would be that no matter which way 
you attacked it conceptually, the patch data itself would always end up 
being the same, and rather small compared to sampling a sound. But you 
might end up with two patch types: one we'll call a conceptual patch, the 
other we'll call the stored patch. And here's something to think about. 
Suppose you had a real nice patch someone gave you that was made by 
physical modelling (or granular, or some other form of synthesis concept you 
didn't feel like messing with). You don't have to pull out your physical 
modelling tools to tweak the patch. You could pull out your lowly 
analogue-esque programming tools and give it a bit of a tease with a VCF 
emulation, or make the upper harmonics wobble or phase maybe.
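
For instance, a hypothetical VCF-ish tool working directly on stored patch 
data might be no more than a frequency-dependent gain curve applied to the 
partial envelopes. The patch layout and the function here are my own guesses 
at what such a tool could look like:

import numpy as np

def vcf_emulation(amp_envs, partial_freqs, cutoff_hz, rolloff_db_per_oct=12.0):
    """Scale each partial's envelope by a simple low-pass magnitude curve."""
    freqs = np.asarray(partial_freqs, dtype=float)[:, None]
    octaves_above = np.maximum(np.log2(freqs / cutoff_hz), 0.0)
    gain = 10.0 ** (-rolloff_db_per_oct * octaves_above / 20.0)
    return amp_envs * gain

# A stored patch from *any* synthesis method, reduced to 64 partials of 110 Hz.
freqs = 110.0 * np.arange(1, 65)
patch = np.ones((64, 256))                              # flat envelopes, for clarity
darker = vcf_emulation(patch, freqs, cutoff_hz=880.0)   # "close the filter"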

But the part I like is the tantalizing prospect of taking two or more 
samples and morphing them. Especially doing something like taking a 
resynthesized sound of a crash cymbal, or maybe even a gong, and morphing 
that with, say, a piano, so that you end up with a chromatic crash cymbal 
you can play like a piano. I can only JUST imagine what that would be like 
to play, but it's gotta be one weird instrument. This kind of thing would, 
of course, require yet another set of tools.
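
One naive way to morph two sounds once they're both in the same 
partial/envelope format is simply to interpolate the frequencies and 
crossfade the envelopes. Nothing here pins down a method, so treat this as a 
toy illustration with made-up partial data:

import numpy as np

def morph(freqs_a, envs_a, freqs_b, envs_b, mix):
    """mix = 0.0 gives sound A, 1.0 gives sound B."""
    freqs = freqs_a ** (1.0 - mix) * freqs_b ** mix    # interpolate in pitch
    envs = (1.0 - mix) * envs_a + mix * envs_b         # crossfade amplitudes
    return freqs, envs

# Toy data: inharmonic "gong-like" partials against harmonic "piano-like" ones.
gong_freqs  = 110.0 * np.array([1.00, 2.76, 5.40, 8.93])
piano_freqs = 110.0 * np.array([1.00, 2.00, 3.00, 4.00])
gong_envs  = np.tile(np.exp(-np.linspace(0, 2, 256)), (4, 1))
piano_envs = np.tile(np.exp(-np.linspace(0, 6, 256)), (4, 1))
half_and_half = morph(gong_freqs, gong_envs, piano_freqs, piano_envs, 0.5)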

And by making these tools modular, or at least pluggable in layers, you can 
mix and match your weapons of choice. But the base system would at least 
have to come with the ability to tweak the EGs and to do a Fourier analysis 
on a sample and translate that directly into the EGs' transients.
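
The "Fourier on a sample straight into EG transients" step could be as simple 
as a windowed FFT per frame, reading the magnitude at each harmonic of a known 
fundamental. Real analysis/resynthesis needs proper peak tracking, so this is 
only the bare bones, and every name in it is mine:

import numpy as np

def sample_to_transients(samples, sample_rate, f0, n_partials=128,
                         frame=2048, hop=512):
    """Return an (n_partials, n_frames) table of amplitude transients."""
    window = np.hanning(frame)
    n_frames = max(1, (len(samples) - frame) // hop)
    envs = np.zeros((n_partials, n_frames))
    for i in range(n_frames):
        chunk = samples[i * hop:i * hop + frame] * window
        spectrum = np.abs(np.fft.rfft(chunk))
        for k in range(1, n_partials + 1):
            b = int(round(k * f0 * frame / sample_rate))   # nearest FFT bin
            if b >= len(spectrum):
                break
            envs[k - 1, i] = spectrum[b]
    return envs / max(envs.max(), 1e-12)                   # normalise to 0..1

sr = 48000
test = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)      # 1 s of 220 Hz sine
transients = sample_to_transients(test, sr, f0=220.0, n_partials=16)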

With the sound card project, it was always our intention to make the format 
open. We had intended to have a web site where users could contribute patch 
manipulation objects to a database, to encourage people to program their 
own ways of attacking sound. As mentioned, part of our data structure was 
an MP4-like system. It wasn't really designed so much for audio compression, 
though; it was designed with musical collaboration in mind: a small 
(effectively compressed) universal data set that musicians could swap with 
each other across the net. What one musician had in mind would be exactly 
what another musician heard at the other end.

The point is that we were going to make a core product and let the users 
drive its direction. Those kinds of concepts appeal to me and are probably 
very much suited to a DIY synth, where you might have some Linus Torvalds 
type person(s) overseeing the kernel while everyone else messes with it at 
whatever level they feel comfortable with.

As for using graphics programs with sound: if memory serves, someone on 
this list tried this quite some years ago, like perhaps seven? I seem to 
recall someone figuring out how to get Photoshop to load a wav file and feed 
it through the Gaussian blur filter. I don't recall the result, but I figure 
it can't have been all that impressive or I would have remembered.
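
For what it's worth, a "Gaussian blur" on a waveform is just convolution 
with a Gaussian kernel, i.e. a mild low-pass filter, which would explain an 
unmemorable result. A quick, purely illustrative sketch:

import numpy as np

def gaussian_blur(samples, sigma_samples=8.0):
    """Smooth a waveform with a Gaussian kernel (a gentle low-pass)."""
    radius = int(4 * sigma_samples)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma_samples ** 2))
    kernel /= kernel.sum()
    return np.convolve(samples, kernel, mode="same")

sr = 48000
hiss = np.random.default_rng(0).normal(size=sr)        # 1 s of white noise
dull_hiss = gaussian_blur(hiss, sigma_samples=16.0)    # band-limited result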

Once again with the sound card project, our maths expert investigated using 
fractal compression. We'd heard that it could re-create detail that was not 
recorded in the original file. Apart from the fact that this claim was too 
good to be true, all these things were really only applicable to graphics and 
video; the properties of waveform data didn't lend themselves terribly well 
to this type of processing. The exact problems elude me for the moment.

Anyway, that's probably more waffle than anyone here really wanted to read. 
Assuming anyone got this far through it anyway. My brain is frying nicely 
again and I have some research to do. And I should get round to reading the 
WWN. But I'd just like to close with the notion that I'd seriously consider 
kissing the hairy freckle of anyone who actually managed to pull this off. 
All the VST plugins and soft synths in the world couldn't make up for a 
small but serious bit of hardware.

Be absolutely Icebox.

  _ __        _       ____ International Nihilist ____
| "_ \      | |
| |_)/  __ _| |_ ____       ALL ELECTRIC KITCHEN
|  _ \ / _` | __|___ |Your source of Armageddon in a musically crass world
| |_) | (_| | |_  / /
|_,__/ \__,_|\__|/ /   Disgusting-> http://all-electric.com
                 / ,__  Music -----> all-electric.com-click music/download
Goodfortune    |_____| Cult  -----> http://www.subgenius.com



