[sdiy] SVF & phase

Magnus Danielson cfmd at swipnet.se
Thu Nov 28 02:47:33 CET 2002


From: Grant Richter <grichter at asapnet.net>
Subject: Re: [sdiy] SVF & phase
Date: Wed, 27 Nov 2002 08:57:44 -0600

> Mark's question makes me question my assumptions about filters. Maybe
> someone can clarify.
> 
> I am under the impression that a filter without phase change would not be a
> filter at all?

That's a tough one... (or... you're gonna hate me after this)

There are several views on this:

1) No, *any* circuit introduces phase shifts, at least in the form of
   propagation delay of the electromagnetic wave. You can't beat the speed of
   light even if you'd like to. Also, the actual propagation speed is usually
   slower than light in vacuum, since you run with stray capacitance through
   non-vacuum material.

OK, but if we disregard the "electrical" delay?

2) No, assuming that we have a linear time-invariant filter (OK, let's
   translate into plain English - no non-linearities, and it does not change
   with time or signal; it has the same properties all the time), which is
   usually the assumption in use and BTW the assumption under which all of the
   popular methods _only_ work. That includes Fourier analysis and Laplace
   analysis as well as simpler jw-analysis and a whole stock of classic
   formulas that you meet in "classic physics" in high school and at uni etc.

   So, under this assumption, we have an n-th order linear differential
   system. Beware that n can be very high sometimes, and some systems you
   can't actually model exactly unless n is infinite... pure electric delay is
   in there, and we've already assumed we could ignore it, so it seems "right"
   that we also assume we don't allow infinite-n systems...

   Linear differential systems have been studied extensively in classical
   analysis, and one of the best methods to cheat (another way of saying that
   this is a very strong tool in the right hands) is to use the Laplace
   transform. This converts (in its forward direction) a function of time into
   a function of the complex (yeah, you could say that again) variable s,
   where s is a two-dimensional per-time (1/t) "frequency" variable. When
   analysing the differential systems that filters are, we find that at
   certain points on the surface that forms when evaluating H at any complex
   number s, we have an absolute value of 0; let's call them zeros. At still
   other places we have absolute values popping up to infinity; let's call
   them poles (since they look like poles holding up a very slack circus-tent
   canvas). When we do some figuring we conclude that the function we have
   will match that of

          (s-z ) * (s-z ) * ... * (s-z )
              1        2              m
   H(s) = ------------------------------
          (s-p ) * (s-p ) * ... * (s-p )
              1        2              n

   where the z:s represent the m positions of absolute value 0 and
   where the p:s represent the n positions of infinite absolute value.

   Thus, we can represent the properties of this linear time-invariant
   differential system by a number of pole and zero positions. We can from
   that derive all other linear properties of this system (filter in our
   case).
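   For concreteness, the factored form above is easy to evaluate numerically.
   A minimal sketch (the pole values are my own made-up example, a 2-pole
   low-pass, not anything from the SVF discussed below):

```python
import numpy as np

def H(s, zeros, poles):
    """Evaluate H(s) = prod(s - z_i) / prod(s - p_j) at a complex point s."""
    num = np.prod([s - z for z in zeros])   # empty product is 1
    den = np.prod([s - p for p in poles])
    return num / den

# Made-up example: 2-pole low-pass, both poles real at s = -1 (w0 = 1, Q = 0.5)
poles = [-1.0, -1.0]
zeros = []                  # all-pole: no finite zeros

print(abs(H(0j, zeros, poles)))     # magnitude at DC: 1.0
print(abs(H(100j, zeros, poles)))   # far above w0 the magnitude has rolled off
```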

   For the curious, a delay of T seconds will have the form of

                              2       3
           -Ts            (Ts)    (Ts)
   H(s) = e    = 1 - Ts + ----- - ----- ...
                           2!      3!

   which takes an infinite number of zeros to sort out... (which is why we
   avoided it).
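   You can see the truncation problem numerically: chop the series above after
   a finite number of terms and compare with the exact e^(-Ts) on the jw-axis.
   A sketch (the delay T and the term count are arbitrary picks):

```python
import numpy as np
from math import factorial

T = 1e-3                      # 1 ms delay
w = 2 * np.pi * 100.0         # evaluate at 100 Hz, so w*T is well below 1
s = 1j * w

exact = np.exp(-T * s)        # |e^{-jwT}| is exactly 1: a delay is all-pass

# Truncate the series 1 - Ts + (Ts)^2/2! - (Ts)^3/3! ... after N terms
N = 8
approx = sum((-T * s) ** k / factorial(k) for k in range(N))

print(abs(exact))             # 1.0
print(abs(approx - exact))    # tiny here, but push w*T toward and past 1
                              # and the truncation falls apart - the
                              # "infinite number of zeros" problem in
                              # finite form
```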

   Normally SVFs have 2 poles and 0, 1 or 2 zeros depending on which output
   you tap... or actually, you always have 2 zeros, it is only a matter of
   where they are placed.
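   In transfer-function terms, the three usual SVF taps share one denominator
   (the pole pair) and differ only in where the two zeros sit. A numerical
   sketch of that claim (w0 and Q here are arbitrary normalized values):

```python
import numpy as np

w0, Q = 1.0, 0.7071           # normalized cutoff and Q, arbitrary choices

# Shared denominator s^2 + (w0/Q)s + w0^2: the same pole pair for every tap
den = [1.0, w0 / Q, w0 ** 2]

num_lp = [w0 ** 2]            # LP: both zeros at infinity
num_bp = [w0 / Q, 0.0]        # BP: one zero at s = 0, one at infinity
num_hp = [1.0, 0.0, 0.0]      # HP: both zeros at s = 0

def mag(num, w):
    """Amplitude response |H(jw)| for numerator num over the shared den."""
    s = 1j * w
    return abs(np.polyval(num, s) / np.polyval(den, s))

print(mag(num_lp, 0.001))     # ~1: LP passes DC
print(mag(num_hp, 0.001))     # ~0: the zeros at s = 0 kill DC
print(mag(num_bp, w0))        # exactly 1: BP peaks at w0
```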

   There is exactly one solution to the placement of equally many poles and
   zeros which produces no phase shift - one where the position of each pole
   is matched by the position of a zero, with all poles and zeros matched
   accordingly. Now, this effectively makes z1 = p1, z2 = p2 etc., and those
   make perfect cancellation. There is no way for any amplitude response to
   change (the amplitude response is BTW the absolute value of H(s) when
   s = jw), since H(s) is now exactly 1!

   So, no...

3) Yes, but no...

   You can always deviate from exact theoretical results and make working
   approximations. This is what engineers are supposed to be doing when
   everything else fails (which it BTW usually does).

   There are many forms of approximation methods (minimum square error, Pade
   etc.) but the most common is probably the Q&DBWC (Quick and Dirty But Who
   Cares?). The goal of a well-engineered approximation is to approach some
   (possibly unreachable) ideal design goal at a limited implementation cost.
   The less we may spend, the less perfect a solution we get.

   Now, normally in music, it's 2-pole/2-zero and 4-pole/4-zero filters that
   get tweaked around. We run into longer lengths in some phasers (24-stage =>
   24-pole/24-zero) and vocoders (maaaaaaaaaaaaaaany poles and zeros, in a
   parallel/serial setup). Even attempting to approximate minimal intrusion on
   phase while adjusting amplitude basically turns out to leave you with a bad
   result. A normal SVF is 2-pole/2-zero; higher forms are rare
   (4-pole/4-zero is in use but not very common). You have to toss quite a lot
   of poles and zeros at the job to even out the phase response. Now, making
   such a solution tweakable? Naw - I don't believe it is achievable in
   analog.

   So, an analog filter which is able to materialize itself at a reasonable
   price is not what I would expect.

   You'd have to go into the digital domain and do a FIR filter design; you
   can do very high-n FIR filters with a very good approximation cheaply these
   days. But this is not what was asked for, now, was it?
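   For reference, the reason FIR designs can pull this off is that a symmetric
   tap set gives exactly linear phase, i.e. pure delay, regardless of what the
   amplitude response does. A small numpy demonstration (the taps are made up;
   only their symmetry matters):

```python
import numpy as np

# A symmetric (linear-phase) FIR: h[n] = h[N-1-n]. Its phase is exactly
# -w*(N-1)/2 plus 0/pi jumps, i.e. pure delay, whatever the taps are.
h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # made-up symmetric taps
N = len(h)

w = np.linspace(0.1, 3.0, 50)             # digital frequencies (rad/sample)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(N))) for wk in w])

# Undo the constant delay of (N-1)/2 samples; what remains must be real
residual = H * np.exp(1j * w * (N - 1) / 2)
print(np.max(np.abs(residual.imag)))      # ~0: the phase is exactly linear
```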

   So... in reality no, not in sufficiently cheap analog.

So, unless you've got more money than I suspect, you have to live with the
varying phase. Also, it's not the varying phase that is _really_ annoying, so
I would not spend the money there anyway; you can look elsewhere... but the
detour into math done above does bring in the tools we need and want.

> I mean, the phase change with frequency is what allows the amplitude change
> with frequency?

You may believe that if it lets you sleep through the nights, but I'd like to
point out that the phase and amplitude responses are both consequences of the
poles and zeros, and they are connected through the H(s) function. Recall that
our complex frequency s for stable amplitudes is only jw, where w is 2*Pi*f
and f is our everyday frequency in Hertz. The amplitude response will then be:

abs(H(s)) = abs(H(jw)) = abs(H(j*2*Pi*f))

and the phase response is the angle of H(s):

arg(H(s)) = arg(H(jw)) = arg(H(j*2*Pi*f))

H(s) thus forms the "phasor arrow" that occurs in some literature to help
confuse people (which I probably have here too).
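As a tiny worked example of taking abs() and arg() of H(jw), here is the
classic one-pole RC low-pass, where at the cutoff the amplitude is 1/sqrt(2)
and the phase is -45 degrees (the 1 kHz cutoff is just an arbitrary choice):

```python
import numpy as np

w0 = 2 * np.pi * 1000.0       # arbitrary 1 kHz cutoff

def H(f):
    """One-pole RC low-pass H(s) = 1/(1 + s/w0), evaluated at s = j*2*pi*f."""
    s = 1j * 2 * np.pi * f
    return 1.0 / (1.0 + s / w0)

f = 1000.0                            # right at the cutoff...
print(abs(H(f)))                      # 0.7071...: amplitude response, -3 dB
print(np.degrees(np.angle(H(f))))     # -45.0: phase response in degrees
```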

When we change the cut-off frequency of a filter we change the pole/zero
locations. When we change the Q-value of a filter, we change the pole
positions. When we change the "response curve" - LP, BP, HP, notch,
cut/boost of an EQ - we change the zero locations.

That's roughly how the poles and zeros act in our everyday music equipment.
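The Q statement can be checked directly: for a 2-pole section with denominator
s^2 + (w0/Q)s + w0^2, raising Q drags the pole pair toward the jw-axis while
the poles stay on the circle |p| = w0. A quick sketch:

```python
import numpy as np

w0 = 1.0
for Q in (0.5, 2.0, 10.0):
    # Poles of s^2 + (w0/Q)s + w0^2: real part -w0/(2Q), magnitude w0
    p = np.roots([1.0, w0 / Q, w0 ** 2])
    # Higher Q -> real part shrinks toward 0 (pole slides toward the jw-axis)
    print(Q, p[0].real, abs(p[0]))
```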

Those bass drivers which peak at a low "bass" frequency have a pole pair
placed near the jw-axis, near self-resonance, or put another way... with
very low loss of energy. The apparent lack of response at another frequency is
a pair of zeros messing with us. The colorisation of sound due to multiple
reflections is also a whole bunch of zeros acting for us.

Once you've got friends in the s-plane, your buddies the poles and zeros
(there is a Swedish pun buried in there, not intended but anyway...), you've
got yourself a different view on what you can do and how you can do it.

> Because the amplitude response is the "real" axis of the S plane and
> movement along the "imaginary" (phase) axis is how the shape of the real
> axis is determined?

EEEH! WRONG! The amplitude response is the absolute value of H(s), not the
real or imaginary part of s itself.

The imaginary part of s relates to ordinary frequency.

The real part of s relates to the speed of amplitude change! When the real
part of s is zero, we have a stable amplitude, or put another way, we are on
the jw-axis. The so much hailed Fourier analysis _only_ analyses on the
jw-axis; Laplace analysis breaks away from the jw-axis and allows analysis
elsewhere... too! This is why I always claim that Laplace analysis is a
stronger analytical tool than Fourier analysis, since you can view Fourier
analysis as a special case of Laplace - but an important special case (I
don't mind special cases, they are as such very useful!).

Amplitude response, phase response, phase delay, group delay, dispersion,
impulse response, step response etc. can all be derived from the pole-zero
positions.

Many of the classical responses like Butterworth, Bessel-Thomson, Chebyshev,
Lagrange, Gauss etc. are nothing but different methods of placing poles in
order to approximate a brickwall filter, but with different optimization
criteria. Butterworth is for instance maximally flat amplitude response,
where Bessel-Thomson is maximally flat group delay (both relating to the
pass-band). Chebyshev makes the corner optimally sharp but at the cost of a
certain amount of pass-band ripple, so the ripple limit gives the exact
values. The Gauss response, however, attempts to approximate the best impulse
response.
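If you have scipy at hand you can look at those pole placements directly;
butter, cheby1 and bessel are just different recipes for the same order and
cutoff (the order, cutoff and ripple numbers below are arbitrary picks):

```python
import numpy as np
from scipy import signal

# Same order (4), same nominal cutoff (1 rad/s): only pole placement differs
for name, (z, p, k) in [
    ("butter", signal.butter(4, 1.0, analog=True, output='zpk')),
    ("cheby1", signal.cheby1(4, 1.0, 1.0, analog=True, output='zpk')),
    ("bessel", signal.bessel(4, 1.0, analog=True, output='zpk')),
]:
    print(name, np.round(np.abs(p), 3))   # pole radii

# Butterworth poles all sit on the circle |p| = w0 (maximally flat amplitude);
# Chebyshev's sit on an ellipse (the ripple trade-off);
# Bessel-Thomson's are placed for flat group delay instead.
```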

I am going to end with a quote of a colleague:

  "Don't learn anything about DSPs, you will only end up designing
   echo-cancellers, and that ain't much fun!"

In a telco-town like this it is especially true. Also, by now you'd probably be
fired anyway.

Cheers,
Magnus - in pole position with zero gas
