
Phase at frequency w/speaker emulator circuit (Juan?)


  • Phase at frequency w/speaker emulator circuit (Juan?)

    I'm working on a passive circuit to emulate speaker frequency response. I'm pretty happy with the EQ but I'm concerned about the phase differential between LF and HF (almost 360*) and was wondering what sort of problems it might cause in actual listening perception or electronically at the input of a mixer/PA. I know that phase error is basically a time lag and if I interpret what I've read correctly I'm dealing with about a 1ms differential. I don't think that's going to be terribly audible, but I don't have much experience with this. Below is a graph of the frequency response of a G12H with the plot for my circuit overlaid on top. Phase is indicated by the dash. It's pretty rough because of the overlay, but I hope the info gets across.

    TIA

    EDIT: Also, does anyone know the real world phase error for an average speaker?
    Attached Files
    "Take two placebos, works twice as well." Enzo

    "Now get off my lawn with your silicooties and boom-chucka speakers and computers masquerading as amplifiers" Justin Thomas

    "If you're not interested in opinions and the experience of others, why even start a thread?
    You can't just expect consent." Helmholtz
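The "phase error is basically a time lag" interpretation above can be sanity-checked with one line of arithmetic; a minimal sketch (the example frequencies are assumptions for illustration, not read off the plot):

```python
import math

def phase_to_delay_ms(phase_deg: float, freq_hz: float) -> float:
    """Time lag (in ms) equivalent to a given phase shift at one frequency."""
    return phase_deg / 360.0 / freq_hz * 1000.0

# 360 degrees at 1 kHz is one full period, i.e. a 1 ms lag
print(phase_to_delay_ms(360.0, 1000.0))  # 1.0
```

Note the conversion is frequency-dependent: a fixed phase differential corresponds to a longer time lag at lower frequencies.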

  • #2
    If the phase of your circuit's response is similar to that of the speaker, then I think there is no problem. Do you know the phase of the speaker response? There might be no problem anyway; phase often does not matter, but it can.

    EDIT: Not sure what you mean by phase error of a speaker, but speakers have phase shifts just as other electronic components do. In fact, the phase response of a speaker can be complicated. I think Eminence has phase plots for many of its speakers.



    • #3
      Thank you Mike. By "error" I just meant any differential from ideal. I mean, you wouldn't want your LF appearing 300ms behind the HF. That's extreme and ridiculous of course. Just illustrating. I know Eminence has impedance plots. I'll check there and see if they have phase plots as well. This circuit is strictly for EQ. It won't be a load for anything and it won't be driving anything with current. I did manage to find some speaker phase plots for high end audio stuff and it looks like phase relative to frequency can shift up to 150*. I'll bet a guitar speaker, with its harder roll-off top and bottom, is worse. I only wanted to know if there was some inherent problem with phase shifts approaching 180* with a HF/LF differential approaching 360* for EQ purposes.
      Last edited by Chuck H; 11-13-2017, 09:06 PM.


      • #4
        What Mike Sulzer said: speakers have terrible phase shift problems. In any case your electronic circuit will always be better than the mechanical version; the worst case will only approach it, so that's what we are used to hearing anyway.
        Juan Manuel Fahey



        • #5
          Thanks guys.


          • #6
            Dumb thought: presumably at some point after this emulation circuit someone is going to actually listen to the signal (either directly or from a recording), and they will be doing that through a speaker. If you try to build in the absolute phase response of a particular speaker, then once the signal goes through the final speaker you may end up with "extra" phase impact.



            • #7
              Originally posted by J M Fahey View Post
              What Mike Sulzer said: speakers have terrible phase shift problems. In any case your electronic circuit will always be better than the mechanical version; the worst case will only approach it, so that's what we are used to hearing anyway.
              Yup. No problems normally. BTW, phasing problems can arise in sound reinforcement, especially for the low frequencies. Slaving works; micing can be difficult and needs phase adjustments.
              Last edited by catalin gramada; 11-13-2017, 11:03 PM.
              "If it measures good and sounds bad, it is bad. If it measures bad and sounds good, you are measuring the wrong things."



              • #8
                There is a hidden truth lurking under this thread. That is, there can be no filtering without phase shift, at least in analog electronics. The fundamental way that filtering happens is with the interaction of resistive (i.e. no phase shift) elements and reactive elements, things which have a differing impedance with frequency, and that effect by its very nature causes phase shift. The same thing happens in mechanical, acoustic, etc. systems.

                It is probably possible to use DSP programming to affect amplitude as a function of frequency and then to correct signal phase back to no phase shift (or any arbitrary phase shift) but the normal sorts of digital filters also introduce phase shift with amplitude variations.

                So if you want filtering, you get phase shift too.
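R.G.'s point can be illustrated with the simplest case, a single-pole RC low-pass, whose phase shift is -atan(2*pi*f*R*C); a sketch (component values are assumed for illustration):

```python
import math

def rc_lowpass_phase_deg(freq_hz: float, r_ohm: float, c_farad: float) -> float:
    """Phase shift (degrees) of a single-pole RC low-pass: -atan(2*pi*f*R*C)."""
    return -math.degrees(math.atan(2 * math.pi * freq_hz * r_ohm * c_farad))

# Assumed values: R = 10 kOhm, C = 15.9 nF -> cutoff near 1 kHz
fc = 1 / (2 * math.pi * 10e3 * 15.9e-9)
print(round(rc_lowpass_phase_deg(fc, 10e3, 15.9e-9), 1))        # -45.0 at cutoff
print(round(rc_lowpass_phase_deg(100 * fc, 10e3, 15.9e-9), 1))  # -89.4 well above
```

The amplitude roll-off and the phase lag come from the same reactive element; you cannot have one without the other in a passive analog network.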
                Amazing!! Who would ever have guessed that someone who vilified the evil rich people would begin happily accepting their millions in speaking fees!

                Oh, wait! That sounds familiar, somehow.



                • #9
                  Originally posted by glebert View Post
                  Dumb thought: presumably at some point after this emulation circuit someone is going to actually listen to the signal (either directly or from a recording), and they will be doing that through a speaker. If you try to build in the absolute phase response of a particular speaker, then once the signal goes through the final speaker you may end up with "extra" phase impact.
                  True.
                  That said, using a speaker emulator (which, to be more precise, should be called a "guitar" speaker emulator, or it would not be needed to begin with) sort of implies that the final sound will be played through a Hi-Fi, Studio or, worst case, PA speaker ... all of which have (or try hard to have) the flattest response and minimal phase shift.
                  And we add the speaker emulator to that (flat but unexciting) mix precisely to add that off-taste flavour we like.
                  Juan Manuel Fahey



                  • #10
                    Originally posted by R.G. View Post
                    There is a hidden truth lurking under this thread. That is, there can be no filtering without phase shift, at least in analog electronics. The fundamental way that filtering happens is with the interaction of resistive (i.e. no phase shift) elements and reactive elements, things which have a differing impedance with frequency, and that effect by its very nature causes phase shift. The same thing happens in mechanical, acoustic, etc. systems.

                    It is probably possible to use DSP programming to affect amplitude as a function of frequency and then to correct signal phase back to no phase shift (or any arbitrary phase shift) but the normal sorts of digital filters also introduce phase shift with amplitude variations.

                    So if you want filtering, you get phase shift too.
                    Yes, there is no problem constructing a digital filter with no phase shifts, but you might not like the transient response! There are only so many free parameters, and if you specify the amplitude and phase as a function of frequency, do not expect anything else to be what you might want. You might think of such a filter as using the Fourier transform (in some clever way so that finite-length transforms can be coupled together to give a continuous signal): you can modify the Fourier coefficients in amplitude and phase as you wish, and then transform back to the time domain. But as for the time-domain response, you get what you get.

                    Remember, phase is quite audible if introduced in a correlated way over a range of frequencies. The simplest example is time stretching, where you can take a short transient and make it much longer while keeping the high frequencies, changing only the phase. The result sounds nothing like the original. On the other hand, taking a musical instrument signal and shifting the relative phase of the various harmonics of a note can produce very little effect if done right and played through a linear system. The waveform shape is modified, of course, so if there are gross nonlinearities (guitar amp played loud), then the harmonics added by that distortion are a function of the waveform shape to some extent. So this can get very complicated.
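Mike's "you get what you get" trade-off can be sketched with a plain FFT-domain filter: a purely real, non-negative gain leaves every bin's phase untouched, i.e. zero added phase shift (sample rate and test tones below are assumed toy values):

```python
import numpy as np

# Zero-phase filtering in the frequency domain: scale FFT magnitudes with a
# purely real, non-negative gain and leave every bin's phase alone.
fs, n = 1000, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)

spectrum = np.fft.rfft(x)
gain = np.where(np.fft.rfftfreq(n, 1 / fs) <= 100, 1.0, 0.0)  # brick-wall low-pass
y = np.fft.irfft(spectrum * gain, n)

# The surviving 50 Hz component keeps its original phase exactly:
print(np.allclose(y, np.sin(2 * np.pi * 50 * t), atol=1e-9))  # True
```

The cost is exactly the one described above: a brick-wall gain like this has a sinc-shaped (ringing) impulse response, so transients smear even though no phase shift was added.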



                    • #11
                      Fantastic, and thank you to all for continuing discussion on the topic. FWIW I absolutely do intend to do listening tests, and the final circuit will allow switching the emulator circuit out for use with either sound reinforcement or a guitar speaker cabinet. My assumption being that sound reinforcement speakers are designed for a flat(ish) response. I could well find this to be false to a greater or lesser degree, requiring circuit modification.


                      • #12
                        Originally posted by Chuck H View Post
                        ...I'm concerned about the phase differential between LF and HF (almost 360*)
                        If you look at the low frequency response of a typical (sealed enclosure or free air) speaker, it has the form of a second-order high pass filter. In a perfect world, that would be accompanied by a total of 180 degrees of phase shift as you go through the resonance frequency of the speaker. There will be 180 degrees of phase lead well below resonance, and zero degrees phase shift well above resonance.

                        I happen to be working on creating my own speaker simulation software at the moment, so I'll attach a screenshot showing this, for a fictional speaker with a resonance at 100 Hz.

                        If you had an ideal infinitely stiff speaker cone, that would be the whole story. In practice, as you go higher in frequency, eventually you get cone break-up modes, which are themselves mechanical resonances, each one accompanied by 180 degrees of phase *lag* as you sweep through it (from a frequency well below, to a frequency well above).

                        So if you were dealing with a nearly ideal speaker, with only its fundamental (bass) resonance, plus one single cone break up mode, you would already have 360 degrees of phase shift within the frequency spectrum - 180 degrees lead at very low frequencies, zero phase in the midrange, and 180 degrees lag at high frequencies well above that cone breakup frequency.
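The low-frequency half of this description can be checked numerically: a second-order high-pass shows 180 degrees of lead far below resonance, 90 degrees at resonance, and essentially none far above. A sketch (the 100 Hz resonance matches the fictional speaker above; Q = 0.7 is an assumption):

```python
import numpy as np

def hp2_phase_deg(f_hz, f0_hz=100.0, q=0.7):
    """Phase (degrees) of the 2nd-order high-pass H(s) = s^2 / (s^2 + (w0/Q)s + w0^2)."""
    s = 1j * 2 * np.pi * f_hz
    w0 = 2 * np.pi * f0_hz
    return float(np.degrees(np.angle(s**2 / (s**2 + (w0 / q) * s + w0**2))))

print(round(hp2_phase_deg(0.1)))    # 180 degrees of lead far below resonance
print(round(hp2_phase_deg(100.0)))  # 90 degrees at resonance
print(round(hp2_phase_deg(1e6)))    # ~0 degrees far above
```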

                        Real life is far worse, with additional breakup modes coming thick and fast as you go up in frequency...each one accompanied by yet another 180 degrees in phase.

                        And all this is if you were placing your ear right on top of the dust-cap. If you are at a normal distance from the speaker, there is additional phase shift as the sound travels through the air to your ears - three hundred and sixty degrees of phase shift for every wavelength travelled. The speed of sound in a home at normal temperature is around 340 metres/second, so if you were listening to a 3.4 kHz tone, and your ear was one metre away from the speaker, there would be ten wavelengths of sound between the speaker and your ear. This means three thousand, six hundred additional degrees of phase shift, on top of the 360+ in the speaker driver itself!

                        Note, by the way, that if you were listening to 34 Hz (from your 5-string bass guitar, say), you are only one-tenth of a wavelength away, so only 36 degrees of phase. In other words, very little phase shift at 34 Hz, but lots and lots of phase shift at 3.4 kHz...

                        In other words, at one metre distance from the speaker, there is more than three thousand degrees of phase shift between deep bass and mid-treble, just because of the way sound behaves when it travels through air!

                        (And, at a more realistic listening distance, there may be two or three or four times as much - literally, over ten thousand degrees of phase shift between bass and treble, even if you were in an anechoic chamber!)
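The travelling-wave numbers above follow from 360 degrees per wavelength, i.e. phase = 360 * distance * frequency / speed; a sketch reproducing them:

```python
def propagation_phase_deg(dist_m, freq_hz, speed_m_s=340.0):
    """Phase accumulated over dist_m metres of travel: 360 degrees per wavelength."""
    return 360.0 * dist_m * freq_hz / speed_m_s

print(propagation_phase_deg(1.0, 3400.0))  # 3600.0 degrees: ten wavelengths at 3.4 kHz
print(propagation_phase_deg(1.0, 34.0))    # 36.0 degrees: a tenth of a wavelength at 34 Hz
```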

                        All this is why I pay no attention to most of the Audiophool fussing over speaker phasing. Phase only really seems to matter when you have two or more drivers simultaneously emitting the same signal, with a phase shift between them. In that particular case, the multiple signals will interfere with each other, and cause peaks and dips in the frequency response.

                        This sort of thing happens during the crossover region in Hi-Fi speakers (where both woofer and tweeter are emitting the same sound), and it happens all over the spectrum if you stuff four Celestions in one cab and drive them all with a full-range guitar signal.

                        But one speaker (or one simulated speaker) by itself? Your ear doesn't care. The big bass drum in the marching band sounds the same whichever side of the road you happen to be standing on when the parade goes by. Clear proof that 180 degrees of phase-shift in the bass makes no difference whatsoever to the way it sounds!

                        And the million dollar question: this speaker emulator is for a project involving running a micro valve guitar amp direct into a P.A. system, perhaps? Any nifty stuff to share?

                        -Gnobuddy
                        Attached Files



                        • #13
                          Originally posted by Gnobuddy View Post

                          Note, by the way, that if you were listening to 34 Hz (from your 5-string bass guitar, say), you are only one-tenth of a wavelength away, so only 36 degrees of phase. In other words, very little phase shift at 34 Hz, but lots and lots of phase shift at 3.4 kHz...

                          In other words, at one metre distance from the speaker, there is more than three thousand degrees of phase shift between deep bass and mid-treble, just because of the way sound behaves when it travels through air!


                          -Gnobuddy
                          Maybe I'm missing something, but I don't think that is the way it works. Propagation delay is not the same as phase shift.



                          • #14
                            Originally posted by glebert View Post
                            Maybe I'm missing something, but I don't think that is the way it works. Propagation delay is not the same as phase shift.
                            For a wave moving in time (sound wave, in this case), time delay and phase delay are inextricably linked. If one cycle lasts for, say, one millisecond, then one millisecond causes 360 degrees of phase increase.

                            Put another way, there is relative phase shift (between, say, two different sine waves at the same frequency). There is also absolute phase - the "wt" in the equation Y = A sin(wt). It takes 360 degrees of phase to make one full wave, i.e., wt increases by 360 degrees from its initial value to create one full wave. There will be 360 more degrees of phase for each subsequent wave.

                            Still another way to think of it: put one microphone a half-wavelength further from the source than another, and we agree that the two mics will put out signals 180 degrees apart, yes?

                            Now move the further microphone another quarter-wavelength away, and now there will be 270 degrees phase shift between the two signals, yes?

                            Keep moving the further microphone in little steps, say one-hundredth of a wavelength further each time. You get another 3.6 degrees of phase with each additional increment in distance.

                            So what happens when the second microphone is a full wavelength further than the first? We kept increasing the phase shift beyond 270 degrees in steps. Clearly, the two signals are now 360 degrees apart in phase.

                            360 degrees might look like zero degrees on an oscilloscope, but that's because oscilloscopes are not good tools for looking at total phase. Take Fourier transforms of those two time-delayed waveforms, and you will see the additional 360 degrees of phase in one of them.
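One way to make the delay/phase link concrete: a pure delay tau multiplies a signal's spectrum by exp(-j*2*pi*f*tau), so its (unwrapped) phase falls linearly with frequency at 360*tau degrees per Hz, without bound. A sketch (sample rate, pulse shape, and the 1 ms delay are assumed toy values):

```python
import numpy as np

# Compare the spectral phase of a pulse and a 1 ms delayed copy of it.
fs, n, tau = 8000, 8000, 0.001
t = np.arange(n) / fs

def pulse(tt):
    return np.exp(-((tt - 0.1) ** 2) / (2 * 0.005 ** 2))  # smooth test pulse

x, y = pulse(t), pulse(t - tau)       # y is x delayed by tau

f = np.fft.rfftfreq(n, 1 / fs)
dphi = np.unwrap(np.angle(np.fft.rfft(y)) - np.angle(np.fft.rfft(x)))
slope = np.degrees(dphi[1] - dphi[0]) / (f[1] - f[0])
print(round(slope, 4))                # -0.36 deg/Hz, i.e. -360 * tau
```

The slope of the unwrapped phase difference recovers the delay, which is why a 1 ms lag "is" 360 degrees at 1 kHz but only 36 degrees at 100 Hz.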

                            -Gnobuddy



                            • #15
                              If all the wave frequencies move at the same speed, then the shape of the total waveform does not change, and so the relative phases remain the same. This is what counts, and it is not the same as introducing frequency dependent phase shifts at the source.

                              For example (https://brilliant.org/wiki/amplitude...r-phase-shift/), propagation of a simple wave can be described by

                              sin(k(x - vt)) where:

                              the phase is the argument of the sine,
                              x is the spatial coordinate,
                              t is the time coordinate,
                              and k = 2*pi/(wavelength),
                              v is the phase velocity.

                              If the phase velocity is the same for all frequencies, the phase is the same at all frequencies for each x. (Frequency is not in the equation!)
                              Then we have v = omega/k,
                              where omega is 2*pi*f,
                              and the equation can be written:
                              sin(kx - omega*t)

                              This form contains the frequency explicitly.

                              If v is a function of frequency, then the relative phase does change as a function of frequency.
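Mike's nondispersive case can be checked numerically: a single propagation speed v means a common delay x/v for every component, so the received waveform is an exact time-shifted copy of the source, with all relative harmonic phases preserved. A sketch (the test signal is an assumed example):

```python
import numpy as np

# With one propagation speed v, distance x_m imposes the same delay x_m/v on
# every frequency, so the waveform keeps its shape.
fs, n, v, x_m = 48000, 4800, 340.0, 3.4
t = np.arange(n) / fs

def sig(tt):
    # assumed toy signal: fundamental plus two harmonics with fixed phases
    return (np.sin(2 * np.pi * 100 * tt)
            + 0.5 * np.sin(2 * np.pi * 200 * tt + 0.3)
            + 0.25 * np.sin(2 * np.pi * 300 * tt + 1.1))

delay = x_m / v                           # 10 ms common delay
received = sig(t - delay)                 # every component delayed identically

shift = int(round(delay * fs))            # 480 samples
print(np.allclose(received[shift:], sig(t)[:n - shift]))  # True: shape preserved
```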

                              Originally posted by Gnobuddy View Post
                              For a wave moving in time (sound wave, in this case), time delay and phase delay are inextricably linked. If one cycle lasts for, say, one millisecond, then one millisecond causes 360 degrees of phase increase.

                              Put another way, there is relative phase shift (between, say, two different sine waves at the same frequency). There is also absolute phase - the "wt" in the equation Y = A sin(wt). It takes 360 degrees of phase to make one full wave, i.e., wt increases by 360 degrees from its initial value to create one full wave. There will be 360 more degrees of phase for each subsequent wave.

                              Still another way to think of it: put one microphone a half-wavelength further from the source than another, and we agree that the two mics will put out signals 180 degrees apart, yes?

                              Now move the further microphone another quarter-wavelength away, and now there will be 270 degrees phase shift between the two signals, yes?

                              Keep moving the further microphone in little steps, say one-hundredth of a wavelength further each time. You get another 3.6 degrees of phase with each additional increment in distance.

                              So what happens when the second microphone is a full wavelength further than the first? We kept increasing the phase shift beyond 270 degrees in steps. Clearly, the two signals are now 360 degrees apart in phase.

                              360 degrees might look like zero degrees on an oscilloscope, but that's because oscilloscopes are not good tools for looking at total phase. Take Fourier transforms of those two time-delayed waveforms, and you will see the additional 360 degrees of phase in one of them.

                              -Gnobuddy
                              Last edited by Mike Sulzer; 12-01-2017, 12:15 PM. Reason: typo
