Phase Coherence as a measure of Acoustic Quality

  • Phase Coherence as a measure of Acoustic Quality

    I came across a very interesting theory of acoustic perception in an odd way: my wife met the author, who has a PhD in physics from Harvard. She couldn't follow the theory, but got his business card, which led me to his website:

    http://www.davidgriesinger.com/

    The first three articles are the key, all with titles beginning with Phase Coherence as a measure of Acoustic Quality. The first paper (on the neural mechanism) gives the basic theory. The articles can be understood reasonably well even if one ignores the equations. My wife reports that Dr Griesinger is very excited by his new theory, but does not yet know if it is correct.

    Anyway, I find the theory plausible. And it certainly fits in with people's experience that reproducing the shape of the attack waveform (which requires linear phase response) is acoustically significant.

    Dr Griesinger appears to be a bigfoot in acoustics: LARES (Wikipedia).

  • #2
    Thanks, Joe.

    Delectable reading.

    For everyone else, I say that if the equations don't speak to you, ignore them and read on.
    The overview is an excellent aggregation of necessary ideas.
    "Det var helt Texas" is written Nowegian meaning "that's totally Texas." When spoken, it means "that's crazy."



    • #3
      Thanks for that. I look forward to digging in further.

      The role of phase coherence in more realistic-sounding reproduction has fascinated me ever since we put together our PA for my band in 1979. We built some folded-horn type bass bins, made some 5-sided tweeter arrays with a quartet of tweeters in each cab, and bi-amped the whole thing. Our original intent was just to park each tweeter bin on top of the bass bin. But while testing it out in the space where we were building, we experimented with different positions.

      When the tweeters sat directly on top of the bass bin, it sounded like any other 2-way system of the era, full but strident. Think EV Eliminators. When we moved the tweeter bin about a foot above the bass bin, though, there was this miraculous transformation from "PA sound" to "high fidelity". It was truly striking. And all we did was change the position of the drivers, relative to each other. Not only did it lose the stridency, but it just sounded much more "relaxed" and natural.

      Now, I don't know yet if this is part of what Griesinger is getting at. I'm just saying that ever since then, I've tried to pay attention to the manner in which all the sonic content "lines up" such that the ear/brain can easily link stuff that should be linked, so as to form a clear soundstage with this thing here, that thing there, this one in front of the other one, and so on.



      • #4
        The interesting thing is that phase incoherence is at the heart of the signature tone of different acoustic guitars. That was the very basis of my work in developing the first digital acoustic guitar modeling preamp, Mama Bear, with D-TAR. The phase signature is much more important than the frequency response when trying to make a relatively phase coherent pickup like an undersaddle piezo sound more like the guitar itself. We indirectly used the ability to mess with phase in the digital realm to achieve pleasing results.
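
        To make the "mess with phase in the digital realm" idea concrete (this is only a generic illustration, not the actual Mama Bear algorithm), here is a minimal Python sketch of a first-order digital allpass section: it leaves the magnitude response flat while reshaping the phase and group delay. The coefficient value is arbitrary.

```python
# A minimal sketch (NOT the Mama Bear algorithm): a first-order digital allpass
# section reshapes phase and group delay while leaving the magnitude response flat.
import numpy as np
from scipy import signal

fs = 44100
a = -0.7                    # allpass coefficient (|a| < 1 for stability), arbitrary
b_ap = [a, 1.0]             # H(z) = (a + z^-1) / (1 + a*z^-1)
a_ap = [1.0, a]

w, h = signal.freqz(b_ap, a_ap, worN=2048, fs=fs)
print("max |H| deviation from unity:", np.max(np.abs(np.abs(h) - 1.0)))

# The phase (and hence group delay) still varies strongly across the band.
w_gd, gd = signal.group_delay((b_ap, a_ap), w=2048, fs=fs)
print("group delay range (samples):", gd.min(), "to", gd.max())
```

        Cascading sections like this with different coefficients is the usual way to sculpt a phase signature without touching the frequency response.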



        • #5
          So, hands up all who can hear an allpass filter in a blind test.

          (the answer apparently is "none")
          Some Experiments With Time

          Time offsets in speaker systems cause comb filtering in the frequency response around the crossover frequency. These are the effects that were so audible in Winer's first experiment.

          On the other hand, an allpass filter produces wild phase shifts without altering the frequency response, and Winer's second experiment showed that this is not audible.

          When you speak about the audibility of phase shift, you have to be very careful that you're not actually talking about the audibility of comb filtering. It's easy to get them confused: the sound of a phaser pedal is actually comb filtering, after all.

          And I bet the sound of Rick's DTAR preamp is too: Julius O. Smith talks about modelling body resonators with "digital waveguides", which are just delay lines.

          https://ccrma.stanford.edu/~jos/jnmr...esonators.html
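
          A quick numerical sketch of the comb-versus-allpass distinction above (illustrative values only, not Winer's actual test files): summing a signal with a 1 ms delayed copy of itself ripples the magnitude response, while a lone allpass shifts phase and leaves the magnitude untouched.

```python
# Illustrative numbers only (not Winer's test files): a 1 ms delay-and-sum
# combs the magnitude response, while an allpass on its own does not.
import numpy as np
from scipy import signal

fs = 48000
delay = 48                                   # 1 ms, roughly a 34 cm path difference

# Comb: y[n] = x[n] + x[n - delay]  ->  notches every 1 kHz (500 Hz, 1.5 kHz, ...)
b_comb = np.zeros(delay + 1)
b_comb[0] = 1.0
b_comb[-1] = 1.0
w, h_comb = signal.freqz(b_comb, [1.0], worN=4096, fs=fs)

# First-order allpass: wild phase shift, but |H(f)| stays at 1
c = 0.6
w, h_ap = signal.freqz([c, 1.0], [1.0, c], worN=4096, fs=fs)

print("comb    magnitude swings from", np.abs(h_comb).min(), "to", np.abs(h_comb).max())
print("allpass magnitude swings from", np.abs(h_ap).min(), "to", np.abs(h_ap).max())
```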



          • #6
            The introduction to his three papers states in part:
            My latest work on hearing involves the development of a possible neural network that detects sound from multiple sources through phase information encoded in harmonics in the vocal formant range. These harmonics interfere with each other in frequency selective regions of the basilar membrane, creating what appears to be amplitude modulated signals at a carrier frequency of each critical band. My model decodes these modulations with a simple comb filter - a neural delay line with equally spaced taps, each sequence of taps highly selective of individual musical pitches.
            So he is modeling the neural process, at some level, as a comb filter. It is probably important to remember that the basilar membrane is dispersive: different frequencies travel at different velocities. So the neural network must be designed to use that property to extract information about spatial location, etc.
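
            As a toy illustration of the "delay line with equally spaced taps" idea (this is not Griesinger's model itself, just the comb-filter principle it rests on), the Python sketch below spaces taps at one pitch period: a tone at that pitch sums coherently across the taps, while a mistuned tone largely cancels.

```python
# A toy version of the idea (not Griesinger's actual model): taps spaced at one
# pitch period form a feedforward comb filter that is selective of that pitch
# and its harmonics.
import numpy as np

fs = 16000
f0 = 200.0                          # pitch the tap spacing is "tuned" to
period = int(round(fs / f0))        # tap spacing in samples
n_taps = 8

def tapped_delay_sum(x):
    """Average equally spaced taps along the signal (a feedforward comb filter)."""
    out_len = len(x) - n_taps * period
    acc = np.zeros(out_len)
    for k in range(n_taps):
        acc += x[k * period : k * period + out_len]
    return acc / n_taps

t = np.arange(int(0.2 * fs)) / fs
on_pitch  = np.sin(2 * np.pi * 200.0 * t)    # matches the tap spacing
off_pitch = np.sin(2 * np.pi * 233.1 * t)    # does not match

print("on-pitch  output RMS:", np.sqrt(np.mean(tapped_delay_sum(on_pitch) ** 2)))
print("off-pitch output RMS:", np.sqrt(np.mean(tapped_delay_sum(off_pitch) ** 2)))
```

            The same tap spacing also passes the harmonics of the tuned pitch, which is what makes such a structure pitch-selective rather than merely frequency-selective.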



            • #7
              Originally posted by Steve Conner View Post
              So, hands up all who can hear an allpass filter in a blind test.

              (the answer apparently is "none")
              Some Experiments With Time

              Time offsets in speaker systems cause comb filtering in the frequency response around the crossover frequency. These are the effects that were so audible in Winer's first experiment.

              On the other hand, an allpass filter produces wild phase shifts without altering the frequency response, and Winer's second experiment showed that this is not audible.
              So the ear/brain system is sensitive to small ripples in the frequency response. Why? A reasonable idea to start with would be that the ear/brain system creates such ripples, or the effect of them, to aid in analysis. Thus it is sensitive to them when you create them with, for example, a pair of displaced speakers. Could this be related to DG's neural delay line?



              • #8
                Wild phase shifts in absolute phase covering the audio band may not be audible, but you cannot have wild phase shifts at different places within the audio band without causing comb filtering.

                And that said, there have been experiments that show that some people are sensitive to absolute phase reversals. Our ears are not necessarily linear in the sense of being equally sensitive to pressure and rarefaction. In other words, you may "hear" or sense a difference at the attack of an impulse depending on whether it pushes your ear drums in or sucks them out.
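
                A small illustration of why absolute polarity could matter at all (made-up numbers, not data from those experiments): a waveform whose harmonics are phased to make it asymmetric has the same magnitude spectrum in both polarities, yet its compression and rarefaction peaks swap when you flip it.

```python
# Made-up numbers, not data from those experiments: a fundamental plus a
# phase-shifted second harmonic gives an asymmetric waveform.  Flipping its
# polarity swaps compression and rarefaction peaks but leaves the magnitude
# spectrum exactly the same.
import numpy as np

fs = 48000
t = np.arange(int(0.05 * fs)) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t + np.pi / 2)
flipped = -x

print("peaks of  x:", round(x.max(), 2), round(x.min(), 2))        # ~ +0.75 / -1.5
print("peaks of -x:", round(flipped.max(), 2), round(flipped.min(), 2))
print("largest difference between the two magnitude spectra:",
      np.max(np.abs(np.abs(np.fft.rfft(x)) - np.abs(np.fft.rfft(flipped)))))
```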

                There is quite a movement toward single-driver loudspeaker systems in the high-end audio world. This avoids having a crossover. An alternative is a two- or three-way system with a very wide-band midrange driver, thus keeping the crossovers out of the most sensitive region of human hearing, approximately 300 Hz to 3 kHz.



                • #9
                  BTW, the LARES thing is interesting, but I was involved with what was probably the first use of time-delayed auxiliary speaker towers for a PA. The gig was the Grateful Dead, Waylon Jennings, and the New Riders at Kezar Stadium in San Francisco in the summer of 1971. We had the main PA (pre-Wall of Sound) as normal, and then two delay towers about 120 feet forward into the stadium. One was driven through a delay from one of the very first Eventide delay units, and the other tower was driven old style, using the time delay between the record and playback heads of an Ampex 350 tape machine.
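
                  For scale, here is the back-of-the-envelope delay such towers need (rounded figures, not the actual settings used at that gig):

```python
# Rounded figures, not the actual settings used at the gig: sound takes roughly
# a tenth of a second to cover 120 feet, so the towers need about that much delay
# for their output to line up with the arrival from the main PA.
speed_of_sound_ft_per_s = 1130.0     # ballpark value for outdoor air
distance_ft = 120.0
delay_ms = distance_ft / speed_of_sound_ft_per_s * 1000.0
print(f"required delay: about {delay_ms:.0f} ms")    # ~106 ms
```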



                  • #10
                    Originally posted by Steve Conner View Post
                    So, hands up all who can hear an allpass filter in a blind test.

                    (the answer apparently is "none")
                    Some Experiments With Time

                    Time offsets in speaker systems cause comb filtering in the frequency response around the crossover frequency. These are the effects that were so audible in Winer's first experiment.

                    On the other hand, an allpass filter produces wild phase shifts without altering the frequency response, and Winer's second experiment showed that this is not audible.

                    When you speak about the audibility of phase shift, you have to be very careful that you're not actually talking about the audibility of comb filtering. It's easy to get them confused: the sound of a phaser pedal is actually comb filtering, after all.
                    Dr Griesinger's focus is the acoustics of music halls, with serious money riding on the result. Such buildings cost many tens of millions of dollars to construct. If the current theory of how the ear works were correct and complete, all recently built music halls would have perfect acoustics for their intended purpose.

                    But this is at variance with experience - halls are built and turn out to have bad acoustics all the time. Nobody knows why, and it's not for lack of effort by some very skilled people. The current understanding of how the ear works just isn't quite good enough.

                    Now, the sensitivity or insensitivity of the ear to phase in music has been a matter of contention for many decades: there are many experiments proving insensitivity and other equally valid experiments proving sensitivity, and nobody knows why. The assumption is that something fundamental has been missed.

                    More generally, none of the human senses are fully understood.

                    As for Dr Griesinger's phase-coherence theory, my guess is that his theory will turn out to be correct, but will not be the whole story. It has always been thus in physiological acoustics.



                    • #11
                      It's hard to beat some of the classic halls. I've played Symphony Hall in Boston (the best), Orchestra Hall in Chicago (damned good), and the first iteration of Lincoln Center in New York (mediocre); Sabine and the Harvard boys did well in Boston. One of the big issues is taking a hall past about 2,200 in capacity. It seems a lot easier to do well in halls that hold between 1,000 and 2,000 people, but it's hard to make them pay for themselves.

                      There's a great book called "The Soundscape of Modernity" by Emily Thompson on this whole subject of acoustical architecture. The brave new world is amplified sound reinforcement designed to be totally unobtrusive while carrying coherent sound to every nook and cranny of a hall.

                      I took Don Davis' "SynAudCon" course many years ago, and along with phase coherence from the sound system, the big thing is to extend the critical listening area, where the direct sound predominates over the reflected sound, beyond which things just wash out. That is done by carefully controlling the dispersion of the sound system, but past a certain point, architectural problems can only be solved through architectural means. EQ'ing the direct sound to make the reverberant sound better only makes the direct sound worse. You can't "tune a room" without detuning the main system.
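
                      For anyone who wants a number to hang on that "direct sound predominates" region, here is a rough sketch using the standard statistical-acoustics approximation for critical distance; the directivity, volume, and RT60 below are invented examples, not figures from the SynAudCon course or any real hall.

```python
# Invented example numbers, not figures from the course or any real hall.
# Critical distance: where direct and reverberant levels are equal, using the
# standard statistical-acoustics approximation with the Sabine equation.
import math

def critical_distance_m(Q, volume_m3, rt60_s):
    """d_c ~= 0.057 * sqrt(Q * V / RT60), with V in cubic metres and RT60 in seconds."""
    return 0.057 * math.sqrt(Q * volume_m3 / rt60_s)

# e.g. a fairly directional PA box (Q ~ 10) in a 12,000 m^3 hall with a 2 s RT60
print(f"critical distance: about {critical_distance_m(10, 12000, 2.0):.1f} m")
```

                      Since the critical distance grows with the square root of Q, tightening the dispersion of the system is exactly what pushes that boundary outward, which is the point about controlling dispersion above.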



                      • #12
                        Originally posted by Joe Gwinn View Post
                        As for Dr Griesinger's phase-coherence theory, my guess is that his theory will turn out to be correct, but will not be the whole story. It has always been thus in physiological acoustics.
                        One thing it addresses very nicely is the ancient question: if you cannot hear less than about 1% distortion on a sine wave, why is it necessary to make the distortion in a really good audio system much lower? In this analysis, the human auditory system does not care much about sine waves, but rather about patterns in the harmonics of complex sounds. 1% must be about the minimum level for such an analysis to begin to work.

                        It has been clear for decades that the hearing system is a set of bandpass filters with amplitude detectors on the outputs. It has also been clear that the analysis must consist of simultaneous analysis of the outputs of these filters in time. What else is there? What is new here is the hypothesis describing how this analysis might work, how detailed information can be extracted with a computationally efficient system made from neurons.
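
                        As a minimal sketch of that "bandpass filters with amplitude detectors" picture (band centers, bandwidths, and filter orders here are arbitrary illustrations, not a cochlear model):

```python
# Band centers, bandwidths and filter orders here are arbitrary illustrations,
# not a cochlear model: a bank of bandpass filters, each followed by a simple
# amplitude detector (rectify, then low-pass).
import numpy as np
from scipy import signal

fs = 16000
centers = [250, 500, 1000, 2000, 4000]        # Hz, illustrative spacing only

def filterbank_envelopes(x):
    """Return one amplitude envelope per band."""
    detector = signal.butter(2, 50, btype="low", fs=fs, output="sos")   # 50 Hz smoother
    envs = []
    for fc in centers:
        band = signal.butter(2, [fc / 1.3, fc * 1.3], btype="band", fs=fs, output="sos")
        y = signal.sosfilt(band, x)                        # bandpass filter
        envs.append(signal.sosfilt(detector, np.abs(y)))   # rectify + low-pass
    return np.array(envs)

t = np.arange(int(0.3 * fs)) / fs
tone = np.sin(2 * np.pi * 1000 * t)            # a 1 kHz tone lights up one band
print("mean envelope per band:", np.round(filterbank_envelopes(tone).mean(axis=1), 3))
```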

                        The tests that Steve described suggest ways to check the hypothesis: develop complex signals to fool the system, that is, signals that induce odd responses which in effect describe the processing.
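
                        One way such probe signals might be built (a hypothetical construction, not an experiment proposed in the papers): two harmonic complexes with identical long-term magnitude spectra, one with phase-aligned harmonics and one with scrambled phases. The phase-aligned version has a far peakier waveform, the kind of envelope structure a comb-filter analysis like Griesinger's would be sensitive to.

```python
# Hypothetical probe signals, not an experiment from the papers: two harmonic
# complexes with identical long-term magnitude spectra.  Phase-aligned harmonics
# give a sharply peaked waveform; scrambled phases smear it out.
import numpy as np

rng = np.random.default_rng(0)
fs, f0, dur = 16000, 200, 0.2
t = np.arange(int(dur * fs)) / fs
harmonics = np.arange(1, 21)                   # 20 harmonics of 200 Hz

coherent  = sum(np.cos(2 * np.pi * k * f0 * t) for k in harmonics)
scrambled = sum(np.cos(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi))
                for k in harmonics)

for name, x in (("coherent", coherent), ("scrambled", scrambled)):
    crest = np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    print(f"{name:9s} crest factor: {crest:.1f}")
```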



                        • #13
                          Not exactly the right forum for this discussion, but what the hell, you guys are a good bunch to discuss it with, so let's carry on.

                          At a perceptual level, the principal challenge facing the brain/nervous system/ears/person is that, while there are many sounds in the world, each with its own set of associated harmonics and other spectral thingies, there are ONLY two ears, and each of those suckers can hear things from all over. What "goes together"? How will I be able to know it's one of THOSE over THERE? For the nervous system, it's like a thousand decks of cards with not 4, but 16 suits, and 24 values within each suit, thrown up into the air and blown at you by one of those big fans they use to mimic gale-force winds on movie sets. And as the cards are all being blown at you, you have to sort them into suits. And of course, sound never stops coming at you, so there is no pause to catch up and accomplish that computation in. It's done in real time, baby.

                          "Going together" occurs at multiple levels. My old cognitive psych prof at McGill, Albert Bregman, whose spot Daniel Levitin filled when Al retired, spent a chunk of his career studying how humans sort sound into "streams" and "auditory scenes". That level of analysis examines what sound events go together, but what about the spectral sorting that each of those sound events comprises? How do I know that THIS 8th harmonic goes with THAT fundamental? Yes, I suppose their mathematical relationship might suggest it, as might the synchrony of their occurrence and overall amplitude envelope, but how do I know it's a harmonic of something, and not just part of some white/pink/purple noise in the background, or the sound of blood in your ears that John Cage heard in an anechoic chamber? How do I know its part of what I am attending to? How does it become part of the signal and not become part of the noise?

                          The answer would seem to be that when harmonic content accompanies a sound event, there are predictable, or at least familiar or expected, relationships between the fundamental and the "other stuff". Griesinger is interested in how we use the phase relationship to "tell us things" about "all that sound" out there so that we can sort those cards lickety-split into decks. I may be a sucker for introspection, but my experience tells me that when there is less effort required to sort those acoustic cards into decks, things just sound "better" because they are easily identifiable, and easily locatable in space as a result.

                          The trick in audio reproduction and processing over the generations has been to get that blasted mechanical step that sits between listener and the sound source (whether it be speaker or microphone or room acoustics) to retain the phase relationships...for EVERYTHING...that are inherent in the original source. There is also the matter of group delay and such at the electronic level, but I suspect the mechanical step is the larger one to take.



                          • #14
                            Ears and brains evolved together, with millions of years of survivors making it past that "big comb filter in the sky". I'm willing to bet all we need is an 'educated guess' about where a noise is coming from and what's making it. If we can narrow it down to 4 or 8 possible places from infinity, our other senses will jump in and tell us which one is the most likely. Seeing as we start life as synaesthetes with all our senses connected, I wouldn't be surprised if there were other senses that we can't acknowledge because they are internal and/or subconscious. I'm old enough that I shouldn't be able to hear much over 12 kHz, but I can't listen to CDs anymore; MP3s sound like total garbage to me. Not sure where I'm going with this, I should read the articles first...



                            • #15
                              Mark, major effort in sorting audio data = major listener fatigue, and I suspect that you already know that, but indirectly.

                              One of the fundamental issues here is sound "REPRODUCTION" or "REINFORCEMENT". That's second generation sound at best, and usually third to sixth generation with each generation adding its own phase and frequency screwups to the original sound. As soon as you try to reproduce or reinforce a "first generation natural" sound, you're in trouble.

                              Electric guitars through amplifiers ARE a natural first generation sound; the amp and speaker ARE an integral part of the instrument. Amplifying acoustic guitars is at least second generation, and if you consider buffer, preamp, amplifier, and loudspeaker...without even counting EQ or processors, you're suddenly into four generations of phase shift possibilities away from reality.

                              Ditto vocals.

                              If you really want to test a stereo system, just try listening to spoken voice recordings. And if it's one voice, pan everything full left or full right. When Carl Sandburg or Dylan Thomas spoke, they used only one mouth apiece...

                              You want to know about phase? Well, try this. Stereo doesn't work...

                              Binaural may.

