
Interesting Pickup Design from Fishman


  • #46
    I agree, it is not that hard to demonstrate suitable linearity in a radar receiver. The frequency range above the usually defined audio range in an audio amplifier is much harder. Non-linearity rises very quickly with frequency, but the usual harmonic distortion is not the issue, obviously. The actual problem is not even that easy to define. Let signal A have fast transients with significant components above 20KHz. Signal B is A low-pass filtered to the usual 20KHz bandwidth. Signal C is A after passing through the amplifier with high frequency non-linearity. Signal D is C low-pass filtered by the same filter as B. How does D differ from B? The ear should be able to detect those differences because they are within the audio range. I have never heard of anyone attacking this problem. Have you?
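
    A minimal numerical sketch of that A/B/C/D comparison (Python with NumPy/SciPy; the sample rate, the shape of the transient, and the weak cubic nonlinearity standing in for the amplifier's high-frequency non-linearity are all assumed for illustration, and a static cubic is only a crude stand-in):

```python
# Sketch of the A/B/C/D test described above (all values assumed).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 192_000                                  # sample rate, Hz (assumed)
t = np.arange(int(0.05 * fs)) / fs            # 50 ms

# Signal A: a fast transient with significant content above 20 kHz
A = (np.exp(-t / 0.002) * np.sin(2 * np.pi * 30_000 * t)
     + np.exp(-t / 0.010) * np.sin(2 * np.pi * 3_000 * t))

# The same 20 kHz low-pass filter makes both B and D
sos = butter(8, 20_000, btype="low", fs=fs, output="sos")

B = sosfiltfilt(sos, A)      # Signal B: A band-limited to the audio range
C = A + 0.05 * A**3          # Signal C: A through a weakly nonlinear "amplifier" (assumed model)
D = sosfiltfilt(sos, C)      # Signal D: C through the same low-pass filter

# D differs from B within the audio band: the nonlinearity both distorts
# the in-band content and mixes the ultrasonic content down into the band.
diff = np.sqrt(np.mean((D - B) ** 2)) / np.sqrt(np.mean(B ** 2))
print(f"relative RMS difference between D and B: {diff:.3e}")
```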

    Originally posted by Joe Gwinn View Post
    It's not hard to demonstrate linearity in the electronics, even in the high harmonics. We demonstrate linearity in radar receivers working in the tens of GHz every day.

    The real issue is how much the ear cares about these high harmonics and attack transients. Said another way, how much blurring of the transients is detectable, and how much is acceptable?



    • #47
      Originally posted by Mike Sulzer View Post
      I agree, it is not that hard to demonstrate suitable linearity in a radar receiver. The frequency range above the usually defined audio range in an audio amplifier is much harder. Non-linearity rises very quickly with frequency, but the usual harmonic distortion is not the issue, obviously. The actual problem is not even that easy to define. Let signal A have fast transients with significant components above 20KHz. Signal B is A low-pass filtered to the usual 20KHz bandwidth. Signal C is A after passing through the amplifier with high frequency non-linearity. Signal D is C low-pass filtered by the same filter as B. How does D differ from B? The ear should be able to detect those differences because they are within the audio range. I have never heard of anyone attacking this problem. Have you?
      Not in that form.

      It strikes me that there is an unspoken assumption here, that the ear acts like a perfect 20 KHz low pass filter. This is clearly a simplification, and the question turns on what part of the simplification is in fact an oversimplification.

      And there is another twist: To reproduce transients fully, one requires not just adequate bandwidth, one requires phase linearity as well.



      • #48
        Originally posted by Joe Gwinn View Post
        Not in that form.

        It strikes me that there is an unspoken assumption here, that the ear acts like a perfect 20 KHz low pass filter. This is clearly a simplification, and the question turns on what part of the simplification is in fact an oversimplification.

        And there is another twist: To reproduce transients fully, one requires not just adequate bandwidth, one requires phase linearity as well.
        No, no unspoken assumption here. Nothing I am saying is inconsistent with what is known about normal hearing, for example, the fact that the threshold of hearing (for the young) is lowest at about 4KHz, and is about 20 dB higher at 20KHz, increasing with a slope of many dB per octave at that frequency and above. Much worse for the not so young! It is so steep that anyone proposing that frequencies up to about 100KHz matter must make every effort to show that the effect cannot be from imperfections in the equipment used in the test. I do not believe that anyone has done this.

        Phase: I do not think that getting good amplitude and phase response above 20 KHz is unusual in audio circuits. Linearity is another matter. In any case, I do not see any way that the human ear could be sensitive to phase at such high frequencies.



        • #49
          Originally posted by Mike Sulzer View Post
          No, no unspoken assumption here. Nothing I am saying is inconsistent with what is known about normal hearing, for example, the fact that the threshold of hearing (for the young) is lowest at about 4KHz, and is about 20 dB higher at 20KHz, increasing with a slope of many dB per octave at that frequency and above. Much worse for the not so young! It is so steep that anyone proposing that frequencies up to about 100KHz matter must make every effort to show that the effect cannot be from imperfections in the equipment used in the test. I do not believe that anyone has done this.
          The danger is that word "known". It isn't clear that the known description is complete.

          Phase: I do not think that getting good amplitude and phase response above 20 KHz is unusual in audio circuits. Linearity is another matter. In any case, I do not see any way that the human ear could be sensitive to phase at such high frequencies.
          That is the key dispute. The evidence is that humans seem to be able to detect phase error leading to blurring of transients. Remember, we are looking for the difference between very good and perfect (meaning transparent).

          Another piece of evidence is that I could (in my 20s) detect 26 KHz sound, as could my then semi girlfriend. By the books we all read, this would be impossible. But it was easy, and almost universal.



          • #50
            Originally posted by Joe Gwinn View Post
            The danger is that word "known". It isn't clear that the known description is complete.
            The need for a more complete description is demonstrated with really good evidence that the current description is inadequate. The quality of marginal evidence is not bolstered by claiming that the current description might be incomplete.


            Originally posted by Joe Gwinn View Post
            That is the key dispute. The evidence is that humans seem to be able to detect phase error leading to blurring of transients. Remember, we are looking for the difference between very good and perfect (meaning transparent).

            Another piece of evidence is that I could (in my 20s) detect 26 KHz sound, as could my then semi girlfriend. By the books we all read, this would be impossible. But it was easy, and almost universal.
            Humans vary in nearly all things. Some people hear a bit above 20KHz, some do not ever get there. This has little to do with 100KHz.

            If you have a reference showing that humans can detect phase errors in high speed transient signals, I would love to see it. The tests I am familiar with, some of which you can do yourself easily, show the opposite.



            • #51
              The question now is: "Does anyone around here hear above 20,000Hz?"



              • #52
                Originally posted by David King View Post
                The question now is: "Does anyone around here hear above 20,000Hz?"
                Not any more!
                If it still won't get loud enough, it's probably broken. - Steve Conner
                If the thing works, stop fixing it. - Enzo
                We need more chaos in music, in art... I'm here to make it. - Justin Thomas
                MANY things in human experience can be easily differentiated, yet *impossible* to express as a measurement. - Juan Fahey



                • #53
                  I would give up 2K of high-end hearing response to be able to play like a combination of Chet Atkins, Les Paul, Tommy Emmanuel, Eddie, Yngwie, Eric Johnson, Satch, Joe Bonamassa, Frank Zappa, and the folks who have just floored me walking on the street by the club entrances whose names I will never know.



                  • #54
                    Originally posted by Mike Sulzer View Post
                    The need for a more complete description is demonstrated with really good evidence that the current description is inadequate. The quality of marginal evidence is not bolstered by claiming that the current description might be incomplete.
                    Huh?

                    Humans vary in nearly all things. Some people hear a bit above 20KHz, some do not ever get there. This has little do with 100KHz.
                    There is lots of evidence that humans can sense transients that require more than 20 KHz to accurately reproduce. All the hearing-range tests are performed with sine waves. Such tests cannot explore transient response, as the ear may handle transients in a different pathway than sine waves. Human senses are far more complex than the instruments we use to measure those senses.

                    If you have a reference showing that humans can detect phase errors in high speed transient signals, I would love to see it.
                    Phase errors in high-speed transients? Where did that come from?

                    The tests I am familiar with, some of which you can do yourself easily, show the opposite.
                    The standard tests were conducted by Bell Labs to determine if long-distance telephone circuits (including frequency division common carrier equipment) needed to preserve phase in the voice frequency range, 300 to 3000Hz, the metric being intelligibility of speech. Turns out that phase need not be preserved for speech, which greatly simplified the required long-distance transmission equipment.

                    But we are talking about music, not speech. And telephone circuits just mangle music.

                    Phase linearity preserves waveshape, and there is a school of thought that this is useful in high fidelity. If one also wants to reproduce sharp transients (like attack transients in guitars), one requires both phase linearity and wider than 20 KHz response.
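
                    A rough sketch of the waveshape point (Python with NumPy/SciPy; the impulse test signal and the allpass coefficient are assumed for illustration): a chain of allpass sections has a perfectly flat magnitude response, yet its non-linear phase smears a sharp transient in the time domain.

```python
# Flat magnitude, non-linear phase: an allpass chain smears an impulse.
import numpy as np
from scipy.signal import lfilter

x = np.zeros(2048)
x[100] = 1.0                             # a unit impulse standing in for a sharp attack

a = 0.7                                  # allpass coefficient (assumed)
y = x.copy()
for _ in range(8):                       # cascade a few sections to exaggerate the effect
    y = lfilter([a, 1.0], [1.0, a], y)   # H(z) = (a + z^-1) / (1 + a z^-1), |H| = 1

# Same magnitude spectrum...
X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
print("max magnitude-spectrum difference:", np.max(np.abs(X - Y)))

# ...but the sharp peak has been reduced and spread out in time
print("peak of input :", x.max())
print("peak of output:", y.max())
```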

                    But, as mentioned before, we are talking about small differences, between very good and perfectly transparent.

                    There is a parallel in photography: The equivalent to transparency is when one cannot tell a photograph from a window looking out at the scene. This was achievable even before digital came, by using a large piece of film, at least 4x5 inches (marginal), with 8x10 inches being customary. Why did film area matter? Because film is grainy and random, and the more film area per resolution element, the smoother and more accurate the rendition of intensity and color.



                    • #55
                      Originally posted by Joe Gwinn View Post
                      Huh?
                      In science, the need for a better model is determined by good evidence that the current one is inadequate. The logic must flow in this direction and be kept simple.


                      Originally posted by Joe Gwinn View Post
                      There is lots of evidence that humans can sense transients that require more than 20 KHz to accurately reproduce.
                      Yes, but those transients also contain frequencies less than 20 KHz, which people obviously can hear.

                      Originally posted by Joe Gwinn View Post
                      All the hearing-range tests are performed with sine waves. Such tests cannot explore transient response, as the ear may handle transients in a different pathway than sine waves. Human senses are far more complex than the instruments we use to measure those senses.
                      The brain can only work with what the hearing hardware provides. The basilar membrane resonates at different frequencies as a function of location. The nerves provide the brain with a time history of the vibration amplitudes as a function of location. The time resolution is limited, and thus so is transient analysis. Also, this limits the importance of phase in human hearing, and simple tests verify that it is not very important in general. There is every reason, of course, to believe that the phase in slow transients (dominant components, for example, in the range of the dominant harmonics in musical instruments) is important, especially if the shifts are arbitrary and extreme. (Audibility of Phase Distortion) But fast ones? Beyond the resonances of the basilar membrane and way beyond the effective sampling rate of the nerves?
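
                      One of those simple tests can be sketched as follows (Python with NumPy/SciPy; the fundamental, harmonic count, and amplitudes are assumed for illustration): two harmonic complexes with identical harmonic amplitudes but different phases have the same magnitude spectrum and very different waveforms, and listening to the two WAV files lets you judge how audible the difference is.

```python
# Same harmonic amplitudes, different phases: listen and compare.
import numpy as np
from scipy.io import wavfile

fs, f0, dur = 48_000, 220.0, 2.0
t = np.arange(int(fs * dur)) / fs
harmonics = np.arange(1, 21)                    # 20 harmonics of 220 Hz (assumed)
amps = 1.0 / harmonics                          # sawtooth-like amplitude rolloff

rng = np.random.default_rng(0)
aligned = sum(a * np.sin(2 * np.pi * n * f0 * t) for n, a in zip(harmonics, amps))
scrambled = sum(a * np.sin(2 * np.pi * n * f0 * t + rng.uniform(0, 2 * np.pi))
                for n, a in zip(harmonics, amps))

for name, x in [("aligned.wav", aligned), ("scrambled.wav", scrambled)]:
    wavfile.write(name, fs, (0.5 * x / np.max(np.abs(x))).astype(np.float32))

# The waveforms differ (different crest factors) even though the
# magnitude spectra are identical.
print("crest factor, aligned  :", np.max(np.abs(aligned)) / np.sqrt(np.mean(aligned ** 2)))
print("crest factor, scrambled:", np.max(np.abs(scrambled)) / np.sqrt(np.mean(scrambled ** 2)))
```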



                      Originally posted by Joe Gwinn View Post
                      Phase errors in high-speed transients? Where did that come from?
                      I thought you added that "wrinkle".


                      Originally posted by Joe Gwinn View Post
                      The standard tests were conducted by Bell Labs to determine if long-distance telephone circuits (including frequency division common carrier equipment) needed to preserve phase in the voice frequency range, 300 to 3000Hz, the metric being intelligibility of speech. Turns out that phase need not be preserved for speech, which greatly simplified the required long-distance transmission equipment.

                      But we are talking about music, not speech. And telephone circuits just mangle music.

                      Phase linearity preserves waveshape, and there is a school of thought that this is useful in high fidelity. If one also wants to reproduce sharp transients (like attack transients in guitars), one requires both phase linearity and wider than 20 KHz response.

                      But, as mentioned before, we are talking about small differences, between very good and perfectly transparent.

                      There is a parallel in photography: The equivalent to transparency is when one cannot tell a photograph from a window looking out at the scene. This was achievable even before digital came, by using a large piece of film, at least 4x5 inches (marginal), with 8x10 inches being customary. Why did film area matter? Because film is grainy and random, and the more film area per resolution element, the smoother and more accurate the rendition of intensity and color.
                      Where is the evidence that any manipulation of the above-20KHz components of transients is audible? Where is the evidence that components significantly above 20KHz matter at all?



                      • #56
                        My grandpa spent most of his career at Bell Labs trying to make multiplexed phone connections understandable. The poor fellow was deaf as a doorpost in later years.



                        • #57
                          Well, that's easy - there isn't any consistent, repeatable research that demonstrates audibility of phase distortion within a certain amount, much less that bandwidth in the signal chain above 20 KHz per se matters for reproducing sound. There's more than 60 years of research into audio and human perception of sound and music from an electronic signal and transduction chain and any number of professional (Audio Engineering Society, IEEE) organizations whose journals have reported that research. So as far as published research goes, +1 to Mike.

                          But in fairness to Joe, group delay, transient distortion, and phase "smearing" are very real phenomena in high-fidelity music reproduction systems, and over a certain amount those kinds of things can be audible on a complex signal. Just look at what the loudspeaker design engineers have been doing for quite a while with time alignment of drivers, Klippel testing, etc. Just doing a frequency sweep test with a single frequency and fixed amplitude, as in the early days, did tend to miss some of these things. Wide bandwidth (over 20K on the top end) was a circuit design methodology which allowed the designers of audio electronics, especially in the early transistor days, to produce equipment that "sounded better" even though they really didn't have an objective, measurable set of correlates for why at the time (I am talking mostly about the '50s, '60s, and '70s). I think that a wide-bandwidth claim also helped sell a lot of gear, too. But there were some hits and misses along the way. Just talk to any salty old studio engineer about mixing boards and this subject and you'll probably get an earful.

                          We really, really don't want to get into the same debate which has been going on for decades in the hi-fi enthusiast crowd. There's ruts in that road about a hundred feet deep.

                          If we are looking for some "ultimately clear" instrument signal from guitar and/or bass, relative to this forum and this thread, I am not sure whether that could ever be agreed upon, or even pass a blind A-B test with a reasonable number of listeners -- or even be generally preferred in a musical context.

                          But as I pointed out in an earlier post, I personally would like to get a reasonably clean, flat 20K bandwidth magnetic guitar signal out of the instrument to feed whatever processing I want to do on it, live or recorded.

                          If we can do that, then the differences between "good" and "better" (not perfect) sound are more likely to be in the strings, instrument construction, magnetics, position, sensing distance along the string, etc. than in absolute phase linearity, zero distortion, -120 dB noise floor and 80KHz bandwidth. And after all that, and above all, the differences will depend on the human being who is playing the darn thing.
                          Last edited by charrich56; 01-28-2014, 10:02 PM.



                          • #58
                            Originally posted by Mike Sulzer View Post
                            In science, the need for a better model is determined by good evidence that the current one is inadequate. The logic must flow in this direction and be kept simple.
                            And in art...

                            Originally posted by Mike Sulzer View Post
                            The brain can only work with what the hearing hardware provides. The basilar membrane resonates at different frequencies as a function of location. The nerves provide the brain with a time history of the vibration amplitudes as a function of location. The time resolution is limited, and thus so is transient analysis. Also, this limits the importance of phase in human hearing, and simple tests verify that it is not very important in general. There is every reason, of course, to believe that the phase in slow transients (dominant components, for example, in the range of the dominant harmonics in musical instruments) is important, especially if the shifts are arbitrary and extreme. (Audibility of Phase Distortion) But fast ones? Beyond the resonances of the basilar membrane and way beyond the effective sampling rate of the nerves?
                            My struggle with this statement is that first, if the brain can only work with what the hardware provides, then why can the deaf tell when loud music is being played? Why will inaudible frequencies induce headaches? Why does ultrasound therapy work? I'm not saying these are things we need to worry about in guitar pickup design, but what is ignored by taking science's audio research is how any/all of this behaves under gain/distortion, into a loud tube amp, standing 3 feet from the amp, inducing the bio-feedback loop between the player and the SPL - that's where phase anomalies and information beyond simple "hearing through a clean Fender" have a greater impact than any of us want to admit. The information, as part of gain stages (even clean compression, power tube sag), even if beyond the listener's audible range, will change the character of the sound under gain. Furthermore, it will change how the guitar you're holding in your hands reacts to the SPL in the room.

                            You can take the world's best digital delay, and the player will say the delay sounds different than the note he played. If you recorded both and played them back later, in random orders, the player may then concede the two sound "the same". But the biggest difference is the bio transient received from the guitar itself. The guy banged the chord out, and that sent a shockwave through his body. The interaction with the amp is where phase anomalies AND absolute phase can have an impact on the playing experience. Some people lose their mind saying that there's no way the musician can detect absolute phase (the global phase relationship). But it's just not true. Phase inversion plays a role in the feedback loop between the SPL and the guitar in your hand.

                            Originally posted by Mike Sulzer View Post
                            Where is the evidence that any manipulation of the above-20KHz components of transients is audible? Where is the evidence that components significantly above 20KHz matter at all?
                            It matters to the information within human hearing by affecting the components within the signal path. I can't elaborate without a breach but we've seen and heard it.



                            • #59
                              All the hearing-range tests are performed with sine waves. Such tests cannot explore transient response
                              Yes they can, because *any* transient/pulse/waveform can be decomposed into a combination of sinewaves.

                              Which, if necessary, can be tested one by one and provide a "map" of how you will hear said transient.
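
                              A small sketch of that decomposition (Python with NumPy; the pluck-like test signal is assumed for illustration): the FFT gives the amplitude and phase of each sine-wave component, and summing those sine waves rebuilds the transient exactly.

```python
# Decompose a fast transient into sine waves and rebuild it from them.
import numpy as np

fs = 96_000
N = 1920                                           # 20 ms at 96 kHz
t = np.arange(N) / fs
pluck = np.exp(-t / 0.003) * np.sin(2 * np.pi * 5_000 * t)   # pluck-like transient (assumed)

spectrum = np.fft.rfft(pluck)                      # complex amplitude of each sine component
freqs = np.fft.rfftfreq(N, 1 / fs)

rebuilt = np.zeros_like(pluck)
for k, (X, f) in enumerate(zip(spectrum, freqs)):
    scale = 1.0 if k in (0, N // 2) else 2.0       # DC and Nyquist bins appear once
    rebuilt += scale * np.abs(X) / N * np.cos(2 * np.pi * f * t + np.angle(X))

print("max reconstruction error:", np.max(np.abs(rebuilt - pluck)))
```
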
                              Juan Manuel Fahey



                              • #60
                                Frank, you make some very good points, especially this:
                                It matters to the information within human hearing by affecting the components within the signal path. I can't elaborate without a breach but we've seen and heard it.
                                For example, the distortion of components above 20 KHz affecting the components below 20 KHz in an amp, and thus being indirectly audible, as I described to Joe, above. Or more to the point for electric guitar, the components above 5 KHz in the amp that are not so much directly audible from the speaker, but which nonetheless affect, through very high non-linearity, the components throughout the band below 5 KHz.
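
                                A minimal sketch of that mechanism (Python with NumPy; the tone frequencies and the weak second-order term are assumed for illustration): two tones that are both above 20 KHz pass through a mildly non-linear stage and leave behind a difference tone well inside the audio band.

```python
# Two ultrasonic tones + a weak nonlinearity -> an audible difference tone.
import numpy as np

fs = 192_000
t = np.arange(int(0.1 * fs)) / fs                 # 100 ms
x = np.sin(2 * np.pi * 21_000 * t) + np.sin(2 * np.pi * 25_000 * t)

y = x + 0.1 * x**2                                # weak second-order nonlinearity (assumed)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

audible = (freqs > 100) & (freqs < 20_000)        # ignore DC, look only below 20 kHz
k = np.argmax(spectrum[audible])
print("strongest audible-band product: %.0f Hz" % freqs[audible][k])   # expect 4000 Hz
```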

                                And in art...
                                The art happens inside in response to what happens outside and how the inside works. Science describes what happens outside, but we have no good idea what happens inside.

                                My struggle with this statement is that first, if the brain can only work with what the hardware provides, then why can the deaf tell when loud music is being played?
                                First, you have to figure out all the ways really loud sounds affect the body + brain*. Good luck! (*And everything else in the room, too.)

