
Interesting Pickup Design from Fishman

  • #61
    Originally posted by J M Fahey View Post
    Yes they can, because *any* transient/pulse/waveform can be decomposed into a combination of sinewaves.

    Which, if necessary, can be tested one by one and provide a "map" of how you will hear said transient.
    But if the system is nonlinear, then it is not so simple. There is some evidence that human hearing is sensitive to polarity, what some describe as a 180 degree phase change but which is better called a polarity inversion. The proposed mechanism is a nonlinearity.
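
    To make both points concrete, here is a minimal numpy sketch (the sample rate and pulse shape are arbitrary choices of mine): the FFT rebuilds a transient exactly from its sinewave components, and a polarity flip leaves every component's magnitude untouched, so any audible difference between the two has to come from a nonlinearity somewhere in the chain.

    ```python
    import numpy as np

    fs = 48_000                                   # sample rate in Hz, arbitrary
    t = np.arange(1024) / fs
    pulse = np.exp(-((t - 0.002) / 0.0002) ** 2)  # a short Gaussian transient

    # Fourier view: the pulse is exactly a sum of sinewaves (the FFT bins).
    spectrum = np.fft.rfft(pulse)
    rebuilt = np.fft.irfft(spectrum, n=pulse.size)
    print(np.allclose(pulse, rebuilt))            # True: the decomposition is complete

    # A polarity flip is just a sign change: every component keeps its magnitude
    # and only shifts phase by 180 degrees, so any audible difference implies a
    # nonlinearity somewhere in the playback or hearing chain.
    flipped = -pulse
    print(np.allclose(np.abs(np.fft.rfft(flipped)), np.abs(spectrum)))  # True
    ```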

    Comment


    • #62
      Originally posted by Mike Sulzer View Post
      In science, the need for a better model is determined by good evidence that the current one is inadequate. The logic must flow in this direction and be kept simple.
      Generically true, but specific application is the issue. See following.

      Yes, but those transients also contain frequencies less than 20 KHz, which people obviously can hear.
      True, but there is a school of thought that says that 20 KHz isn't quite enough, and they have their body of evidence.

      The brain can only work with what the hearing hardware provides. The basilar membrane resonates at different frequencies as a function of location. The nerves provide the brain with a time history of the vibration amplitudes as a function of location. The time resolution is limited, and thus so is transient analysis. This also limits the importance of phase in human hearing, and simple tests verify that it is not very important in general. There is every reason, of course, to believe that phase in slow transients (with dominant components in the range of the main harmonics of musical instruments) is important, especially if the shifts are arbitrary and extreme (Audibility of Phase Distortion). But fast ones? Beyond the resonances of the basilar membrane and way beyond the effective sampling rate of the nerves?
      This argument is actually circular, as it posits a model of hearing as evidence that observed exceptions to that model cannot have been observed. This isn't the correct process to follow. One first examines the exceptions to see if they are reproducible, then tries to explain them. Such exercises often cause the then current model to be improved. Every so often, the current model is overthrown.

      I thought you added that "wrinkle".
      While phase linearity does impose a requirement on phase response, it doesn't have to be that precise for transients to be well enough reproduced.

      War story: I designed a system to distribute a time and phase reference signal over a system the size of a large building. This signal contains a pulse per second and a sinewave in the megahertz range. The EEs came up with a big COTS power amplifier to drive the distribution system. It didn't work: the amplifier was designed for noise-like signals, where phase is totally irrelevant, and the pulse-per-second pips were smeared flat.

      Where is the evidence that any manipulation of a transient's components above 20 KHz is audible? Where is the evidence that components significantly above 20 KHz matter at all?
      I'll refer you to the posting by charrich56, #58 in this thread. As he mentions, this debate has long been with us, which implies that the difference isn't very large; but then again, neither is the difference between very good and perfect.

      Comment


      • #63
        Originally posted by Joe Gwinn View Post

        This argument is actually circular, as it posits a model of hearing as evidence that observed exceptions to that model cannot have been observed. This isn't the correct process to follow. One first examines the exceptions to see if they are reproducible, then tries to explain them. Such exercises often cause the then current model to be improved. Every so often, the current model is overthrown.

        No it is not circular, you are just misreading it that way. All I did was state what the current model implies, and then below that ask for observed exceptions. There were no actual specific exceptions under discussion, and so I could not have been attempting to disprove them with an assumed model. Again, you state that there is a body of evidence, but do not give any, referring instead to a post that does not either. As far as I can see, I am the only one who has posted any evidence that messing up the phase can horrendously alter the sound. Did you listen to the two files linked to in the link I provided? Do you understand how the current model I discussed explains that effect, given that there are large phase changes at all frequencies, including the low frequencies?
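
        For anyone who wants to reproduce that sort of demonstration themselves, here is a rough sketch (Python with numpy/scipy assumed; the 220 Hz test tone and file names are my own inventions, not the files behind the link): keep the magnitude of every harmonic, randomize the phases, and the long-term spectrum stays identical while the waveform changes drastically.

        ```python
        import numpy as np
        from scipy.io import wavfile

        fs = 44_100
        t = np.arange(fs) / fs                        # one second of signal
        # A harmonic-rich test tone: harmonics of 220 Hz with 1/k amplitudes.
        tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 40))

        # "Mess up the phase": keep each component's magnitude, randomize its phase.
        spec = np.fft.rfft(tone)
        rng = np.random.default_rng(0)
        phases = rng.uniform(0, 2 * np.pi, spec.size)
        scrambled = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=tone.size)

        # Same long-term magnitude spectrum, very different waveform; listen and compare.
        for name, x in (("original.wav", tone), ("scrambled.wav", scrambled)):
            wavfile.write(name, fs, (0.3 * x / np.max(np.abs(x))).astype(np.float32))
        ```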

        Comment


        • #64
          Originally posted by Mike Sulzer View Post
          No it is not circular, you are just misreading it that way. All I did was state what the current model implies, and then below that ask for observed exceptions. There were no actual specific exceptions under discussion, and so I could not have been attempting to disprove them with an assumed model. Again, you state that there is a body of evidence, but do not give any, referring instead to a post that does not either. As far as I can see, I am the only one who has posted any evidence that messing up the phase can horrendously alter the sound. Did you listen to the two files linked to in the link I provided? Do you understand how the current model I discussed explains that effect, given that there are large phase changes at all frequencies, including the low frequencies?
          Well, I have read a lot of neurobiology articles over the years, largely in Nature and Science, and one thing that comes out quite clearly is that while some models are pretty good, all the models are known to be inadequate, and the focus of much research is to come up with better models.

          Take the simple-appearing matter of encoding visual and aural signals for transmission to the brain. We don't actually know the exact code used to carry such data. Originally, it was thought that a pulse-rate code was enough, but there are proofs that this cannot be correct. Options include pulse timing (and correlation) codes, phase codes, combinations of rate and phase/correlation, and so on. This is not at all settled.
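
          A cartoon of that distinction, nothing more (the numbers are invented and this is not a claim about the actual neural code): in a rate code the stimulus value lives in how many spikes arrive per window, while in a latency/timing code it lives in when a spike arrives.

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          stimulus = 0.7                               # normalized intensity, 0..1
          window = 0.1                                 # observation window in seconds

          # Rate code: intensity is carried only by HOW MANY spikes occur in the window.
          rate_spikes = np.sort(rng.uniform(0, window, rng.poisson(stimulus * 100)))

          # Latency (timing) code: intensity is carried by WHEN the first spike occurs;
          # a stronger stimulus fires earlier, so one spike can already convey the value.
          latency_spike = (1 - stimulus) * 0.02        # seconds after stimulus onset

          print(f"rate code: {rate_spikes.size} spikes in {window * 1000:.0f} ms")
          print(f"latency code: first spike at {latency_spike * 1000:.1f} ms")
          ```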

          As for the ear itself, there was a long-standing mystery that was only recently solved: the measured bandwidth of the basilar membrane resonance was far too wide to explain the frequency acuity of human hearing. Eventually it was discovered that if one measured the frequency response in vivo, one got far sharper responses than those found in fresh but dead preparations. This led to the question of why the difference. It turned out that the ear is regenerative: there is active mechanical amplification on the basilar membrane, involving the hair cells.

          Hair cell - Wikipedia, the free encyclopedia

          Regenerative circuit - Wikipedia, the free encyclopedia
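
          A toy illustration of why that matters (a single resonator with in-phase feedback, chosen by me for simplicity; it is not a model of the cochlea): feeding a fraction g of the output back in raises both the peak gain and the effective Q, so the passive (g = 0) case looks broad while the active case looks sharp, much like the dead versus in vivo measurements.

          ```python
          import numpy as np

          f0, Q = 3000.0, 4.0                  # a deliberately broad passive resonance
          f = np.linspace(1000, 5000, 4001)    # frequency grid in Hz
          w, w0 = 2 * np.pi * f, 2 * np.pi * f0

          # Band-pass resonator with unity gain at f0.
          H = (1j * w * w0 / Q) / (w0**2 - w**2 + 1j * w * w0 / Q)

          def bandwidth(mag):
              """-3 dB bandwidth in Hz of a magnitude response sampled on f."""
              above = f[mag >= mag.max() / np.sqrt(2)]
              return above[-1] - above[0]

          # Regeneration: feed back a fraction g of the output in phase.
          # Closed-loop response = H / (1 - g*H); gain and Q rise sharply as g -> 1.
          for g in (0.0, 0.9, 0.99):
              Hc = H / (1 - g * H)
              print(f"g={g}: peak gain {abs(Hc).max():5.1f}x, "
                    f"-3 dB bandwidth {bandwidth(abs(Hc)):6.1f} Hz")
          ```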

          The hardest sense to understand is olfaction, probably because this is the oldest sense. The brain is far harder.

          As for cites, I no longer remember where I saw these various articles over the years, but I can report any that I come across.

          Comment


          • #65
            Originally posted by Joe Gwinn View Post
            The hardest sense to understand is olfaction, probably because this is the oldest sense.
            It requires arcane methods for understanding, quantum chemistry to be sure.

            Luca Turin advanced the understanding of olfaction by noting that scents correlated strongly with a compound's characteristic vibration modes. This doesn't explain so much as strongly suggest. Before this, fragrance chemists would sift through 1000 compounds for the desirable collection of properties.

            Using ab initio methods to predict vibration frequencies, the field of candidate compounds narrows to 10 or 20 (in fact, his proof of concept was done with a $50 student version of Spartan software). With this method, Turin created a chemically stable vanilla substitute and a non-carcinogenic coumarin fragrance for men's perfumes.
            "Det var helt Texas" is written Nowegian meaning "that's totally Texas." When spoken, it means "that's crazy."

            Comment


            • #66
              Originally posted by salvarsan View Post
              It [olfaction] requires arcane methods for understanding, quantum chemistry to be sure.

              Luca Turin advanced the understanding of olfaction by noting that scents correlated strongly with a compound's characteristic vibration modes. This doesn't explain so much as strongly suggest. Before this, fragrance chemists would sift through 1000 compounds for the desirable collection of properties.
              Very interesting. I had not heard of the connection to vibration spectra, or the effect of deuteration on smell. Sounds like Turin is on to something.

              Comment


              • #67
                The usability or sufficiency of a neural code, and the information that is coded, will be a function of what the sensing organism needs to derive from the information. That is, none of us are simply sensing systems with unlimited capability. We likely have extended bandwidth in some domains beyond what has traditionally been documented, but it sometimes takes exotic circumstances for that to be relevant, and hence to show up in testing as information that humans actively use, whether consciously or not (awareness is not a requirement for it to be actively used).

                I regularly have this conversation with folks who make a big deal over the subtle qualitative aspects of signal processing, cables, and such. One of the points I make to them is that if one is listening to busy wide-bandwidth multi-source program material, such as a clean recording of a large orchestra, the sorting and assignment of harmonic content to each of the various sources, such that they are perceptually separable into instruments with identifiable locations, is a very demanding task, and is aided immensely by phase coherence and the absence of group delay along the processing path. Humans are burdened with having only two ears, yet have to engage in "auditory scene analysis" ( Auditory scene analysis - Wikipedia, the free encyclopedia ) to be able to identify and place multiple sound sources that have been blended together in a manner sending all that content to both ears.

                As such, the aural content that is crucial for achieving that objective will depend on what the listener needs/hopes to achieve, and the difficulty of the task. Are the requirements for engaging in scene analysis of one bongo drum played by the listener in front of them as stringent? I doubt it. Of course, in this instance, the sound-producer's knowledge factors in, whereas in the orchestral recording instance, it can't, changing the demands imposed on the signal itself, and the manner in which the nervous system will integrate different sources of information to derive a coherent image of the world.
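
                To put a rough number on "phase coherence and the absence of group delay," here is a small scipy sketch (the filter types, orders, and cutoffs are arbitrary choices of mine, not anyone's actual signal chain): a linear-phase FIR delays every passband frequency by the same amount, while an elliptic IIR with a similar cutoff spreads the arrival times of a transient's components.

                ```python
                import numpy as np
                from scipy import signal

                fs = 48_000
                w = np.linspace(100, 12_000, 512)    # look only at the audio passband (Hz)

                # Linear-phase FIR low-pass: every frequency is delayed by the same amount,
                # so a transient's components stay time-aligned.
                fir = signal.firwin(numtaps=101, cutoff=15_000, fs=fs)
                _, gd_fir = signal.group_delay((fir, [1.0]), w=w, fs=fs)

                # Elliptic IIR low-pass with the same cutoff: the group delay varies with
                # frequency, so the components of a transient arrive at different times.
                b, a = signal.ellip(6, 0.5, 60, 15_000, fs=fs)
                _, gd_iir = signal.group_delay((b, a), w=w, fs=fs)

                print(f"FIR group delay spread: {(gd_fir.max() - gd_fir.min()) / fs * 1e3:.4f} ms")
                print(f"IIR group delay spread: {(gd_iir.max() - gd_iir.min()) / fs * 1e3:.4f} ms")
                ```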

                So, in short, sometimes the stuff we are all debating here just might matter (if your hearing is good enough), although most of the time it won't. One needs to distinguish between the theoretically possible, and the practical.

                Comment


                • #68
                  Originally posted by Mark Hammer View Post
                  ...One needs to distinguish between the theoretically possible, and the practical.
                  A very good point, one that perhaps can be exploited to learn more about how the system works. For example, many spots are sampled on the multiply resonant basilar membrane. To set a theoretical limit on what this system is capable of, one would assume that the signal processing on each "channel" is as perfect as it can be, given that there must be some limit on the sampling rate of the signal, and one would further assume that the overall processing that assembles the information from all the channels is also perfect. Then one would devise listening tests with signals designed to detect the existence of such a sampling rate, and determine approximately what it is. If these tests are successful, then one could say that the hypothesized theoretical functioning is a practical reality to at least some reasonable degree.

                  The reason for assuming perfect processing is that with at least one other sense, it is extremely good. Have you ever tried to write a computer algorithm to detect an arbitrarily curved line in a noisy image? If so, you know how hard it is to do nearly as well as the eye/brain can.
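
                  For anyone curious how much machinery even the easy version takes, here is a bare-bones straight-line Hough transform in plain numpy (the image size, noise level, and line are all invented; an arbitrarily curved line is a much harder problem): every lit pixel votes for all the lines that could pass through it, and the true line shows up as the largest pile of votes.

                  ```python
                  import numpy as np

                  rng = np.random.default_rng(1)

                  # A noisy binary image containing one faint straight line.
                  H, W = 128, 128
                  img = rng.random((H, W)) < 0.02          # 2% random "on" background pixels
                  for x in range(W):
                      y = int(0.4 * x + 20)
                      if rng.random() < 0.5:               # only half the line pixels survive
                          img[y, x] = True

                  # Straight-line Hough transform: each "on" pixel votes for every
                  # (rho, theta) line through it; the true line is a peak in vote space.
                  thetas = np.deg2rad(np.arange(0, 180))
                  diag = int(np.hypot(H, W))
                  acc = np.zeros((2 * diag, thetas.size), dtype=int)
                  for y, x in zip(*np.nonzero(img)):
                      rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
                      acc[rhos + diag, np.arange(thetas.size)] += 1

                  r, c = np.unravel_index(acc.argmax(), acc.shape)
                  print(f"strongest line: rho={r - diag}, "
                        f"theta={np.rad2deg(thetas[c]):.0f} deg, {acc.max()} votes")
                  ```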

                  Comment


                  • #69
                    The World Beyond 20 KHz

                    One interesting reference I had on my disk with respect to response beyond 20 KHz is the following. I downloaded my copy in 2005. This author believes that 40 KHz is needed, but I've also seen 100 KHz claims.

                    http://www.fullcompass.com/common/fi...yond_20khz.pdf

                    I would have gone directly to the Earthworks website, but Google threw up draconian and specific malware warnings.

                    Edit: The author died in 2002. He invented the DBX noise reduction system. Here is his Wiki entry:

                    http://en.wikipedia.org/wiki/David_E._Blackmer
                    Last edited by Joe Gwinn; 02-05-2014, 06:10 PM. Reason: Add Wiki reference.

                    Comment


                    • #70
                      This is a fascinating discussion.

                      But ... has anyone perusing this thread ever actually <heard> guitar string harmonics over 20 KHz, magnetically transduced to an electrical signal?

                      And ... even assuming all the arguments for bandwidth over 20 KHz, phase linearity, etc. are completely valid, does it really <matter> for this specific signal source?

                      Do we even <want> that ultra-high part of the signal, as I alluded to in an earlier post, or do we have to preserve it in the signal chain to make the sound of the final performance or recording "better"?

                      Is there any "air" up there when it comes to the guitar?

                      What is "complete realism" when the signal we are talking about is synthetic (and/or not existing at a hearable level more than a couple of feet away) to start with, and limited to picking up a small area of the vibrating strings?

                      I would love to see some comments on this.

                      Comment


                      • #71
                        Joe, you provided the information earlier that shows that this quote from the paper you refer to is now known to be wrong:

                        The outer hair cells clearly do something else, but what?

                        There are about 12,000 'outer' hair cells arranged in three or four rows. There are four times as many outer hair cells as inner hair cells(!) However, only about 20% of the total available nerve paths connect them to the brain. The outer hair cells are interconnected by nerve fibers in a distributed network. This array seems to act as a waveform analyzer, a low-frequency transducer, and as a command center for the super fast muscle fibers (actin) which amplify and sharpen the travelling waves which pass along the basilar membrane, thereby producing the comb filter. It also has the ability to extract information and transmit it to the analysis centers in the olivary complex, and then on to the cortex of the brain where conscious awareness of sonic patterns takes place. The information from the outer hair cells, which seems to be more related to waveform than frequency, is certainly correlated with the frequency domain and other information in the brain to produce the auditory sense.
                        The natural vibrations of the basilar membrane are too weak to give adequate SNR in the sensing and transmission process. The SNR can be enhanced by thousands of local sensors/transducers, the outer hairs, that act as electromechanical amplifiers for the vibrations. This is explained here (Hearing and Hair Cells). It is the destruction of the outer hair cells that causes deafness from loud sounds. Strong waves on the basilar membrane simply break the cells.

                        Then there are those two tweeters that supposedly differ in their response only above 20 KHz, and that explains why they sound different, even though he cannot hear above 20 KHz. And he can hear the effect even though he has not demonstrated that there was any source material above 20 KHz in what he listened to. Crap!
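
                        If anyone wants to check that last point on real material, here is a rough sketch (the file name is hypothetical, and it only makes sense on a recording sampled well above 44.1 kHz, e.g. 96 kHz): integrate the power spectrum above 20 kHz and compare it to the total.

                        ```python
                        import numpy as np
                        from scipy.io import wavfile

                        # Hypothetical file; it must be a high-sample-rate recording or there
                        # is nothing above ~22 kHz to measure in the first place.
                        fs, x = wavfile.read("source_material_96k.wav")
                        x = x.astype(float)
                        if x.ndim > 1:
                            x = x.mean(axis=1)               # fold stereo to mono

                        # Windowed power spectrum of the whole file.
                        spec = np.abs(np.fft.rfft(x * np.hanning(x.size))) ** 2
                        freqs = np.fft.rfftfreq(x.size, d=1 / fs)

                        above = spec[freqs > 20_000].sum() / spec.sum()
                        print(f"fraction of signal energy above 20 kHz: {above:.2e}")
                        ```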

                        Originally posted by Joe Gwinn View Post
                        One interesting reference I had on my disk with respect to response beyond 20 KHz is the following. I downloaded my copy in 2005. This author believes that 40 KHz is needed, but I've also seen 100 KHz claims.

                        http://www.fullcompass.com/common/fi...yond_20khz.pdf

                        I would have gone directly to the Earthworks website, but Google threw up draconian and specific malware warnings.

                        Edit: The author died in 2002. He invented the DBX noise reduction system. Here is his Wiki entry:

                        David E. Blackmer - Wikipedia, the free encyclopedia

                        Comment


                        • #72
                          Originally posted by Mike Sulzer View Post
                          Joe, you provided the information earlier that shows that this quote from the paper you refer to is now known to be wrong:

                          The natural vibrations of the basilar membrane are too weak to give adequate SNR in the sensing and transmission process. The SNR can be enhanced by thousands of local sensors/transducers, the outer hairs, that act as electromechanical amplifiers for the vibrations. This is explained here (Hearing and Hair Cells). It is the destruction of the outer hair cells that causes deafness from loud sounds. Strong waves on the basilar membrane simply break the cells.

                          Then there are those two tweeters that supposedly differ in their response only above 20 KHz, and that explains why they sound different, even though he cannot hear above 20 KHz. And he can hear the effect even though he has not demonstrated that there was any source material above 20 KHz in what he listened to. Crap!
                          Whoa! The bit about regeneration was discovered recently, so Blackmer (who died in 2002) may not have ever known. But look into the literature just before any breakthrough, and you will find lots of incorrect statements.

                          But are you saying that Blackmer didn't hear what he claims to have heard? He was not exactly a tin-ear nobody in audio.

                          Again there is an air of circularity here. What's needed are double-blind tests to see if the effect survives and is reproducible.
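
                          The statistics of such a test are simple enough. A minimal sketch (Python with scipy; the 16-of-20 score is an invented example, not data): count how often the listener correctly identifies X in an ABX trial and ask how likely that score would be under pure guessing.

                          ```python
                          from scipy.stats import binomtest

                          # ABX scoring: each trial presents A, B, and X (a hidden copy of A or B),
                          # and the listener says which one X is. Pure guessing gives 50% correct.
                          trials, correct = 20, 16          # made-up numbers for illustration

                          result = binomtest(correct, trials, p=0.5, alternative="greater")
                          print(f"{correct}/{trials} correct; p = {result.pvalue:.4f} "
                                "(chance of doing this well or better by guessing alone)")
                          ```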

                          More generally, you are insisting that it's totally settled, but clearly it is not, given the number of articles to the contrary.

                          Human senses are not simple, although we try to use simple models to describe them. This is OK so long as one keeps the limitations in mind.


                          My favorite definition of an expert is someone who knows what part of the theory is wrong.

                          Comment


                          • #73
                            Funny that I'd assumed that DBX was associated with Dolby. Were the Earthworks mics really good? I knew live recording nuts that owned nothing but never saw them in a studio.

                            Comment


                            • #74
                              Originally posted by Joe Gwinn View Post
                              Whoa! The bit about regeneration was discovered recently, so Blackmer (who died in 2002) may not have ever known. But look into the literature just before any breakthrough, and you will find lots of incorrect statements.
                              I did not imply it was known. The source of my amusement is that he starts out by saying it is unknown, but by the end of the paragraph has convinced himself (if not the reader) that he has solved it.
                              Originally posted by Joe Gwinn View Post
                              But are you saying that Blackmer didn't hear what he claims to have heard? He was not exactly a tin-ear nobody in audio.

                              Again there is an air of circularity here. What's needed are double-blind tests to see if the effect survives and is reproducible.
                              I am not sure what you mean, but of course I believe he heard the difference between two different speakers. When have two different kinds of speakers ever sounded the same? What is funny is that, given the ubiquitous difference between different speakers, he attributes it in this case to the response above 20 KHz.
                              Originally posted by Joe Gwinn View Post

                              More generally, you are insisting that it's totally settled, but clearly it is not, given the number of articles to the contrary.

                              Human senses are not simple, although we try to use simple models to describe them. This is OK so long as one keeps the limitations in mind.


                              My favorite definition of an expert is someone who knows what part of the theory is wrong.
                              I did not say it was totally settled. But I do suspect that all those other articles that you mention have their flaws as well.

                              Comment


                              • #75
                                Originally posted by David King View Post
                                Funny that I'd assumed that DBX was associated with Dolby. Were the Earthworks mics really good? I knew live recording nuts that owned nothing [else?] but never saw them in a studio.
                                I don't doubt that Earthworks mics worked well, regardless of the correctness of Blackmer's theories, simply because the art of microphone design is well understood, and he was a big dog in audio and would not produce something that didn't sound good to him.

                                Studio mics are also chosen for a number of non-acoustic reasons, like weapons-grade robustness.

                                Comment
