More on the world above 20 kHz - a reference


  • #16
    Originally posted by Joe Gwinn View Post
    Sure they do, in the limit.
    There is no limit here. There are just two waveforms. He writes:
    Figure 1 shows two waveforms that have identical (power) spectra, and yet sound very different.
    The lower waveform does not have the same power spectrum as the upper one. To know the exact spectrum of the lower waveform you have to compute it, but you know it is not flat: it is spiky, because that is how the spectra of samples of random noise look, like the example I posted. The claim that the two waveforms sound different even though they have the same power spectrum is not only false but very misleading. He has fooled himself, and thus he can fool others.
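
    A quick way to see this, as a minimal sketch (the length, seed, and use of Gaussian noise here are stand-ins of mine, not Story's actual figure): compare the periodogram of an impulse, which is exactly flat, with the periodogram of a single sample of noise, which swings over tens of dB.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096

    impulse = np.zeros(n)
    impulse[0] = 1.0
    noise = rng.standard_normal(n)

    # power spectra of the two individual waveforms (periodograms)
    p_impulse = np.abs(np.fft.rfft(impulse))**2
    p_noise = np.abs(np.fft.rfft(noise))**2

    # spread of each spectrum in dB
    print(np.ptp(10 * np.log10(p_impulse)))      # 0.0 dB: exactly flat
    print(np.ptp(10 * np.log10(p_noise[1:-1])))  # typically tens of dB: spiky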

    Comment


    • #17
      Originally posted by Joe Gwinn View Post
      Flatness issue addressed in a parallel thread.

      Fact is, we do use random sequences, so long as they are long enough. Barker codes are far too short for modern radars. Some people use Barker codes with complex elements, others combine multiple orthogonal Barker codes in interesting ways, but even so, a million-sample pseudorandom sequence is near-optimal. (Simple shift-register sequences are not usually used for this.)

      Gold codes are also used: Gold code - Wikipedia, the free encyclopedia

      These are bigger than Barker codes, but still can be too short.

      By the way, what does "acf" stand for?
      "acf" stands for autocorrelation function.

      I have no idea what your first sentence means, nor do I see how the rest of your post addresses the issue. The acf of a sample of band-limited random noise does not have values at nonzero lags as low as a good code does. I am familiar with pseudorandom codes as used in CW radars to effectively generate a sequence of uniformly spaced short pulses with a constant, very low sidelobe level of -1, I believe. Not sure if that is what you mean, but if so, it has little to do with a sample of random noise.
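
      (For reference, the -1 sidelobe level comes from the periodic autocorrelation of a maximal-length shift-register sequence. A minimal sketch of the comparison, using scipy's max_len_seq with a length and seed of my own choosing: the m-sequence's circular acf is -1 at every nonzero lag, while a random ±1 sequence of the same length has larger, irregular sidelobes.)

      Code:
      import numpy as np
      from scipy.signal import max_len_seq

      seq, _ = max_len_seq(5)        # 0/1 m-sequence of length 2**5 - 1 = 31
      code = 2.0 * seq - 1.0         # map to +/-1

      def periodic_acf(x):
          # circular autocorrelation via the FFT
          X = np.fft.fft(x)
          return np.real(np.fft.ifft(X * np.conj(X)))

      print(np.round(periodic_acf(code)))   # 31 at lag 0, -1 at every other lag

      rng = np.random.default_rng(0)
      rand = rng.choice([-1.0, 1.0], size=code.size)
      print(np.round(periodic_acf(rand)))   # 31 at lag 0, irregular elsewhere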

      Comment


      • #18
        By the way, I think I did not communicate well the reason for bringing up the code example. It is just this: what would be the significance of using the "limiting" spectrum, that is, flat, in analyzing how well a sample of random noise performs as a code? It implies that such a code would have much smaller far-out sidelobes than the actual acf of a sample of random noise has. That is, the definition fails to match reality. If it fails there, why would it be acceptable in audio? It also fails to predict the actual spectrum in that case. In fact, it is just a conceptual error, and such errors generally get you in trouble.

        Comment


        • #19
          Here are the promised sound files with various delays between the two ears. The number in each mp3 file name gives the delay in samples. Samples are separated by about 23 microseconds. An archive of the original aiff files is also attached. The transient is the same one used as the basis for the chirped waveforms in the discussion of phase effects. I will make another set later using a waveform with more high-frequency content. As the number of samples of relative delay increases, the sound does not stay centered. It takes a lot more than one sample for me, but my hearing is not great in one ear. Others could do much better, but if anyone thinks they can hear one sample, it would be good to set up a double-blind test for verification.

          iet0.mp3, iet1.mp3, iet10.mp3, iet25.mp3, iet50.mp3, iet100.mp3, iet.zip
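
          For anyone who wants to make their own test files, here is a minimal sketch of how such a pair can be generated (the 44.1 kHz rate gives the roughly 23 microsecond sample spacing; the transient, the file names, and the use of WAV rather than mp3/aiff are stand-ins of mine, not the originals):

          Code:
          import numpy as np
          from scipy.io import wavfile

          fs = 44100                    # one sample is about 23 microseconds at this rate
          t = np.arange(0, 0.007, 1 / fs)
          transient = np.sin(2 * np.pi * 2000 * t) * np.hanning(t.size)   # stand-in transient

          def write_delayed_pair(n_samples, fname):
              # the left channel leads the right channel by n_samples samples
              pad = np.zeros(n_samples)
              left = np.concatenate([transient, pad])
              right = np.concatenate([pad, transient])
              stereo = np.int16(np.column_stack([left, right]) * 0.9 * 32767)
              wavfile.write(fname, fs, stereo)

          for n in (0, 1, 10, 25, 50, 100):
              write_delayed_pair(n, f"iet{n}.wav")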

          Comment


          • #20
            Originally posted by Mike Sulzer View Post
            There is no limit here. There are just two waveforms. He writes: ...

            The lower waveform does not have the same power spectrum as the upper one. To know the exact spectrum of the lower waveform you have to compute it, but you know it is not flat: it is spiky, because that is how the spectra of samples of random noise look, like the example I posted. The claim that the two waveforms sound different even though they have the same power spectrum is not only false but very misleading. He has fooled himself, and thus he can fool others.
            What Story is saying is correct, but again it is understood that he is assuming an ergodic process, as is customary in such analyses. Gaussian noise is ergodic.

            Ergodicity - Wikipedia, the free encyclopedia

            Comment


            • #21
              Originally posted by Mike Sulzer View Post
              I have no idea what your first sentence means, nor do I see how the rest of your post addresses the issue. The acf [autocorrelation function] of a sample of band-limited random noise does not have values at nonzero lags as low as a good code does. I am familiar with pseudorandom codes as used in CW radars to effectively generate a sequence of uniformly spaced short pulses with a constant, very low sidelobe level of -1, I believe. Not sure if that is what you mean, but if so, it has little to do with a sample of random noise.
              I was reacting to the assertion that random sequences cannot match Barker codes for low sidelobes around the central peak. For short codes, this is true. For long codes, it is not true, and modern radar systems in fact use long random codes for many things.

              While we speak of "random" sequences, most commonly they are in fact pseudorandom, for practical reasons. But one can just as well use true random sequences, and some systems do just that.

              Gold codes use an orthogonal set of pseudorandom sequences.


              For the audience:

              Orthogonal sequences are those that don't much resemble each other mathematically, and so mixtures of such sequences can easily be separated from one another. The major use of orthogonal sequences is modern cell phones - in effect, each call is using its own sequence.

              Random versus pseudorandom: With a random sequence, one cannot tell what the next value will be in advance. Think of a Geiger counter ticking at random. By contrast, while pseudorandom sequences look (and sound) like random sequences, a pseudorandom sequence is generated according to some complicated rule, and so the next value can be predicted far in advance.

              Sidelobes are the undesired responses away from the peak. Sidelobes are like ghost images in a photograph - they make it harder to see what was photographed.
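
              To put a number on the long-code point, here is a minimal sketch (lengths and seed are arbitrary choices of mine) that measures the ratio of the autocorrelation peak to the largest sidelobe for random ±1 codes of increasing length; the ratio improves roughly as the square root of the length.

              Code:
              import numpy as np
              from scipy.signal import fftconvolve

              rng = np.random.default_rng(1)

              def peak_to_sidelobe_db(code):
                  # full aperiodic autocorrelation; the peak sits at index len(code) - 1
                  acf = fftconvolve(code, code[::-1])
                  peak = acf[code.size - 1]
                  sidelobes = np.delete(acf, code.size - 1)
                  return 20 * np.log10(peak / np.max(np.abs(sidelobes)))

              for n in (13, 127, 1023, 100_000, 1_000_000):
                  code = rng.choice([-1.0, 1.0], size=n)
                  print(f"N = {n:>9}: {peak_to_sidelobe_db(code):5.1f} dB")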

              Comment


              • #22
                I can hear the difference between iet0 and iet1 when I loop the chirp into a continuous loop. But then, a chorus uses 5 ms and that's very noticeable. Trying to find out why something sounds better is useful because it justifies the added expense. I bought DVD-Audio disks when they first came out and was really blown away by how much better they sounded. I used my PC and sound card to listen to them and never bought more than six because I wasn't sure where they were going with it. Sony's SACD was looking like it might win the standards battle.

                Here's an AES paper that was done on DVD-Audio vs DSD: old.hfm-detmold.de/eti/projekte/diplomarbeiten/dsdvspcm/aes_paper_6086.pdf

                Comment


                • #23
                  Joe, you are taking something very simple and making it very complicated. The claim is that the two specific waveforms in Figure 1 have the same power spectrum. They do not, for the reasons described.

                  The reason Story claims they do is another matter. Apparently he does not know that the spectrum of a sample of Gaussian random noise is spiky. He apparently thinks that each sample drawn from the process has the spectrum of the process. This is not true.

                  The spectrum of a process is like a sequence of expected values. Do the values of a sample of a process equal the expected values? Of course not. Take a simple example. Suppose we have a fair coin with +1 on one side and -1 on the other. Make a toss. The expected value is zero, but the sample value is either -1 or 1. The expected value does not even have to be a possible result of a sample of the process!
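
                  The same point in code, as a minimal sketch (the number of tosses is an arbitrary choice): no individual sample ever equals the expected value, and even the sample mean only approaches it as the number of tosses grows.

                  Code:
                  import numpy as np

                  rng = np.random.default_rng(0)

                  tosses = rng.choice([-1, 1], size=10)
                  print(tosses)         # each toss is -1 or +1, never the expected value 0
                  print(tosses.mean())  # a short-run average still wanders away from 0

                  print(rng.choice([-1, 1], size=1_000_000).mean())   # close to 0 for a long run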

                  Originally posted by Joe Gwinn View Post
                  What Story is saying is correct, but again it is understood that he is assuming an ergodic process, as is customary in such analyses. Gaussian noise is ergodic.

                  Ergodicity - Wikipedia, the free encyclopedia

                  Comment


                  • #24
                    Not sure what you mean about the chirp. These waveforms (iet) have not been chirped. There is just a delay between the left and right ears.

                    Originally posted by guitician View Post
                    I can hear the difference between iet0 and iet1 when I loop the chirp into a continuous loop. But then, a chorus uses 5 ms and that's very noticeable. Trying to find out why something sounds better is useful because it justifies the added expense. I bought DVD-Audio disks when they first came out and was really blown away by how much better they sounded. I used my PC and sound card to listen to them and never bought more than six because I wasn't sure where they were going with it. Sony's SACD was looking like it might win the standards battle.

                    Here's an AES paper that was done on DVD-Audio vs DSD: old.hfm-detmold.de/eti/projekte/diplomarbeiten/dsdvspcm/aes_paper_6086.pdf

                    Comment


                    • #25
                      Originally posted by Mike Sulzer View Post
                      Joe, you are taking something very simple and making it very complicated. The claim is that the two specific waveforms in Figure 1 have the same power spectrum. They do not for reasons described.
                      Yeah, but you are missing the forest for one tree. Aside from the odd name (ergodic), the concept isn't complicated: get the spectrum of one very long sequence, versus averaging the spectra of a large number of shorter sequences taken from the same noise source. If the noise source is ergodic, you'll get the same statistical answer. Gaussian is ergodic, so in the limit (of a very large sample set) the impulse and Gaussian (white) noise will have the same spectra. There are textbooks making this very point. I bet it's in Papoulis, if memory serves.

                      In any event, the very definition of white noise is that the average spectrum is flat, just like the spectrum of an impulse. The fact that short samples may deviate from this proves only that the sample is too short for the averages to settle down. In engineering, "Gaussian noise" is understood to be "white Gaussian" noise as described in the following wiki.

                      White noise - Wikipedia, the free encyclopedia
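
                      A minimal sketch of that averaging (segment length and count are arbitrary choices): the periodogram of a single short segment fluctuates wildly, but averaging the periodograms of many segments from the same Gaussian source settles toward the flat spectrum.

                      Code:
                      import numpy as np

                      rng = np.random.default_rng(2)
                      n_segs, seg_len = 1000, 1024
                      noise = rng.standard_normal(n_segs * seg_len)

                      # one short segment: the periodogram swings all over the place
                      single = np.abs(np.fft.rfft(noise[:seg_len]))**2 / seg_len

                      # average the periodograms of all the segments
                      segs = noise.reshape(n_segs, seg_len)
                      averaged = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0) / seg_len

                      print(np.std(single) / np.mean(single))      # about 1: very spiky
                      print(np.std(averaged) / np.mean(averaged))  # about 1/sqrt(1000): nearly flat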

                      The reason Story claims they do is another matter. Apparently he does not know that the spectrum of a sample of Gaussian random noise is spiky. He apparently thinks that each sample drawn from the process has the spectrum of the process. This is not true.
                      I doubt that Story thinks this. See above. And his point about the spectra being the same is correct, and others make the same point in various ways.

                      The spectrum of a process is like a sequence of expected values. Do the values of a sample of a process equal the expected values? Of course not. Take a simple example. Suppose we have a fair coin with +1 on one side and -1 on the other. Make a toss. The expected value is zero, but the sample value is either -1 or 1. The expected value does not even have to be a possible result of a sample of the process!
                      True enough, and if you do it a billion times, the spectrum will be pretty flat.

                      Comment


                      • #26
                        Audio Equipment Testing

                        I found another useful and amusing reference:

                        Audio equipment testing - Wikipedia, the free encyclopedia

                        Sound familiar? The earliest reference is dated 1977, but I bet the arguments started long before, with the invention of the phonograph and the Radiotron Designer's Handbook.

                        Comment


                        • #27
                          I guess I must have been talking about short codes, since the longest Barker code is 13. So you agree with my point: the acf of a code composed of random noise is likely to be much worse than the acf of a 13-baud Barker code, and thus its spectrum is less flat than that of the Barker code, and certainly not completely flat. Then you agree that it is the spectrum of the sample of the random process that counts here, not the spectrum of the random process (flat). If you listen to an audio clip made from the random code, you hear the spectrum of that sample, not the spectrum of the process. The same is true of the lower signal in Story's Figure 1. The spectrum of that sample is not flat either, and therefore it is not the same as the spectrum of the impulse.

                          Originally posted by Joe Gwinn View Post
                          I was reacting to the assertion that random sequences cannot match Barker codes for low sidelobes around the central peak. For short codes, this is true. For long codes, it is not true, and modern radar systems in fact use long random codes for many things.

                          While we speak of "random" sequences, most commonly they are in fact pseudorandom, for practical reasons. But one can just as well use true random sequences, and some systems do just that.

                          Gold codes use an orthogonal set of pseudorandom sequences.


                          For the audience:

                          Orthogonal sequences are those that don't much resemble each other mathematically, and so mixtures of such sequences can easily be separated from one another. The major use of orthogonal sequences is modern cell phones - in effect, each call is using its own sequence.

                          Random versus pseudorandom: With a random sequence, one cannot tell what the next value will be in advance. Think of a Geiger counter ticking at random. By contrast, while pseudorandom sequences look (and sound) like random sequences, a pseudorandom sequence is generated according to some complicated rule, and so the next value can be predicted far in advance.

                          Sidelobes are the undesired responses away from the peak. Sidelobes are like ghost images in a photograph - they make it harder to see what was photographed.

                          Comment


                          • #28
                            Originally posted by Mike Sulzer View Post
                            I guess I must have been talking about short codes, since the longest Barker code is 13. So you agree with my point: the acf of a code composed of random noise is likely to be much worse than the acf of a 13-baud Barker code, and thus its spectrum is less flat than that of the Barker code, and certainly not completely flat.
                            Some 13-sample random sequences will be really bad, some will be OK, and so on. It's random. There are only 2^13 = 8192 such sequences, so one can analyze this by exhaustion. Many of the sequences will be pretty good.

                            But anyway, I seriously doubt that Story was limiting himself to 13-sample datasets.
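
                            That exhaustive check is small enough to run directly. A minimal sketch, tallying the peak aperiodic-autocorrelation sidelobe over all 2^13 binary codes (the Barker-13 codes come out at level 1):

                            Code:
                            import numpy as np
                            from itertools import product
                            from collections import Counter

                            def peak_sidelobe(code):
                                # largest |autocorrelation| away from the lag-0 peak
                                acf = np.correlate(code, code, mode="full")
                                return int(np.max(np.abs(np.delete(acf, len(code) - 1))))

                            tally = Counter(
                                peak_sidelobe(np.array(bits))
                                for bits in product([-1, 1], repeat=13)
                            )
                            print(sorted(tally.items()))   # sidelobe level -> number of codes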

                            Then you agree that it is the spectrum of the sample of the random process that counts here, not the spectrum of the random process (flat). If you listen to an audio clip made from the random code, you hear the spectrum of that sample, not the spectrum of the process. The same is true of the lower signal in Story's Figure 1. The spectrum of that sample is not flat either, and therefore it is not the same as the spectrum of the impulse.
                            In a pure mathematical sense, no sample will be exactly identical to the limiting case. But this is hairsplitting - it doesn't take a huge number of samples before the difference between the ideal and the actual becomes too small to matter. And in the analysis of such things, it is standard to use the limiting cases, precisely to avoid becoming entangled in a huge number of slightly imperfect random examples.

                            Comment


                            • #29
                              Originally posted by Mike Sulzer View Post
                              Not sure what you mean about the chirp. These waveforms (iet) have not been chirped. There is just a delay between the left and right ears.
                              I played the waveform, and to my ears it sounds like a bird's chirp. If two birds were chirping the exact same tone some distance apart, I could tell where they were blindfolded. But those waveforms are really too short (0.007 sec) for me to notice anything happening when listening with headphones.
                              I just love this technical stuff about sound, but I really think audio is a realm of deep mystery, and science has a long way to go.

                              Comment


                              • #30
                                Originally posted by Joe Gwinn View Post

                                I doubt that Story thinks this. See above. And his point about the spectra being the same is correct, and others make the same point in various ways.
                                Then why did he say that the two waveforms have the same spectrum? It is waveforms that you hear, not the spectrum of a process. How can you compare the spectrum of a specific waveform, a band-limited impulse, to the spectrum of a process, a sequence of expected values? You cannot. You have to compare it to another waveform, just as he does, but he has the spectrum wrong.


                                The fact that short samples may deviate from this proves only that the sample is too short for the averages to settle down.
                                No, you misunderstand what is happening here. The spectrum of a sample of random noise is always "spiky": as the length increases, the frequency resolution increases, and the spikiness remains constant. A sample never approaches the spectrum of the process (with the one exception given earlier). Sure, you can average across frequency, but that is not the spectrum of a sample; it is an estimate of the spectrum of the process with limited frequency resolution. The concept of expected value is crucial. That the spectrum of the process is flat is just a statement that the expected values are the same at every frequency. The spectrum of a sample depends on more than just the spectrum of the process, which is a useful description, but one with limited information.
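
                                A minimal sketch of that point (the lengths are arbitrary choices): the relative fluctuation of the periodogram bins stays near 1 no matter how long the sample gets; only averaging across bins brings it down.

                                Code:
                                import numpy as np

                                rng = np.random.default_rng(3)

                                for n in (1024, 65536, 1048576):
                                    x = rng.standard_normal(n)
                                    p = np.abs(np.fft.rfft(x))**2 / n        # periodogram of one sample
                                    print(n, float(np.std(p) / np.mean(p)))  # stays near 1 at every length

                                # averaging 64 adjacent bins trades resolution for smoothness
                                smoothed = p[:len(p) // 64 * 64].reshape(-1, 64).mean(axis=1)
                                print(float(np.std(smoothed) / np.mean(smoothed)))  # about 1/8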

                                Comment
