More on the world above 20 KHz - a reference


  • #31
    Originally posted by Joe Gwinn View Post
    Some 13-sample random sequences will be really bad, some will be OK, and so on. It's random. There are only 2^13= 8192 such sequences, so one can analyze this by exhaustion. Many of the sequences will be pretty good.

    But anyway, I seriously doubt that Story was limiting himself to 13-sample datasets.



    In a pure mathematical sense, no sample will be exactly identical to the limiting case. But this is hairsplitting - it doesn't take a huge number of samples before the difference between the ideal and the actual becomes too small to matter. And in the analysis of such things, it is standard to use the limiting cases, precisely to avoid becoming entangled in a huge number of slightly imperfect random examples.
    1. I was not talking about 13 baud sequences with random baud length, but rather actual random noise, where the amplitude can vary as well. The number is infinite.

    2. My example has nothing to do with what Story might or might not limit himself to.

    3. No hair splitting here. The spectra of samples of random noise sequences do not become closer to flat as the length is increased. I showed an example with a length of 1024 samples. It is not any flatter than a shorter sequence. And it should not be.

    4. Really? It is standard to use limiting cases? There is no limiting case. Where are you getting this from?



    • #32
      Originally posted by Mike Sulzer View Post
      1. I was not talking about 13 baud sequences with random baud length, but rather actual random noise, where the amplitude can vary as well. The number is infinite.

      2. My example has nothing to do with what Story might or might not limit himself to.

      3. No hair splitting here. The spectra of samples of random noise sequences do not become closer to flat as the length is increased. I showed an example with a length of 1024 samples. It is not any flatter than a shorter sequence. And it should not be.

      4. Really? It is standard to use limiting cases? There is no limiting case. Where are you getting this from?
      Yes, it is standard to use limiting cases. See for instance Papoulis. Or many other standard textbooks.

      Your basic claim is that the spectrum of an impulse and the spectrum of white noise are different. This bears directly on the issue of whether phase matters. Well, the two spectra are not different in any practical sense, so long as the finite sequence isn't trivially short. And one waveform sounds like a click, while the other sounds like a hiss.



      • #33
        Originally posted by Joe Gwinn View Post
        Yes, it is standard to use limiting cases. See for instance Papoulis. Or many other standard textbooks.

        Your basic claim is that the spectrum of an impulse and the spectrum of white noise are different. This bears directly on the issue of whether phase matters. Well, the two spectra are not different in any practical sense, so long as the finite sequence isn't trivially short. And one waveform sounds like a click, while the other sounds like a hiss.
        You claim that the spectrum of a sample of white noise approaches flat, that is, the spectrum of the process, as the length of the sample increases. That is false, as I have shown.

        I am saying that the spectrum of an impulse and the spectrum of a sample of random noise are different. I have shown this to be true.

        That the spectrum of an impulse is flat, and that a random white noise process also has a flat spectrum, are not connected as you think they are. You must compare a waveform to a waveform, not to a process.



        • #34
          To summarize one way to understand the spectrum of a sample of random noise:

          1. Take a sequence of n independent random numbers drawn from the distribution in question (say Gaussian).

          2. Compute the discrete Fourier transform (say with an fft).

          3. Square and add the real and imaginary parts at each frequency.

          4. Note that the result has a chi square distribution with two degrees of freedom.

          #4 is true no matter how large n is. There is no change in the statistics at large n. The spectrum is always as "spikey" as determined by the chi square distribution. It does not approach a flat spectrum. The flat spectrum is that of the process, and its meaning is that the expected value at each frequency is the same.
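          A minimal NumPy sketch of those four steps (the length n = 1024, the seed, and the use of numpy.fft.fft are illustrative assumptions, not from the post):

          import numpy as np

          rng = np.random.default_rng(0)
          n = 1024                        # step 1: n independent Gaussian samples
          x = rng.standard_normal(n)

          X = np.fft.fft(x)               # step 2: discrete Fourier transform
          P = X.real**2 + X.imag**2       # step 3: square and add real and imaginary parts

          # step 4: away from DC and Nyquist, each bin is chi-square with two degrees
          # of freedom; for unit-variance input the bins have mean ~ n and std ~ n,
          # so the spectrum stays "spikey" no matter how large n is
          print(P.mean(), P.std())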



          • #35
            Originally posted by Mike Sulzer View Post
            4. Note that the result has a chi square distribution with two degrees of freedom.

            #4 is true no matter how large n is. There is no change in the statistics at large n. The spectrum is always as "spikey" as determined by the chi square distribution. It does not approach a flat spectrum. The flat spectrum is that of the process, and its meaning is that the expected value at each frequency is the same.
            Which approaches a Gaussian in the limit, as the degrees of freedom exceed something like 30.

            Chi-squared distribution - Wikipedia, the free encyclopedia

            Anyway, there are many books on the analysis of signals and noise. I've mentioned Papoulis. Also classic are Schwartz and Black.



            • #36
              Originally posted by Joe Gwinn View Post
              Which approaches a Gaussian in the limit, as the degrees of freedom exceed something like 30.

              Chi-squared distribution - Wikipedia, the free encyclopedia

              Anyway, there are many books on the analysis of signals and noise. I've mentioned Papoulis. Also classic are Schwartz and Black.
              But there are only two degrees of freedom in the relevant chi-square distribution. Always just two, because the spectrum is made by squaring and adding the real and imaginary parts at each frequency, no matter how many frequencies there are. So you have pointed out something that has no relevance, just as stating that there are many books on this topic has no relevance.



              • #37
                Originally posted by Mike Sulzer View Post
                But there are only two degrees of freedom in the relevant chi-square distribution. Always just two, because the spectrum is made by squaring and adding the real and imaginary parts at each frequency, no matter how many frequencies there are. So you have pointed out something that has no relevance, just as stating that there are many books on this topic has no relevance.
                You know, you're right. I left a step or two out in my haste.

                The basic claim is that the power spectrum of white noise is other than flat, in fact that it is blue. The talk of spiky waveforms implies more power at higher frequencies. But this is not the case:

                Colors of noise - Wikipedia, the free encyclopedia

                The connection is that a power spectrum is more precisely known as a power density spectrum, and the differential elements all add incoherently. By the central limit theorem, the sum (integral) tends to Gaussian.

                Illustration of the central limit theorem - Wikipedia, the free encyclopedia
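                A rough numerical check of that last step (the record length, the band of 200 bins, and the trial count are arbitrary assumptions, not from the post): summing the periodogram over many independent bins gives a band power whose distribution is already close to Gaussian, even though each individual bin is not.

                import numpy as np

                rng = np.random.default_rng(1)
                trials, n, band = 5000, 1024, 200

                # band power: integrate the periodogram over `band` bins, for many
                # independent noise records
                bp = np.empty(trials)
                for i in range(trials):
                    X = np.fft.fft(rng.standard_normal(n))
                    P = X.real**2 + X.imag**2
                    bp[i] = P[1:band + 1].sum()

                # chi-square with 2*band = 400 degrees of freedom has skewness
                # sqrt(8/400), about 0.14, already close to the Gaussian value of 0
                skew = ((bp - bp.mean())**3).mean() / bp.std()**3
                print(skew)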



                • #38
                  Originally posted by Joe Gwinn View Post
                  You know, you're right. I left a step or two out in my haste.

                  The basic claim is that the power spectrum of white noise is other than flat, in fact that it is blue. The talk of spiky waveforms implies more power at higher frequencies. But this is not the case:

                  Colors of noise - Wikipedia, the free encyclopedia

                  The connection is that a power spectrum is more precisely known as a power density spectrum, and the differential elements all add incoherently. By the central limit theorem, the sum (integral) tends to Gaussian.

                  Illustration of the central limit theorem - Wikipedia, the free encyclopedia
                  My claim is that the spectrum of a sample of white noise is spikey, but the expected value at each frequency is the same. No, "spikeyness" does not imply more power at high frequencies. The power spectrum of a sample of random noise is not Gaussian; it is chi-square with two degrees of freedom.

                  If you add together the spectra of many samples of random noise (in order to estimate the spectrum of the process) you tend to Gaussian with a positive mean.
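                  A small sketch of that last point (the record length of 1024 and the 100 averaged spectra are arbitrary assumptions): a single periodogram stays spikey, while the average of many hugs the flat expected value, with a relative spread that shrinks roughly like 1/sqrt(number of spectra).

                  import numpy as np

                  rng = np.random.default_rng(2)
                  n, k = 1024, 100

                  # periodograms of k independent noise records
                  specs = np.empty((k, n))
                  for i in range(k):
                      X = np.fft.fft(rng.standard_normal(n))
                      specs[i] = X.real**2 + X.imag**2

                  single = specs[0]        # one sample spectrum: chi-square(2) per bin, spikey
                  average = specs.mean(0)  # average of k spectra: tends to Gaussian about the flat mean

                  # relative spread: about 1 for the single spectrum, about 1/sqrt(k) for the average
                  print(single.std() / single.mean(), average.std() / average.mean())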



                  • #39
                    Originally posted by Mike Sulzer View Post
                    My claim is that the spectrum of a sample of white noise is spikey, but the expected value at each frequency is the same. No, "spikeyness" does not imply more power at high frequencies. The power spectrum of a sample of random noise is not Gaussian; it is chi-square with two degrees of freedom.

                    If you add together the spectra of many samples of random noise (in order to estimate the spectrum of the process) you tend to Gaussian with a positive mean.
                    Yes. I would hazard that the power spectrum of a sample of white Gaussian noise is also white Gaussian, with positive mean and a sigma that becomes smaller as the number of samples increases. If I recall, sigma varies inversely with the square root of the total number of samples. What matters is the total number of samples, not how they are acquired (one long sequence versus the sum of many shorter sequences).

                    Life is much smoother with thousands of samples.



                    • #40
                      Classic textbooks on the properties and handling of noise

                      For those who wish to dig deeper into the matter of noise spectra et al, there are two classic engineering textbooks that are a good place to start:

                      "Probability, Random Variables, and Stochastic Processes", third edition, Athanasios Papoulis, McGraw-Hill 1991, 666 pages.

                      "Information Transmission, Modulation, and Noise", fourth edition, Mischa Schwartz, McGraw-Hill 1990, 742 pages.

                      In both cases, prior editions are good, and are widely available used for reasonable dollars.

                      These books are intended for engineers, and assume familiarity with complex variables and calculus.



                      • #41
                        Not to be contentious, but how does any of this relate to electronic music? Yeah, the people who design electronics have to take ALL things into consideration while designing, but I think that when they do, they lose sight of what really matters: how it sounds.

                        A lot of what has been done with electronic music devices is change for change's sake, not to improve how people will enjoy what they are hearing.



                        • #42
                          Originally posted by guitician View Post
                          Not to be contentious, but how does any of this relate to electronic music? Yeah, the people who design electronics have to take ALL things into consideration while designing, but I think that when they do, they lose sight of what really matters: how it sounds.
                          Although it may be hard to follow as the technical folk hurl textbooks at one another, the core of the argument is how important phase is to how sound is perceived.



                          • #43
                            Thanks, I know my phaser makes my guitar sound different, but I realize you're talking about minute differences, down into inaudibility, which probably wasn't on anyone's mind in the early days of electronic audio design.



                            • #44
                              Originally posted by Joe Gwinn View Post
                              Yes. I would hazard that the power spectrum of a sample of white Gaussian noise is also white Gaussian, ...
                              So why not check it yourself, since you do not believe the example I calculated some time ago. Take the FFT of a sample of Gaussian random noise of length n and you get Gaussian random noise. But this is not the power spectrum. For that you have to square all n numbers of the real part (which include positive and negative frequencies), square all n numbers of the imaginary part, and add the two. That is the power spectrum of the sample of Gaussian noise. It does not have a Gaussian distribution. Obviously not, since it has a hard lower limit of zero, and a Gaussian cannot. As said before, it is chi-square with two degrees of freedom, not Gaussian. Why do you keep denying this?
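                              One way to run that check (a sketch only; the length n = 4096 and the skewness comparison are assumptions for illustration): the FFT output itself comes out Gaussian-shaped, but the squared-and-added power values are nonnegative and have the skewness of an exponential, i.e. chi-square with two degrees of freedom, not of a Gaussian.

                              import numpy as np

                              rng = np.random.default_rng(3)
                              n = 4096
                              X = np.fft.fft(rng.standard_normal(n))

                              re = X.real                  # Gaussian-shaped: skewness near 0
                              P = X.real**2 + X.imag**2    # power: hard floor at zero

                              def skew(a):
                                  return ((a - a.mean())**3).mean() / a.std()**3

                              # exponential (chi-square, 2 dof) skewness is 2; Gaussian skewness is 0
                              print(skew(re), skew(P), P.min() >= 0.0)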



                              • #45
                                Originally posted by Mike Sulzer View Post
                                So why not check it yourself, since you do not believe the example I calculated some time ago. Take the FFT of a sample of Gaussian random noise of length n and you get Gaussian random noise. But this is not the power spectrum. For that you have to square all n numbers of the real part (which include positive and negative frequencies), square all n numbers of the imaginary part, and add the two. That is the power spectrum of the sample of Gaussian noise. It does not have a Gaussian distribution. Obviously not, since it has a hard lower limit of zero, and a Gaussian cannot. As said before, it is chi-square with two degrees of freedom, not Gaussian. Why do you keep denying this?
                                Because I'm talking about big samples. Lots of unipolar distributions become Gaussian in some limit.

                                And because the original question arose from comparing the spectrum of an impulse to that of white noise.

