44 KHz (CD) not enough !? (Nyquist etc.), plethora of distortion frequencies?
zephirus
post May 11 2003, 17:40
Post #1





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



Remarks and conclusions added May 12 2003 - 1:55 PM, and edited May 14 2003 - 08:35 AM:

My dubious claims unfortunately had a very short life span due to the very successful enlightenment efforts of tigre, 2Bdecided, KikeG and mrosscook.

In short: I failed to come up with evidence that cd quality (I mean 44.1 KHz digital sampling) is somehow problematic. It basically was a story of using the wrong tools, jumping to the wrong conclusions, and not having enough of a clue about signal processing.

Nevertheless, I then tried the more modest claim that a 44.1 KHz digital sampling rate is not enough to represent all signals below 22.05 KHz correctly.

And again my claims had a very short life span. This time due to further enlightenment efforts by DonP, 2Bdecided, KikeG, mrosscook and SikkeK.

The conclusion: Arguing against the technical specification of CD quality (44.1 KHz/16 bit) should not be attempted by someone who severely lacks signal-processing clue (like me).

If CD sound quality is perceived as suboptimal, it may have more to do with poor recording, poor mastering, and suboptimal reproduction equipment (i.e. CD player and sound system/headphones).

What one still could try are listening tests:

Such tests would need to be done with one and the same high-end hardware for all signals and all tests (preferably at 192 KHz/20-24 bit, with a DAC that has near-perfect analog circuitry and is perfectly shielded, outside of any system rich in EM interference, like a computer). And when testing the 192 KHz signal against the 44.1 KHz signal, the latter would need to be a digitally downsampled version (to 44.1 KHz) that was then upsampled to 192 KHz again, using the best available algorithms (Cool Edit may do a reasonable job here).

And still, asking the test subjects for audible artifacts would most likely not work at all. It might be more rewarding to let them rate how the music "felt" (e.g. more or less "relaxing", for music that should be "relaxing" but is rich in high-frequency content nonetheless). This could be done in a way that is scientifically sound and statistically relevant.

My original post:

____________

I have to admit: This 44.1 KHz topic more or less has been discussed to death already. It also seems likely that the following problem has been discussed on Hydrogenaudio several times as well (but I had no luck with the search function).

The 44.1 KHz sampling rate (CD quality) seems to create an infinite number of "mirrors" at its harmonics. These in turn create a complex set of distortion frequencies for every frequency in the analog source.

The strongest "mirror" is at 22.05 KHz (44.1 KHz/2). But the problem can easily be demonstrated with the one at 11025 Hz (44.1 KHz/4) as well: if one creates a sine signal of 11025-1000 = 10025 Hz in a sound editor (e.g. Audacity, using a 44.1 KHz sampling rate) and plots the spectrum, then two additional frequencies are shown: one at 1000 Hz and one at 22050-1000 = 21050 Hz. More distortion signals can be seen if the FFT resolution is increased above 1024.

The general problem seems to be that a sampling frequency of 44.1 KHz does not guarantee that frequencies below 22.05 KHz are represented faithfully (as is mostly believed). Instead it probably more or less only guarantees that in the resulting complex signal the source frequency is significantly stronger than the numerous distortion signals.

Of course, the remaining question is if these distortions are audible (they resemble pretty much amplitude modulation). I cannot really test this with 44.1 KHz since I don't have a 96 KHz soundcard. But the example with 11024 Hz surely looks rather disturbing (when looking at the waveform) and doesn't sound very clean either.

Did anyone do any respective (blind) listening tests?

zephirus

PS:
The following example is very audible: When using a sampling frequency of only 2000 Hz (instead of 44100 Hz) and creating a sine frequency of 750 Hz (well below the Nyquist limit of 1000 Hz) then the result sounds pretty ugly (it's some kind of mixed signal of 750 Hz, 250 Hz and 1250 Hz).

This post has been edited by zephirus: May 19 2003, 15:49
tigre
post May 11 2003, 22:59
Post #2


Moderator


Group: Members
Posts: 1434
Joined: 26-November 02
Member No.: 3890



44.1 KHz is enough to represent everything below 22.05 KHz perfectly (unless you're talking about amplitudes high enough to cause clipping, or about quantization distortion/dither noise).

The additional frequencies you see in spectral view are most likely caused by poor / "time-efficient" algorithms used by Audacity to create the spectral view. To prove this you could try to:
1. use another program (e.g. the Cool Edit trial version)
2. upsample using something decent (SSRC, or foobar2000 + diskwriter) to a high sampling rate like 88200 or 96000 Hz and have a look at it with spectral view again. If the additional frequencies show up somewhere else (or not at all), it's proven that they were never really there - otherwise they'd still be there after resampling too.
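
A third check, outside any editor: compute the spectrum directly. This numpy sketch is my own illustration (numpy/FFT scripting was not part of the thread's workflow); it generates the 10025 Hz tone from the first post and shows that the supposed 1000 Hz component is absent from the data itself:

```python
import numpy as np

fs, f, n = 44100, 10025, 44100          # 1 second of signal -> 1 Hz bin spacing
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f * t)

# Windowed FFT; the tone falls exactly on a bin, so leakage is minimal
spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))

# Relative level of the alleged 1000 Hz "distortion" component
print(20 * np.log10(spectrum[1000] / spectrum[f]))  # far below -100 dB
```

If a spectral display shows peaks at 1000 Hz or 21050 Hz for this file, those peaks come from the display's own (cheap) analysis, not from the samples.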

About your "audible" example: What do you use for playback? I could bet it's something that resamples (probably poorly), causing the "ugly" sound. Again: resample using something decent - and be sure you choose a volume for your test tones that can be handled by your soundcard('s driver). If you lower the volume by e.g. 10 dB and it sounds better/fine, you'll know who the culprit is.


--------------------
Let's suppose that rain washes out a picnic. Who is feeling negative? The rain? Or YOU? What's causing the negative feeling? The rain or your reaction? - Anthony De Mello
mrosscook
post May 12 2003, 04:32
Post #3





Group: Members
Posts: 82
Joined: 14-December 02
From: Amherst MA
Member No.: 4077



The most recent thread to flog this issue is here. The posts by DigitalMan and KikeG speak to your aliasing issues, I think.
2Bdecided
post May 12 2003, 12:08
Post #4


ReplayGain developer


Group: Developer
Posts: 5362
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (zephirus @ May 11 2003 - 04:40 PM)
I have to admit: This 44.1 KHz topic more or less has been discussed to death already. It also seems likely that the following problem has been discussed on Hydrogenaudio several times as well (but I had no luck with the search function).

Follow the link that's been posted.

QUOTE
The 44.1 KHz sampling rate (CD quality) seems to create an infinite number of "mirrors" at its harmonics.

Yes. The correct term is "image" - they're images of the original spectrum from 0-22.05 kHz. The 22.05-44.1 kHz one is a reflection, the 44.1-66.15 kHz one is a direct copy, etc.

QUOTE
These in turn create a complex set of distortion frequencies for every frequency in the analog source.


No, they don't. They're filtered out perfectly in a theoretically "ideal" DAC, and well enough in many real-world ones.

QUOTE
The strongest "mirror" is at at 22.05 KHz (44.1 KHz/2).


This is nonsense. Sorry!

QUOTE
But the problem can easily be demonstrated with the one at 11025 Hz (44.1 KHz/4) as well: if one creates a sine signal of 11025-1000 = 10025 Hz in a sound editor (e.g. Audacity, using a 44.1 KHz sampling rate) and plots the spectrum, then two additional frequencies are shown: one at 1000 Hz and one at 22050-1000 = 21050 Hz. More distortion signals can be seen if the FFT resolution is increased above 1024.


You're doing something wrong, or at least, your software isn't working properly.

QUOTE
The general problem seems to be that a sampling frequency of 44.1 KHz does not guarantee that frequencies below 22.05 KHz are represented faithfully (as is mostly believed).


Yes it does. Not everyone follows the full implications of this, but it is true.

QUOTE
PS:
The following example is very audible: When using a sampling frequency of only 2000 Hz (instead of 44100 Hz) and creating a sine frequency of 750 Hz (well below the Nyquist limit of 1000 Hz) then the result sounds pretty ugly (itīs some kind of mixed signal of 750 Hz, 250 Hz and 1250 Hz).


Your sound card probably can't reproduce a 2k-sampled digital audio signal. It is probably making a complete mess of resampling it to some other value, and/or not using the correct filter to remove the high-frequency (i.e. over 1 kHz) image frequencies.

If you get Cool Edit, you can generate a 1 kHz tone sampled at, say, 44.1 kHz. Resample it to 2 kHz. Resample it back to 44.1 kHz (so your sound card can play it). You should find that it survived its little trip through 2 kHz sampling pretty well. If the tone started and ended with a "click" then those clicks may sound different, but the tone shouldn't.
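
For anyone without Cool Edit, the round trip can be sketched in numpy with ideal (brick-wall) FFT resampling - my own stand-in for Cool Edit's resampler, not the thread's method. Note the test tone is set to 900 Hz here: it must sit below, not at, the intermediate Nyquist limit of 1 kHz.

```python
import numpy as np

fs_hi, fs_lo, f = 44100, 2000, 900      # 900 Hz is safely below the 1 kHz limit
t = np.arange(fs_hi) / fs_hi            # exactly one second
x = np.sin(2 * np.pi * f * t)

def fft_resample(sig, n_out):
    """Ideal brick-wall resampling: truncate or zero-pad the spectrum."""
    spec = np.fft.rfft(sig)
    out = np.zeros(n_out // 2 + 1, dtype=complex)
    k = min(len(spec), len(out))
    out[:k] = spec[:k]
    return np.fft.irfft(out, n_out) * (n_out / len(sig))

down = fft_resample(x, fs_lo)           # 44.1 kHz -> 2 kHz
back = fft_resample(down, fs_hi)        # 2 kHz -> 44.1 kHz
print(np.max(np.abs(back - x)))         # essentially zero: the tone survives
```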

Have fun!

Cheers,
David.
zephirus
post May 12 2003, 13:07
Post #5





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (tigre @ May 11 2003 - 01:59 PM)
The additional frequencies you see in spectral view are most likely caused by poor / "time-efficient" algorithms used by Audacity to create the spectral view.

You are right, of course. I don't see these distortion frequencies in the Cool Edit spectrum analyser. Thank you for suggesting Cool Edit!
Nevertheless, I played around with Audacity and Cool Edit some more and now believe I understand all this somewhat better (details in a later post).

QUOTE
44.1KHz is enough to represent everything below 22.05Khz perfectly

I'm very confident that this is not the case. Just try this: Create a sine signal (using e.g. Cool Edit) at 11024 Hz (with a few seconds duration, using a sampling frequency of 44.1 KHz). Then simply look at the waveform (it's obviously strongly amplitude modulated). This is a corner case, admittedly (extremely close to a strong harmonic of the sampling frequency).

QUOTE
About your "audible" example: What do you use for playback? I could bet it's something that resamples (probably poorly) causing the "ugly" sound.

Yes, that 2000/750 Hz example is fishy. Audacity simply doesn't seem to do the necessary upsampling filtering. Cool Edit filters pretty much perfectly.

zephirus
zephirus
post May 12 2003, 13:14
Post #6





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (mrosscook @ May 11 2003 - 07:32 PM)
The most recent thread to flog this issue is here.  The posts by DigitalMan and KikeG speak to your aliasing issues, I think.

Thanks, these posts are very interesting.

But the following assertion of KikeG seems to be simply wrong (see my previous post about creating a sine signal with Cool Edit):

QUOTE
...any decent 44.1 KHz DAC is free of aliasing problems, frequency response problems, phase problems, ripple problems, etc, up to around 21 KHz or more.


zephirus
KikeG
post May 12 2003, 13:33
Post #7


WinABX developer


Group: Developer
Posts: 1578
Joined: 1-October 01
Member No.: 137



QUOTE (zephirus @ May 12 2003 - 01:07 PM)
I'm very confident that this is not the case. Just try this: Create a sine signal (using e.g. Cool Edit) at 11024 Hz (with a few seconds duration, using a sampling frequency of 44.1 KHz). Then simply look at the waveform (it's obviously strongly amplitude modulated). This is a corner case, admittedly (extremely close to a strong harmonic of the sampling frequency).

This apparent amplitude modulation is just a side effect of viewing the samples in the time domain. If you play the tone, you'll hear no amplitude modulation. Surprised? If you zoom a little bit into the wave, you will see how Cool Edit "interpolates" between samples and creates a continuous waveform that is not modulated. This interpolation is more or less the same one that a DAC performs: looking at the analog output of a 44.1 DAC won't show any amplitude modulation in this particular case. You can check it with an oscilloscope: you will see the same kind of continuous shape that Cool Edit interpolates.
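
What the DAC (and Cool Edit's display) does can be imitated with band-limited interpolation: zero-pad the spectrum to oversample 4x. A numpy sketch of my own, not from the thread - the envelope dip visible in the raw 11024 Hz samples disappears once the waveform is actually reconstructed:

```python
import numpy as np

fs, f, n = 44100, 11024, 44100
x = np.sin(2 * np.pi * f * np.arange(n) / fs)

# Ideal band-limited interpolation: zero-pad the spectrum (4x oversampling)
up = np.fft.irfft(np.fft.rfft(x), 4 * n) * 4

def envelope_min(sig, win):
    """Smallest per-window peak - a crude detector for envelope dips."""
    m = len(sig) // win
    return np.abs(sig[:m * win]).reshape(m, win).max(axis=1).min()

print(envelope_min(x, 32))    # ~0.71: the raw samples *look* modulated
print(envelope_min(up, 128))  # ~0.98: the reconstructed wave is essentially flat
```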
tigre
post May 12 2003, 14:16
Post #8


Moderator


Group: Members
Posts: 1434
Joined: 26-November 02
Member No.: 3890



QUOTE (zephirus @ May 12 2003 - 04:07 AM)
... Just try this: Create a sine signal (using e.g. Cool Edit) at 11024 Hz (with a few seconds duration, using a sampling frequency of 44.1 KHz). ...

I've done this already.

[EDIT] The following can't be the reason for "amplitude modulation" of 11024 Hz signal. The (hopefully) correct explanation is given in maths part (*).
___________________________
The following is true for a test tone near to Nyquist limit, e.g. a 22049Hz tone at 44100Hz sampling frequency:
[/EDIT]

The reason for the "amplitude modulation" visible in waveform view is the limited number of samples (the "window") used for calculating the waveform. Try this: Create silence with Cool Edit and change one single sample in the middle to e.g. +30000. Now zoom in so that you can see the waveform between the samples, and zoom in vertically. You'll see that the changed sample affects the displayed waveform over a range of 42 samples. In reality this range should not be 42, of course, but infinite. 42 is probably chosen as a compromise between exactness of the result and the computation power needed.
So if more samples, e.g. 420 or 4200, around a gap between two sample values are taken into account to compute the shape of the waveform in this gap, there won't be any "amplitude modulation" left.

Of course you could choose a "test frequency" of 22049.99 Hz and you'd see "amplitude modulation" again, but I hope you get the point ...
______________
*
OK. Finally some maths: I've created the 11024 Hz signal as you suggested and zoomed in at the max. and min. positions of the so-called "amplitude modulation" to get the sample values:

max.: 0 10361 0 -10361 0 10361 0 -10361 ...
min.: 7326 7326 -7326 -7326 7326 7326 -7326 ...

This corresponds to y=a*sin(alpha) with alpha values:
max: 0° 90° 180° 270° 360° ...; a = 10361/sin(90°) = 10361
min: 45° 135° 225° 315° 405° ...; a = 7326/sin(45°) = 7326*2^(1/2)= 10361

Result: at both positions the amplitude is identical; the reason for the visible "amplitude modulation" must be something Cool Edit-related.

This post has been edited by tigre: May 12 2003, 14:33


zephirus
post May 12 2003, 14:23
Post #9





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (KikeG @ May 12 2003 - 04:33 AM)
QUOTE (zephirus @ May 12 2003 - 01:07 PM)
I'm very confident that this is not the case. Just try this: Create a sine signal (using e.g. Cool Edit) at 11024 Hz (with a few seconds duration, using a sampling frequency of 44.1 KHz). Then simply look at the waveform (it's obviously strongly amplitude modulated). This is a corner case, admittedly (extremely close to a strong harmonic of the sampling frequency).

This apparent amplitude modulation is just a side effect of viewing the samples in the time domain. If you play the tone, you'll hear no amplitude modulation. Surprised? If you zoom a little bit into the wave, you will see how Cool Edit "interpolates" between samples and creates a continuous waveform that is not modulated. This interpolation is more or less the same one that a DAC performs: looking at the analog output of a 44.1 DAC won't show any amplitude modulation in this particular case. You can check it with an oscilloscope: you will see the same kind of continuous shape that Cool Edit interpolates.

You are right, of course. So I'd better retract all my claims.

Seems like just another futile attempt at proving some inferiority of CD-quality sampling. Oh dear.

I checked this in Cool Edit - thanks for your detailed description. So this again was more or less an artifact of the missing filtering/interpolation in Audacity (where it really looks like amplitude modulation at any zoom factor). I should have verified this more thoroughly in Cool Edit, and generally should use more capable programs for such things anyway.

I also upsampled the signal to 192 KHz with Cool Edit. And, as suspected, the result was a clean 11024 Hz signal with no amplitude modulation.
zephirus
post May 12 2003, 17:12
Post #10





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (tigre @ May 12 2003 - 05:16 AM)
OK. Finally some maths: I've created the 11024 Hz signal as you suggested and zoomed in at the max. and min. positions of the so-called "amplitude modulation" to get the sample values:

max.: 0 10361 0 -10361 0 10361 0 -10361 ...
min.: 7326 7326 -7326 -7326 7326 7326 -7326 ...

This corresponds to y=a*sin(alpha) with alpha values:
max: 0° 90° 180° 270° 360° ...; a = 10361/sin(90°) = 10361
min: 45° 135° 225° 315° 405° ...; a = 7326/sin(45°) = 7326*2^(1/2)= 10361

Result: at both positions the amplitude is identical; the reason for the visible "amplitude modulation" must be something Cool Edit-related.

Thanks for your detailed explanations!

I believe I get this. It seems one shouldn't expect the digital sample values to be a visual representation of the analog source signal. Instead, perhaps view the digital values simply as the right sequence of kicks that need to be delivered to the output filter - which then indeed seems to recreate the original signal very well.

QUOTE
The following can't be the reason for "amplitude modulation" of 11024 Hz signal


Perhaps Cool Edit doesn't bother with the interpolation/filtering business if one hasn't zoomed in sufficiently. Then it might just average a rather small number of sample values and calculate the wave amplitudes for display that way (for performance reasons, maybe). Which would work well pretty much always - except in extreme cases like the 11024 Hz signal.

Anyways, all three artifacts I saw/heard were basically due to the missing filtering in Audacity (and my lack of knowledge that this filtering step is absolutely essential, even for just looking at the waveform). And in Cool Edit I obviously should have looked more thoroughly.

QUOTE
Of course you could choose a "test frequency" of 22049.99 Hz


Indeed I did... see next post.

Thanks again, and I hope you found this all not too much of a waste of your time.

zephirus
zephirus
post May 12 2003, 18:22
Post #11





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (2Bdecided @ May 12 2003 - 03:08 AM)
QUOTE

The 44.1 KHz sampling rate (CD quality) seems to create an infinite number of "mirrors" at its harmonics.

Yes. The correct term is "image" - they're images of the original spectrum from 0-22.05 kHz. The 22.05-44.1 kHz one is a reflection, the 44.1-66.15 kHz one is a direct copy, etc.

Thanks for the details (which are new to me). But I believed I was seeing "distortion mirrors" at frequencies below 44.1 KHz (at "sub-harmonics", perhaps).

QUOTE
QUOTE

These in turn create a complex set of distortion frequencies for every frequency in the analog source.

No, they don't. They're filtered out perfectly in a theoretically "ideal" DAC, and well enough in many real-world ones.

QUOTE
This is nonsense. Sorry!

I feel forced to agree.

QUOTE
You're doing something wrong, or at least, your software isn't working properly.

Yes - unfortunately you hit the nail on the head here.

Nevertheless:

QUOTE
If you get Cool Edit, you can generate a 1 kHz tone sampled at, say, 44.1 kHz. Resample it to 2 kHz. Resample it back to 44.1 kHz (so your sound card can play it). You should find that it survived its little trip through 2 kHz sampling pretty well.


Here, at last, you seem to be wrong. A 1000 Hz signal doesn't survive the roundtrip. A 999 Hz signal survives partly, but is very much off. A 995 Hz signal is better. And a 980 Hz signal survives pretty well. Pre/post-filtering was enabled (which generally seems to be a good idea), and conversion quality was set to maximum (very computation-intensive). The filters seemingly need some significant frequency headroom - but not really much.

QUOTE
QUOTE

The general problem seems to be that a sampling frequency of 44.1 KHz does not guarantee that frequencies below 22.05 KHz are represented faithfully (as is mostly believed).

Yes it does. Not everyone follows the full implications of this, but it is true.


22050 Hz seems to be the first frequency that cannot be reproduced (Cool Edit simply produces silence in this case, which basically seems to be correct). Frequencies just below 22050 Hz do not seem to be reproduced correctly either. So at least I have a very minor point. But anything below 20 KHz most likely can be reproduced very well, so this point is irrelevant. Just a little headroom is needed for the output filter.

Anyways, thanks for your detailed reply!

zephirus

This post has been edited by zephirus: May 12 2003, 18:28
zephirus
post May 12 2003, 20:54
Post #12





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



While my original dubious claims obviously had a very short life span, one minor issue remains:

The idea that digital sampling is able to reproduce any frequency up to half the sampling rate still seems to be wrong.

My rationale here is: A very steep output (lowpass cutoff) filter is absolutely necessary for reconstructing the original input signal. Without it, the images in the digital signal remain as aliasing distortions, and these create distortion frequencies well above 22.05 KHz.

But no filter can realistically work without any frequency headroom.

Therefore: some frequency headroom is absolutely needed, and the claim that a sampling rate of X can correctly reproduce any frequency up to X/2 is technically not really true.

But the headroom needed seems to be rather small. Perhaps 2 KHz for a sampling rate of 44.1 KHz. Or 10% of the useful frequency range. So this issue seems to be irrelevant for frequencies up to 20 KHz (with a sampling rate of 44.1 KHz).

Therefore, I still do not have a point. But I'm still trying...

Next time, perhaps.

zephirus
_Shorty
post May 12 2003, 21:42
Post #13





Group: Banned
Posts: 694
Joined: 19-April 02
Member No.: 1820



I thought that's what over-sampling DACs were for.
zephirus
post May 12 2003, 22:39
Post #14





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



QUOTE (_Shorty @ May 12 2003 - 12:42 PM)
I thought that's what over-sampling DACs were for.

Hm... good point.

But it seems: Whatever you do with the digital signal before it becomes an analog output signal simply is some kind of filtering. Oversampling may be a part of it (or not).

Judging from the CPU intensity of Cool Edit's upsampling (or oversampling) algorithm: Upsampling seems to be a very difficult task.

Upsampling/oversampling cannot recreate information that was lost at the time when the source signal was digitally sampled.

And for a sine signal of 22.05 KHz: When sampling it with 44.1 KHz the amplitude of the resulting digital signal is completely random (this can easily be demonstrated - for real this time).
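
That the captured amplitude at exactly fs/2 is pure phase luck follows directly from the sampling maths: sin(pi*n + phase) = (-1)^n * sin(phase). A short numpy check (my own illustration, not from the thread):

```python
import numpy as np

fs = 44100
n = np.arange(100)
for phase in (0.0, np.pi / 4, np.pi / 2):
    x = np.sin(2 * np.pi * (fs / 2) * n / fs + phase)  # = (-1)**n * sin(phase)
    print(round(np.abs(x).max(), 3))
# prints 0.0, then 0.707, then 1.0: the recorded amplitude depends only on phase
```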

For me, it seems inevitable that frequencies below 22.05 KHz suffer from the same problem (but to a lesser extent, of course). But this does not seem to be significant for frequencies below 20 KHz.

Therefore, this all does not seem to be really relevant anyways.

zephirus

PS: And I still believe that CD-quality digital sampling creates subtle but complex and ugly distortions across the complete frequency spectrum. But this belief belongs to the realm of religious beliefs. Unfortunately.
Doctor
post May 12 2003, 22:54
Post #15





Group: Members
Posts: 160
Joined: 16-January 03
Member No.: 4597



An oversampling DAC still needs to lowpass the signal, it's just simpler to implement the brick wall filter in digital domain. The digital filter is almost guaranteed to be finite impulse response.

Suppose you are trying to reproduce a 22049 Hz sine when your DAC is running at 44100 Hz. Digitally, the signal will appear modulated as the phase of the sine very slowly lags behind the sampling. The modulation frequency will be 2 Hz.

Now, if your FIR filter stores 20 thousand samples, it will reproduce the sine perfectly (up to quantization noise). In other words, a perfect brick wall will operate perfectly. But the cost of such a filter, either in hardware or software, is prohibitive. So, both the DAC and the editor software will let the modulation through.

On the other hand, a 20 KHz sine will modulate at 4100 Hz, requiring the filter to average only about 12 samples. This is acceptable.

(I am a little unsure about these numbers. Factor-of-two unsure, not two-orders-of-magnitude unsure.)

In analogue domain the situation is exactly the same. Steep filtering, expensive hardware.

Zephirus is correct that very close to Nyquist limit real-world limitations necessitate distortion. However, the original spec for CD audio, and our knowledge of human hearing, require exact reproduction up to 20 KHz, tops. So a less steep filter that cuts well below 22 KHz is perfectly acceptable and there is nothing to bitch about.
Doctor
post May 12 2003, 22:57
Post #16





Group: Members
Posts: 160
Joined: 16-January 03
Member No.: 4597



The distortions you believe in are probably either quantization noise (ADC) or filter nonlinearity (DAC). They do exist in the entire frequency range, although there are techniques (dithering, noise shaping and good filter design) that can make them practically inaudible.

Ed: sp

This post has been edited by Doctor: May 12 2003, 22:58
KikeG
post May 13 2003, 07:54
Post #17


WinABX developer


Group: Developer
Posts: 1578
Joined: 1-October 01
Member No.: 137



QUOTE (zephirus @ May 12 2003 - 08:54 PM)
The idea that digital sampling is able to reproduce any frequency up to half the sampling rate still seems to be wrong.

A more correct formulation of the Nyquist theorem would be that digital sampling is able to capture and reproduce perfectly any frequency below half the sampling rate. How far below? Any amount below. In theory, with a sampling rate of 40000 Hz you could perfectly capture and reproduce up to 19999.99999... Hz. But this is from a theoretical and mathematical point of view; for that to be possible you need a perfect filter that doesn't exist in reality, just in maths.

To cope with real-world filter limitations you need some headroom, so a sampling rate significantly more than double the maximum frequency is needed. For that reason (and others not related to this issue), a 44100 Hz sampling rate was chosen (rather than just 40000 Hz), so as to be able to reproduce up to around 20000 Hz.
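
The cost of shrinking that headroom can be put in numbers with the standard Kaiser-window estimate for FIR filter length, N ≈ (A - 7.95) / (2.285 * Δω), where A is the stopband attenuation in dB and Δω the transition width in rad/sample. A sketch of my own (the 96 dB target is an illustrative choice, roughly 16-bit quality):

```python
import math

def kaiser_taps(atten_db, transition_hz, fs):
    """Kaiser-window FIR length estimate: N ~ (A - 7.95) / (2.285 * dw)."""
    dw = 2 * math.pi * transition_hz / fs
    return math.ceil((atten_db - 7.95) / (2.285 * dw)) + 1

for headroom_hz in (2050, 250, 25):     # transition band ending at 22050 Hz
    print(headroom_hz, kaiser_taps(96, headroom_hz, 44100))
# a ~2 kHz transition band costs ~130 taps; a 25 Hz one costs over 10000
```

Halving the transition band doubles the filter length, which is why "reproduce up to 20 kHz" is cheap and "reproduce up to 22.04 kHz" is not.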

This post has been edited by KikeG: May 13 2003, 07:57
Canar
post May 13 2003, 08:10
Post #18





Group: Super Moderator
Posts: 3373
Joined: 26-July 02
From: To:
Member No.: 2796



QUOTE (KikeG @ May 12 2003 - 04:33 AM)
If you zoom a little bit into the wave, you will see how Cool Edit "interpolates" between samples and creates a continuous waveform that is not modulated.

How is an "interpolator" like this coded? Can it only be written using FIR filters, or is there some easy way to code it, like a linear interpolator?


--------------------
You cannot ABX the rustling of jimmies.
No mouse? No problem.
tigre
post May 13 2003, 11:15
Post #19


Moderator


Group: Members
Posts: 1434
Joined: 26-November 02
Member No.: 3890



To approximate the waveform between two samples S[0](x[0]/y[0]), S[1](x[1]/y[1]) I'd use a function like

f: y = c0 + c1*x + c2*x^2 + c3*x^3 ... + cn*x^n ; n = odd number; the higher, the more exact the result will be.

Putting the x's and y's for the samples S[0.5-n/2] ... S[0.5+n/2] into it results in a system of linear simultaneous equations. Solve it and you have the c0 ... cn values, so using f will result in a nice curve between S[0] and S[1].

For every c value, n-1 multiplications and n-1 additions need to be done; afterwards the same for every step you want to calculate between S[0] and S[1].

If you want to compute many steps between S[0] and S[1], judging the importance of the c0 ... cn values and maybe disregarding some of them (at least for the steps near S[0]) can save time (because for x -> 0, x^n -> 0 fast, so y -> c0 + c1*x).

I think this is easy to code, but I can't tell how fast it is compared to using FIR filters.
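
The scheme described above - fit a degree-n polynomial through n+1 samples and evaluate it in the gap - is Lagrange interpolation. A numpy sketch on hypothetical values (my own illustration): eight samples of a 1 kHz tone, a degree-7 polynomial fitted through all of them, evaluated halfway between the middle two.

```python
import numpy as np

fs, f = 44100, 1000
idx = np.arange(8.0)                     # eight consecutive sample indices
y = np.sin(2 * np.pi * f * idx / fs)     # their sample values

coeffs = np.polyfit(idx, y, 7)           # degree-7 polynomial through 8 points

mid = 3.5                                # halfway between samples 3 and 4
estimate = np.polyval(coeffs, mid)
truth = np.sin(2 * np.pi * f * mid / fs)
print(abs(estimate - truth))             # tiny: the in-between waveform is recovered
```

This works beautifully for smooth, well-oversampled signals like a 1 kHz tone at 44.1 kHz; near Nyquist the polynomial sees too few points per cycle, which is why resamplers use (windowed) sinc filters instead.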

Sorry... In this field my English is really bad. If it's impossible to understand, tell me - I'll try again using a better dictionary.

This post has been edited by tigre: May 13 2003, 11:18


2Bdecided
post May 13 2003, 11:24
Post #20


ReplayGain developer


Group: Developer
Posts: 5362
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



zephirus,
You were right about the 1kHz tone in the 2kHz system - I hadn't even read the numbers that I typed! As KikeG has said, Nyquist says "below fs/2" - not at fs/2.
EDIT: btw, I agree that CD-quality digital audio isn't enough, though the reasons may be more practical than theoretical. I'm in a small minority here.

Canar
The interpolation is ideally performed with a sinc function. There's a very good site here:
http://ccrma-www.stanford.edu/~jos/resampl...e/resample.html
which goes into detail about the implementation issues.

Cheers,
David.

This post has been edited by 2Bdecided: May 13 2003, 11:28
tigre
post May 13 2003, 11:42
Post #21


Moderator


Group: Members
Posts: 1434
Joined: 26-November 02
Member No.: 3890



Nice link you provided, 2Bdecided.

So it seems like I tried to talk about Lagrange interpolation.

This post has been edited by tigre: May 13 2003, 11:43


Canar
post May 14 2003, 07:35
Post #22





Group: Super Moderator
Posts: 3373
Joined: 26-July 02
From: To:
Member No.: 2796



QUOTE (2Bdecided @ May 13 2003 - 02:24 AM)
Canar
The interpolation is ideally performed with a sinc function. There's a very good site here:
http://ccrma-www.stanford.edu/~jos/resampl...e/resample.html
which goes into detail about the implementation issues.

*whoosh* That's the sound of it all going right over my head. I'll try focussing more and re-reading until I actually get it all. It took me forever to understand what was going on with wavelets when I first started studying them too, but I get 'em now. Hopefully this'll work like that.

I was hoping it'd be a nice simple implementation, but no... it has to go and deal with infinities and things. Icky... Bah. Shoulda figured. It is DSP stuff after all.

Anyhow, thanks for the link. I needed something mind-expanding, and all my usual hookups for such things are either disappearing or depleting.

The whole Lagrange interpolation bit reminds me of dealing with Taylor series.


--------------------
You cannot ABX the rustling of jimmies.
No mouse? No problem.
zephirus
post May 14 2003, 12:23
Post #23





Group: Members
Posts: 16
Joined: 11-May 03
Member No.: 6542



Doctor, KikeG: Thanks for your explanations!

QUOTE (KikeG @ May 12 2003 - 01:54 PM)
In theory, with a sampling rate of 40000 Hz you could perfectly capture and reproduce up to 19999.99999... Hz. But this is from a theoretical and mathematical point of view; in order for that to be possible you need a perfect filter that doesn't exist in reality, just in maths.

I believe I found a rather simple possibility to practically demonstrate and theoretically argue the claim that signal distortions happen well below the Nyquist frequency. Even with an ideal and perfect filter.

A continuous signal of 21800 Hz (with 44.1 KHz sampling, -1.5 dB, 0.5s duration) looks very much amplitude modulated in Cool Edit. An ideal 192 KHz upsampling filter will create a correct (not modulated) signal regardless (the Cool Edit upsampling does a pretty good job here as well in highest quality mode).

But now (before upsampling) let's silence 0.0000-0.0017 and 0.0023-0.0100. What remains is a small snippet between 0.0017 and 0.0023 (with silence around it).

Without the context around this short snippet, no filter on earth (or in the mathematical domain) should be able to know if that short snippet is meant as a low amplitude signal at around 21800 Hz or a full amplitude signal at exactly 21800 Hz (the upsampling filter will go for the wrong interpretation and "smear" the signal as well).

The digital representation of such a short 21800 Hz snippet simply seems to be ambiguous due to the inevitable information loss that occurs when trying to represent the analog source as a digital sequence of numbers at a rate of 44.1 KHz.
(When digitizing anything analog - sound, video, whatever - it's in principle inevitable that many different analog signals lead to the identical digital representation. When converting back to analog, it should be impossible to decide which of the possible source signals was the right one.)

I'm not sure however if it can be successfully argued (with signal theory) that the amplitude loss is irrelevant. But if not:

The major point is that such information loss (and therefore the inevitable distortion) occurs well below the Nyquist frequency.

doctor's post seems to deliver further evidence:
QUOTE
...if your FIR filter stores 20 thousand samples, it will reproduce the sine perfectly...

But even a near perfect filter does not have 20 thousand samples of the signal if it is too short - like the above one.

I also tried this with 21500 Hz, with similar results.

So it seems: the Nyquist theorem only works for long, continuous signals, not for short ones, which are distorted well below the Nyquist frequency - even with mathematically perfect filters.

I wouldn't try to claim that any of this is audible. But now with possibly a slight dent in the Nyquist theorem, the next question would be if such digital ambiguities could be found with more complex source signals with the resulting distortions being well below 20000 Hz.

zephirus

PS: I suppose that Nyquist formulated his theorem for long continuous signals only. So there most likely is not really a dent in his theory.

This post has been edited by zephirus: May 14 2003, 12:35
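The apparent amplitude modulation of a 21800 Hz tone sampled at 44.1 KHz can be checked numerically. Since 21800 Hz sits 250 Hz below Nyquist (22050 Hz), the sample values are exactly an alternating-sign sequence times a 250 Hz sine; that 250 Hz "envelope" is what a connect-the-dots display like Cool Edit's shows, while the continuous signal the samples represent still has constant amplitude. A sketch (illustrative, not from the thread):

```python
import numpy as np

fs = 44100.0
f = 21800.0            # 250 Hz below Nyquist (fs/2 = 22050 Hz)
n = np.arange(1000)
x = np.sin(2 * np.pi * f * n / fs)

# Identity: sin(2*pi*f*n/fs) with f = fs/2 - 250 equals
# (-1)^(n+1) * sin(2*pi*250*n/fs) -- an alternating sequence whose
# magnitude follows a 250 Hz "envelope".
beat = (-1.0) ** (n + 1) * np.sin(2 * np.pi * 250.0 * n / fs)
print(np.max(np.abs(x - beat)))  # essentially zero: the identity is exact
```

So the modulation lives in the sample-dot display, not in the signal: ideal reconstruction returns the constant-amplitude 21800 Hz tone.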
SikkeK
post May 14 2003, 12:36
Post #24





Group: Members
Posts: 48
Joined: 31-October 01
Member No.: 384



I think your snippet has a lot of frequency components above 22.05 kHz...
DonP
post May 14 2003, 13:00
Post #25





Group: Members (Donating)
Posts: 1477
Joined: 11-February 03
From: Vermont
Member No.: 4955



QUOTE (zephirus @ May 14 2003 - 06:23 AM)
The digital representation of such a short 21800 Hz snippet simply seems to be ambiguous due to the inevitable information loss that occurs when trying to represent the analog source as a digital sequence of numbers at a rate of 44.1 KHz.
(When digitizing anything analog - sound, video, whatever - itīs in principle inevitable that many different analog signals lead to the identical digital representation. When converting back to analog, it should be impossible to decide which of the possible source signals was the right one.)

Hoo boy... That's why at school they start with trig and basic waves, then work up through communications systems, hitting Nyquist along the way, rather than just spitting out the 2x sampling rule and leaving it at that.

I risk not backing up far enough, but I'm not planning to write a whole book here.

A pure single frequency by its nature exists for all time. If you want to limit the time of a signal, you
have to introduce other frequencies which sum up to what you want. In other words, during the time
the signal is decaying it is not a pure sine. The shorter this pulse of signal is in relation to its
period (1/frequency), the stronger the other frequency components will be compared to your base
frequency component. The ultimate degenerate case is an impulse, or single instant of non-zero
amplitude, which contains ALL frequencies equally.

Anyhow, if you have just a very few non-zero samples, it will be ambiguous how to reconstruct the signal, but that ambiguity is due to components of the original signal higher than the Nyquist frequency.

This post has been edited by DonP: May 14 2003, 13:02
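DonP's point - that gating a tone down to a short snippet necessarily adds other frequencies, including some above Nyquist - can be illustrated by taking the spectrum of roughly the snippet zephirus describes, analyzed at a high rate so the region above 22.05 kHz is visible (an illustrative sketch; the 192 KHz analysis rate and FFT details are choices made here, not from the thread):

```python
import numpy as np

fs = 192000                          # analyze well above 22.05 kHz
t = np.arange(int(0.01 * fs)) / fs   # 10 ms of signal
x = np.sin(2 * np.pi * 21800 * t)
x[(t < 0.0017) | (t > 0.0023)] = 0.0  # keep only the ~0.6 ms snippet

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
above = spec[freqs > 22050].max()
peak = spec.max()
print(above / peak)  # a substantial fraction: the gated tone is not bandlimited
```

A 0.6 ms rectangular gate spreads the 21800 Hz line over roughly ±1.7 kHz (the main lobe alone), so a large share of its energy sits above 22.05 kHz - which is where the reconstruction ambiguity comes from, not from any failure of the sampling theorem below Nyquist.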
