xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’, Article: “24/192 Music Downloads are Very Silly Indeed”
Porcus
post Mar 14 2012, 07:42
Post #76





Group: Members
Posts: 1842
Joined: 30-November 06
Member No.: 38207



QUOTE (Wombat @ Mar 14 2012, 01:52) *
Where are Adam Savage & Jamie Hyneman when you need them? Time for another show of Mythbusters!!


Oh, no, this will only apply to ordinary consumers' hearing. The fact that they cannot pick up what we have heard proves that audiophiles hear better ...


QUOTE (Wombat @ Mar 14 2012, 01:52) *
We only need some idea how to blow up some audio gear to inspire them :)


Turn up the '40 kHz' button until test subject notices and then watch the tweeter burn? We will have to think up a really creative visual effect to make this look dramatic.




--------------------
One day in the Year of the Fox came a time remembered well
2Bdecided
post Mar 14 2012, 10:33
Post #77


ReplayGain developer


Group: Developer
Posts: 5057
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (drewfx @ Mar 13 2012, 16:43) *
QUOTE (icstm @ Mar 13 2012, 10:43) *
I would suggest that the same applies here: to get the most out of your mixing, i.e. your post-processing, you would want as much information as possible.


No. You would only want as much information as necessary. Any information that, after all mixing/processing/etc., doesn't make it to the (audible portion of the) output only wastes time/resources without adding or improving anything.
What would be a real waste of time/resources would be to try to figure this out for a given mix beforehand ;)

Adding more bits to capture is solving a problem that isn't there. "Adding more bits to processing" would be rather a simplistic thing to support or denounce - it makes a huge difference whether we're talking about the accumulator within an IIR or FIR filter, the coefficients, the pipeline between effects, etc etc etc. Huge numbers of bits have been common in some of these for years.
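
If you want a feel for the accumulator point, here's a toy numpy sketch (arbitrary signal and filter, nothing from any real product): the same FIR convolution, accumulated once in 32-bit and once in 64-bit floats.

CODE
# Toy sketch: identical signal and taps, two accumulator widths.
# The discrepancy between the two results is pure accumulation error.
import numpy as np

rng = np.random.default_rng(0)
signal32 = rng.uniform(-1.0, 1.0, 8_000).astype(np.float32)      # noise burst
taps32 = (np.hanning(2001) / np.hanning(2001).sum()).astype(np.float32)

y32 = np.convolve(signal32, taps32)                # accumulates in float32
y64 = np.convolve(signal32.astype(np.float64),     # accumulates in float64
                  taps32.astype(np.float64))

print(f"worst-case difference: {np.abs(y32 - y64).max():.1e}")   # tiny, nonzero

Inaudible either way here, but it's why the *internal* word lengths of DSP have been generous for years, while 16 bits at the endpoints remains plenty.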

Cheers,
David.
wakibaki
post Mar 14 2012, 15:07
Post #78





Group: Members
Posts: 37
Joined: 23-July 11
Member No.: 92474



QUOTE (xiphmont @ Mar 9 2012, 09:12) *
QUOTE (wakibaki @ Mar 6 2012, 19:54) *
I thought he made good points, but he quoted Meyer and Moran. I just think it's better that we're all informed as to the counter-arguments rather than ending up with egg on our faces. :'(


I'm actually curious as to your specific objection/concern. I've read the various critiques written by detractors of the BAS tests over the years, but too many of those arguments relied on willful obtuseness and eye rolling. I'd like to hear the methodology/implementation critiques from those who nevertheless agreed with the conclusions.

The point has also been made that [in the article] first I argue "ultrasonics hurt fidelity" and then cite M&M, which supposedly undermines the argument because no one could hear a difference. In no way does M&M rebut the assertion that ultrasonics _can_ cause audible distortion. They were using high-end setups designed at expense for audiophile-grade frequency extension, and the results show they obviously weren't affected by audible IMD. Am I missing something else?


Sorry, I haven't looked at this thread for a while.

The suggestion is not that M&M rebuts the assertion that ultrasonics can hurt fidelity, but that it demonstrates ultrasonics did not hurt fidelity.

I don't suggest that the reference to M&M should have been omitted in order not to draw attention to that fact; I merely draw attention to the fact that it demonstrates ultrasonics did not hurt fidelity in the case examined.

The article contends that building to accommodate ultrasonics necessarily sacrifices performance in the audible range. This may be true, but it is not demonstrated that the degradation is audible. Technology, moreover, moves forward apace, so even if there is audible degradation today, that may not always be the case.

All this merely leads to the suggestion that equipment must necessarily be built to a higher standard, i.e. 'designed at expense for audiophile-grade frequency extension'.

While it is probably possible to establish reasonably accurately at what point THD becomes audible, it is preferable in some ways to sidestep any argument by exceeding the threshold of audibility by some margin; in the case of amplifiers, at least, this is technologically feasible. By the same token, we should not resist too strongly exceeding the threshold of audibility in frequency response, where this is feasible without degrading performance to the point where it no longer offers a margin over the threshold of audibility in other areas.

w

This post has been edited by wakibaki: Mar 14 2012, 15:15


--------------------
wakibaki.com
drewfx
post Mar 14 2012, 16:41
Post #79





Group: Members
Posts: 74
Joined: 17-October 09
Member No.: 74078



QUOTE (2Bdecided @ Mar 14 2012, 04:33) *
QUOTE (drewfx @ Mar 13 2012, 16:43) *
QUOTE (icstm @ Mar 13 2012, 10:43) *
I would suggest that the same applies here: to get the most out of your mixing, i.e. your post-processing, you would want as much information as possible.


No. You would only want as much information as necessary. Any information that, after all mixing/processing/etc., doesn't make it to the (audible portion of the) output only wastes time/resources without adding or improving anything.
What would be a real waste of time/resources would be to try to figure this out for a given mix beforehand ;)

Adding more bits to capture is solving a problem that isn't there. "Adding more bits to processing" would be rather a simplistic thing to support or denounce - it makes a huge difference whether we're talking about the accumulator within an IIR or FIR filter, the coefficients, the pipeline between effects, etc etc etc. Huge numbers of bits have been common in some of these for years.

Cheers,
David.


Yes. Higher bit depths (and floating point) and upsampling/downsampling are common during processing and are almost always used today where useful or necessary.

I didn't quote icstm's entire post, but my impression was that he may have been talking not just about during processing, but for recording as well. So my response was more to that (perhaps incorrect) interpretation.
krabapple
post Mar 14 2012, 16:44
Post #80





Group: Members
Posts: 2181
Joined: 18-December 03
Member No.: 10538



There is no point in distributing audio to consumers in a 24bit/192kHz format. The only possible convenience I see is that (AIUI) modern AVRs commonly convert incoming audio signals to 24 (32?) bits before applying DSP. Some also upconvert the sample rate to 96kHz. If the AVR does those functions poorly, providing the audio already 'upconverted' would be a way to avoid degradation. But I have no evidence that AVRs are doing it poorly. And anyway, not everyone wants to use DSP when they play music. Often the upconversion can be turned off by setting the AVR to a 'Pure' mode.

However, there is a fascinating discussion going on on one of the pro audio lists that reminded me of one possible legitimate use of a really high SR ADC for digital *capture* of taped (analog) audio: use of tape bias tones to correct wow and flutter, as is done by Plangent Processes. Such tones can be well into the hundreds of kHz depending on the tape machine originally used to make the recording, so some of these tones are well beyond the capabilities of even 192kHz SR to capture.

It turns out, though, that what PP actually do is use a circuit to 'downshift' the ultra-high-frequency bias tones into a range that can be captured by common sample rates. It's described in scant detail here.
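
To make the 'downshift' idea concrete, here is a generic heterodyne sketch (made-up frequencies, *not* Plangent's actual circuit): multiplying a high-frequency tone by a local oscillator produces sum and difference frequencies, and a low-pass filter then keeps only the difference.

CODE
# Toy heterodyne: a 120 kHz "bias tone" mixed with a 110 kHz oscillator
# yields 10 kHz (difference) and 230 kHz (sum) components.
import numpy as np

fs = 1_000_000                            # pretend 1 MHz wideband capture
t = np.arange(fs) / fs                    # one second; FFT bins = 1 Hz
bias = np.sin(2 * np.pi * 120_000 * t)    # bias tone off the tape
lo = np.sin(2 * np.pi * 110_000 * t)      # local oscillator

mixed = bias * lo
spectrum = np.abs(np.fft.rfft(mixed))
print(sorted(np.argsort(spectrum)[-2:]))  # [10000, 230000]

Low-pass away the 230 kHz sum and the bias tone survives as a 10 kHz signal that any ordinary ADC can record.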

This post has been edited by krabapple: Mar 14 2012, 16:50
icstm
post Mar 15 2012, 16:34
Post #81





Group: Members
Posts: 121
Joined: 25-January 12
Member No.: 96698



QUOTE (drewfx @ Mar 14 2012, 15:41) *
Yes. Higher bit depths (and floating point) and upsampling/downsampling are common during processing and are almost always used today where useful or necessary.

I didn't quote icstm's entire post, but my impression was that he may have been talking not just about during processing, but for recording as well. So my response was more to that (perhaps incorrect) interpretation.
I completely agree about the processing.
What I was saying about the recording (and this may not be true) is that there could be cases where you want to shift information that is above the audible range down, or cases where you wish to expand the difference between two sounds.

The example I was giving was from image processing, where you are trying to change how the highlights are shown. I would have thought HDR photography is analogous to this?

Also, though completely unrelated: if much of the sound energy of keys jangling is above the range of hearing, why would one not try to map this down to better capture this lost power?

If you are going to process above 16/44, even if you are going to play back at 16/44, I would have thought there are cases where recording at 16/44 may not be enough if you are going to post-process?
(or is that rubbish?)
drewfx
post Mar 15 2012, 17:35
Post #82





Group: Members
Posts: 74
Joined: 17-October 09
Member No.: 74078



I'm unaware of any audio processing in general audio production/reproduction where inaudible frequencies are somehow used to enhance audible ones. But perhaps someone else knows of something? If there were, then a higher sampling rate might indeed make some sense.

Now if you wanted to, say, pitch shift ultrasonics down by several octaves, or intentionally create audible intermodulation distortion from inaudible frequencies for some reason, it would indeed make sense to record the higher frequencies. But I'd say those sorts of things would be unusual exceptions that require special procedures, not the general case.
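
For the pitch-shifting case, the crudest version is plain varispeed; a toy sketch with made-up numbers:

CODE
# Varispeed: samples captured at 192 kHz but played back as if they were
# 48 kHz come out four times slower, so every frequency - including
# ultrasonic content - is divided by four.
fs_capture = 192_000
fs_playback = 48_000
factor = fs_capture / fs_playback              # 4.0

for f in (30_000, 40_000, 60_000):             # ultrasonic components
    print(f"{f} Hz plays back at {f / factor:.0f} Hz")
# 30000 -> 7500 Hz, 40000 -> 10000 Hz, 60000 -> 15000 Hz: all audible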

This post has been edited by greynol: Mar 15 2012, 17:45
Reason for edit: Removed unnecessary full quotation of the previous post. Please try to be courteous to other readers, who do not wish to have to read the same thing twice.
greynol
post Mar 15 2012, 17:48
Post #83





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



It should be pretty clear from the title of the discussion that recording and processing are not on-topic. ;)


--------------------
YOUR EYES CANNOT HEAR!!!!!!!!!!!
2Bdecided
post Mar 15 2012, 17:58
Post #84


ReplayGain developer


Group: Developer
Posts: 5057
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (forart.eu @ Mar 15 2012, 13:25) *
So you're substantially claiming that non-ABXable lossy-encoded sound is equal to a lossless one?
If no one can ABX two sounds under any conditions, then they're perceptually equivalent.

QUOTE
If true, then what's the meaning of lossless encoders?

1) lossless existed before lossy.
2) where is this 100% unABX-able lossy encoder?
3) where is this store that guarantees to use a particular lossy encoder that I have decided I'm happy with?
4) lossless is a perfect source for re-mixing, lossy encoding, broadcasting, etc etc - any number of things I might want to do myself.


QUOTE
BTW - once again - my position is that we don't need more Hz, but we need more bits!
You don't need more bits. Which is also explained perfectly in the article this thread is about.

DSD seems to work quite well - 2.8224 MHz / 1 bit. With just 3 or 4 bits it could be essentially perfect (120dB+ SNR, 50kHz+ bandwidth, zero distortion). Aren't dither and noise shaping amazing?
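
To see why, here's a toy numpy sketch (made-up signal levels, nothing to do with any particular converter): quantize a sine only a few LSBs tall, with and without TPDF dither, and look at its 3rd harmonic.

CODE
# Undithered quantization of a tiny sine produces harmonic distortion;
# TPDF dither trades that distortion for a benign, flat noise floor.
import numpy as np

fs, f = 44_100, 1_000
rng = np.random.default_rng(0)
t = np.arange(fs) / fs                         # one second; FFT bins = 1 Hz
x = 1e-4 * np.sin(2 * np.pi * f * t)           # sine about 3 LSBs tall

q = 2.0 ** -15                                 # 16-bit quantizer step
undithered = np.round(x / q) * q
tpdf = (rng.random(fs) - rng.random(fs)) * q   # triangular PDF, +/-1 LSB
dithered = np.round((x + tpdf) / q) * q

for name, y in (("undithered", undithered), ("dithered", dithered)):
    level = 20 * np.log10(np.abs(np.fft.rfft(y))[3 * f] + 1e-12)
    print(f"{name}: 3rd harmonic at {level:.0f} dB")   # dithered is far lower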

Cheers,
David.

This post has been edited by 2Bdecided: Oct 12 2012, 10:09
sld
post Mar 15 2012, 19:09
Post #85





Group: Members
Posts: 1016
Joined: 4-March 03
From: Singapore
Member No.: 5312



QUOTE (forart.eu @ Mar 15 2012, 16:01) *
QUOTE (bandpass @ Mar 15 2012, 08:40) *
I don't think it matters—when you're sitting in front of the speakers, doing an ABX test, you can use any organ of your body you like to help make the determination (you might want to lock the door first though).

Well, in this perspective lossless is useless if you can't ABX lossy... :blink:

No; ABX isn't the only method to evaluate sound quality, IMHO.

BTW, just think about the low-frequency effect on your floor (and then on your feet): even if your ears hear certain frequencies, the listening experience wouldn't be the same if the floor did not vibrate. ;)

ABX the vibrations?
greynol
post Mar 15 2012, 19:14
Post #86





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



It's really a moot point since 16/44.1 can reproduce the same vibrations.


--------------------
YOUR EYES CANNOT HEAR!!!!!!!!!!!
krabapple
post Mar 16 2012, 00:39
Post #87





Group: Members
Posts: 2181
Joined: 18-December 03
Member No.: 10538



QUOTE (forart.eu @ Mar 15 2012, 08:25) *
So you're substantially claiming that non-ABXable lossy-encoded sound is equal to a lossless one?


Try running the concept of 'non-ABX-able lossy' past some codec tweakers - that is, people who have trained themselves to be very sensitive to lossy artifacts so they can improve the codecs.

Even 320kbps LAME is ABX-able, as evidenced by reports here on HA (look up posts by user \mnt, for example). It's just that such listeners are rare. For most people who have reported trying, high bitrates using a decent codec produce lossy versions that are *effectively* indistinguishable from source by ABX. (The source counts too... considering 'killer' samples and all that.)
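
(For anyone unsure how those reports are scored: an ABX result is just a count of correct identifications tested against guessing. A toy sketch, with made-up trial counts:)

CODE
# p-value: the probability of getting at least this many of the trials
# right by coin-flipping (each ABX trial is a 50/50 guess under the null).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"{abx_p_value(14, 16):.4f}")   # 0.0021 - very unlikely to be guessing
print(f"{abx_p_value(9, 16):.4f}")    # 0.4018 - consistent with guessing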


QUOTE
BTW - once again - my position is that we don't need more Hz, but we need more bits!


We don't. More bits are useful during recording and production, far less so in playback at home.

If you've been reading HA regularly since 2001, you should know all this already.

This post has been edited by krabapple: Mar 16 2012, 00:40
icstm
post Mar 16 2012, 11:22
Post #88





Group: Members
Posts: 121
Joined: 25-January 12
Member No.: 96698



QUOTE (greynol @ Mar 15 2012, 16:48) *
It should be pretty clear from the title of the discussion that recording and processing are not on-topic. ;)

Which is why my first post in this thread was that the original article linked from the OP finally answers my earlier post here on HA, where I was asking about the playback format! :rolleyes:
krabapple
post Oct 11 2012, 20:53
Post #89





Group: Members
Posts: 2181
Joined: 18-December 03
Member No.: 10538



xiphmont: evolver's reposting of your article this week

http://evolver.fm/2012/10/04/guest-opinion...-make-no-sense/

led me down the rabbit hole to a post of yours on Slashdot from earlier this year

http://slashdot.org/comments.pl?sid=2857759&cid=40038991

where you take some shots at the AES, e.g.

QUOTE
It's not an attack, it's more a statement of truth. The AES publishes all sorts of things. Papers with interesting ideas and no data (e.g., the J. Dunn 'equiripple filters cause pre-echo' paper, which presents a fascinating insight, even if it doesn't work out in practice), papers with data that are effectively WTFLOL (the famous Oohashi MRI paper), and papers that are more careful controlled studies. It runs the whole gamut on both sides, just as I said.


Just want to point out that there's a substantial difference in terms of peer review between AES convention presentations/publications and JAES publications. Oohashi et al. never made it past convention, as far as I can tell. Their work ended up in a low-impact neurophysiology journal.

This post has been edited by krabapple: Oct 11 2012, 20:54
bandpass
post Oct 12 2012, 05:51
Post #90





Group: Members
Posts: 326
Joined: 3-August 08
From: UK
Member No.: 56644



QUOTE
Papers with interesting ideas and no data (e.g., the J. Dunn 'equiripple filters cause pre-echo' paper, which presents a fascinating insight, even if it doesn't work out in practice)

Dunn refers to R. Lagadec and T. G. Stockham, ‘Dispersive Models for A-to-D and D-to-A Conversion Systems’ for data. Is there data elsewhere to support that it doesn't work out in practice?
Patrunjica
post Oct 13 2012, 23:51
Post #91





Group: Members
Posts: 28
Joined: 25-April 10
Member No.: 80142



Fascinating read; there has been one sticking point for me, though, relating to the Nyquist frequency, and I'd appreciate it if anyone can offer an answer to my question.

Is a sampling rate of 44.1kHz sufficient to accurately reproduce all waves under 20kHz? I ask because I have been doing some tests using wave generators operating at various sampling rates, and I stumbled upon something that confused me greatly, since my understanding of the Nyquist theorem is very limited.

The following is a 13001Hz sine wave generated using SineGen 2.5 at three different sampling rates and then imported into Reaper:

[image: the same 13001Hz sine wave as rendered by Reaper at three sampling rates]
Why doesn't the first sine wave resemble a sine wave anymore, and does this say anything about the resolution necessary to fully and accurately reproduce one in the first place?

This post has been edited by Patrunjica: Oct 13 2012, 23:52
greynol
post Oct 13 2012, 23:59
Post #92





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



Your wave editor is connecting the samples with straight lines rather than with sinc pulses. This is not even remotely how it is supposed to be done either in theory or in practice.

To answer your question, a 44.1kHz sampling rate is perfectly adequate to capture any frequency below 22.05kHz.
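
If you want to see it done right, here's a toy sketch of that sinc ("Whittaker-Shannon") reconstruction using your 13001Hz / 44.1kHz example; truncating to 200 samples is the only reason the error isn't exactly zero.

CODE
# Place a sinc pulse on every sample and sum: the "jagged" samples of a
# 13001 Hz sine come back as a smooth sine on a dense time grid.
import numpy as np

fs, f = 44_100, 13_001
n = np.arange(200)
x = np.sin(2 * np.pi * f * n / fs)                 # the samples

t = np.linspace(80 / fs, 120 / fs, 1_000)          # dense grid, mid-signal
recon = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

err = np.abs(recon - np.sin(2 * np.pi * f * t)).max()
print(f"max reconstruction error: {err:.1e}")      # small; -> 0 as N -> inf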


--------------------
YOUR EYES CANNOT HEAR!!!!!!!!!!!
saratoga
post Oct 14 2012, 00:01
Post #93





Group: Members
Posts: 4844
Joined: 2-September 02
Member No.: 3264



QUOTE (Patrunjica @ Oct 13 2012, 18:51) *
Is a sampling rate of 44.1kHz sufficient to accurately reproduce all waves under 20kHz?


Yes, and more actually.

QUOTE (Patrunjica @ Oct 13 2012, 18:51) *
Why doesn't the first sine wave resemble a sine wave anymore, and does this say anything about the resolution necessary to fully and accurately reproduce one in the first place?


Your software isn't actually trying to draw the waveform that those PCM samples would generate. That's just a linear interpolation of those points, not the PCM waveform. Just ignore it.
Patrunjica
post Oct 14 2012, 00:13
Post #94





Group: Members
Posts: 28
Joined: 25-April 10
Member No.: 80142



QUOTE (saratoga @ Oct 14 2012, 02:01) *
Your software isn't actually trying to draw the waveform that those PCM samples would generate. That's just a linear interpolation of those points, not the PCM waveform.

Figured as much, but I can't quite wrap my head around how exactly a constant PCM sine wave can be generated from what looks to be (and by all accounts should be) chaotic, non-repeating data.

This post has been edited by Patrunjica: Oct 14 2012, 00:16
Wombat
post Oct 14 2012, 00:26
Post #95





Group: Members
Posts: 977
Joined: 7-October 01
Member No.: 235



QUOTE (Patrunjica @ Oct 14 2012, 00:51) *
Why doesn't the first sine wave resemble a sine wave anymore, and does this say anything about the resolution necessary to fully and accurately reproduce one in the first place?


http://www.hydrogenaudio.org/forums/index....nction+audacity

Edit: I answered while Patrunjica had a similar picture and a related question to the thread I linked to. His post was edited while I answered.

Edit2: If the above sentence makes no sense, you are absolutely right! I didn't scroll up to see the pic and didn't realize the answers in between, sorry. Nonetheless, the thread I linked to should help you, Patrunjica.

This post has been edited by Wombat: Oct 14 2012, 00:45
drewfx
post Oct 14 2012, 00:28
Post #96





Group: Members
Posts: 74
Joined: 17-October 09
Member No.: 74078



QUOTE (Patrunjica @ Oct 13 2012, 19:13) *
Figured as much, but I can't quite wrap my head around how exactly a constant PCM sine wave can be generated from what looks to be (and by all accounts should be) chaotic, non-repeating data.


What makes you think it should be chaotic and non-repeating?
saratoga
post Oct 14 2012, 00:46
Post #97





Group: Members
Posts: 4844
Joined: 2-September 02
Member No.: 3264



QUOTE (Patrunjica @ Oct 13 2012, 19:13) *
from what looks to be (and by all accounts should be) chaotic, non-repeating data


Look again. Your points are not chaotic and actually do repeat, even with linear interpolation. You posted several periods. Now fit a sensible function between those points instead of a straight line and it'll repeat like the original sine wave.

If you want to know which function you need to use, look up how PCM works. Or just assume that since PCM very clearly does work, the needed function does in fact exist, and whoever made your sound card and stereo implemented it.
Patrunjica
post Oct 14 2012, 01:13
Post #98





Group: Members
Posts: 28
Joined: 25-April 10
Member No.: 80142



True, but each time they repeat they are different than before. Dividing the sampling rate by that particular frequency results in a number with an infinite number of decimal places; approximating the location of one of the sampling points will influence the position of the next sample point, and so on, so that each time the wave repeats it shifts out of phase relative to the sample points. And since the respective frequency is closer to the Nyquist limit, any such deviation is more noticeable compared to a deviation that might occur at a higher sampling rate.

I know the waveform can be rebuilt, since obviously I can hear it, but I can't help but suspect that it's more of an imperfect reconstruction than pretty much anything lower than an 11025Hz sine.
greynol
post Oct 14 2012, 01:24
Post #99





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



QUOTE (Patrunjica @ Oct 13 2012, 17:13) *
I know the waveform can be rebuilt, since obviously I can hear it, but I can't help but suspect that it's more of an imperfect reconstruction than pretty much anything lower than an 11025Hz sine.

You've placed your doubt in the wrong thing. Why you're bothering to continue down the wrong path instead of acknowledging what has been stated to you very plainly (that your wave editor is not connecting the samples together properly) is a mystery to me.

This post has been edited by greynol: Oct 14 2012, 01:25


--------------------
YOUR EYES CANNOT HEAR!!!!!!!!!!!
splice
post Oct 14 2012, 01:39
Post #100





Group: Members
Posts: 119
Joined: 23-July 03
Member No.: 7935



QUOTE (Patrunjica @ Oct 13 2012, 17:13) *
... I know the waveform can be rebuilt, since obviously I can hear it, but I can't help but suspect that it's more of an imperfect reconstruction than pretty much anything lower than an 11025Hz sine.


Play the waveform back through a digital to analog converter. Look at the resulting waveform with an oscilloscope. It's a smooth sine wave. Magic.

The magic trick is that the D to A converter doesn't just "join the dots". It passes its output through a reconstruction filter that only passes frequencies below half the sampling rate. Say you digitise a 20 kHz signal at a 44.1 kHz sampling rate. That 20 kHz signal can only ever be a sine wave. For it to be any other shape it would have to contain harmonics, and those harmonics would start at 40 kHz. The harmonics would be filtered out at the input to the A to D converter.

When you think about it, any digitised signal above 11.025 kHz must be a sine wave. It may itself be a harmonic of a lower frequency signal, but it won't in turn have any higher harmonics, because they would be above 22.05 kHz.

So if you take your "jagged" join-the-dots line and join the dots with a sine wave curve, you'll find that there's exactly one curve that will join all the dots: one with the same frequency as the original. Drawing that curve is exactly what the D to A reconstruction filter does.
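
If you want to convince yourself there really is only one such curve, a toy sketch (same 13001 Hz example as earlier in the thread): the spectrum of the "jagged" dots contains a single line, so a single sine.

CODE
# One second of a 13001 Hz sine sampled at 44.1 kHz: the FFT of the raw
# sample points shows one spectral line at 13001 Hz and nothing else.
import numpy as np

fs, f = 44_100, 13_001
x = np.sin(2 * np.pi * f * np.arange(fs) / fs)     # the "dots"

spectrum = np.abs(np.fft.rfft(x))                  # bins are 1 Hz here
print(f"peak: {np.argmax(spectrum)} Hz")           # 13001
print(f"next-largest bin, relative: {np.sort(spectrum)[-2] / spectrum.max():.0e}")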

This post has been edited by splice: Oct 14 2012, 01:45


--------------------
Regards,
Don Hills
