Jeff Atwood's "Great MP3 Bitrate Experiment", from the Coding Horror blog
kinnerful
post Jun 25 2012, 17:28
Post #1





Group: Members
Posts: 7
Joined: 3-March 09
Member No.: 67563



http://lifehacker.com/5920793/the-great-mp...rate-experiment
http://www.codinghorror.com/blog/2012/06/t...experiment.html

I guess lifehacker has a larger audience than hydrogenaudio... could be interesting
JJZolx
post Jun 25 2012, 17:40
Post #2





Group: Members
Posts: 396
Joined: 26-November 04
Member No.: 18345



Anyone who would pay a kid $1 per CD to rip their music collection and then encode it in a lossy format to save a little hard drive space can't be very bright. Ten or fifteen years ago ... maybe it made some sense, but certainly not in today's world of disk space for under $1/GB.
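JJZolx's "$1/GB" point is easy to sanity-check with back-of-envelope arithmetic. The figures below (per-CD size, FLAC compression ratio, disk price) are illustrative assumptions, not measurements from any real collection:

```python
# Back-of-envelope cost of storing a ripped CD collection losslessly.
# All constants here are rough, assumed figures for illustration.

CD_WAV_MB = 640      # uncompressed audio on a typical full CD, in MB
FLAC_RATIO = 0.6     # FLAC commonly shrinks CD audio to roughly 50-70%
PRICE_PER_GB = 1.00  # the "under $1/GB" figure from the post, in USD

def lossless_cost(num_cds: int) -> float:
    """Approximate cost in dollars to store num_cds CDs as FLAC."""
    total_gb = num_cds * CD_WAV_MB * FLAC_RATIO / 1024
    return total_gb * PRICE_PER_GB

# Under these assumptions, a 500-CD collection needs under 200 GB,
# i.e. well under $200 of disk -- lossy encoding saves very little money.
```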
db1989
post Jun 25 2012, 17:57
Post #3





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



QUOTE
The point of this exercise is absolutely not piracy; I have no interest in keeping both digital and physical copies of the media I paid for the privilege of ~~owning~~ temporarily licensing.
I totally agree about CDs being redundant after ripping, but I'm always wary that some might class this as piracy. Setting aside complicated and often retroactive analyses of copyright and fair-use regulations, this is a nice compromise:
QUOTE
I'll donate all the ripped CDs to some charity or library

or
QUOTE
and if I can't pull that off, I'll just destroy them outright. Stupid atoms!
Hahaha.

Back to the main topic: I'm very interested in the results of this, though I imagine they won't be surprising to people at Hydrogenaudio! In any case, anything that can publicise the effectiveness of perceptual encoding, and possibly debunk a good few myths, to a large readership is very welcome.
Canar
post Jun 27 2012, 20:07
Post #4





Group: Super Moderator
Posts: 3352
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



http://www.codinghorror.com/blog/2012/06/c...experiment.html

QUOTE
Running T-Test and Analysis of Variance (it's in the spreadsheet) on the non-insane results, I can confirm that the 128kbps CBR sample is lower quality with an extremely high degree of statistical confidence. Beyond that, as you'd expect, nobody can hear the difference between a 320kbps CBR audio file and the CD. And the 192kbps VBR results have a barely statistically significant difference versus the raw CD audio at the 95% confidence level. I'm talking absolutely wafer thin here.


Seems pretty well-done. Thanks to Zao for the link.
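For readers curious what "running T-Test" on the spreadsheet amounts to, here is a minimal sketch of Welch's t statistic in pure Python. The rating vectors below are invented for illustration; they are not Atwood's actual spreadsheet data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples of ratings
    (unequal variances allowed)."""
    va, vb = variance(a), variance(b)   # sample variances
    na, nb = len(a), len(b)
    se = math.sqrt(va / na + vb / nb)   # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical 1-5 quality ratings for two of the encodes:
ratings_128 = [2, 3, 2, 1, 3, 2, 2, 3]
ratings_320 = [4, 3, 4, 5, 3, 4, 4, 3]
t = welch_t(ratings_320, ratings_128)
# A large |t| (compared against the t distribution with the Welch
# degrees of freedom) suggests the mean ratings genuinely differ.
```

In practice one would hand this to a library routine (e.g. a two-sample t-test with unequal variances) rather than hand-rolling it; the sketch just shows what quantity is being computed.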

This post has been edited by Canar: Jun 27 2012, 20:08


--------------------
You cannot ABX the rustling of jimmies.
No mouse? No problem.
greynol
post Jun 27 2012, 20:24
Post #5





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



QUOTE
Beyond that, as you'd expect, nobody can hear the difference between a 320kbps CBR audio file and the CD.

I wouldn't necessarily have such an expectation considering there are members here who can (or at least claim to be able to) regularly ABX 320 CBR against lossless on normal music samples.
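For context, such ABX claims are usually judged against chance with a one-sided binomial test: what is the probability of getting at least k of n trials right by guessing alone? A sketch (the 14/16 threshold mentioned in the comment is a common forum convention, not something from Atwood's test):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided probability of scoring at least `correct` out of
    `trials` ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 14/16 correct gives p = 137/65536, roughly 0.002 -- comfortably below
# the 0.05 level usually demanded as evidence of an audible difference.
```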

This post has been edited by greynol: Jun 27 2012, 22:29


--------------------
Concern trolls: not a myth.
halb27
post Jun 28 2012, 09:22
Post #6





Group: Members
Posts: 2424
Joined: 9-October 05
From: Dormagen, Germany
Member No.: 25015



For a mere bitrate comparison it's a pity that VBR 128 kbps wasn't tested.
Current Lame CBR 128 seems to be suboptimal as was found in 3.98's time (-V5 -b128 -B128 being better than CBR 128 on the sample examined then). Lame 3.99 development did not improve upon CBR behavior AFAIK.

This post has been edited by halb27: Jun 28 2012, 09:26


--------------------
lame3100m -V1 --insane-factor 0.75
2Bdecided
post Jun 28 2012, 10:57
Post #7


ReplayGain developer


Group: Developer
Posts: 5089
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE
Lately I've been trying to rid my life of as many physical artifacts as possible.
This is clearly a mental disease, and should be recognised as such. It's the opposite of hoarding (which is what I have!). Up to a point, both are rational responses to some facets of life - the former to the lack of space in modern housing, or the number of times people move these days with the associated hassle of having lots of "stuff" to pack, unpack, arrange, etc; the latter to the transitory nature of parts of life, and the realisation that some things which you thought would always be available, won't be, unless you keep them yourself. Both attitudes to life become a problem (almost a mental health issue) if they start to take over your life or impede more important parts of your life.

QUOTE
Ripping to uncompressed audio is a non-starter. I don't care how much of an ultra audio quality nerd you are, spending 7 or 5 times the bandwidth and storage for completely inaudible "quality" improvements is a dagger directly in the heart of this efficiency-loving nerd, at least.
If you're choosing to keep your own audio files (which itself could be considered eccentric by some in the age of Spotify etc), it's easy to rationalise the need to keep something that will transition to whatever formats/devices arrive in the future - especially when the cost is so low. Hence it's easy to justify FLAC. If you have shorter term goals, it probably doesn't matter.

Encoding to mp3 today is like recording to a decent cassette tape a couple of decades ago. It's a pretty good substitute for the original CD or vinyl - but fast forward 20 years and you'll probably wish you had the original CD or vinyl to make a pristine transfer. FLAC is that pristine transfer.
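The "7 or 5" multiplier in the quoted post checks out against the raw CD bitrate; a quick arithmetic sketch:

```python
# CD audio: 44100 Hz x 16 bits x 2 channels = 1411.2 kbps.
CD_KBPS = 44100 * 16 * 2 / 1000

def size_ratio(mp3_kbps: float) -> float:
    """How many times larger raw CD audio is than an MP3 at mp3_kbps."""
    return CD_KBPS / mp3_kbps

# Raw CD is about 7.35x a 192 kbps MP3 and about 4.4x a 320 kbps MP3,
# consistent with the "7 or 5 times" figure. (FLAC, at roughly 50-70%
# of the raw size, narrows but does not close that gap.)
```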


I heard a DJ last week who should have used lossless. He was DJing for a kids dancing competition. His CD player failed to read one of the kid's CDs. No problem - he had the same track on his laptop. Problem was, the kid was using the version without vocals (for reasons that will become apparent), and he only had the vocal version. Ah, no problem again - the vocal cut feature in the software would sort that. If only it hadn't been an mp3. Vocal cut only works (sometimes) on the highest quality mp3s, and this one wasn't. The vocal bled through as horrible mp3 artefacts. It was so bad that he gave up and switched the vocal cut off. Just at the point where the lyrics said something like "...and you're no fucking use to me..." - as the five year old girl continued through her dancing routine. I doubt they'll be using that DJ again.


Fascinating comparison though. It's always surprising how good even low-ish bitrate mp3 sounds. Better than 99% of digital radio and TV in most of the world. And of course I use it every day.

Cheers,
David.

P.S. It's interesting that some of the comments suggest this track is a bad choice for a codec test because it's old. While it's hardly a "codec killer" for the current lame mp3 implementation, the stereo effects, tape noise, soft transients, and synths are all things that mp3 encoders have choked on in the past. Plus the dynamic range of parts of this track make a welcome change from modern pop which is trashed by dynamic range compression whether you use lossy compression or not. The fact that some of the effects on the raw track sound a bit like codec artefacts doesn't help in a non-referenced comparison like this, but for typical codec testing that's often the kind of thing that makes the codec misbehave. So, historically at least, this is a pretty good mp3 test track.
lvqcl
post Jun 28 2012, 11:32
Post #8





Group: Developer
Posts: 3357
Joined: 2-December 07
Member No.: 49183



QUOTE (halb27 @ Jun 28 2012, 12:22) *
Current Lame CBR 128 seems to be suboptimal as was found in 3.98's time (-V5 -b128 -B128 being better than CBR 128 on the sample examined then). Lame 3.99 development did not improve upon CBR behavior AFAIK.

Changelog for 3.99 beta 0:
QUOTE
All encoding modes use the PSY model from new VBR code, addresses Bugtracker item [ 3187397 ] Strange compression behavior

However, I'm under the impression that it sometimes does more harm than good, at least at low bitrates.
krabapple
post Jun 28 2012, 20:21
Post #9





Group: Members
Posts: 2215
Joined: 18-December 03
Member No.: 10538



QUOTE (2Bdecided @ Jun 28 2012, 05:57) *
P.S. It's interesting that some of the comments suggest this track is a bad choice for a codec test because it's old. While it's hardly a "codec killer" for the current lame mp3 implementation, the stereo effects, tape noise, soft transients, and synths are all things that mp3 encoders have choked on in the past. Plus the dynamic range of parts of this track make a welcome change from modern pop which is trashed by dynamic range compression whether you use lossy compression or not. The fact that some of the effects on the raw track sound a bit like codec artefacts doesn't help in a non-referenced comparison like this, but for typical codec testing that's often the kind of thing that makes the codec misbehave. So, historically at least, this is a pretty good mp3 test track.



Would you mind posting this over there? The ignorance and snobbery in the comments there really beg for a response. When people insist that only an orchestral or symphonic work will do as a 'real' test of a codec, I can't help recalling that some famous 'codec killers' consisted of solo harpsichord, castanets, or entirely synthetic club music.

This post has been edited by krabapple: Jun 28 2012, 20:22
mjb2006
post Jun 28 2012, 21:24
Post #10





Group: Members
Posts: 780
Joined: 12-May 06
From: Colorado, USA
Member No.: 30694



QUOTE (krabapple @ Jun 28 2012, 13:21) *
When people insist that only an orchestral or symphonic work will do as a 'real' test of a codec, I can't help recalling that some famous 'codec killers' consisted of solo harpsichord , castanets, or entirely synthetic club music.

Well yes, only testing with one specific type of music like orchestral/symphonic is bogus unless the goal is to test the codec's quality in regard to that type of music alone, rather than its quality in general. But likewise, only using killer samples to evaluate a codec's general quality is inappropriate, and only relevant to the extent that 1. someone is sensitive to pre-echo (or whatever) and 2. their collection has moments of solo castanets/harpsichord/etc.
greynol
post Jun 28 2012, 21:30
Post #11





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



QUOTE (mjb2006 @ Jun 28 2012, 13:24) *
only using killer samples to evaluate a codec's general quality is inappropriate

+1 with a bullet!


--------------------
Concern trolls: not a myth.
Canar
post Jun 28 2012, 22:56
Post #12





Group: Super Moderator
Posts: 3352
Joined: 26-July 02
From: princegeorge.ca
Member No.: 2796



QUOTE (greynol @ Jun 28 2012, 13:30) *
+1 with a bullet!

  • +1
?


--------------------
You cannot ABX the rustling of jimmies.
No mouse? No problem.
greynol
post Jun 28 2012, 22:58
Post #13





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



One of these:
[image]
This post has been edited by greynol: Jun 28 2012, 22:58


--------------------
Concern trolls: not a myth.
JJZolx
post Jun 28 2012, 23:17
Post #14





Group: Members
Posts: 396
Joined: 26-November 04
Member No.: 18345



Was this experiment done using ABX?
greynol
post Jun 28 2012, 23:27
Post #15





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



You mean, did I actually hear eig, trumpet, herding calls, or whatever other single sample some "expert" uses to suggest which codec and bitrate I should use for my entire library, cry out as it was put to death?

This post has been edited by greynol: Jun 28 2012, 23:36


--------------------
Concern trolls: not a myth.
db1989
post Jun 28 2012, 23:38
Post #16





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



It's more likely that JJZolx is asking about the titular experiment, not questioning your scepticism about exceptional samples being extrapolated as representative of entire libraries. In which case, reading the page is predictably instructive, but for convenience:
Behold The Great MP3 Bitrate Experiment!

As proposed on our very own Audio and Video Production Stack Exchange, we're going to do a blind test of the same 2 minute excerpt of a particular rock audio track at a few different bitrates, ranging from 128kbps CBR MP3 all the way up to raw uncompressed CD audio. Each sample was encoded (if necessary), then exported to WAV so they all have the same file size. Can you tell the difference between any of these audio samples using just your ears?

1. Listen to each two minute audio sample
[links]

2. Rate each sample for encoding quality
Once you've given each audio sample a listen (with only your ears please, not analysis software), fill out this brief form and rate each audio sample from 1 to 5 on encoding quality, where one represents worst and five represents flawless.
So, no: ABX was not used, but the test was blind nonetheless.
JJZolx
post Jun 28 2012, 23:39
Post #17





Group: Members
Posts: 396
Joined: 26-November 04
Member No.: 18345



Was that in response to my question? I don't get it. I'm asking how the public test was performed by the participants.

Edit: yeah, signals crossed

This post has been edited by JJZolx: Jun 28 2012, 23:40
greynol
post Jun 28 2012, 23:43
Post #18





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



Seriousness: +1 (no bullet this time)
Playful banter: 0


--------------------
Concern trolls: not a myth.
halb27
post Jun 28 2012, 23:52
Post #19





Group: Members
Posts: 2424
Joined: 9-October 05
From: Dormagen, Germany
Member No.: 25015



In one of the comments the author noted that many of the listeners assigned each of the numbers 1 to 5 exactly once across the contenders. This suggests that many participants did a ranking instead of a quality judgement. Having looked at the spreadsheet of the listening results, I get the same impression.
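halb27's suspicion can be checked mechanically: a respondent whose five ratings are exactly a permutation of 1 through 5 was almost certainly ranking the samples, not rating them on an absolute scale. A sketch (the response rows below are hypothetical, not from the real spreadsheet):

```python
def looks_like_ranking(ratings):
    """True if the five ratings use each of 1..5 exactly once,
    i.e. the respondent ranked rather than rated the samples."""
    return sorted(ratings) == [1, 2, 3, 4, 5]

# Hypothetical response rows, ordered (128, 160, 192, 320, CD):
assert looks_like_ranking([1, 3, 2, 4, 5])      # a ranking in disguise
assert not looks_like_ranking([3, 3, 4, 4, 5])  # a genuine rating
```

Filtering such rows out (or analysing them separately as rankings) would be one way to clean the data before re-running the significance tests.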

This post has been edited by halb27: Jun 28 2012, 23:58


--------------------
lame3100m -V1 --insane-factor 0.75
krabapple
post Jun 29 2012, 01:34
Post #20





Group: Members
Posts: 2215
Joined: 18-December 03
Member No.: 10538



QUOTE (mjb2006 @ Jun 28 2012, 16:24) *
QUOTE (krabapple @ Jun 28 2012, 13:21) *
When people insist that only an orchestral or symphonic work will do as a 'real' test of a codec, I can't help recalling that some famous 'codec killers' consisted of solo harpsichord , castanets, or entirely synthetic club music.

Well yes, only testing with one specific type of music like orchestral/symphonic is bogus unless the goal is to test the codec's quality in regard to that type of music alone, rather than its quality in general. But likewise, only using killer samples to evaluate a codec's general quality is inappropriate, and only relevant to the extent that 1. someone is sensitive to pre-echo (or whatever) and 2. their collection has moments of solo castanets/harpsichord/etc.


Some of the audio snobs on that thread assume that symphonic music must be the hardest music to encode...and it's a common assumption. That's what I was addressing.

Beyond that, I'm not following the point of your reply, nor greynol's thumbs-up. I would expect a codec that had gone through iterative improvement, involving (but not restricted to) serial challenge from different 'killer' samples with different artifacts and coming from different genres, to perform generally better, subjectively as well as objectively, than a codec that had not been put to such tests. The audio snobs, on the other hand, would seem to hold that codecs should preferably be tuned to symphonic music for best results generally.

This post has been edited by krabapple: Jun 29 2012, 01:39
db1989
post Jun 29 2012, 01:53
Post #21





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



As I interpret their posts, mjb2006 and greynol want exactly what you do, i.e. to discourage conclusions (proclamations?) based upon single killer samples, small groups of them, or specific genres, as their specificity makes for narrow applicability to general use. I don't think you folks actually have anything to disagree about! ;)
greynol
post Jun 29 2012, 02:27
Post #22





Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167



We already know the pointlessness of catering to audiophiles since they are never satisfied (which is practically the working definition of an audiophile). This is without considering their rampant lack of objectivity.

My general concerns that people often draw sweeping generalizations from a small data set are once again front and center here. Forget about killer samples; lossy codecs are not universally transparent across all music to all people. It's foolish to think a test like this proves otherwise, not that I believe people here think that.

With regards to what genres are most difficult to encode, I think we've discussed this before. IIRC when looking at average bitrate at any given VBR quality level on a per genre basis, metal requires far more data than classical. In terms of samples and positive ABX results submitted to the forum indicating a lack of transparency at high bitrates, again it is usually metal and rarely classical music.

That said, I still applaud any effort that attempts to gather objective data, especially when it places an emphasis on the importance that the data actually be objective.

This post has been edited by greynol: Jun 29 2012, 04:33


--------------------
Concern trolls: not a myth.
2Bdecided
post Jun 29 2012, 10:45
Post #23


ReplayGain developer


Group: Developer
Posts: 5089
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409



QUOTE (krabapple @ Jun 28 2012, 20:21) *
Would you mind posting this over there?
Good grief, no...
http://english.stackexchange.com/questions...gue-with-idiots

...though my reason for resisting is that life can be frustrating enough without going looking for more of it.

QUOTE
The ignorance and snobbery in the comments there really begs for a response. When people insist that only an orchestral or symphonic work will do as a 'real' test of a codec, I can't help recalling that some famous 'codec killers' consisted of solo harpsichord , castanets, or entirely synthetic club music.
As well as the experience here, there are AES conference papers that say the same. A quick search couldn't find them though. It's quite telling that the original EBU SQAM CD included so many orchestral instruments and some clips of classical music, while more recent official codec tests don't, because codecs cope with almost all of these just fine (the specific solo harpsichord recording on that CD being the notable exception for a long time).

Cheers,
David.
Arnold B. Kruege...
post Jun 29 2012, 14:28
Post #24





Group: Members
Posts: 3687
Joined: 29-October 08
From: USA, 48236
Member No.: 61311



QUOTE (kinnerful @ Jun 25 2012, 12:28) *
http://lifehacker.com/5920793/the-great-mp...rate-experiment
http://www.codinghorror.com/blog/2012/06/t...experiment.html

I guess lifehacker has a larger audience than hydrogenaudio... could be interesting


Ironically, the source music for this alleged test is "We Built This City" by Starship, first released August 1, 1985. If memory serves, this recording is pretty heavily processed, even to the point where its original CD track had some MP3-like artifacts.

So without getting involved with Blender Magazine's nomination, and VH1's seconding, of it as one of the worst songs ever released, it seems like one of the worst tracks that could ever be chosen for the purpose.
nevermind
post Jul 3 2012, 04:21
Post #25





Group: Members
Posts: 11
Joined: 1-June 10
Member No.: 81054



Sorry to resurrect this thread (without waiting at least several years), but I was having a look at the Excel data from this test, thinking that maybe if I removed some of the people who rated the lowest sample higher than the CD it would be more interesting, and I think I have found something unusual. It seems that if you look only at people who rated the 128 kbps MP3 above the CD, there is a trend for them to rate the samples in reverse order, i.e. 128 > 160 > 192 > 320 > CD. I think this might be interesting because this trend seems to occur in ABX tests from time to time.

Maybe someone can calculate whether this is statistically significant; it is getting a bit over my head.
