Nine different codecs 100-pass recompression test
Arnold B. Kruege... · post Mar 25 2013, 14:42 · Post #26
Group: Members · Posts: 4484 · Joined: 29-October 08 · From: USA, 48236 · Member No.: 61311

QUOTE (zerowalker @ Mar 23 2013, 14:35) *
Interesting, though I am surprised that Vorbis did so badly.

Have you tried aoTuVb6.03? It should be more resilient than libvorbis.


That AAC did so well is no surprise to me: JJ said that this sort of robustness was one of its design goals. For all I know, AAC may have code that recognizes files it has previously processed.
Mach-X · post Mar 26 2013, 07:09 · Post #27
Group: Members · Posts: 288 · Joined: 29-July 12 · From: Windsor, On, Ca · Member No.: 101859

QUOTE (zerowalker @ Mar 23 2013, 13:35) *
Interesting, though I am surprised that Vorbis did so badly.

Have you tried aoTuVb6.03? It should be more resilient than libvorbis.


Shouldn't make a difference. aoTuV's betas do not stray from the libvorbis spec; they are only more efficient, i.e. the quality level at a given setting is identical and only the file size differs. Unless I stand to be corrected?
eahm · post Mar 26 2013, 17:17 · Post #28
Group: Members · Posts: 1171 · Joined: 11-February 12 · Member No.: 97076

QUOTE (Mach-X @ Mar 25 2013, 23:09) *
Shouldn't make a difference. aoTuV's betas do not stray from the libvorbis spec, they only are more efficient. IE quality level for encode at same setting is identical, only file size is different. Unless I stand to be corrected?

I would love to know this as well, from a Vorbis developer.

Is Ogg Vorbis still being improved/developed? Is all your attention on Opus now? Thanks.

This post has been edited by eahm: Mar 26 2013, 17:19
lvqcl · post Mar 26 2013, 17:40 · Post #29
Group: Developer · Posts: 3468 · Joined: 2-December 07 · Member No.: 49183

QUOTE (Mach-X @ Mar 26 2013, 10:09) *
IE quality level for encode at same setting is identical

No, it's not.
2Bdecided (ReplayGain developer) · post Mar 26 2013, 18:22 · Post #30
Group: Developer · Posts: 5364 · Joined: 5-November 01 · From: Yorkshire, UK · Member No.: 409

Anyone else think there's an error in here?

e.g. compare...
RESULTS BY CODEC (100 PASSES, FROM BEST TO WORST): VBR, HIGH QUALITY (~256 KBPS) 1 (tie) MP3 (LAME)
...with...
DETAILED RESULTS: lame, vbr high quality

The latter is worse at 10 passes than the former is at 100. In the latter section, at 100 passes it sounds far worse than in the first set of samples (also supposedly after 100 passes).

Apologies if I've misunderstood or missed something.

Cheers,
David.
2Bdecided (ReplayGain developer) · post Mar 26 2013, 18:25 · Post #31
Group: Developer · Posts: 5364 · Joined: 5-November 01 · From: Yorkshire, UK · Member No.: 409

QUOTE (greynol @ Mar 24 2013, 14:58) *
Sound quality of lossy codecs is determined through DBT, full stop.
Agree 100%.

This test is an interesting insight into what codecs do when pushed beyond their limits, and shows you what a specific unlikely transcoding scenario will produce. However, the codec that performs best over 100 iterations is not necessarily the one that's best in a single iteration. e.g. one might do all the damage in the first iteration, and then make no change in the other 99.

It is interesting and worthwhile, but it's not the last word (and maybe not even the first word) in choosing a codec for a given application.
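The "all the damage in the first iteration" scenario is easy to demonstrate with a toy stand-in codec (plain sample quantization, not any real encoder): the first pass changes the signal, and every later pass is a no-op because already-quantized values land back on the same grid.

```python
def lossy_pass(samples, step=0.1):
    """Toy lossy 'codec': snap each sample to a grid of spacing `step`."""
    return [round(s / step) * step for s in samples]

def rms_diff(a, b):
    """Root-mean-square difference between two equal-length signals."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

signal = [0.123, -0.456, 0.789, 0.001]
for i in range(1, 5):
    out = lossy_pass(signal)
    print(f"pass {i}: change = {rms_diff(signal, out):.6f}")
    signal = out
# pass 1 reports a nonzero change; passes 2-4 report 0.000000
```

A codec with this fixed-point property would look excellent in a 100-pass test while telling you nothing about its first-generation quality.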

Cheers,
David.

P.S. reminds me of this...
http://www.youtube.com/watch?v=mES3CHEnVyI


This post has been edited by 2Bdecided: Mar 26 2013, 18:34
Porcus · post Mar 26 2013, 23:44 · Post #32
Group: Members · Posts: 1995 · Joined: 30-November 06 · Member No.: 38207

QUOTE (2Bdecided @ Mar 26 2013, 18:25) *
e.g. one might do all the damage in the first iteration, and then make no change in the other 99


And that is more than just theory: lossyWAV.


--------------------
One day in the Year of the Fox came a time remembered well
Mach-X · post Mar 27 2013, 05:39 · Post #33
Group: Members · Posts: 288 · Joined: 29-July 12 · From: Windsor, On, Ca · Member No.: 101859

QUOTE (lvqcl @ Mar 26 2013, 11:40) *
QUOTE (Mach-X @ Mar 26 2013, 10:09) *
IE quality level for encode at same setting is identical

No, it's not.

Care to explain? My understanding of the betas is that quality level 2 is quality level 2 regardless, and that the tunings only reduce file size without changing sound quality.
hankwang · post Mar 27 2013, 12:22 · Post #34
Group: Members · Posts: 11 · Joined: 18-January 04 · Member No.: 11345

About the noise in the Vorbis sample: I have noticed that "Vorbis exhibits an analog noise-like failure mode" (phrasing from Wikipedia). I wonder whether this noise is really an artifact of quantization in the codec, or whether it is deliberately added by the decoder, using a pseudorandom generator, to mask other encoding artifacts. In a normal low-bitrate Vorbis sample with noise-like artifacts, I find the noise less disturbing than the warbling sounds of MP3. Masking artifacts with noise would make sense, and it would explain the huge noise after 100 re-encodes.

Could anyone who knows the internals of Vorbis chime in?
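Whatever the mechanism inside Vorbis, the buildup itself is what a purely additive-noise error model predicts. In this sketch (a toy model, not Vorbis internals) each generation adds independent Gaussian error to a test tone; error power then adds across passes, so the RMS error after N generations grows roughly like sqrt(N):

```python
import math
import random

random.seed(42)  # deterministic run for reproducibility

def add_coding_noise(samples, sigma=0.001):
    """Stand-in for a codec whose per-pass error is independent additive noise."""
    return [s + random.gauss(0.0, sigma) for s in samples]

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# A 440 Hz tone at 44.1 kHz, 2000 samples.
original = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(2000)]

signal = list(original)
errors = {}
for generation in range(1, 101):
    signal = add_coding_noise(signal)
    if generation in (1, 10, 100):
        errors[generation] = rms_error(original, signal)

for g in (1, 10, 100):
    print(f"generation {g:3d}: RMS error ~ {errors[g]:.5f}")
# Error power adds, so generation 100 is roughly 10x generation 1, not 100x.
```

Under this model, 100 passes make the noise plainly audible even when one pass is far below audibility, which is consistent with the huge noise heard in the 100-pass sample.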
Primius · post Mar 27 2013, 14:49 · Post #35
Group: Members · Posts: 21 · Joined: 11-April 06 · Member No.: 29419

If codec A was better than codec B after 100 iterations, wouldn't it also be better on the first iteration?

Is lossyWAV a "realistic" counterexample? How would lossyWAV perform if a random time shift were introduced between the compression iterations? (To be fair, this would also be applied to the other codecs in the test.)

Would optimizing an existing encoder to perform well in this test inevitably cause regressions in the first encode iteration?

Could the reason Opus ranked low be that "it has no psychoacoustic model"?

Could the high-frequency noise caused by 100 iterations of Vorbis stem from the same underlying problem that caused the "HF noise boost" complaints I read about in the wiki?
greynol · post Mar 27 2013, 15:00 · Post #36
Group: Super Moderator · Posts: 10339 · Joined: 1-April 04 · From: San Francisco · Member No.: 13167

QUOTE (Primius @ Mar 27 2013, 06:49) *
If codec A was better than codec B after 100 iterations, wouldn't it also be better on the first iteration?

Not necessarily.

At the end of the day you have to rely on DBT for any particular codec/setting/sample/iteration/etc. so I don't see the point in such a lazy end-around.


--------------------
Your eyes cannot hear.
db1989 · post Mar 27 2013, 15:01 · Post #37
Group: Super Moderator · Posts: 5275 · Joined: 23-June 06 · Member No.: 32180

QUOTE (Primius @ Mar 27 2013, 13:49) *
If codec A was better than codec B after 100 iterations, wouldn't it also be better on the first iteration?
Maybe you missed the discussion about the potential for codecs to recognise that the input signal had previously been processed by that format and to act accordingly. It's not been verified AFAIK, but it's a very real possibility, so you can't just generalise like this. There are plenty of reasons why such simple rules may not hold, and they are generally a bad idea.

Anyway, in case it hasn't already been said enough: DBT of properly encoded first-generation files is the only way to judge a codec's performance in the normal use cases for which it is designed. Any extrapolation from 100 passes is pointless at best, dangerously misleading at worst.
Mach-X · post Mar 27 2013, 16:40 · Post #38
Group: Members · Posts: 288 · Joined: 29-July 12 · From: Windsor, On, Ca · Member No.: 101859

db1989 and greynol: agreed 100%. And greynol, I hadn't meant to imply that you intended to bin the discussion. I was simply suggesting to all mods that, while the test is not particularly useful on a practical level and no conclusions about any codec should be drawn from the results (and all such claims should be binned), I find the tests and results interesting on a casual academic level. Indeed, on a casual listen to the samples, I am a bit embarrassed to say I might not be able to ABX the 100-pass AAC against the original. Along the lines of what Arnie was saying: is it possible the AAC encoder can detect what has already been processed? After one pass, does it simply spit the same file out 99 times? Can we use file size or some other measurement to find out?
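The "does it spit the same file out" question can be answered mechanically: hash each generation's output and stop when the hash repeats. A sketch with a hypothetical `recompress` stand-in; a real test would replace it with the actual AAC decode/re-encode round trip and hash the decoded PCM:

```python
import hashlib

def recompress(data: bytes) -> bytes:
    """Hypothetical stand-in for one decode -> re-encode round trip.
    Replace with your real pipeline in an actual test. Here it simply
    clears the two low bits of every byte: a lossy transform that is
    idempotent, so the output stabilizes after the first pass."""
    return bytes(b & 0b11111100 for b in data)

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

data = bytes(range(256))
previous = digest(data)
for i in range(1, 101):
    data = recompress(data)
    current = digest(data)
    if current == previous:
        print(f"output stopped changing at pass {i}")
        break
    previous = current
```

Comparing file sizes works too, but a hash also catches the case where the size stays constant while the bits still change.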
lvqcl · post Mar 27 2013, 17:07 · Post #39
Group: Developer · Posts: 3468 · Joined: 2-December 07 · Member No.: 49183

QUOTE (Mach-X @ Mar 27 2013, 08:39) *
Care to explain? My understanding of the betas is that quality level 2 is quality level 2 regardless, and that the tunings only reduce file size without changing sound quality.

http://en.wikipedia.org/wiki/Vorbis#Tuned_versions
QUOTE
Various tuned versions of the encoder (Garf, aoTuV or MegaMix) attempt to provide better sound at a specified quality setting, usually by dealing with certain problematic waveforms by temporarily increasing the bitrate.
Mach-X · post Mar 27 2013, 17:24 · Post #40
Group: Members · Posts: 288 · Joined: 29-July 12 · From: Windsor, On, Ca · Member No.: 101859

I see the word "attempt" in there, but no evidence that anything audible or tested was actually accomplished. In fact, since I can't ABX libvorbis at -q2 or higher, it stands to reason that those tunings offer no improvements at settings higher than that, including those used in this experiment.
Nick.C (lossyWAV Developer) · post Mar 27 2013, 19:11 · Post #41
Group: Developer · Posts: 1815 · Joined: 11-April 07 · From: Wherever here is · Member No.: 42400

QUOTE (Mach-X @ Mar 27 2013, 16:24) *
In fact, since I can't ABX libvorbis at -q2 or higher, it stands to reason that those tunings offer no improvements at settings higher than that, including those used in this experiment.
[my emphasis]
So, on the basis of one failed ABX result, you contend that no improvements can be made? Which material did you use? Were any of the samples known problem samples for Vorbis?

On the topic of recursive lossyWAV processing - at the same quality settings, lossyWAV stops changing the audio at about the fourth iteration.


--------------------
lossyWAV -q X -a 4 --feedback 4 | FLAC -8 ~= 320kbps
saratoga · post Mar 27 2013, 19:47 · Post #42
Group: Members · Posts: 5161 · Joined: 2-September 02 · Member No.: 3264

QUOTE (Mach-X @ Mar 27 2013, 11:24) *
In fact, since I can't ABX libvorbis at -q2 or higher, it stands to reason that those tunings offer no improvements at settings higher than that, including those used in this experiment.


Tuning in this context usually means improving transparency on rare problem files. It's no surprise you don't notice a difference: at those bitrates, most codecs are generally transparent except on the sorts of problem files that tuning is meant to help with.
Mach-X · post Mar 28 2013, 02:14 · Post #43
Group: Members · Posts: 288 · Joined: 29-July 12 · From: Windsor, On, Ca · Member No.: 101859

Precisely the point I was getting at. At the bitrates *used* in this experiment, on the sample *used*, there is no evidence to suggest that using a tuned fork of Vorbis would produce results any different from those already presented. *I* didn't put forth a claim; somebody else did. Still waiting on the ABX test of the 100-pass libvorbis versus the 100-pass tuned fork.
Spikey · post Apr 9 2013, 18:39 · Post #44
Group: Members · Posts: 113 · Joined: 4-July 06 · Member No.: 32545

QUOTE
Anyway, in case it hasn't already been said enough: DBT of properly encoded first-generation files is the only way to judge a codec's performance in the normal use cases for which it is designed. Any extrapolation from 100 passes is pointless at best, dangerously misleading at worst.

I think that, in addition to this, it misses an obvious point: after, say, 3 re-encodes instead of 100, is the "loser" of the 100-pass experiment ABXable from the "winner"? Or any codec versus any other, for that matter. So while after 100 passes things might be really obvious (or really confusing), after just 1-3 re-encodes all of them may be indistinguishable from one another (although, of course, the test still needs to be done!).

Interesting thread, although I think it confuses/oversimplifies a good topic rather than clarifying it. (Scary to see some old-timers relying on a wave graph with obvious limitations rather than their own ears/logic!)
