MPC vs VORBIS vs MP3 vs AAC at 180 kbps, 2nd checkup with classical music
guruboolez
post Aug 21 2005, 19:33
Post #1








Preliminary notes


Last summer I performed a blind listening comparison between three different audio formats, all set for ~175 kbps encodings. The purpose of the test was to investigate encoding quality with classical music (and only classical) and to see which format would be the most efficient (i.e. the closest to transparency at the lowest possible bitrate) for this kind of music. As a jumping-off point for the bitrate I took the MPC --standard preset, which was indisputably recognized as the best encoding solution, outputting 175...190 kbps on average. And indeed, the test ended with Musepack on top. MPC was even superior to Vorbis and MP3 at presets producing higher bitrates (~195 kbps for LAME, ~185 for Vorbis against ~175 for Musepack). Consequently, MPC encodings appeared to sound better and to be smaller at the same time. Amazed by the gap between the contenders, I concluded my test with these words: "I didn't think that MPC --standard was so in advance".

My vacation is now almost over. During my free time I performed a big checkup of lossy quality at 80 kbps and 96 kbps (the latter still has to be translated into English), and it's now too late to complete the 128 kbps test I planned to do under the same silly conditions (150+35 tested samples). But I used my small remaining time to redo the 175 kbps listening test I did last year, with the same 18 samples and the same hardware.



Why do the same test again?


As a result of the constant evolution of most audio encoders, I consider my previous results to be really outdated. I recall that the Vorbis encodings were done with MEGAMIX I (a hybrid encoder melting together the aoTuV beta 2, Garf Tuned 2 and Quantum Knot tunings). This encoder didn't subsist for long and doesn't exist anymore; it was replaced by MEGAMIX II, then by the official 1.1 with the Impulse Trigger Profile + Impulse Noisetune switches, which was finally followed by aoTuV beta 3 and beta 4. The same goes for LAME: 3.97 alpha 3 was tested, and since then the LAME developers have submitted eight new versions of this alpha and a few other ones (lame_may, lame_june...)! MPC has also changed: from 1.14 beta to 1.15 alpha, which is now considered safe to use.
As a consequence of this evolution, the problems audible last year (a kind of ringing for LAME, or noise and coarseness for Vorbis) may have been corrected or at least lowered. The first purpose of my test is therefore to check the outcome of the recent tunings done for high bitrate settings.

There's also a second point which stimulated me to redo the test, and this point is called AAC. I didn't test AAC last year for technical and moral reasons. Technically, the iTunes encoder couldn't be set to ~175 kbps; moreover, Apple's AAC encoder wasn't gapless, which makes it unsuitable for my conception of artefact-free encodings. I also felt that including Nero AAC would be dishonest: it had recognized issues with classical music, and a new encoder supposed to solve these problems had been announced as imminent. Some readers suggested that I include faac as a competitor, but I felt it would be unfair to test an encoder which was probably not the state of the art of the AAC format against the most advanced implementations of the other formats (MEGAMIX and LAME 3.97).
I never regretted my choice. But this absence of AAC frustrated my curiosity for a long time, because I had strictly no idea how this format compares to the other contenders. That's why I decided to absolutely include AAC this time. WMAPro will also be tested this time if possible.

The purpose of my test is therefore to obtain a fresh snapshot of the current performance of all modern lossy formats with classical music, using the most advanced implementation of each of them.


I. Choosing the encoders


Since my purpose is to test the most advanced encoders, the choice isn't controversial for most formats:

MP3: LAME 3.97 alpha 11. Release date: July 2005. Note: --vbr-new encoding mode.
MPC: mppenc 1.15v. Release date: March 2005.
Vorbis: aoTuV beta 4. Release date: June 2005, updated in July 2005 (merged with SVN 1.1.1).
WMAPro: no choice here: it's 9.1 or nothing. Release date: during 2004.

Choosing the right AAC encoder is much harder:
Apple AAC: There's still no VBR mode in iTunes. Consequently it's currently impossible to use Apple's AAC encoder unless the other contenders output an average bitrate close to either 160 kbps or 192 kbps. That's unlikely...
Nero Digital AAC: the most advanced VBR AAC encoder and therefore best placed to represent the AAC format. Big problem: should I use the 'high' (default) encoder, or rather the 'fast' one, which is really better at lower bitrates with classical music? The first one is still recommended by all of Nero's developers (Garf, JohnV and Ivan Dimkovic), and that's a valid reason to choose it over something they don't consider stable enough. But the situation may have changed since their recommendation, and I wouldn't too quickly discard the possibility of using an encoder that works better on the difficulties proper to the musical genre I'll test. The debate could be endless if a trivial but objective argument hadn't closed it: the average bitrate of the VBR mode of both encoders (see below).
faac AAC: testing faac might also be interesting. And even just for fun, it would give me the chance to oppose four different open-source implementations of four different formats. But such a friendly comparison has a price: it increases the burden of a test which is anything but easy at this bitrate...


II. Targeting a bitrate


The purpose of my test is not to see what the encoders can do at xxx kbps for each sample; I don't plan to force each encoding to reach a precise bitrate. My purpose is to stay close to the real usage of the vast majority of listeners (if not all...): using one fixed setting for every encoding, which should statistically correspond on average to the desired bitrate. That's why it's really fundamental to know precisely the average bitrate corresponding to a given preset. And there's only one way to get it: encoding several tracks or albums.
Last year, I used ~20 classical (+3 non-classical) albums as reference. This year, I decided to be more methodical. I'm now using 150 different tracks (I mean full tracks) coming from 150 different CDs in order to increase the variety of encoded material. It's important to note that I didn't choose those tracks randomly. I worked meticulously to get a representative microcosm of my full classical library, balanced between the different grand ensembles (vocal, orchestral, chamber, solo recordings). This collection is nothing more than the 150 full tracks from which I extracted the 150 short samples used to build a "catalogue raisonné" of musical situations occurring in classical music (see this test).

I genuinely expect this methodically constructed library to be a highly representative panel of my classical collection. This assumption can be verified by checking the average bitrate of the whole selection encoded with WavPack -fx5 (my entire >1000 CD digital library is encoded with this preset): 642 kbps for the selection of 150 tracks against 635 kbps for the complete set of more than 15000 tracks. The deviation is around 1%! Statistics really are magical.
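As a side note, here is a minimal sketch (my own addition, not part of the original methodology) of that sanity check, using the rounded averages quoted above:

CODE

# Sanity check: compare the average WavPack -fx5 bitrate of the 150-track
# selection with the average of the full library (values quoted above).
selection_kbps = 642.0   # 150-track selection
library_kbps = 635.0     # complete library, >15000 tracks

deviation = abs(selection_kbps - library_kbps) / library_kbps
print(f"relative deviation = {deviation:.2%}")   # ~1.1% with these rounded figures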


III. Observing bitrates


I started with MPC, which sets the reference bitrate. All the other competitors have to be configured to get a similar value.


MPC: --quality 5 corresponds precisely to 184,54 kbps. This is higher than what I expected at first (~175 kbps). The 150 reference tracks are maybe not as representative as supposed. I also tried 1.14 (used last year) with the same preset and --xlevel: 176,28 kbps, much closer to the native average bitrate of the --standard profile, which reassures me about the representativeness of my collection of tracks. The bitrate has therefore inflated by 4.7% from 1.14 to 1.15v with classical music.
=> I'll therefore try to get from all the other encoders a setting which outputs at 184,5 kbps ±2% (180,5...188,1 kbps).

MP3: I first tried -V2 --vbr-new, which corresponds to the former --preset fast standard. The average bitrate is 181,79 kbps. This value is lower than what I estimated last year (which is why I had tested -V3 in addition to -V2 back then)... Indeed, 3.97 alpha 3 -V2 would output 192,99 kbps. Nice gain (-5.80%): obviously the LAME developers also worked on efficiency. The gain is big enough that LAME --preset standard can now be fairly compared to MPC --standard. But I recall once again that this only applies to classical music (I suppose the bitrate is higher with other music suffering from the sb21 issue).

Vorbis: aoTuV beta 4 -q6,00 leads to 181,48 kbps. This is lower than I expected, and it's also lower than the MPC --standard bitrate. I get 186,99 kbps with the old MEGAMIX I; the bitrate has therefore been lowered with the latest aoTuV.
-q6,00 can therefore be directly compared to MPC --standard and LAME --preset fast standard (for classical music).

WMAPro: VBR75 leads to 150,24 kbps. The next available preset is VBR90, which leads to 203,96 kbps. Both are very far from the range I fixed, so WMAPro can't compete in this test.

Nero Digital AAC: Like LAME and WMAPro, Nero Digital doesn't offer a precise VBR scale but seven presets. -internet leads to ~142 kbps with both the 'high' and 'fast' encoders. -streaming high corresponds to 176,14 kbps and -streaming fast to 193,33 kbps. Consequently neither of them is inside the fixed range; the closest is -streaming high, which is therefore the least unacceptable solution (I recall that the 'high' encoder is still the recommended one).

faac AAC: this is the only encoder able to fit into the fixed bitrate range (thanks to a precise VBR scale à la Vorbis & MPC). faac -q 175 leads to 180,92 kbps. This -q setting probably won't correspond to 180 kbps with other musical genres, which is the occasion to recall once more that the whole test is specific to classical music and nothing else.


Recapitulative table

CODE
         bitrate_2004   bitrate_2005     evolution in kbps   ...in %

MPC          176,28         184,54            +8,26 kbps      +4,69 %
MP3          192,99         181,79           -11,20 kbps      -5,80 %  
Vorbis       186,99         181,48            -5,51 kbps      -2,95 %
AAC faac   not tested       180,92              --              --
AAC Nero   not tested       176,14              --              --


=> faac, LAME and aoTuV are very close to each other (the difference is less than 0,9 kbps!). MPC has a higher bitrate (+3 kbps) and Nero Digital a lower one (-5 kbps). The gap between the extremes is worrying: approximately 5%, corresponding to 8 kbps. That's not a huge difference, but these eight missing kbps may lead to a significant difference in quality. I could discard Nero Digital from this test, but I would consider that choice a mistake. For my own curiosity, I'm also very impatient to see how an advanced implementation of AAC performs in comparison to the other formats, even if the bitrates are not fully comparable.
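For readers who want to check the arithmetic, here is a small sketch (my addition) reproducing the evolution figures of the table above and the gap mentioned in the previous paragraph:

CODE

# Evolution of the average bitrates between the 2004 and 2005 tests,
# and the gap between the 2005 extremes (values taken from the table above).
bitrates = {             # kbps: (2004 test, 2005 test)
    "MPC":    (176.28, 184.54),
    "MP3":    (192.99, 181.79),
    "Vorbis": (186.99, 181.48),
}

for name, (old, new) in bitrates.items():
    print(f"{name}: {new - old:+.2f} kbps ({(new - old) / old:+.2%})")

# Gap between MPC (highest) and Nero Digital -streaming high (lowest) in 2005
gap = 184.54 - 176.14
print(f"gap = {gap:.2f} kbps ({gap / 176.14:.1%})")   # ~8.4 kbps, ~4.8%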

=> As a consequence, I decided to test both Nero Digital AAC and faac AAC, and I will consider Nero Digital's presence as a "bonus" interesting to watch rather than a full competitor. That's why my final diagrams (plots) will graphically separate the Nero AAC results from the other contenders. I hope this will avoid unnecessary debate about any kind of unfairness based on bitrate disparity.



SUMMARY

The following encoders and settings will be tested (a batch-encoding sketch follows the list):

AAC: faac 1.24.1. Release date: end 2004 (?). Setting: -q175
AAC: Nero Digital aacenc32 v.3.2.0.15. Release date: June 2005. Setting: -streaming (high/default encoder).
MP3: LAME 3.97 alpha 11. Release date: July 2005. Setting: -V2 --vbr-new
MPC: mppenc 1.15v. Release date: March 2005. Setting: --quality 5
Vorbis: aoTuV beta 4 based on 1.1.1. Release date: July 2005. Setting: -q6,00
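For anyone wanting to reproduce the encodings, here is a rough batch-encoding sketch based on the settings listed above. The exact command-line syntax of each front-end is my assumption and may need adjusting to the builds actually used; Nero Digital (-streaming, 'high' encoder) is left out because I'm not sure of its command-line invocation.

CODE

# Batch-encoding sketch (assumed front-end syntax; adapt to your own builds).
import subprocess
from pathlib import Path

# (output extension, command template); {src}/{dst} are filled in per file
ENCODERS = [
    (".mp3", ["lame", "-V2", "--vbr-new", "{src}", "{dst}"]),   # LAME 3.97a11
    (".mpc", ["mppenc", "--quality", "5", "{src}", "{dst}"]),   # mppenc 1.15v
    (".ogg", ["oggenc", "-q", "6", "-o", "{dst}", "{src}"]),    # aoTuV beta 4
    (".aac", ["faac", "-q", "175", "-o", "{dst}", "{src}"]),    # faac 1.24.1
]

def encode_all(wav_dir: str) -> None:
    """Encode every .wav file in wav_dir with each of the four settings."""
    for wav in Path(wav_dir).glob("*.wav"):
        for ext, template in ENCODERS:
            dst = wav.with_suffix(ext)
            cmd = [arg.format(src=str(wav), dst=str(dst)) for arg in template]
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_all("E:/SUMMER TESTS 2005/HQ180")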



IV. Additional information


I performed all my previous listening tests on a Creative Audigy 2 soundcard, which resamples everything to 48 kHz. Some people consider that this internal resampling (transparent in my opinion) treats Musepack unfairly and would bias any listening test. To cut the controversy short, I installed my (better) Terratec DMX6Fire 24/96, which doesn't resample 44.1 kHz files (I'm not using it anymore for daily listening because of interference with my VIA chipset).

HARDWARE & SOFTWARE SETTINGS:

soundcard: Terratec DMX6Fire 24/96
headphone: BeyerDynamic DT-531
amp: Onkyo MT-5
software player: Java ABC/HR 0.5 beta 5.
software decoder: foobar2000 0.83 (in order to automatically get files free of offset and to solve my incompatibility issues occurring with Vorbis).

TESTING PRINCIPLES:

ABX phase: To limit listening fatigue and to finish the test before leaving my apartment, I restricted the ABX tests to the most transparent encodings (rating > 4.00).
Number of trials: eight trials as a minimum. I recall that schnofler's ABC/HR software doesn't reveal the score until the test is closed by the user (and a test can't be resumed). Therefore the number of trials doesn't have to be fixed in advance: as long as the score is hidden, the p-value isn't ruined. That's why I add more trials when I suspect bad results. I never exceed 16 trials: if something is really transparent, I don't persecute the encoding. (A sketch of how these p-values are computed follows this list.)
Notation: My notation was very severe last year, using the full dynamic range of the scale (a lot of ratings were below 2.0), and that's why I decided to add 10 points to each score (in order to disconnect the notation from the usual corresponding scale). This year, I tried to respect the ITU scale. When a difference is audible but not really annoying, the rating is at least 4.0, and my hair must stand on end to allow a rating below 2.0 (from "annoying" to "very annoying"). The notation is still severe (I keep in mind that all encodings were set at ~180 kbps), and that's why the results I get here absolutely cannot be compared to other listening tests I've done, especially those performed at low bitrate settings. By the way, there are no anchors in this test (a high anchor is of course unnecessary here).
Samples: Same as last year. See this thread.
Gain: I didn't modify the gain of any file. All were played at their original volume.
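For reference, the p-values printed in the ABX logs below are one-tailed binomial probabilities: the chance of getting at least that many trials right by pure guessing. Here is a minimal sketch of that computation (my own illustration, not schnofler's code; the values displayed by the program appear to be truncated, e.g. 8/8 is shown as 0.0030 rather than 0.0039):

CODE

from math import comb

def abx_pval(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` by guessing (p = 0.5 per trial, one-tailed)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_pval(8, 8))    # 0.0039...  (a perfect 8/8 run)
print(abx_pval(15, 16))  # 0.00026... (logged as "< 0.001")
print(abx_pval(7, 12))   # 0.3872...  (a failed attempt)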

This post has been edited by guruboolez: Aug 31 2005, 11:00
guruboolez
post Aug 21 2005, 19:34
Post #2








V. Results and detailed comments







Sample 01: Krall
Short description: the only non-classical sample (Jazz). Cymbals, drums and voice.
Possible problems: smearing on cymbals and drums, distorted cymbals.
replaygain_sample_gain: = -5.50 dB
(indicative only: files were tested at their original volume)

CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\01_krall [AAC faac].wav
2L = E:\SUMMER TESTS 2005\HQ180\01_krall [AAC Nero Digital].wav
3L = E:\SUMMER TESTS 2005\HQ180\01_krall [MPC 1.15v].wav
4R = E:\SUMMER TESTS 2005\HQ180\01_krall [vorbis aoTuV].wav
5L = E:\SUMMER TESTS 2005\HQ180\01_krall [MP3 LAME].wav

---------------------------------------
General Comments: This is what I wrote for 4R, before cancelling my notation after a bad ABX test:

"Last file I have to find. After many trials, I found one small distorted passage [3.86 - 5.67]"

failed on ABX -> 5.0 / 5.0
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\01_krall [AAC faac].wav
1L Rating: 3.0
1L Comment: pre-echo is a bit worrying; it seems to distort the cymbal
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\01_krall [AAC Nero Digital].wav
2L Rating: 2.3
2L Comment: pre-echo is also very perceptible ; cymbals on the beginning are even more distorted than previous file. They sound false, unatural... something weird.
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\01_krall [MPC 1.15v].wav
3L Rating: 4.0
3L Comment: cymbals sound false, distorted. No pre-echo this time
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\01_krall [MP3 LAME].wav
5L Rating: 2.7
5L Comment: smearing is really perceptible (probably the worse of the serie) ; cymbals are also suffering from false sounding.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\01_krall [vorbis aoTuV].wav
7 out of 12, pval = 0.387


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\01_krall [vorbis aoTuV].wav
Playback Range: 03.867 to 05.671
3:39:07 PM f 0/1 pval = 1.0
3:39:10 PM p 1/2 pval = 0.75
3:39:12 PM p 2/3 pval = 0.5
3:39:15 PM f 2/4 pval = 0.687
3:39:18 PM p 3/5 pval = 0.5
3:39:20 PM p 4/6 pval = 0.343
3:40:02 PM p 5/7 pval = 0.226
3:40:04 PM f 5/8 pval = 0.363
3:40:07 PM f 5/9 pval = 0.5
3:40:10 PM f 5/10 pval = 0.623
3:40:15 PM p 6/11 pval = 0.5
3:40:19 PM p 7/12 pval = 0.387


Cymbals are still a problem for lossy encoders at this bitrate: often smeared and sometimes distorted. Nero Digital is the worst¹, followed by LAME, which suffers from the strongest pre-echo; then faac. Contrary to the other encodings, MPC doesn't have any smearing issue, but its cymbals don't sound true. aoTuV is the best: I really had to insist in order to unmask the encoding, but I totally missed the ABX phase.
¹ Nero Digital allocates far fewer bits (150 kbps) than the competitors, which are all above 200 kbps.



Sample 02: Fuga
Short description: harpsichord.
Possible problems: pre-echo, out of tune (tremolos, vibrating notes).
replaygain_sample_gain: +14.97 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\02_fuga [AAC faac].wav
2L = E:\SUMMER TESTS 2005\HQ180\02_fuga [MP3 LAME].wav
3L = E:\SUMMER TESTS 2005\HQ180\02_fuga [MPC 1.15v].wav
4R = E:\SUMMER TESTS 2005\HQ180\02_fuga [AAC Nero Digital].wav
5R = E:\SUMMER TESTS 2005\HQ180\02_fuga [vorbis aoTuV].wav

---------------------------------------
General Comments:
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\02_fuga [AAC faac].wav
1R Rating: 3.0
1R Comment: smearing on most harpsichord note.
One is audibly distorted (tremolo between 8.52 - 10.00]
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\02_fuga [MP3 LAME].wav
2L Rating: 2.5
2L Comment: harpsichord notes are smeared, and sometimes distorted. I suspect LAME, used to have this kind of issue with this instrument.
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\02_fuga [MPC 1.15v].wav
3L Rating: 4.5
3L Comment: no smearing, no 'tremolo' distortion. Very good, excepted on one note [~9.00].

AFTER ABX: slight smearing is also audible
---------------------------------------
4R File: E:\SUMMER TESTS 2005\HQ180\02_fuga [AAC Nero Digital].wav
4R Rating: 2.5
4R Comment: very good encoding; smearing is ultra-slight ; most often there's no distortions. But from ~6.00 to the end, distortions are perceptible, sometimes annoying, and smearing becomes audible too [see between 11.18 - 14.11!].
---------------------------------------
5R File: E:\SUMMER TESTS 2005\HQ180\02_fuga [vorbis aoTuV].wav
5R Rating: 4.5
5R Comment: slight smearing (?)
Hard to ABX
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\02_fuga [MPC 1.15v].wav
15 out of 16, pval < 0.001
Original vs E:\SUMMER TESTS 2005\HQ180\02_fuga [vorbis aoTuV].wav
16 out of 24, pval = 0.075


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\02_fuga [MPC 1.15v].wav
Playback Range: 07.915 to 12.024
3:57:03 PM p 1/1 pval = 0.5
3:57:07 PM p 2/2 pval = 0.25
3:57:11 PM p 3/3 pval = 0.125
3:57:15 PM p 4/4 pval = 0.062
3:57:19 PM f 4/5 pval = 0.187
3:57:25 PM p 5/6 pval = 0.109
3:57:29 PM p 6/7 pval = 0.062
3:57:34 PM p 7/8 pval = 0.035
3:57:39 PM p 8/9 pval = 0.019
3:57:42 PM p 9/10 pval = 0.01
3:57:46 PM p 10/11 pval = 0.0050
3:57:50 PM p 11/12 pval = 0.0030
3:57:56 PM p 12/13 pval = 0.0010
3:58:00 PM p 13/14 pval < 0.001
3:58:03 PM p 14/15 pval < 0.001
3:58:07 PM p 15/16 pval < 0.001

Original vs E:\SUMMER TESTS 2005\HQ180\02_fuga [vorbis aoTuV].wav
Playback Range: 07.463 to 14.285
4:02:43 PM p 1/1 pval = 0.5
4:02:48 PM f 1/2 pval = 0.75
4:02:59 PM f 1/3 pval = 0.875
4:03:04 PM p 2/4 pval = 0.687
4:03:08 PM p 3/5 pval = 0.5
4:03:12 PM p 4/6 pval = 0.343
4:03:19 PM p 5/7 pval = 0.226
Playback Range: 10.290 to 13.645
4:03:29 PM p 6/8 pval = 0.144
4:03:33 PM p 7/9 pval = 0.089
4:03:36 PM p 8/10 pval = 0.054
4:03:38 PM p 9/11 pval = 0.032
4:03:41 PM f 9/12 pval = 0.072
4:03:44 PM p 10/13 pval = 0.046
4:03:47 PM f 10/14 pval = 0.089
4:03:50 PM p 11/15 pval = 0.059
4:04:02 PM f 11/16 pval = 0.105
4:04:05 PM f 11/17 pval = 0.166
4:04:12 PM p 12/18 pval = 0.118
4:04:15 PM f 12/19 pval = 0.179
4:04:18 PM p 13/20 pval = 0.131
4:04:20 PM p 14/21 pval = 0.094
4:04:23 PM p 15/22 pval = 0.066
4:04:25 PM f 15/23 pval = 0.105
4:04:28 PM p 16/24 pval = 0.075


No surprise for me: MP3, and LAME especially, still has weaknesses with this instrument, which is distorted by the encoding (notes are trembling) and also suffers from smearing. Nero Digital is excellent at the beginning despite the smallest bitrate (163 kbps), but quality suddenly drops in the second half. faac is a bit better on average. Both aoTuV and MPC are excellent here, with only a very subtle smearing. MPC¹ was easier to ABX due to a distortion occurring at one short moment; for aoTuV, I changed the tested range during the ABX phase in order to catch an easier segment for comparison. The p-value is between 0,05 and 0,10 overall, but dropped below 0,05 within the second range I tested.
¹ Very high bitrate (230 kbps) for MPC, which tends to bloat the bitrate with solo harpsichord.



Sample 03: Mahler
Short description: mixed chorus (without instruments). Very tonal (no attacks).
Possible problems: a kind of ringing, distortions on sibilant consonants ("s").
replaygain_sample_gain: +0.87 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\03_mahler [MP3 LAME].wav
2R = E:\SUMMER TESTS 2005\HQ180\03_mahler [AAC faac].wav
3L = E:\SUMMER TESTS 2005\HQ180\03_mahler [MPC 1.15v].wav
4L = E:\SUMMER TESTS 2005\HQ180\03_mahler [AAC Nero Digital].wav
5L = E:\SUMMER TESTS 2005\HQ180\03_mahler [vorbis aoTuV].wav

---------------------------------------
General Comments:
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\03_mahler [MP3 LAME].wav
1R Rating: 3.5
1R Comment: distortions on voice are audible, but not as worrying than 2R or 3L.
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\03_mahler [AAC faac].wav
2R Rating: 2.0
2R Comment: voices are distorted, with strange 'pop' during the song. I suspect faac, which has this bug (pop/warbling audible on tonal moment) for a long time.
example: 4.16 - 6.07
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\03_mahler [MPC 1.15v].wav
3L Rating: 2.5
3L Comment: There's something ugly in the voice, unstable, wrong. It's not natural
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\03_mahler [AAC Nero Digital].wav
4L Rating: 4.5
4L Comment: This one was close to be perfect. It was the encoding I unmasked. But there's a small pasage which sounded distorted, not very hard to ABX.
On overall seems to be the less wounded by distortions.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\03_mahler [vorbis aoTuV].wav
5L Rating: 3.8
5L Comment: It seems that some 'vocal matter' was removed. Audible for example during 20.56 - 25.16. It distorts the voice, but in a different manner as 3L, and closer to 2R. Slightly better than 1R.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\03_mahler [AAC Nero Digital].wav
8 out of 8, pval = 0.0030


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\03_mahler [AAC Nero Digital].wav
Playback Range: 22.289 to 25.568
4:22:37 PM p 1/1 pval = 0.5
4:22:41 PM p 2/2 pval = 0.25
4:22:46 PM p 3/3 pval = 0.125
4:22:50 PM p 4/4 pval = 0.062
4:22:56 PM p 5/5 pval = 0.031
4:23:01 PM p 6/6 pval = 0.015
4:23:06 PM p 7/7 pval = 0.0070
4:23:10 PM p 8/8 pval = 0.0030


I recognized faac, which is to my knowledge the only encoder betraying this kind of warbling issue on such (tonal) samples. In addition to this problem, the voices are distorted (maybe a consequence of the low bitrate allocation¹). MPC has a serious issue here, already noticed in the past: voices are not natural, a bit ugly and also unstable. Less annoying are the distortions audible with LAME and aoTuV; the latter seems to remove some noise/matter. Nero Digital is the best, close to perfection, with a problem occurring on a very small portion of the sample (and ABXed without difficulty).
¹ 157 kbps, which is faac's lowest bitrate of the whole test.


Sample 04: Weihnachts-Oratorium (Oratorio de Noël)
Short description: exultant orchestra (period instruments), with brass, percussion and mixed chorus.
Possible problems: loss in details, distortion on voices or instruments (especially brass).
replaygain_sample_gain: -2.62 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [AAC Nero Digital].wav
2L = E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [MP3 LAME].wav
3L = E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [MPC 1.15v].wav
4R = E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [vorbis aoTuV].wav
5R = E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [AAC faac].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [AAC Nero Digital].wav
1L Rating: 4.0
1L Comment: Slight distortion on chorus, exactly as if vocal matter was removed (it reminds me distortions I perceived with previous sample).
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [MP3 LAME].wav
2L Rating: 3.2
2L Comment: orchestra is distorted. Annoying.
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [MPC 1.15v].wav
3L Rating: 4.3
3L Comment: Some details are missing (not annoying). After ABX phase: brass are distorted, a bit wrong
---------------------------------------
4R File: E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [vorbis aoTuV].wav
4R Rating: 4.5
4R Comment: AFTER ABX: I've just lost my previous comment, I've wrote for the reference...
What I've just heard: fatness, coarseness. Would it be Vorbis? Are this problems (typical for this encoder) still audible at this bitrate?
The distortions is nevertheless really subtle (so subtle that I missed it during the ABC/HR phase). It's less annoying in my opinion that the distortion heard with previous file ('wrong color').
---------------------------------------
5R File: E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [AAC faac].wav
5R Rating: 3.0
5R Comment: distortions (a bit more annoying [on brass] than 2R)
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [vorbis aoTuV].wav
8 out of 8, pval = 0.0030
Original vs E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [MPC 1.15v].wav
8 out of 8, pval = 0.0030


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [vorbis aoTuV].wav
Playback Range: 00.769 to 02.102
4:38:40 PM p 1/1 pval = 0.5
4:38:45 PM p 2/2 pval = 0.25
4:38:51 PM p 3/3 pval = 0.125
4:38:56 PM p 4/4 pval = 0.062
4:39:01 PM p 5/5 pval = 0.031
4:39:06 PM p 6/6 pval = 0.015
4:39:13 PM p 7/7 pval = 0.0070
4:39:20 PM p 8/8 pval = 0.0030

Original vs E:\SUMMER TESTS 2005\HQ180\04_Oratorio Noël [MPC 1.15v].wav
Playback Range: 00.769 to 02.102
4:37:47 PM p 1/1 pval = 0.5
4:37:52 PM p 2/2 pval = 0.25
4:37:57 PM p 3/3 pval = 0.125
4:38:01 PM p 4/4 pval = 0.062
4:38:06 PM p 5/5 pval = 0.031
4:38:15 PM p 6/6 pval = 0.015
4:38:17 PM p 7/7 pval = 0.0070
4:38:20 PM p 8/8 pval = 0.0030


faac and LAME are the worst, and both show the same kind of distortions altering the orchestral part. Nero AAC is one step better, with minor problems in the chorus part; I noticed something I associate with noise reduction. MPC and aoTuV are very good. Brass is slightly distorted with MPC, which also removes some details in the background. Vorbis is even better than MPC: so good that I first rated the reference, and the notation was automatically cancelled by ABC/HR after a positive ABX test. It's worth noting that I had suspected Vorbis: for the first time I heard the coarseness/fatness that used to affect this format.



Sample 05: Dom Bedos
Short description: organ, with long continuous tonal notes
Possible problems: a kind of ringing (the constant tonal part becomes fluctuating).
replaygain_sample_gain: +5.10 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [AAC faac].wav
2L = E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [MP3 LAME].wav
3R = E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [vorbis aoTuV].wav
4L = E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [MPC 1.15v].wav
5L = E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [AAC Nero Digital].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [AAC faac].wav
1L Rating: 2.7
1L Comment: Distortions are perceptible, not really deranging first but more irritating on the second part.
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [MP3 LAME].wav
2L Rating: 2.7
2L Comment: beginning: very subtle distortion
end (last note): distortion is clearly more annoying
---------------------------------------
3R File: E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [vorbis aoTuV].wav
3R Rating: 4.0
3R Comment: last note has something strange, unconstant, slightly raucous.
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [MPC 1.15v].wav
4L Rating: 3.7
4L Comment: First part is excellent, but second part (especially last note) is slightly distorted (a bit more irritating than previous file)
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [AAC Nero Digital].wav
5L Rating: 2.5
5L Comment: The most (immediately) shoking. There are weird disotrtions on tonal moments. Ugly. Last tonal note is nevertheless better, but still distorted.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [vorbis aoTuV].wav
8 out of 8, pval = 0.0030
Original vs E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [MPC 1.15v].wav
13 out of 16, pval = 0.01


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [vorbis aoTuV].wav
Playback Range: 08.051 to 10.063
9:57:12 PM p 1/1 pval = 0.5
9:57:19 PM p 2/2 pval = 0.25
9:57:23 PM p 3/3 pval = 0.125
9:57:26 PM p 4/4 pval = 0.062
9:57:29 PM p 5/5 pval = 0.031
9:57:32 PM p 6/6 pval = 0.015
9:57:35 PM p 7/7 pval = 0.0070
9:57:38 PM p 8/8 pval = 0.0030

Original vs E:\SUMMER TESTS 2005\HQ180\05_Dom Bedos [MPC 1.15v].wav
Playback Range: 05.344 to 06.524
9:54:41 PM f 0/1 pval = 1.0
9:54:44 PM p 1/2 pval = 0.75
9:54:48 PM p 2/3 pval = 0.5
9:54:51 PM p 3/4 pval = 0.312
9:54:55 PM f 3/5 pval = 0.5
9:54:58 PM p 4/6 pval = 0.343
9:55:02 PM p 5/7 pval = 0.226
9:55:05 PM p 6/8 pval = 0.144
Playback Range: 09.300 to 11.475
9:55:17 PM p 7/9 pval = 0.089
9:55:21 PM p 8/10 pval = 0.054
9:55:27 PM f 8/11 pval = 0.113
9:55:30 PM p 9/12 pval = 0.072
9:55:33 PM p 10/13 pval = 0.046
9:55:36 PM p 11/14 pval = 0.028
9:55:48 PM p 12/15 pval = 0.017
9:55:51 PM p 13/16 pval = 0.01


Nero Digital appears to be the worst encoding; the distortions are ugly. Its encoding of the problematic last note was better, which saves Nero AAC from a biting rating¹. Not far from Nero Digital: faac² and LAME³. Both offer a really good sound at the beginning, but quality drops for each towards the end of the sample. Again, MPC and aoTuV are on top. Both are excellent on the first half of this sample, but the second half, and especially the last note, is apparently much harder for all encoders. MPC is distorted here: last year I heard something wrong but failed the ABX phase; this time the ABX was successful. aoTuV presents a different, less annoying form of distortion: something raucous, inconstant and hard to describe; subtle but not too hard to ABX. As often with very tonal parts, MPC bloats the bitrate.
¹ This is exactly the kind of sample for which the Nero 'fast' encoder is perfect (but with a bloated bitrate as the other side of the coin: 233 kbps for this sample with the 'fast' encoder and -streaming preset against 165 kbps for the tested encoding!)
² faac's bitrate is very low: 150 kbps. That's 80 kbps less than MPC!
³ LAME quality is now much better than the version tested one year ago (the ugly distortions and ringing are gone).



Sample 06: Platée
Short description: orchestra representing a rainstorm (period instruments).
Possible problems: the background is detailed due to the presence of the instruments playing the continuo; loss in details is expected.
replaygain_sample_gain: -0.07 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\06_Platée [AAC Nero Digital].wav
2L = E:\SUMMER TESTS 2005\HQ180\06_Platée [vorbis aoTuV].wav
3L = E:\SUMMER TESTS 2005\HQ180\06_Platée [AAC faac].wav
4L = E:\SUMMER TESTS 2005\HQ180\06_Platée [MP3 LAME].wav
5R = E:\SUMMER TESTS 2005\HQ180\06_Platée [MPC 1.15v].wav

---------------------------------------
General Comments:
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\06_Platée [AAC Nero Digital].wav
1R Rating: 4.0
1R Comment: harpsichord on background is slightly imprecise, subtly softened. Not irritating. Slightly distorted between 3.83 - 5.40
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\06_Platée [vorbis aoTuV].wav
2L Rating: 3.5
2L Comment: Details are also missing; harpsichord edges are softened - a bit more than previous file. Not irritating, just imprecise.
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\06_Platée [AAC faac].wav
3L Rating: 2.5
3L Comment: This one is now distorted. The harpsichord in background sounds false and not only imprecise. Irritating.
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\06_Platée [MP3 LAME].wav
4L Rating: 3.8
4L Comment: Again, I've lost my previous comments, because I've rated first the reference...
I give 4.5 first, but after the ABX phase I have to reconsider this note. First I've only heard a subtle loss in details, but now that I'm listening the good file and after a positive ABX tests this encoding appears as less enjoying: apart missing details (like other encodings) there's also audible distortions on harpsichord.
---------------------------------------
5R File: E:\SUMMER TESTS 2005\HQ180\06_Platée [MPC 1.15v].wav
5R Rating: 4.3
5R Comment: Softened - harpsichord edges are a bit vague.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\06_Platée [MP3 LAME].wav
14 out of 16, pval = 0.0020


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\06_Platée [MP3 LAME].wav
Playback Range: 01.609 to 03.684
10:11:50 PM p 1/1 pval = 0.5
10:11:54 PM p 2/2 pval = 0.25
10:12:02 PM f 2/3 pval = 0.5
10:12:05 PM f 2/4 pval = 0.687
10:12:10 PM p 3/5 pval = 0.5
10:12:14 PM p 4/6 pval = 0.343
10:12:29 PM p 5/7 pval = 0.226
10:12:33 PM p 6/8 pval = 0.144
10:12:38 PM p 7/9 pval = 0.089
10:12:42 PM p 8/10 pval = 0.054
10:12:46 PM p 9/11 pval = 0.032
10:12:49 PM p 10/12 pval = 0.019
10:12:54 PM p 11/13 pval = 0.011
10:13:02 PM p 12/14 pval = 0.0060
10:13:06 PM p 13/15 pval = 0.0030
10:13:09 PM p 14/16 pval = 0.0020


faac again presents the most distorted sound on the orchestra (see sample_04, and later sample_10 & sample_15): details are either softened or distorted. aoTuV occupies an unusual next-to-last place here; the sound is too imprecise, and the edges of the harpsichord are softened. LAME sounds similar, offering a bit more precision than aoTuV but also slight distortions on the continuo; I discovered the problem during the ABX phase, having first rated the reference instead of the encoded file. Nero Digital is very good, with a subtle loss in details and localized distortions. It's a remarkable performance for such a low-bitrate encoding¹. The same goes for MPC, but without distortions this time: just a slight smoothing of details in the background.
¹ 161 kbps, whereas all the other contenders have bitrates between 194 [aoTuV] and 215 kbps [LAME]!


Sample 07: Marche Royale (00.00 – 12.00)
Short description: Chamber orchestra. First part of a sample divided in two. Here: drums, violin, trumpet, cello, clarinet.
Possible problems: pre-echo (drums) and usual distortions on instruments like violin or clarinet.
replaygain_sample_gain: +1.02 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MP3 LAME].wav
2R = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MPC 1.15v].wav
3R = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC faac].wav
4L = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [vorbis aoTuV].wav
5L = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC Nero Digital].wav

---------------------------------------
General Comments: The evaluation is based on the first twelve seconds of this sample: drums, violin, trumpet, cello, clarinet.
The second part (cymbals mostly) will be the subject of the next test
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MP3 LAME].wav
1R Rating: 3.6
1R Comment: drums are smeared
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MPC 1.15v].wav
2R Rating: 3.8
2R Comment: Clearly smeared. Drums are not totally clean. A bit hollowed.
---------------------------------------
3R File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC faac].wav
3R Rating: 3.5
3R Comment: smearing
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [vorbis aoTuV].wav
4L Rating: 4.0
4L Comment: Few pre-echo but drums are 'noisy' from within.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC Nero Digital].wav
5L Rating: 2.5
5L Comment: worse pre-echo; drums are also distorted.
---------------------------------------

ABX Results:


Disappointing performance from Nero Digital, which shows the worst pre-echo; the same issue also appeared on sample_01. faac is in the same situation, but the annoyance is clearly lower. It is followed by LAME, which seems very slightly better, and then MPC, also (slightly) smeared and with other minor problems (hollowed drums). aoTuV is the best, with very little pre-echo but rather a distortion coming from within the drums (boosted noise).


Sample 08: Marche Royale (12.00 – 29.00)
Short description: Chamber orchestra – 2nd part. Here: cymbals (orchestral ones, different from those heard in sample_01) are introduced; I focused my rating on these.
Possible problems: smearing and distortions (swoosh).
replaygain_sample_gain: +1.02 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [vorbis aoTuV].wav
2R = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC Nero Digital].wav
3L = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC faac].wav
4R = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MP3 LAME].wav
5R = E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MPC 1.15v].wav

---------------------------------------
General Comments: I tried to ABX every file. At one moment I was so confused that I couln't tell which file was distorted, which one was smeared, etc... That's why I started a complete ABX checkup. I only failed for the first one. I'd still say that 1L is the encoded one, but for this test, a bad ABX test imply 5.0 as notation.
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC Nero Digital].wav
2R Rating: 3.5
2R Comment: Smeared. Edges are really softened. Cymbals sound a bit wrong
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC faac].wav
3L Rating: 2.5
3L Comment: Slight smearing, but much more annoying are the distortions on cymbals. Brrrrr...
---------------------------------------
4R File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MP3 LAME].wav
4R Rating: 2.1
4R Comment: Cymbals are the most distorted with this encoding
---------------------------------------
5R File: E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MPC 1.15v].wav
5R Rating: 4.5
5R Comment: Distortions are very small. It was the hardest to ABX, apart the first sample for which I failed.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC Nero Digital].wav
8 out of 8, pval = 0.0030
Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC faac].wav
7 out of 8, pval = 0.035
Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MPC 1.15v].wav
13 out of 16, pval = 0.01
Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MP3 LAME].wav
8 out of 8, pval = 0.0030
Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [vorbis aoTuV].wav
8 out of 16, pval = 0.598


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC Nero Digital].wav
Playback Range: 11.963 to 15.198
10:51:30 PM p 1/1 pval = 0.5
10:51:35 PM p 2/2 pval = 0.25
10:51:39 PM p 3/3 pval = 0.125
10:51:43 PM p 4/4 pval = 0.062
10:51:47 PM p 5/5 pval = 0.031
10:51:51 PM p 6/6 pval = 0.015
10:51:57 PM p 7/7 pval = 0.0070
10:52:01 PM p 8/8 pval = 0.0030

Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [AAC faac].wav
Playback Range: 11.963 to 15.198
10:52:40 PM f 0/1 pval = 1.0
10:53:01 PM p 1/2 pval = 0.75
10:53:05 PM p 2/3 pval = 0.5
10:53:09 PM p 3/4 pval = 0.312
10:53:13 PM p 4/5 pval = 0.187
10:53:17 PM p 5/6 pval = 0.109
10:53:21 PM p 6/7 pval = 0.062
10:53:26 PM p 7/8 pval = 0.035

Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MPC 1.15v].wav
Playback Range: 11.963 to 15.198
10:54:51 PM p 1/1 pval = 0.5
10:54:59 PM p 2/2 pval = 0.25
10:55:08 PM p 3/3 pval = 0.125
10:55:16 PM p 4/4 pval = 0.062
10:55:21 PM p 5/5 pval = 0.031
10:55:27 PM p 6/6 pval = 0.015
10:55:32 PM f 6/7 pval = 0.062
10:55:41 PM p 7/8 pval = 0.035
10:55:48 PM f 7/9 pval = 0.089
10:55:53 PM p 8/10 pval = 0.054
10:55:57 PM f 8/11 pval = 0.113
10:56:01 PM p 9/12 pval = 0.072
10:56:09 PM p 10/13 pval = 0.046
10:56:13 PM p 11/14 pval = 0.028
10:56:18 PM p 12/15 pval = 0.017
10:56:22 PM p 13/16 pval = 0.01

Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [MP3 LAME].wav
Playback Range: 11.963 to 15.198
10:53:49 PM p 1/1 pval = 0.5
10:53:53 PM p 2/2 pval = 0.25
10:53:57 PM p 3/3 pval = 0.125
10:54:01 PM p 4/4 pval = 0.062
10:54:05 PM p 5/5 pval = 0.031
10:54:08 PM p 6/6 pval = 0.015
10:54:11 PM p 7/7 pval = 0.0070
10:54:15 PM p 8/8 pval = 0.0030

Original vs E:\SUMMER TESTS 2005\HQ180\07-08_Marche Royale [vorbis aoTuV].wav
Playback Range: 11.963 to 15.198
10:49:40 PM f 0/1 pval = 1.0
10:49:43 PM f 0/2 pval = 1.0
10:49:45 PM p 1/3 pval = 0.875
10:49:48 PM p 2/4 pval = 0.687
10:49:50 PM f 2/5 pval = 0.812
10:49:53 PM p 3/6 pval = 0.656
10:50:01 PM p 4/7 pval = 0.5
10:50:03 PM f 4/8 pval = 0.636
10:50:06 PM p 5/9 pval = 0.5
10:50:11 PM p 6/10 pval = 0.376
10:50:19 PM f 6/11 pval = 0.5
10:50:22 PM f 6/12 pval = 0.612
10:50:24 PM p 7/13 pval = 0.5
10:50:27 PM f 7/14 pval = 0.604
10:50:30 PM p 8/15 pval = 0.5
10:50:32 PM f 8/16 pval = 0.598


First, a short remark: I ABXed everything here, because my impressions became so confused and imprecise during the ABC/HR phase that I was no longer able to structure the notation and the hierarchy.
LAME is the worst; it earns its worst rating of the entire test with this instrument (cymbals): distorted and really unpleasant. faac also presents worrying distortions (and audible smearing). Nero Digital is much less annoying, but smearing as well as distortions are still moderately perceptible. Much better is MPC, harder to ABX (13/16) and barely distorted. I noticed a very small difference for aoTuV during the ABC/HR phase but I couldn't confirm it with the ABX module (8/16), so I manually cancelled the rating (4.8) I had given it at first.



Sample 09: Orion II
Short description: trombones (one of the most difficult brass instruments to encode for transform encoders), here meticulously recorded. It corresponds to the usual "micro-attack" problems.
Possible problems: micro-attacks encoded with noise or, in the worst case, distorted.
replaygain_sample_gain: -4.80 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\09_Orion II [AAC faac].wav
2L = E:\SUMMER TESTS 2005\HQ180\09_Orion II [MPC 1.15v].wav
3L = E:\SUMMER TESTS 2005\HQ180\09_Orion II [MP3 LAME].wav
4L = E:\SUMMER TESTS 2005\HQ180\09_Orion II [vorbis aoTuV].wav
5L = E:\SUMMER TESTS 2005\HQ180\09_Orion II [AAC Nero Digital].wav

---------------------------------------
General Comments:
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\09_Orion II [AAC faac].wav
1R Rating: 3.3
1R Comment: There's noise covering the microattacks which are slightly blurred.
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\09_Orion II [MPC 1.15v].wav
2L Rating: 3.8
2L Comment: Noise between micro-attacks. Not really annoying, but clearly perceptible.
AFTER ABX: there's a clear 'pshhhhh-artefact' that identifies this encoding when compared directly to 4L.
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\09_Orion II [MP3 LAME].wav
3L Rating: 3.0
3L Comment: Noise is higher than 1R. Noise reachs a peak somewhere between 3.90 - 6.67.
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\09_Orion II [vorbis aoTuV].wav
4L Rating: 4.0
4L Comment: Blurring and noise; better than 1R & 3R, very similar to 2L.
AFTER ABX: this one is cleaner compared to 2L. No artefact here, just a slight constant noise.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\09_Orion II [AAC Nero Digital].wav
5L Rating: 1.0
5L Comment: Very ugly distortion. Micro-attacks are slaughtered. It's very annoying, and 1.0 isn't severe at all.
---------------------------------------

ABX Results:
E:\SUMMER TESTS 2005\HQ180\09_Orion II [MPC 1.15v].wav vs E:\SUMMER TESTS 2005\HQ180\09_Orion II [vorbis aoTuV].wav
12 out of 12, pval < 0.001
Original vs E:\SUMMER TESTS 2005\HQ180\09_Orion II [MPC 1.15v].wav
14 out of 16, pval = 0.0020
Original vs E:\SUMMER TESTS 2005\HQ180\09_Orion II [vorbis aoTuV].wav
13 out of 16, pval = 0.01


---- Detailed ABX results ----
E:\SUMMER TESTS 2005\HQ180\09_Orion II [MPC 1.15v].wav vs E:\SUMMER TESTS 2005\HQ180\09_Orion II [vorbis aoTuV].wav
Playback Range: 03.875 to 05.639
11:31:21 PM p 1/1 pval = 0.5
11:31:29 PM p 2/2 pval = 0.25
11:31:32 PM p 3/3 pval = 0.125
11:31:35 PM p 4/4 pval = 0.062
11:31:38 PM p 5/5 pval = 0.031
11:31:41 PM p 6/6 pval = 0.015
11:31:43 PM p 7/7 pval = 0.0070
11:31:46 PM p 8/8 pval = 0.0030
11:31:49 PM p 9/9 pval = 0.0010
11:31:52 PM p 10/10 pval < 0.001
11:31:55 PM p 11/11 pval < 0.001
11:31:58 PM p 12/12 pval < 0.001

Original vs E:\SUMMER TESTS 2005\HQ180\09_Orion II [MPC 1.15v].wav
Playback Range: 03.904 to 06.150
11:26:14 PM p 1/1 pval = 0.5
11:26:18 PM f 1/2 pval = 0.75
11:26:22 PM p 2/3 pval = 0.5
11:26:25 PM p 3/4 pval = 0.312
11:26:28 PM p 4/5 pval = 0.187
11:26:30 PM p 5/6 pval = 0.109
11:26:34 PM p 6/7 pval = 0.062
11:26:38 PM p 7/8 pval = 0.035
11:26:42 PM p 8/9 pval = 0.019
11:26:45 PM p 9/10 pval = 0.01
11:26:49 PM p 10/11 pval = 0.0050
11:26:53 PM p 11/12 pval = 0.0030
11:26:58 PM p 12/13 pval = 0.0010
11:27:02 PM f 12/14 pval = 0.0060
11:27:06 PM p 13/15 pval = 0.0030
11:27:10 PM p 14/16 pval = 0.0020

Original vs E:\SUMMER TESTS 2005\HQ180\09_Orion II [vorbis aoTuV].wav
Playback Range: 03.904 to 06.150
11:28:21 PM p 1/1 pval = 0.5
11:28:24 PM p 2/2 pval = 0.25
11:28:27 PM f 2/3 pval = 0.5
11:28:30 PM p 3/4 pval = 0.312
11:28:33 PM p 4/5 pval = 0.187
11:28:39 PM p 5/6 pval = 0.109
11:28:42 PM p 6/7 pval = 0.062
11:28:49 PM p 7/8 pval = 0.035
11:28:51 PM f 7/9 pval = 0.089
11:29:15 PM p 8/10 pval = 0.054
11:29:21 PM p 9/11 pval = 0.032
11:29:27 PM p 10/12 pval = 0.019
11:29:31 PM p 11/13 pval = 0.011
11:29:34 PM p 12/14 pval = 0.0060
11:29:40 PM p 13/15 pval = 0.0030
11:29:44 PM f 13/16 pval = 0.01


Nero Digital is very bad. The artefact is so terrible that I gave (a thoughtful decision) the lowest rating (1.0) to this sample¹. LAME and faac are much better; their only defect is noise. MPC is better still (little noise) but presents a short artefact that I identified during the ABX phase (14/16). aoTuV sounded similar to MPC, but after several listenings it appears slightly cleaner and without the artefact².
¹ This is the only rating below 2.0 in the entire test.
² A direct blind comparison between MPC and aoTuV was easy: 12/12, pval < 0.001. It's worth noting that Vorbis has made clear progress in this area (micro-attacks used to be smeared at this preset) compared to the last test, which included MEGAMIX.




Sample 10: “Dover, giustizia”
Short description: mezzo-soprano voice accompanied by orchestra.
Possible problems: the voice can be distorted and the instruments may also be softened.
replaygain_sample_gain: -4.73 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 17 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [vorbis aoTuV].wav
2L = E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MPC 1.15v].wav
3L = E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MP3 LAME].wav
4L = E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [AAC faac].wav
5L = E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [AAC Nero Digital].wav

---------------------------------------
General Comments:
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MPC 1.15v].wav
2L Rating: 3.8
2L Comment: Distortions on voice, clearly audible and easy to ABX. Subtle details in instrumental part are softened.
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MP3 LAME].wav
3L Rating: 4.5
3L Comment: Slight loss in details and few distortions. Very good.
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [AAC faac].wav
4L Rating: 3.5
4L Comment: Not perfect. Some problems are a bit annoying (distortions on voice).
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [AAC Nero Digital].wav
5L Rating: 4.2
5L Comment: Very good. Subtle details are out (harpsichord in continuo), but it needs a direct comparison to be perceived. Slight distortions on harpsichord at the very end.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MP3 LAME].wav
9 out of 12, pval = 0.072
Original vs E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MPC 1.15v].wav
10 out of 12, pval = 0.019
Original vs E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [vorbis aoTuV].wav
5 out of 12, pval = 0.806


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MP3 LAME].wav
Playback Range: 04.419 to 06.694
11:48:13 PM p 1/1 pval = 0.5
11:48:16 PM p 2/2 pval = 0.25
11:48:19 PM f 2/3 pval = 0.5
11:48:29 PM p 3/4 pval = 0.312
11:48:34 PM f 3/5 pval = 0.5
11:48:37 PM p 4/6 pval = 0.343
11:48:41 PM p 5/7 pval = 0.226
11:48:47 PM p 6/8 pval = 0.144
11:48:57 PM p 7/9 pval = 0.089
11:49:01 PM p 8/10 pval = 0.054
11:49:13 PM p 9/11 pval = 0.032
11:49:16 PM f 9/12 pval = 0.072

Original vs E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [MPC 1.15v].wav
Playback Range: 04.419 to 06.694
11:47:20 PM f 0/1 pval = 1.0
11:47:24 PM p 1/2 pval = 0.75
11:47:28 PM f 1/3 pval = 0.875
11:47:32 PM p 2/4 pval = 0.687
11:47:35 PM p 3/5 pval = 0.5
11:47:38 PM p 4/6 pval = 0.343
11:47:41 PM p 5/7 pval = 0.226
11:47:45 PM p 6/8 pval = 0.144
11:47:48 PM p 7/9 pval = 0.089
11:47:51 PM p 8/10 pval = 0.054
11:47:54 PM p 9/11 pval = 0.032
11:47:56 PM p 10/12 pval = 0.019

Original vs E:\SUMMER TESTS 2005\HQ180\10_Dover, giustizia [vorbis aoTuV].wav
Playback Range: 04.419 to 06.694
11:45:55 PM f 0/1 pval = 1.0
11:45:58 PM f 0/2 pval = 1.0
11:46:02 PM p 1/3 pval = 0.875
11:46:06 PM f 1/4 pval = 0.937
11:46:24 PM p 2/5 pval = 0.812
11:46:27 PM f 2/6 pval = 0.89
11:46:30 PM p 3/7 pval = 0.773
11:46:44 PM f 3/8 pval = 0.855
11:46:48 PM f 3/9 pval = 0.91
11:46:52 PM f 3/10 pval = 0.945
11:46:56 PM p 4/11 pval = 0.886
11:47:01 PM p 5/12 pval = 0.806


faac gets the last place, but with a flattering rating (3.5) corresponding to a small distortion on the voice. This is the highest rating I gave to a worst-ranked encoding; obviously, this sample doesn't contain anything excessively difficult for our five competitors. Unusual placing for MPC, which presents distortion on the voice and smooths subtle details in the background. Then comes Nero Digital, which doesn't distort the voice but softens some details in the orchestral background; it also clearly distorts the harpsichord at the very end. LAME is similar to Nero Digital but without the distortions noticed on the harpsichord. aoTuV is transparent to my ears (5/12) despite its low bitrate (164 kbps).


Sample 11: Trumpet Voluntary
Short description: mezzo trumpet with organ – the recording is very noisy on loud moments only (maybe a consequence of the organ?).
Possible problems: the trumpet sometimes triggers artefacts; noise may cause ringing.
replaygain_sample_gain: -4.98 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC Nero Digital].wav
2R = E:\SUMMER TESTS 2005\HQ180\11_trumpet [MP3 LAME].wav
3R = E:\SUMMER TESTS 2005\HQ180\11_trumpet [vorbis aoTuV].wav
4L = E:\SUMMER TESTS 2005\HQ180\11_trumpet [MPC 1.15v].wav
5R = E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC faac].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC Nero Digital].wav
1L Rating: 3.5
1L Comment: Trumpet is slightly distorted. Not bad.
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\11_trumpet [MP3 LAME].wav
2R Rating: 4.5
2R Comment: Excellent encoding. Distortion is very low (I still wonder how I get this score during ABX phase...)
---------------------------------------
3R File: E:\SUMMER TESTS 2005\HQ180\11_trumpet [vorbis aoTuV].wav
3R Rating: 4.7
3R Comment: No distortion, not irregularities, but a subtle additionnal noise.
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\11_trumpet [MPC 1.15v].wav
4L Rating: 3.8
4L Comment: Trumpet is irregular, like an additionnal vibrato.
---------------------------------------
5R File: E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC faac].wav
5R Rating: 3.0
5R Comment: This one is the most immediately distorted. Trumpet has something coarse, and sound a bit false. Nothing bad, but audible and slightly annoying.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [MPC 1.15v].wav
9 out of 10, pval = 0.01
Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC faac].wav
10 out of 10, pval < 0.001
Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC Nero Digital].wav
9 out of 10, pval = 0.01
Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [MP3 LAME].wav
8 out of 10, pval = 0.054
Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [vorbis aoTuV].wav
8 out of 10, pval = 0.054


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [MPC 1.15v].wav
Playback Range: 01.245 to 03.052
12:25:01 AM p 1/1 pval = 0.5
12:25:06 AM p 2/2 pval = 0.25
12:25:14 AM p 3/3 pval = 0.125
12:25:18 AM p 4/4 pval = 0.062
12:25:21 AM p 5/5 pval = 0.031
12:25:24 AM p 6/6 pval = 0.015
12:25:27 AM p 7/7 pval = 0.0070
12:25:32 AM p 8/8 pval = 0.0030
12:25:35 AM f 8/9 pval = 0.019
12:25:38 AM p 9/10 pval = 0.01

Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC faac].wav
Playback Range: 01.245 to 03.052
12:26:04 AM p 1/1 pval = 0.5
12:26:07 AM p 2/2 pval = 0.25
12:26:10 AM p 3/3 pval = 0.125
12:26:13 AM p 4/4 pval = 0.062
12:26:16 AM p 5/5 pval = 0.031
12:26:19 AM p 6/6 pval = 0.015
12:26:21 AM p 7/7 pval = 0.0070
12:26:24 AM p 8/8 pval = 0.0030
12:26:27 AM p 9/9 pval = 0.0010
12:26:30 AM p 10/10 pval < 0.001

Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [AAC Nero Digital].wav
Playback Range: 01.245 to 03.052
12:21:10 AM p 1/1 pval = 0.5
12:21:20 AM p 2/2 pval = 0.25
12:21:30 AM p 3/3 pval = 0.125
12:21:34 AM p 4/4 pval = 0.062
12:21:38 AM p 5/5 pval = 0.031
12:21:50 AM p 6/6 pval = 0.015
12:21:54 AM p 7/7 pval = 0.0070
12:21:59 AM f 7/8 pval = 0.035
12:22:03 AM p 8/9 pval = 0.019
12:22:06 AM p 9/10 pval = 0.01

Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [MP3 LAME].wav
Playback Range: 01.245 to 03.052
12:22:30 AM p 1/1 pval = 0.5
12:22:33 AM f 1/2 pval = 0.75
12:22:36 AM f 1/3 pval = 0.875
12:22:42 AM p 2/4 pval = 0.687
12:22:48 AM p 3/5 pval = 0.5
12:22:53 AM p 4/6 pval = 0.343
12:22:58 AM p 5/7 pval = 0.226
12:23:03 AM p 6/8 pval = 0.144
12:23:09 AM p 7/9 pval = 0.089
12:23:14 AM p 8/10 pval = 0.054

Original vs E:\SUMMER TESTS 2005\HQ180\11_trumpet [vorbis aoTuV].wav
Playback Range: 01.245 to 03.052
12:23:45 AM p 1/1 pval = 0.5
12:23:51 AM p 2/2 pval = 0.25
12:23:55 AM p 3/3 pval = 0.125
12:24:00 AM p 4/4 pval = 0.062
12:24:05 AM f 4/5 pval = 0.187
12:24:12 AM p 5/6 pval = 0.109
12:24:16 AM f 5/7 pval = 0.226
12:24:32 AM p 6/8 pval = 0.144
12:24:37 AM p 7/9 pval = 0.089
12:24:43 AM p 8/10 pval = 0.054


The inconstant noise of the reference file is very disturbing (to be honest, I don't like this sample). I therefore ABXed everything in order to be sure I wasn't being betrayed by the weird reference sound.
faac is the worst and shows distortions (trumpet) that were immediately perceptible. Nero Digital and then MPC both present slight problems with the trumpet; MPC encodes the trumpet with some irregularities (a phenomenon already noticed in chorus/sample_03). LAME and aoTuV are really excellent. I heard a very subtle distortion with LAME during the ABC/HR phase, ABXed with 8/10. aoTuV has no distortion but adds a small amount of noise (also noticed with sample_07).
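
A short technical note on the pval column of these ABX logs: it is the one-sided binomial probability of getting at least that many correct answers by pure guessing (a 50/50 chance per trial). The small Python sketch below (my own helper, not part of ABC/HR for Java) reproduces the values quoted above, e.g. 8/10 = 0.054 and 9/10 = 0.01:

CODE

from math import comb

def abx_pval(correct: int, trials: int) -> float:
    """One-sided binomial tail: chance of getting at least `correct`
    answers right out of `trials` by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Reproduces the log values: 8/10 -> 0.0547, 9/10 -> 0.0107, 10/10 -> 0.00098
for correct, trials in [(8, 10), (9, 10), (10, 10)]:
    print(f"{correct}/{trials}: pval = {abx_pval(correct, trials):.4f}")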


This post has been edited by guruboolez: Dec 29 2005, 22:42
guruboolez
post Aug 21 2005, 19:35
Post #3





Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420



Sample 12: Liebestod
Short description: Soprano voice and full orchestra (modern instruments) – submitted by harashin.
Possible problems: same as sample_10. Violins (in orchestra) are often difficult to encode and may introduce another encoding difficulty.
replaygain_sample_gain: -3.57 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC faac].wav
2L = E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MPC 1.15v].wav
3L = E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MP3 LAME].wav
4L = E:\SUMMER TESTS 2005\HQ180\12_Liebestod [vorbis aoTuV].wav
5L = E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC Nero Digital].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC faac].wav
1L Rating: 3.5
1L Comment: Audible distortion on the voice; the strings are not totally transparent either.
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MPC 1.15v].wav
2L Rating: 4.0
2L Comment: Irregularities audible with both orchestra and soprano. ABXed during ~13.00 - ~18.00. Voice has something hollowed, not really pleasant, but this problem is confined to a short part of the sample.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC Nero Digital].wav
5L Rating: 3.0
5L Comment: Voice is distorted, like 1L. Most annoying are the violins (orchestra: 23.00-27.00), not really pleasant.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MPC 1.15v].wav
7 out of 8, pval = 0.035
Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MP3 LAME].wav
8 out of 16, pval = 0.598
Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC faac].wav
8 out of 8, pval = 0.0030
Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [vorbis aoTuV].wav
10 out of 16, pval = 0.227
Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC Nero Digital].wav
8 out of 8, pval = 0.0030


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MPC 1.15v].wav
Playback Range: 14.167 to 20.529
1:04:38 AM p 1/1 pval = 0.5
1:04:41 AM p 2/2 pval = 0.25
1:04:45 AM p 3/3 pval = 0.125
1:04:49 AM p 4/4 pval = 0.062
1:04:55 AM p 5/5 pval = 0.031
1:04:59 AM p 6/6 pval = 0.015
1:05:04 AM p 7/7 pval = 0.0070
1:05:08 AM f 7/8 pval = 0.035

Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [MP3 LAME].wav
Playback Range: 21.919 to 29.992
1:05:43 AM f 0/1 pval = 1.0
1:05:47 AM f 0/2 pval = 1.0
1:05:52 AM p 1/3 pval = 0.875
1:05:56 AM f 1/4 pval = 0.937
1:06:04 AM p 2/5 pval = 0.812
1:06:07 AM p 3/6 pval = 0.656
1:06:11 AM p 4/7 pval = 0.5
1:06:18 AM f 4/8 pval = 0.636
1:06:21 AM p 5/9 pval = 0.5
1:06:26 AM f 5/10 pval = 0.623
1:06:29 AM p 6/11 pval = 0.5
1:06:44 AM f 6/12 pval = 0.612
1:06:50 AM p 7/13 pval = 0.5
1:06:55 AM f 7/14 pval = 0.604
1:07:03 AM p 8/15 pval = 0.5
1:07:23 AM f 8/16 pval = 0.598

Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC faac].wav
Playback Range: 22.372 to 29.992
1:03:36 AM p 1/1 pval = 0.5
1:03:40 AM p 2/2 pval = 0.25
1:03:44 AM p 3/3 pval = 0.125
1:03:48 AM p 4/4 pval = 0.062
1:03:57 AM p 5/5 pval = 0.031
1:04:02 AM p 6/6 pval = 0.015
1:04:05 AM p 7/7 pval = 0.0070
1:04:08 AM p 8/8 pval = 0.0030

Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [vorbis aoTuV].wav
Playback Range: 21.919 to 29.992
1:07:35 AM f 0/1 pval = 1.0
1:07:41 AM p 1/2 pval = 0.75
1:07:47 AM f 1/3 pval = 0.875
Playback Range: 12.082 to 20.796
1:08:10 AM p 2/4 pval = 0.687
1:08:16 AM f 2/5 pval = 0.812
1:08:23 AM p 3/6 pval = 0.656
1:08:29 AM p 4/7 pval = 0.5
1:08:35 AM p 5/8 pval = 0.363
1:08:41 AM p 6/9 pval = 0.253
1:08:48 AM p 7/10 pval = 0.171
1:08:55 AM f 7/11 pval = 0.274
1:09:03 AM f 7/12 pval = 0.387
1:09:08 AM p 8/13 pval = 0.29
1:09:16 AM f 8/14 pval = 0.395
1:09:21 AM p 9/15 pval = 0.303
1:09:27 AM p 10/16 pval = 0.227

Original vs E:\SUMMER TESTS 2005\HQ180\12_Liebestod [AAC Nero Digital].wav
Playback Range: 17.161 to 22.186
1:09:55 AM p 1/1 pval = 0.5
1:10:00 AM p 2/2 pval = 0.25
1:10:05 AM p 3/3 pval = 0.125
1:10:10 AM p 4/4 pval = 0.062
1:10:15 AM p 5/5 pval = 0.031
1:10:19 AM p 6/6 pval = 0.015
1:10:22 AM p 7/7 pval = 0.0070
1:10:26 AM p 8/8 pval = 0.0030


Nero Digital is the worst and pays for distortions occurring in the orchestral passages¹. faac offers a similar performance but with fewer distortions on the violins. MPC is better but presents some irregularities, especially on the voice (see sample_10 for a similar problem). Note: last year I unmasked MPC during the ABC/HR phase but missed the ABX test. Has the encoding quality regressed, or has my hearing improved a bit?
Two files are transparent to my ears (a record for this test): LAME (8/16) and aoTuV (10/16). Note: there was a clear ringing audible with alpha 3 of LAME; now it's totally gone.

¹ I already noticed similar distortions on violins/chorus in the past with an older version of Nero AAC; the new 'fast' encoder is supposed to perform better. The same problem is audible in similar conditions with sample_18.



Sample 13: LadyMacbeth
Short description: full and hysterical orchestra (modern instruments) with a lot of percussion and cymbals – submitted by harashin.
Possible problems: pre-echo is likely; distortions on other instruments (especially cymbals and brass) may also occur.
replaygain_sample_gain: -5.27 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [vorbis aoTuV].wav
2R = E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [AAC faac].wav
3R = E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [AAC Nero Digital].wav
4L = E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [MPC 1.15v].wav
5L = E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [MP3 LAME].wav

---------------------------------------
General Comments:
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [vorbis aoTuV].wav
1R Rating: 4.2
1R Comment: Cymbals are slightly boosted by noise. Not unpleasant, but audible on direct comparison.
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [AAC faac].wav
2R Rating: 3.5
2R Comment: Cymbals are smeared.
Percussive instruments (11.00 - 16.00) are slightly hollowed.
---------------------------------------
3R File: E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [AAC Nero Digital].wav
3R Rating: 3.0
3R Comment: Cymbals are smeared and also distorted.
Pre-echo on percussive part.
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [MPC 1.15v].wav
4L Rating: 4.0
4L Comment: Cymbals are distorted (they are frail and sound a bit false, like after a noise reduction process).
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [MP3 LAME].wav
5L Rating: 3.0
5L Comment: Cymbals are distorted. Annoying.
Slight smearing on percussive segment.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [MPC 1.15v].wav
8 out of 8, pval = 0.0030
Original vs E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [vorbis aoTuV].wav
10 out of 12, pval = 0.019


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [MPC 1.15v].wav
Playback Range: 04.620 to 08.096
1:32:43 AM p 1/1 pval = 0.5
1:32:50 AM p 2/2 pval = 0.25
1:32:56 AM p 3/3 pval = 0.125
1:33:05 AM p 4/4 pval = 0.062
1:33:10 AM p 5/5 pval = 0.031
1:33:13 AM p 6/6 pval = 0.015
1:33:16 AM p 7/7 pval = 0.0070
1:33:20 AM p 8/8 pval = 0.0030

Original vs E:\SUMMER TESTS 2005\HQ180\13_LadyMacbeth [vorbis aoTuV].wav
Playback Range: 04.620 to 08.096
1:33:54 AM p 1/1 pval = 0.5
1:34:05 AM f 1/2 pval = 0.75
1:34:08 AM p 2/3 pval = 0.5
1:34:11 AM p 3/4 pval = 0.312
1:34:14 AM p 4/5 pval = 0.187
1:34:17 AM p 5/6 pval = 0.109
1:34:21 AM p 6/7 pval = 0.062
1:34:28 AM p 7/8 pval = 0.035
1:34:34 AM p 8/9 pval = 0.019
1:34:38 AM p 9/10 pval = 0.01
1:34:43 AM p 10/11 pval = 0.0050
1:34:46 AM f 10/12 pval = 0.019


LAME and Nero Digital both produce the worst sound: cymbals are distorted and smearing is also audible. The problems occurring with faac are less disturbing, despite an unusual form of artefact (a hollowed sound on the drums). MPC and aoTuV are both very good. Musepack distorts the cymbals, which again don't sound true (see sample_01 and, to a lesser extent, sample_08), whereas Vorbis adds noise (see sample_07 and sample_11), which is less unpleasant to my ears and also harder to detect. ABX: 8/8 for MPC and 10/12 for aoTuV.


Sample 14: Vivaldi RV93
Short description: small concertino performed on period instruments.
Possible problems: artefacts on the mandolin; softened or distorted harpsichord in the continuo.
replaygain_sample_gain: -1.32 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1R = E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [MP3 LAME].wav
2R = E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [AAC Nero Digital].wav
3L = E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [AAC faac].wav
4R = E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [vorbis aoTuV].wav
5R = E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [MPC 1.15v].wav

---------------------------------------
General Comments:
---------------------------------------
1R File: E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [MP3 LAME].wav
1R Rating: 3.0
1R Comment: Harpsichord is distorted, and slightly smeared (rather imprecise).
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [AAC Nero Digital].wav
2R Rating: 3.5
2R Comment: Harpsichord is not perfect; better than previous file (more precise).
Violin suffers from distortions too [it's the only encoding in this situation].
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [AAC faac].wav
3L Rating: 3.2
3L Comment: Slightly better than 1R, but worse than 2R (same issues).
---------------------------------------
4R File: E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [vorbis aoTuV].wav
4R Rating: 4.0
4R Comment: Violins are not transparent: they sound a bit false... can't define the problem: it's like a mix between thinness and fatness, noise reduction and additional noise. Weird... But it's very limited.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [REFERENCE].wav
5L Rating: 4.5
5L Comment: Very subtle distortion on string [ABXed with pval = 0.054...]
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [vorbis aoTuV].wav
9 out of 10, pval = 0.01
Original vs E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [MPC 1.15v].wav
8 out of 10, pval = 0.054


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [vorbis aoTuV].wav
Playback Range: 01.134 to 04.214
1:58:25 AM p 1/1 pval = 0.5
1:58:30 AM p 2/2 pval = 0.25
1:58:34 AM p 3/3 pval = 0.125
1:58:38 AM p 4/4 pval = 0.062
1:58:47 AM f 4/5 pval = 0.187
1:58:52 AM p 5/6 pval = 0.109
1:58:55 AM p 6/7 pval = 0.062
1:59:02 AM p 7/8 pval = 0.035
1:59:07 AM p 8/9 pval = 0.019
1:59:13 AM p 9/10 pval = 0.01

Original vs E:\SUMMER TESTS 2005\HQ180\14_Vivaldi RV93 [MPC 1.15v].wav
Playback Range: 01.134 to 04.214
1:59:40 AM p 1/1 pval = 0.5
2:00:01 AM f 1/2 pval = 0.75
2:00:09 AM p 2/3 pval = 0.5
2:00:14 AM p 3/4 pval = 0.312
2:00:24 AM p 4/5 pval = 0.187
2:00:27 AM p 5/6 pval = 0.109
2:00:34 AM p 6/7 pval = 0.062
2:00:38 AM p 7/8 pval = 0.035
2:00:43 AM f 7/9 pval = 0.089
2:00:46 AM p 8/10 pval = 0.054


LAME is the worst (the problems are some distortions and a lack of precision), followed by faac and then Nero Digital, which is –by far– the most efficient encoding¹. aoTuV has something strange: a kind of mix between thinness and fatness. The fatness issue also occurred with sample_04. MPC was transparent: I rated the reference and missed the ABX test by a hair's breadth (pval = 0.054).
¹ 168 kbps, against 201 kbps (Vorbis) and 213 (MPC) which sound better, and 193 (faac) and 187 (MP3) which sound worse.
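
For anyone wanting to turn these per-sample bitrates into concrete numbers, the conversion is plain arithmetic (kilobits per second ÷ 8 = kilobytes per second). A tiny Python sketch using the figures from footnote ¹ and an arbitrary 30-second clip length (the helper name is mine and the duration is only illustrative, not the real sample length):

CODE

# Rough size of a clip encoded at a given average bitrate.
# The 30 s length is arbitrary and only used for illustration.
def clip_size_kb(kbps: float, seconds: float) -> float:
    return kbps * seconds / 8.0   # kilobits -> kilobytes

for codec, kbps in [("Nero Digital", 168), ("Vorbis", 201), ("MPC", 213),
                    ("faac", 193), ("LAME", 187)]:
    print(f"{codec}: ~{clip_size_kb(kbps, 30):.0f} kB for 30 seconds")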


Sample 15: Troisème Ballet
Short description: Concerto for bagpipe played on period instruments, with some other rare and rustic instruments (like the hurdy-gurdy).
- See this page for more information about these old instruments.

Possible problems: ringing on the bagpipe, lack of detail on the hurdy-gurdy.
replaygain_sample_gain: +3.21 dB (indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MP3 LAME].wav
2L = E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [AAC faac].wav
3L = E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [vorbis aoTuV].wav
4L = E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [AAC Nero Digital].wav
5L = E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MPC 1.15v].wav

---------------------------------------
General Comments:
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [AAC faac].wav
2L Rating: 3.0
2L Comment: bagpipe sounds false, with annoying distortions. Slight warbling problems also (faac?).
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [AAC Nero Digital].wav
4L Rating: 4.5
4L Comment: Distorted.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MPC 1.15v].wav
5L Rating: 4.5
5L Comment: Bagpipe is distorted on tonal notes (like an echo within the sound).
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MPC 1.15v].wav
10 out of 12, pval = 0.019
Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MP3 LAME].wav
10 out of 16, pval = 0.227
Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [AAC Nero Digital].wav
9 out of 12, pval = 0.072
Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [vorbis aoTuV].wav
5 out of 12, pval = 0.806


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MPC 1.15v].wav
Playback Range: 00.000 to 04.404
2:31:23 AM p 1/1 pval = 0.5
2:31:29 AM p 2/2 pval = 0.25
2:31:35 AM p 3/3 pval = 0.125
2:31:41 AM p 4/4 pval = 0.062
2:31:53 AM p 5/5 pval = 0.031
2:32:00 AM p 6/6 pval = 0.015
2:32:05 AM p 7/7 pval = 0.0070
2:32:09 AM f 7/8 pval = 0.035
2:32:14 AM p 8/9 pval = 0.019
2:32:21 AM p 9/10 pval = 0.01
2:32:28 AM f 9/11 pval = 0.032
2:32:36 AM p 10/12 pval = 0.019

Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [MP3 LAME].wav
Playback Range: 00.000 to 15.253
2:25:48 AM p 1/1 pval = 0.5
2:25:54 AM p 2/2 pval = 0.25
2:26:00 AM p 3/3 pval = 0.125
2:26:12 AM f 3/4 pval = 0.312
Playback Range: 09.434 to 13.404
2:26:35 AM f 3/5 pval = 0.5
2:26:40 AM p 4/6 pval = 0.343
2:26:44 AM p 5/7 pval = 0.226
2:26:49 AM f 5/8 pval = 0.363
2:26:53 AM p 6/9 pval = 0.253
2:26:58 AM p 7/10 pval = 0.171
2:27:01 AM p 8/11 pval = 0.113
2:27:09 AM f 8/12 pval = 0.193
2:27:12 AM f 8/13 pval = 0.29
2:27:17 AM p 9/14 pval = 0.211
2:27:21 AM f 9/15 pval = 0.303
2:27:25 AM p 10/16 pval = 0.227

Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [AAC Nero Digital].wav
Playback Range: 00.000 to 04.404
2:30:21 AM p 1/1 pval = 0.5
2:30:27 AM f 1/2 pval = 0.75
2:30:33 AM p 2/3 pval = 0.5
2:30:36 AM p 3/4 pval = 0.312
2:30:40 AM p 4/5 pval = 0.187
2:30:43 AM p 5/6 pval = 0.109
2:30:49 AM f 5/7 pval = 0.226
2:30:53 AM p 6/8 pval = 0.144
2:30:57 AM p 7/9 pval = 0.089
2:31:00 AM p 8/10 pval = 0.054
2:31:06 AM f 8/11 pval = 0.113
2:31:10 AM p 9/12 pval = 0.072

Original vs E:\SUMMER TESTS 2005\HQ180\15_Troisème Ballet [vorbis aoTuV].wav
Playback Range: 00.000 to 04.404
2:28:51 AM p 1/1 pval = 0.5
2:28:57 AM p 2/2 pval = 0.25
2:29:04 AM p 3/3 pval = 0.125
2:29:09 AM f 3/4 pval = 0.312
2:29:15 AM p 4/5 pval = 0.187
2:29:20 AM p 5/6 pval = 0.109
2:29:25 AM f 5/7 pval = 0.226
2:29:29 AM f 5/8 pval = 0.363
2:29:34 AM f 5/9 pval = 0.5
2:29:37 AM f 5/10 pval = 0.623
2:29:42 AM f 5/11 pval = 0.725
2:29:45 AM f 5/12 pval = 0.806


Most encoders are able to encode this sample perfectly or with very limited problems. The only exception is faac, which distorts the bagpipe and also adds a slight warbling, typical of this encoder (see sample_03 for a similar artefact)¹. Nero Digital and MPC are both² close to perfection to my ears; they only fail (very slightly) on the tonal notes of the bagpipe (10/12 for MPC³ and 9/12 for Nero Digital). LAME and aoTuV are both perfect to my ears (10/16 & 5/12).
¹ It might be interesting to link this poor performance to the consumed bitrate: faac used 224 kbps for this short sample (the highest figure for faac across the 18 samples, and also higher than the other competitors on this sample).
² But Nero Digital is by far the most efficient, using 154 kbps against 213 kbps for Musepack.
³ As with sample_12, I succeeded in ABXing MPC 1.15v whereas 1.14 was transparent last year: did my ears become more sensitive, or has 1.15 regressed in some areas?


Sample 16: basson
Short description: solo bassoon with light instrumental accompaniment.
Possible problems: micro-attacks at one point may be softened or distorted.
replaygain_sample_gain: +0.68 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\16_basson [AAC Nero Digital].wav
2R = E:\SUMMER TESTS 2005\HQ180\16_basson [MPC 1.15v].wav
3L = E:\SUMMER TESTS 2005\HQ180\16_basson [MP3 LAME].wav
4L = E:\SUMMER TESTS 2005\HQ180\16_basson [vorbis aoTuV].wav
5L = E:\SUMMER TESTS 2005\HQ180\16_basson [AAC faac].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\16_basson [AAC Nero Digital].wav
1L Rating: 4.0
1L Comment: Excellent, but audible distortions during 13.0 - 17.0 range.
---------------------------------------
2R File: E:\SUMMER TESTS 2005\HQ180\16_basson [MPC 1.15v].wav
2R Rating: 4.5
2R Comment: Tiny difference during 13.0 - 17.0
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\16_basson [MP3 LAME].wav
3L Rating: 4.2
3L Comment: Excellent, but again, small distortions during 13.0 - 17.0 (less than 1L and slightly more than 2R)
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\16_basson [vorbis aoTuV].wav
4L Rating: 3.5
4L Comment: Sound is excellent, but becomes "fat" during the problematic 13-17 range.
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\16_basson [AAC faac].wav
5L Rating: 2.5
5L Comment: The most distorted. Weak performance during 13.04 - 17.04 segment
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\16_basson [MP3 LAME].wav
8 out of 10, pval = 0.054
Original vs E:\SUMMER TESTS 2005\HQ180\16_basson [AAC Nero Digital].wav
9 out of 10, pval = 0.01
Original vs E:\SUMMER TESTS 2005\HQ180\16_basson [MPC 1.15v].wav
10 out of 12, pval = 0.019
E:\SUMMER TESTS 2005\HQ180\16_basson [MPC 1.15v].wav vs E:\SUMMER TESTS 2005\HQ180\16_basson [MP3 LAME].wav
11 out of 12, pval = 0.0030


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\16_basson [MP3 LAME].wav
Playback Range: 13.002 to 17.037
2:50:57 AM p 1/1 pval = 0.5
2:51:02 AM p 2/2 pval = 0.25
2:51:08 AM p 3/3 pval = 0.125
2:51:16 AM f 3/4 pval = 0.312
2:51:22 AM p 4/5 pval = 0.187
2:51:32 AM p 5/6 pval = 0.109
2:51:38 AM p 6/7 pval = 0.062
2:51:46 AM p 7/8 pval = 0.035
2:51:55 AM p 8/9 pval = 0.019
2:52:03 AM f 8/10 pval = 0.054

Original vs E:\SUMMER TESTS 2005\HQ180\16_basson [AAC Nero Digital].wav
Playback Range: 13.002 to 17.037
2:48:22 AM p 1/1 pval = 0.5
2:48:26 AM p 2/2 pval = 0.25
2:48:29 AM p 3/3 pval = 0.125
2:48:32 AM p 4/4 pval = 0.062
2:48:36 AM p 5/5 pval = 0.031
2:48:39 AM p 6/6 pval = 0.015
2:48:43 AM p 7/7 pval = 0.0070
2:48:47 AM p 8/8 pval = 0.0030
2:48:51 AM f 8/9 pval = 0.019
2:48:55 AM p 9/10 pval = 0.01

Original vs E:\SUMMER TESTS 2005\HQ180\16_basson [MPC 1.15v].wav
Playback Range: 13.002 to 17.037
2:49:17 AM p 1/1 pval = 0.5
2:49:30 AM p 2/2 pval = 0.25
2:49:35 AM f 2/3 pval = 0.5
2:49:42 AM p 3/4 pval = 0.312
2:49:50 AM f 3/5 pval = 0.5
2:49:57 AM p 4/6 pval = 0.343
2:50:03 AM p 5/7 pval = 0.226
2:50:08 AM p 6/8 pval = 0.144
2:50:15 AM p 7/9 pval = 0.089
2:50:23 AM p 8/10 pval = 0.054
2:50:27 AM p 9/11 pval = 0.032
2:50:34 AM p 10/12 pval = 0.019

E:\SUMMER TESTS 2005\HQ180\16_basson [MPC 1.15v].wav vs E:\SUMMER TESTS 2005\HQ180\16_basson [MP3 LAME].wav
Playback Range: 12.921 to 15.443
2:53:48 AM p 1/1 pval = 0.5
2:53:55 AM p 2/2 pval = 0.25
2:53:59 AM p 3/3 pval = 0.125
2:54:03 AM p 4/4 pval = 0.062
2:54:07 AM p 5/5 pval = 0.031
2:54:11 AM p 6/6 pval = 0.015
2:54:15 AM p 7/7 pval = 0.0070
2:54:19 AM p 8/8 pval = 0.0030
2:54:24 AM f 8/9 pval = 0.019
2:54:28 AM p 9/10 pval = 0.01
2:54:44 AM p 10/11 pval = 0.0050
2:54:48 AM p 11/12 pval = 0.0030


There was no micro-attack issue (the attacks are weaker than those produced by the trombone in sample_09). But encoding problems occur elsewhere –and for all encoders–: during one precise range where the bassoon plays at higher volume. It may be interesting to note that all encoders also allocate a lower bitrate than average here (150…160 kbps, and 175 kbps for MPC).
faac presents the most distorted rendering. It is followed by aoTuV, which presents this time (I should say once again… ¹) a fat texture in the described range; this is slightly irritating. The three remaining encoders present non-annoying distortions: Nero Digital, LAME (slightly better) and MPC (better still²).

¹ Same problems on sample_04 and sample_14. Last year, MEGAMIX also obtained an unusually low note on this sample (and on exactly the same range).
² A direct ABX comparison of LAME against MPC: 11/12.



Sample 17: Seminarist
Short description: baritone voice accompanied by piano; the sample contains a lot of sibilant consonants.
Possible problems: there's a big risk of distortion on the sibilants.
replaygain_sample_gain: -0.45 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\17_seminarist [MPC 1.15v].wav
2L = E:\SUMMER TESTS 2005\HQ180\17_seminarist [AAC faac].wav
3R = E:\SUMMER TESTS 2005\HQ180\17_seminarist [vorbis aoTuV].wav
4L = E:\SUMMER TESTS 2005\HQ180\17_seminarist [MP3 LAME].wav
5L = E:\SUMMER TESTS 2005\HQ180\17_seminarist [AAC Nero Digital].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\17_seminarist [MPC 1.15v].wav
1L Rating: 3.2
1L Comment: Sibilant consonants are unnatural, "slow".
Issues on voice: during 12.3 - 16.0 the baritone voice has something hard to define: false, hollowed, fluctuating?
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\17_seminarist [AAC faac].wav
2L Rating: 3.0
2L Comment: Voice is distorted.
---------------------------------------
3R File: E:\SUMMER TESTS 2005\HQ180\17_seminarist [vorbis aoTuV].wav
3R Rating: 4.0
3R Comment: Similar to 1L for sibilants
---------------------------------------
4L File: E:\SUMMER TESTS 2005\HQ180\17_seminarist [MP3 LAME].wav
4L Rating: 3.7
4L Comment: Sibilants are "slow", a bit stretched like chewing gum. Not shocking (the reference file is aggressive, in my opinion...).
Distorted vocal part between 23.66 - 25.00
---------------------------------------
5L File: E:\SUMMER TESTS 2005\HQ180\17_seminarist [AAC Nero Digital].wav
5L Rating: 3.5
5L Comment: Similar to 1L and 3R for sibilants, but a bit more distorted (clear on 9.00 - 10.00)
---------------------------------------

ABX Results:


faac is not really good and is betrayed by distortions on the voice. MPC presents distortions on the sibilants but also on the tonal part of the voice; Musepack is by now used to having problems with voice¹. Nero Digital is slightly better and is followed by LAME; both only show problems on sibilants, moderately annoying (the reference is, as I already wrote during the test, very aggressive). aoTuV is, together with MPC, the one that shows the least problems on the sibilants, but with perfect quality on the tonal part of the voice. It's nice progress since last year².
¹ See the comments for sample_03 (chorus), sample_10 (mezzo-soprano) and sample_12 (soprano). Now it's the turn of the male voice to trigger distortions.
² Vorbis MEGAMIX obtained its worst note (2.3) and was clearly worse than LAME 3.97a3; Vorbis aoTuV beta 4 now corrects all the problems that were previously audible on voice.



Sample 18: Butterfly Lovers
Short description: violin concerto. The violin plays alone, then is intertwined with the orchestra.
Possible problems: as with Liebestod, orchestral strings may be risky for some encoders. Blending between the solo violin and the orchestral violins is possible.
replaygain_sample_gain: +3.34 dB
(indicative only: files were tested at their original volume)


CODE

ABC/HR for Java, Version 0.5a, 18 août 2005
Testname:

Tester: guruboolez

1L = E:\SUMMER TESTS 2005\HQ180\18_Butterfly [AAC faac].wav
2L = E:\SUMMER TESTS 2005\HQ180\18_Butterfly [AAC Nero Digital].wav
3L = E:\SUMMER TESTS 2005\HQ180\18_Butterfly [MPC 1.15v].wav
4R = E:\SUMMER TESTS 2005\HQ180\18_Butterfly [MP3 LAME].wav
5L = E:\SUMMER TESTS 2005\HQ180\18_Butterfly [vorbis aoTuV].wav

---------------------------------------
General Comments:
---------------------------------------
1L File: E:\SUMMER TESTS 2005\HQ180\18_Butterfly [AAC faac].wav
1L Rating: 4.5
1L Comment: Maybe... I guess... I'm not sure... Go to ABX module.
After ABX: the minor "thing" really exists. Strings are a bit different.
---------------------------------------
2L File: E:\SUMMER TESTS 2005\HQ180\18_Butterfly [AAC Nero Digital].wav
2L Rating: 3.5
2L Comment: Violins are not transparent; there's a form of 'acidity' [I see it like this] on these instruments (I mean orchestra, not solo violin).
---------------------------------------
3L File: E:\SUMMER TESTS 2005\HQ180\18_Butterfly [MPC 1.15v].wav
3L Rating: 3.5
3L Comment: Orchestra sounds fat. It's clear in the 8.0 - 12.0 range. I'd bet it's Vorbis and its usual coarseness issue.
---------------------------------------
4R File: E:\SUMMER TESTS 2005\HQ180\18_Butterfly [MP3 LAME].wav
4R Rating: 4.8
4R Comment: Minor difference I can 'feel' - I need to ABX them.
After ABX: difference was very hard to catch. Really good encoding, though it's not *fully* transparent.
---------------------------------------

ABX Results:
Original vs E:\SUMMER TESTS 2005\HQ180\18_Butterfly [MP3 LAME].wav
13 out of 16, pval = 0.01
Original vs E:\SUMMER TESTS 2005\HQ180\18_Butterfly [AAC faac].wav
12 out of 12, pval < 0.001
Original vs E:\SUMMER TESTS 2005\HQ180\18_Butterfly [vorbis aoTuV].wav
9 out of 16, pval = 0.401


---- Detailed ABX results ----
Original vs E:\SUMMER TESTS 2005\HQ180\18_Butterfly [MP3 LAME].wav
Playback Range: 09.518 to 16.630
4:09:27 AM p 1/1 pval = 0.5
4:09:33 AM p 2/2 pval = 0.25
4:09:41 AM f 2/3 pval = 0.5
4:09:46 AM p 3/4 pval = 0.312
4:09:54 AM f 3/5 pval = 0.5
4:09:59 AM f 3/6 pval = 0.656
4:10:05 AM p 4/7 pval = 0.5
4:10:10 AM p 5/8 pval = 0.363
4:10:17 AM p 6/9 pval = 0.253
4:10:20 AM p 7/10 pval = 0.171
4:10:34 AM p 8/11 pval = 0.113
4:10:43 AM p 9/12 pval = 0.072
4:10:56 AM p 10/13 pval = 0.046
4:11:02 AM p 11/14 pval = 0.028
4:11:15 AM p 12/15 pval = 0.017
4:11:21 AM p 13/16 pval = 0.01

Original vs E:\SUMMER TESTS 2005\HQ180\18_Butterfly [AAC faac].wav
Playback Range: 03.850 to 13.903
4:06:54 AM p 1/1 pval = 0.5
4:07:06 AM p 2/2 pval = 0.25
4:07:16 AM p 3/3 pval = 0.125
4:07:23 AM p 4/4 pval = 0.062
4:07:30 AM p 5/5 pval = 0.031
4:07:37 AM p 6/6 pval = 0.015
4:07:43 AM p 7/7 pval = 0.0070
4:07:48 AM p 8/8 pval = 0.0030
4:08:01 AM p 9/9 pval = 0.0010
Playback Range: 17.592 to 27.699
4:08:27 AM p 10/10 pval < 0.001
4:08:35 AM p 11/11 pval < 0.001
4:08:40 AM p 12/12 pval < 0.001

Original vs E:\SUMMER TESTS 2005\HQ180\18_Butterfly [vorbis aoTuV].wav
Playback Range: 09.518 to 16.630
4:11:43 AM f 0/1 pval = 1.0
4:11:49 AM f 0/2 pval = 1.0
4:11:56 AM f 0/3 pval = 1.0
4:12:02 AM f 0/4 pval = 1.0
4:12:08 AM p 1/5 pval = 0.968
4:12:11 AM p 2/6 pval = 0.89
4:12:18 AM p 3/7 pval = 0.773
4:12:22 AM p 4/8 pval = 0.636
4:12:30 AM f 4/9 pval = 0.746
4:12:34 AM p 5/10 pval = 0.623
4:12:44 AM p 6/11 pval = 0.5
4:12:48 AM p 7/12 pval = 0.387
4:12:51 AM f 7/13 pval = 0.5
4:13:00 AM p 8/14 pval = 0.395
4:13:08 AM f 8/15 pval = 0.5
4:13:17 AM p 9/16 pval = 0.401


Nero Digital produces an irritating sound. It's a consequence of encoding problems with the violins, already heard and commented on for sample_12. MPC offers similar quality: it sounded fat and coarse, and I was ready to bet that this encoding corresponded to Vorbis, the former specialist of this typical problem… faac is much better in comparison. I had nothing but a "warm fuzzy feeling" during the ABC/HR phase, and it was clarified (and verified!) by ABX (12/12): minor distortions on the strings. The same goes for LAME: just a feeling during ABC/HR, confirmed by a positive ABX comparison. But the issue is so small that I really can't describe it, nor call it a "distortion". Nice progress since last year¹. aoTuV is fully transparent to my ears again².
¹ 3.97a3 had a clear ringing on strings, totally solved by alpha 11.
² Again… because Vorbis was also transparent with MEGAMIX last year. It's interesting to note that it was the only transparent Vorbis encoding in last year's listening test. This year, Vorbis aoTuV beta 4 ends with six transparent encodings.

This post has been edited by guruboolez: Aug 21 2005, 19:51
guruboolez
post Aug 21 2005, 19:51
Post #4





Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420



VI. Statistical analysis and conclusions





It may be worth examining the results from two standpoints: the results by themselves, and a comparison with last year's results.


Vorbis: Vorbis is now –thanks to Aoyumi– an excellent audio format for 180 kbps encodings (and classical music). It has apparently no serious flaws (the lowest note was 3.5), only minor problems audible from time to time: additional noise, fatness, or softened details. One third of the tested samples are transparent to my ears with aoTuV encodings; my own limits are close to being reached. Vorbis also takes first place 14 times (out of 18). This amazing performance can be compared to MPC, which is transparent only once and takes first place four times (out of 18). Statistically, aoTuV is still tied with MPC on these samples, but the ANOVA and Friedman p-values indicate that aoTuV is the winner with a high probability: 92% (ANOVA) or 93% (Friedman). Bootstrapped results are nevertheless less favourable (pval = 0.264) and seem to indicate that additional samples wouldn't necessarily increase the gap (correct me if I'm wrong).
Anyway, Vorbis is now excellent.


Musepack Audio: MPC is still impressive. Like Vorbis aoTuV, MPC is free of major problems (the most irritating one occurs with the chorus/sample_03). The most constant one is something I described as 'instability', 'fluctuating sound' or 'ringing', which especially affects voice (all samples involving voice are concerned). This problem seems to have increased with time (see this comparison).
As said before, MPC isn't second: its performance is statistically tied with Vorbis and with LAME.


MPEG-1 Layer 3: LAME MP3 appears to be a very good encoding solution for classical music. But I must qualify this global statement: the performance isn't homogeneous. Contrary to our two 'winners', LAME sometimes suffers from encoding artefacts, which are by definition more annoying than slight and constant distortions. Pre-echo on percussion, smeared cymbals, tremolo on harpsichord: they're irritating.
LAME is tied with MPC for classical, but is clearly inferior to aoTuV (pval < 0.01%).


faac AAC: faac's performance is less enjoyable than the competitors', but it doesn't perform poorly. All samples (except one) are distorted: instruments or voices don't ring true. There are some artefacts too: warbling and pre-echo.
Statistically, faac could be considered inferior to LAME (confidence = 98% for ANOVA, 'only' 94% for Friedman and 92% for bootstrap).


Nero Digital AAC: Nero Digital –which, I recall, doesn't really compete– performs well. Its overall performance is on average better (though not statistically so) than faac, despite a lower bitrate. I often noted in the previous posts the high efficiency of this encoder (it manages to get notes comparable to the other encoders with a handicap of 40…50 kbps!). Obviously AAC has great potential! Even greater if we keep in mind that the tested encoder has audible problems with classical music (it was clear with samples 05, 12 and 18) and that Nero Digital should quickly (?) offer a better encoder by default, especially for classical. Too bad that Nero Digital's VBR mode couldn't be scaled to 180 kbps in order to make the comparison fairer.
To note: Nero AAC wins hands down a sad trophy: the gold medal for the worst artefact (micro-attacks on the trombones in sample 09). dry.gif
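
For readers who would like to run this kind of statistical comparison on their own ratings, here is a minimal Python sketch using SciPy. The ratings matrix is only illustrative (three samples, transparent files counted as 5.0): it is not the full data set, and the simple bootstrap shown is not necessarily the exact procedure behind the numbers quoted above.

CODE

import numpy as np
from scipy.stats import friedmanchisquare

# Illustrative ratings: rows = samples, columns = encoders
# (Vorbis, MPC, LAME, Nero, faac); transparent files counted as 5.0.
ratings = np.array([
    [4.7, 3.8, 4.5, 3.5, 3.0],   # sample 11
    [5.0, 4.0, 5.0, 3.0, 3.5],   # sample 12
    [4.2, 4.0, 3.0, 3.0, 3.5],   # sample 13
])

# Friedman test: do the encoders receive systematically different ranks?
stat, pval = friedmanchisquare(*ratings.T)
print(f"Friedman chi2 = {stat:.2f}, pval = {pval:.3f}")

# A simple bootstrap of the mean rating difference (Vorbis minus MPC)
rng = np.random.default_rng(0)
diff = ratings[:, 0] - ratings[:, 1]
boot_means = [rng.choice(diff, size=len(diff), replace=True).mean()
              for _ in range(10_000)]
print("bootstrap P(mean difference <= 0) =", float(np.mean(np.array(boot_means) <= 0)))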



Now we can compare the evolution over one year (between MP3, MPC and Vorbis only).


Musepack Audio: From an evolution standpoint, MPC is the clear loser: it has lost the quality crown, stolen by Vorbis. Last year MPC ended the test with an uncontestable first place; now the format is tied with Vorbis (which is better on average) and LAME (worse on average). Out of 18 samples, MPC was ranked first 15 times in 2004; now it's only four times! Musepack has also lost the efficiency trophy: with classical at least, its bitrate is now higher than LAME's and Vorbis'. I recall that last year MPC finished in first place with 10 kbps less than MEGAMIX and even 20 kbps less than LAME 3.97 alpha 3.
As a consequence of the increased bitrate and stagnant quality, I would say that MPC is losing its former attractiveness (for classical music). It's not really surprising considering the slow evolution of the format in a world of constant progress. The other formats have simply caught up.


Vorbis: Vorbis is now impressive. Last year a constant noise boost or coarseness spoiled the performance of this format; I was surprised to hear those problems at the –q6,00 setting, which was supposed to be free of them thanks to lossless coupling. Aoyumi has apparently identified the precise cause of this problem and worked to solve it. Not entirely, though: some remaining traces are still audible with a few samples, but the intensity is now really faint (at least on the tested samples). Other artefacts are also corrected: micro-attacks on Orion_II (sample 09) are now much less dusty (aoTuV performed even better than any other tested format at this bitrate!). The performance is remarkable, and the slight gain in bitrate consumption is the icing on the cake. Just a question: does aoTuV mean Tuned for Victory? Or Tuned by Visitors? From what planet does Aoyumi come? wink.gif
By the way, I wouldn't say anymore that Vorbis is not mature enough. At least not for classical, which had appeared to be a weak point for this format smile.gif


LAME 3.97: LAME's vitality defies common sense. The format is supposed to have reached maturity years ago and therefore to stagnate. Yet the tested preset is not only better, it is also faster (thanks to --vbr-new) and more efficient (-11 kbps!). The progress is significant. To check it precisely, I re-encoded all the reference files with alpha 3 -V2 and compared them to alpha 11 -V2 --vbr-new. Indeed, the obvious problems are solved: the audible ringing in the orchestra (sample_18) has totally disappeared, and the weird distortion on the organ (sample_05) is truly lowered… 2005 seems to be an exceptional vintage for LAME, comparable, I would say, to the release of LAME 3.90 in December 2001.




To finish, I would recall (for the nth time) that all results, analyses and tested settings are valid 1/ for classical music, 2/ for these 18 samples and 3/ for my sensitivity. Results may differ with other musical genres (MP3 would probably have more annoying issues with sharp electronic music, I suppose; the ~175 kbps presets and faac probably won't land at 180 kbps with metal...), and it would be nice to see similar checkups in order to get a modern idea of the performance of AAC, MP3, MPC and Vorbis at this transparent (or rather near-transparent) setting smile.gif


P.S. ABX logs are available here.
P.S.#2: I made the 18 samples available last year, but I can't currently upload them (dial-up, and I don't have 30 MB free yet).


Thanks for reading and commenting smile.gif

This post has been edited by guruboolez: Dec 29 2005, 22:44
JeanLuc
post Aug 21 2005, 19:53
Post #5





Group: Members
Posts: 1311
Joined: 4-June 02
From: Cologne, Germany
Member No.: 2213



Jesus Christ ... now that's one hell of a listening test.

Great work Guru ... as always.


--------------------
The name was Plex The Ripper, not Jack The Ripper
guruboolez
post Aug 21 2005, 19:55
Post #6





Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420



Thanks:)

A few words to say that I had to split the messages dedicated to the "comments" in order to display the full set.
SirGrey
post Aug 21 2005, 20:13
Post #7





Group: Members
Posts: 241
Joined: 8-February 04
Member No.: 11863



Wow ! Great work.
Very interesting...
Especially LAME's progress on your beloved classical music smile.gif
skelly831
post Aug 21 2005, 20:25
Post #8





Group: Members (Donating)
Posts: 782
Joined: 11-April 05
From: México
Member No.: 21361



AMAZING Guru!

This is the kind of test that makes people think about their choices of format.
I've always believed in MPC, and LAME's position was expected, I think, especially with the latest alpha being so nicely improved, but the results for AAC are something to think about.

Thanks a lot Guru!


--------------------
we was young an' full of beans
HbG
post Aug 21 2005, 20:40
Post #9





Group: Members
Posts: 289
Joined: 12-May 03
From: The Hague
Member No.: 6555



You're invaluable to this community!

And, i start liking my signature more and more smile.gif


--------------------
Veni Vidi Vorbis.
rjamorim
post Aug 21 2005, 21:10
Post #10


Rarewares admin


Group: Members
Posts: 7515
Joined: 30-September 01
From: Brazil
Member No.: 81



Awesome! Thank-you very much, Bin Boolez*!


* Internal joke, just ignore it


--------------------
Get up-to-date binaries of Lame, AAC, Vorbis and much more at RareWares:
http://www.rarewares.org
bug80
post Aug 21 2005, 21:19
Post #11





Group: Members
Posts: 403
Joined: 23-January 05
From: The Netherlands
Member No.: 19254



I bow, guruboolez. smile.gif Great test!

Now if only my iRiver flash player had better Ogg Vorbis support..
Yaztromo
post Aug 21 2005, 21:31
Post #12





Group: Members
Posts: 236
Joined: 28-July 03
From: England, UK
Member No.: 8031



Thanks Guru. It's more than obvious you've dedicated a lot of time to this test.

I don't think anybody had seriously thought Vorbis had got THAT good. The results are quite amazing.
ff123
post Aug 21 2005, 21:37
Post #13


ABC/HR developer, ff123.net admin


Group: Developer (Donating)
Posts: 1396
Joined: 24-September 01
Member No.: 12



Guru,

I'm always amazed to read the professional-quality listening tests you perform for free, and then think about all the lousy ones I've had the displeasure to endure in the glossy magazines!

ff123
Cyaneyes
post Aug 21 2005, 21:45
Post #14





Group: Members
Posts: 297
Joined: 21-September 03
Member No.: 8934



Guru.. you've made me very happy I purchased a DAP that supports Vorbis.

Thank you for your dedication!
Lyx
post Aug 21 2005, 21:58
Post #15





Group: Members
Posts: 3353
Joined: 6-July 03
From: Sachsen (DE)
Member No.: 7609



Amazing. Just when you think he cannot get any better, he proves you wrong. A new landmark in blind-listening-test performance and presentation - and with some very interesting surprises. Thank you!


--------------------
I am arrogant and I can afford it because I deliver.
CiTay
post Aug 21 2005, 22:39
Post #16


Administrator


Group: Admin
Posts: 2378
Joined: 22-September 01
Member No.: 3



Well done, guruboolez!
Busemann
post Aug 21 2005, 22:48
Post #17





Group: Members
Posts: 730
Joined: 5-January 04
Member No.: 10970



QUOTE
• Apple AAC: There's still no VBR mode with iTunes. Consequently it's currently impossible to use Apple's AAC encoder unless other contenders will output an average bitrate close to either 160 kbps or 192 kbps. It's unlikely...


There's always QuickTime Pro 7 and its VBR encoder:-)
Lyx
post Aug 21 2005, 23:03
Post #18





Group: Members
Posts: 3353
Joined: 6-July 03
From: Sachsen (DE)
Member No.: 7609



QUOTE (guruboolez @ Aug 21 2005, 08:51 PM)
Vorbis is now –thanks to Aoyumi– an excellent audio format for 180 kbps encodings (and classical music). It has apparently no serious flaws (the lowest note was 3.5), only minor problems audible from time to time: additional noise, fatness, or softened details.

...

Vorbis is now impressive. Last year a constant noise boost or coarseness spoiled the performance of this format; I was surprised to hear those problems at the –q6,00 setting, which was supposed to be free of them thanks to lossless coupling. Aoyumi has apparently identified the precise cause of this problem and worked to solve it. Not entirely, though: some remaining traces are still audible with a few samples, but the intensity is now really faint (at least on the tested samples).


As one of those who were/are sensitive to the hf-boost issue, and one of the first to mention it, there were some questions I asked myself to which I couldn't find an answer. And this now seems like a turning point in Vorbis development, where I would like to remove my remaining doubts about Vorbis.

What is the cause of Vorbis' noise issue? Most here know that it was introduced between RC3 and 1.0 final. Now Aoyumi seems to have succeeded - at least at high bitrates - in almost nullifying this problem, so he obviously has an idea of where the problem is/was. So what was it? Also, it does not seem to be an on/off issue... it was slowly "fixed" over the course of time - from that it would seem to me that the "reason" for this problem is/was a fundamental Vorbis problem. If this is the case, then I would wonder "how can such fundamental changes happen between a release candidate and a final release?".

I'm not trying to criticize anyone here. Just asking if someone could lift the "magic" from this mysterious Vorbis issue.

- Lyx


--------------------
I am arrogant and I can afford it because I deliver.
NumLOCK
post Aug 21 2005, 23:19
Post #19


Neutrino G-RSA developer


Group: Developer
Posts: 852
Joined: 8-May 02
From: Geneva
Member No.: 2002



Great work Guruboolez !

It is normal though - independently of the psymodel - that transform codecs do have a big encoding advantage on these highly tonal samples (1 sine wave = 1 coefficient). This puts Musepack at a big disadvantage - but still, it is very very interesting to read, especially about Vorbis smile.gif

Would be interesting to test hard rock samples some day, to see if Vorbis beats the others there too wink.gif

This post has been edited by NumLOCK: Aug 21 2005, 23:20


--------------------
Try Leeloo Chat at http://leeloo.webhop.net
vinnie97
post Aug 21 2005, 23:28
Post #20





Group: Members
Posts: 472
Joined: 6-March 03
Member No.: 5360



holy eff, excellent test, Guru! *not worthy*

oh and hail Vorbis! w00t.gif
kwanbis
post Aug 21 2005, 23:47
Post #21





Group: Developer (Donating)
Posts: 2362
Joined: 28-June 02
From: Argentina
Member No.: 2425



excellent guruboolez


--------------------
MAREO: http://www.webearce.com.ar
HotshotGG
post Aug 22 2005, 00:09
Post #22





Group: Members
Posts: 1593
Joined: 24-March 02
From: Revere, MA
Member No.: 1607



QUOTE
What is the cause of Vorbis' noise issue? Most here know that it was introduced between RC3 and 1.0 final. Now Aoyumi seems to have succeeded - at least at high bitrates - in almost nullifying this problem, so he obviously has an idea of where the problem is/was. So what was it? Also, it does not seem to be an on/off issue... it was slowly "fixed" over the course of time - from that it would seem to me that the "reason" for this problem is/was a fundamental Vorbis problem. If this is the case, then I would wonder "how can such fundamental changes happen between a release candidate and a final release?"


http://www2.ocn.ne.jp/~mp3lab/exp_lab/vorb...rbis_sweep.html here you go wink.gif. That might help shed some light on the subject. In the description you can see the amplitude is heavily modulated, so as he said before, it looks like it was a combination of point stereo (which only has an effect on the amplitude) and something else, possibly even noise normalization. He has done an excellent job with it, and the positive results are shown here in this "everything but the kitchen sink" listening test. It's good to see improvements in a lot of the other codecs as well, though biggrin.gif. At higher -q values I really can't tell the difference myself; however, at lower -q values you can see how noise normalization has a positive impact subjectively.

This post has been edited by HotshotGG: Aug 22 2005, 02:20


--------------------
College student/IT Assistant
vlada
post Aug 22 2005, 00:19
Post #23





Group: Members
Posts: 401
Joined: 7-January 04
Member No.: 11023



Great test, thank you. I'm just wondering how the QT AAC implementation and WMA 9 Pro would compare with the others. I'm not sure about it, but I heard that WMA Pro is meant for high bitrates, so this test should be ideal for comparing it to the others. Would it be too difficult to add these formats?

Anyway, thank you for your hard work.

Regards,
Vlada
bond
post Aug 22 2005, 01:34
Post #24





Group: Members
Posts: 881
Joined: 11-October 02
Member No.: 3523



great test!

and great work from aoyumi!


--------------------
I know, that I know nothing (Socrates)
kl33per
post Aug 22 2005, 02:08
Post #25


A/V Moderator


Group: Members
Posts: 841
Joined: 9-June 03
From: Brisbane, AUS
Member No.: 7078



Awesome work guru. I don't recall the test one year ago, and as such was surprised by the results. Very well done!


--------------------
www.sessions.com.au - Sessions Entertainment