
80 kbps personal listening test (summer 2005), AAC MP3 Ogg Vorbis WMA
post Jul 10 2005, 19:13
Post #1

Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420


After two years of listening tests, I've tried something new, based on this discussion. This time, I've performed a multiformat blind comparison on a much larger group of samples, but without ABX confirmation. The tests still follow a double-blind methodology; the only difference is that I haven't spent time confirming the audible differences with an ABX session. The time saved was invested in something more interesting (to my eyes, but also for statistical analysis tools): 150 samples instead of the usual 15.

1.1/ classical samples

A few words about this extravagant number. I used to perform comparisons on a limited number of classical samples (15…20). That was probably enough to draw reliable conclusions about the relative quality of various codecs, but such a limited collection couldn't represent the fullness of classical music, which consists of numerous instruments played in countless combinations, most of them offering a wide dynamic range. There are also voices, electronics, and finally all the variations linked to technical factors (acoustics, recording noise, etc.). That's why I've tried to build a structured collection of "classical music" situations, which of course doesn't aspire to completeness, but which should represent most situations. The collection is made up of very hard to encode samples as well as very easy ones; loud (+10 dB) and ultra-quiet (+30 dB); noisy and crystal-clear recordings; ultra-tonal and micro-detailed sounds. I've split it into four series:

artificial: electronic samples – most should correspond to critical samples for lossy encoders. Total: 5 samples.
ensemble: various instruments (no voice) played together. I've divided it into 2 categories: chamber music and orchestral music (wider ensemble). For each category, I've distinguished period instruments (Middle Ages, Renaissance, Baroque) from modern ones (~19th and 20th centuries). Total: 60 samples.
solo: one instrument played alone. Again, I've created separate categories (winds, bowed strings, plucked strings [i.e. the guitar family: lute, theorbo, harp…], keyboards). Total: 55 samples.
voice: male, female, child – solo, duo and chorus. Total: 30 samples.

(note #1: all samples are deliberately short. First, it's easier to upload them. Second, there's only one acoustic phenomenon to test per sample, which makes comparison between different tests a bit more interesting. The exact length of the collection is 25 minutes, which corresponds to 10.00 seconds per sample on average.)

(note #2: all samples were named following a simple convention. The first letter (A, E, S, V) corresponds to the category (artificial, ensemble, solo, voice), and the number to the catalogue number. Then additional information is appended: nature of the instrument, type of instrument or voice, etc.

ex: S11_KEYBOARD_Harpsichord_A
ex: E35_PERIOD_CHAMBER_E_flutes_harpsichord.mpc
For short, samples will be called S11, E35, etc.)
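As a side note, the convention is regular enough to be parsed mechanically. A minimal sketch, assuming the naming scheme described above (the helper name and the returned fields are mine, not part of any test tool):

```python
def parse_sample_name(name: str) -> dict:
    """Split a sample name such as 'E35_PERIOD_CHAMBER_E_flutes_harpsichord'."""
    categories = {"A": "artificial", "E": "ensemble", "S": "solo", "V": "voice"}
    stem = name.rsplit(".", 1)[0]        # drop a possible extension (.mpc, ...)
    code, _, info = stem.partition("_")  # 'E35' + the remaining description
    return {
        "id": code,                      # the short form used in the text (E35)
        "category": categories[code[0]], # first letter -> series
        "number": int(code[1:]),         # catalogue number
        "info": info,                    # instrument / voice details
    }

print(parse_sample_name("E35_PERIOD_CHAMBER_E_flutes_harpsichord.mpc")["category"])  # -> ensemble
```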

With such a collection, I should obtain a very precise idea of how different lossy encoders perform on classical music. For me it's interesting, especially since I plan to buy, in the near future, a portable player supporting one of the newer audio formats such as Vorbis, AAC or WMAPro. I'd like to know how good these new formats are compared to MP3. These 150 samples may also help developers/testers evaluate the performance of a codec across a wide panel of situations.

1.2/ various music samples

Last but not least, I've decided to broaden the audience of this test by adding samples representing genres other than classical. For an elementary reason – 99.9% of my CDs are classical – I can't build the same kind of structured collection for what I will now call, for short, "various music". I took all the samples selected by Roberto during his listening tests, removed the classical ones, and kept the 35 samples representing "various music". It's much less than the 150 above, but more than double what was used in all previous collective listening tests.

=> total = 150 classical + 35 various = 185 samples.

1.3/ choice of bitrate

For my first test based on these samples, I've selected a friendly bitrate (at least for the tester): 80 kbps. It may appear uninteresting, so I must explain my choice.
First, I plan to perform similar tests at higher bitrates. My dream is to build a coherent set of tests covering all bitrates from 80 up to 160 or 192. But this project is very ambitious – too ambitious, certainly – and I'll possibly stop my tests (in their current form) at ~130 kbps.
But why 80, and not 64 kbps? To my ears, there is currently no encoder that sounds satisfying at 64 kbps. They're all disappointing or unsuitable for listening on headphones, even cheap ones, even in an urban environment (I repeat: to my ears). But I've noticed that the perceptible and highly annoying distortions I've heard at 64 kbps are seriously reduced once the bitrate reaches the next step. Vorbis has fewer problems, and AAC-LC (at least the advanced encoders) also seems to improve quickly beyond 64 kbps. It's a bit like MP3, which was considered acceptable at 128 kbps but sank quickly below this value. I would consider the *idea* of acceptable quality at 80 kbps with modern encoders as reasonable. Let's see the facts...


2.1/ competitors

One big problem with this kind of test is the choice of competitors. Choosing the formats is easy: the tester just has to select what he considers interesting. Here, I'll exclude outdated formats (VQF, MP3Pro) and unsuitable ones (MPC, MP3 – though this last one would also be interesting to test, just for reference...). That leaves: WMA, WMAPro (if available at this bitrate), AAC-LC, AAC-HE, Vorbis. But which implementation should I use? Nero AAC or iTunes AAC? Nero AAC features a VBR mode, but is VBR reliable at this bitrate, especially for samples which represent a wide dynamic range? And for Nero, which encoder would be the best: the "high" one (the default, which has verified issues with classical) or the "fast" one (which performs better with classical, but maybe not as well with various music, and which is still considered not completely mature by Nero's developers)? Vorbis CVS or Vorbis aoTuV? I'd say aoTuV, but if Vorbis fails, people will (legitimately) suspect the other one could have performed better. WMA CBR or WMA VBR? VBR is theoretically better than CBR, but tests have already shown that VBR can be unsafe at low bitrates.
My first idea was to test them all. Schnofler's ABC/HR allows the use of countless encoders in the same round (ff123's software is limited to 8 contenders). But after a quick enumeration of all possible competitors (iTunes AAC, Nero AAC CBR fast, Nero AAC CBR high, Nero AAC VBR fast, Nero AAC VBR high, FAAC, Vorbis aoTuV, Vorbis CVS, Vorbis ABR, WMA CBR, WMA VBR, HE-AAC fast, high, CBR & VBR...) and a mental calculation of the number of comparisons I would have to perform with 185 samples and so many contenders, I immediately cancelled this project. Last but not least, multiplying the competitors in a single test would lower the statistical significance of the results.
Then I came to a second idea: test all the competitors for a single format in a single pool, and put the winner of each pool in the final arena. It's like sports: qualification first, then the final for the best. The remaining problem is the additional work. I've planned to test 4…5 codecs per bitrate with 185 samples, not 13 or 14. That's why I've reduced the number of tested samples for the preliminary pools. I've limited it to 40 samples, using 25 samples from different categories of the complete classical collection and 15 from the 35 samples representing "various music". The imbalance in favour of classical is intended: the whole test is clearly focused on classical – "various music" is just an extension, or a bonus.

2.2/ Encoding mode and output bitrate

Another problem: VBR and CBR. Testing VBR against CBR has always been a source of controversy. In my opinion, testing a VBR encoder which outputs the targeted bitrate on average (i.e. over a full set of CDs) is absolutely not a problem, even if the bitrate reaches surprising values on short test samples. It's not a problem, but the test should, in my opinion, meet the following condition: it must include samples for which VBR encoders produce a high bitrate as well as a low one. VBR encoders have the chance to automatically increase the bitrate when a difficulty is detected – a possibility that CBR encoders don't have, and CBR sometimes suffers from that handicap, especially on critical samples. But VBR encoders also decrease the bitrate on musical parts they don't consider difficult – and this reduction is sometimes very significant; in theory it shouldn't affect the quality, but we know the gap between theory and reality, between a principle and its implementations. Testing the output quality of 'non-difficult' parts is therefore very important, because these samples are the potential handicap of VBR encoders; otherwise there's a big risk of favouring VBR encoders over CBR by testing only samples apparently favourable to VBR (whatever the format).
My classical music gallery is not exclusively based on critical or difficult samples; most of them don't exhibit any specific issue. The sample pool should therefore be fairly distributed between samples with a lower bitrate than the target and samples with a higher one. I'll post as an appendix a distribution curve which confirms this.
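That balance can be checked mechanically. A minimal sketch, assuming the per-sample average bitrates are known (the function and the example figures are mine, for illustration only):

```python
def bitrate_distribution(sample_kbps, target):
    """Count how many samples a VBR encoder compresses below vs above the
    target bitrate: a fair pool should contain a healthy share of both."""
    below = sum(1 for b in sample_kbps if b < target)
    return below, len(sample_kbps) - below

# hypothetical per-sample averages against an 80 kbps target
print(bitrate_distribution([60, 72, 95, 110, 78, 130], 80))  # -> (3, 3)
```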

2.3/ degree of tolerance

When testing VBR profiles, it's not always possible to match the exact target. Some encoders don't have a fine-grained scale of VBR settings. With luck, one available profile will approximately correspond to the chosen bitrate; sometimes, the output bitrate will deviate too much from the target. CBR is not free of problems either, although they're less serious. With AAC, for example, CBR is a form of ABR: the output bitrate can vary a little (but fortunately not very much).
That's why trying to obtain identical bitrates between the various contenders could be considered a utopia, even when the test is limited to CBR encoders only. The tester therefore has to allow some freedom: not too much, of course, in order to keep the comparisons meaningful, and not too little, in order to make the test possible. I consider a deviation of 10% acceptable, but again, on one condition: 10% between the lowest average bitrate and the highest one, not 10% between each encoder and the target. For example, if one encoder reaches 72 kbps (80 kbps − 10%) and another 88 kbps (80 kbps + 10%), the total difference would be ~20%: too much.
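Formally, the rule compares the extremes against each other rather than each encoder against the target. A sketch (the function name is mine):

```python
def within_tolerance(avg_bitrates, max_spread=0.10):
    """Accept a test setup only if the spread between the lowest and the
    highest average bitrate stays within max_spread (10% by default),
    measured relative to the lowest - not each encoder within 10% of the
    nominal target."""
    lo, hi = min(avg_bitrates), max(avg_bitrates)
    return (hi - lo) / lo <= max_spread

# 72 and 88 kbps are each within 10% of 80, yet ~22% apart: rejected
print(within_tolerance([72, 88]))  # -> False
print(within_tolerance([78, 84]))  # -> True
```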
However, I will possibly allow rare exceptions: when a VBR profile is outside but close to the limit, or if it would be more interesting to test a more common profile (example: Musepack –quality 4 instead of –quality 3.9). Of course, the deviation mustn't be exaggerated; and I'll try to limit the possible exceptions to the pools, in order to keep the fairest conditions during the final test.

2.4/ Bitrate evaluation for VBR encoders

Now that the rules are fixed, we have to estimate the corresponding bitrate for each VBR encoder and profile. It's not as easy as one might suppose. Ideally, I would have to encode a lot of albums with each profile. But with my slow computer, that's not really possible. And doing so would only give the corresponding bitrate for classical; in my experience, this average bitrate can differ seriously from the output values that other people listening to other music (like metal) have reported. Think about the LAME sfb21 issue, which can inflate the bitrate up to 230…250 kbps with –preset standard, and compare that to the average bitrate I obtain with classical: <190 kbps! Another, quite different example: lossless.
For practical reasons, I followed a methodology I don't really consider fully satisfying, and took the average bitrate of the 185 samples as the reference for my test. I don't like it, because short samples can dramatically exaggerate the behaviour of VBR encoders, and therefore distort the final estimation. Nevertheless, with 185 samples, this kind of over- and underrating on individual samples should normally be smoothed out. And indeed, it seems that the average bitrates of full-suite encodings with the formats I've used in the past (LAME –preset standard, MPC) are very close to the average bitrate of my early-music library. I can't be absolutely certain that my gallery works as a microcosm and that its bitrate matches the real usage of a full library, but I'm pretty sure the deviation isn't significant (±5%, something like that).
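The averaging itself deserves one precaution: weighting by sample length (total bits over total seconds) rather than taking a plain mean of per-sample bitrates, so that short outliers weigh less. A sketch with made-up figures:

```python
def average_bitrate(samples):
    """Length-weighted average bitrate of a suite: total bits / total time."""
    total_bits = sum(kbps * seconds for kbps, seconds in samples)
    return total_bits / sum(seconds for _, seconds in samples)

suite = [(120, 5.0), (70, 12.0), (85, 10.0)]  # (avg kbps, length in s) per sample
print(round(average_bitrate(suite), 1))  # -> 84.8 (a plain per-sample mean would say 91.7)
```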

2.5/ Bitrate report

Before starting to reveal the results, there's one last problem I'd like to put in the spotlight. It concerns the different ways of calculating the bitrate. I've tried to obtain the most reliable value, and that's why I logically thought of calculating it myself from the file size. As long as no tags are embedded in the files, the calculated bitrate should correspond to the real one (the audio stream). But the problem is elsewhere. Some formats are apparently embedded in complex containers, which inflate the size. It's not a problem in real life: adding something like 30 KB to a 5 MB file is totally insignificant. But when these 30 KB are appended to very short encodings, the calculation of the average bitrate is completely distorted as a consequence. Concrete example: iTunes AAC. Just try the following: encode a sample (length: exactly one second) in CBR. At 80 kbps, we should obtain an 80-kilobit, i.e. 10 KB, file (80 × 1 / 8). But the final size is 60 KB, which corresponds to a 480 kbps (60 × 8) encoding! What's the problem? Simply that iTunes adds something like 50 KB of extra chunks to each encoding. The problem can partially be solved with foobar2000 0.8 and the "optimize mp4 layout" command: the file size drops to 14 KB. But even then, those 14 KB correspond to a ~112 kbps bitrate, while the audio stream is only 80 kbps.
iTunes is apparently not alone in this situation. I haven't looked closely, but it seems that WMA (Pro) behaves the same way, and we have no "optimize WMA layout" tool to partially correct this. If we keep in mind that the average length of my samples is 10 seconds, with some of them at only 5 seconds, we have to admit that calculating the bitrate with the filesize/length formula is, for this test, anything but reliable.
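The distortion can be sketched numerically (the ~50 KB overhead figure is taken from the iTunes example above; the function name is mine):

```python
def apparent_bitrate_kbps(file_bytes, seconds):
    """Naive bitrate estimate from file size: size * 8 / duration."""
    return file_bytes * 8 / 1000 / seconds

STREAM_KBPS = 80         # true audio-stream bitrate
OVERHEAD_BYTES = 50_000  # ~50 KB of container chunks (the iTunes case above)

for seconds in (1, 10, 300):
    audio_bytes = STREAM_KBPS * 1000 / 8 * seconds
    apparent = apparent_bitrate_kbps(audio_bytes + OVERHEAD_BYTES, seconds)
    # 1 s -> 480 kbps, 10 s -> 120 kbps, 300 s -> ~81 kbps
    print(f"{seconds:>4} s: {apparent:.0f} kbps")
```

On a full album the overhead vanishes into the total, which is exactly why the filesize/length formula only breaks down on short samples.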

That's why I used the value calculated by specialized software. MrQuestionMan 0.7 was released during my test, but the software has some issues calculating a correct average bitrate on short encodings (iTunes AAC encodings, for example). foobar2000 appeared to be the most reliable tool, and I've decided to trust its calculated value. For practical reasons, foobar2000 is also preferable: the "copy name" command can be modified to easily export bitrates to a spreadsheet.

2.6/ notation and scale

The -really- last problem.
Each time I have to evaluate quality at low bitrates, I regret how ill-suited the scale used in ABC/HR is. At 80 kbps, encodings would rarely reach the 4.0 grade ("perceptible but not annoying"). 3.0 ("slightly annoying") would rather be the best grade that modern encoders can obtain at this bitrate. It implies that the ratings will fluctuate within a compressed scale, from 1.0 to 3.0. That's not much, especially when the tester notices big quality differences between contenders.
To solve this issue, I've simply shifted the visible scale down by one point in my head. Example: when I considered an encoding "annoying" (the grade corresponding to 2.0), I put the slider at 3.0. The scale I used for the test was:
5.0 : “perceptible but not annoying”
4.0 : “slightly annoying”
3.0 : “annoying”
2.0 : “very annoying”
1.0 : “totally crap”

If, exceptionally, an encoding corresponded to "perceptible but not annoying", I put the slider at 4.9, which means "5.0"; if the quality was superior to this grade, I wrote the exact rating in the comments. A transparent encoding obtained 6.0.
When the tests were finished, I subtracted one point from all ratings. 6.0 became 5.0, 3.4 -> 2.4, and 1.0 was transformed into a shameful 0.0! By doing this, I maintain the usual scale; the only change is therefore a lower floor, corresponding to exceptionally bad quality.
The quality scale could be redefined directly in Schnofler's ABC/HR software, but apparently the tester has to type the descriptions for each new test (did I miss an option?); it was faster for me to do this small mental exercise than to type the same text more than 200 times.
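The whole remapping amounts to a one-point shift, sketched here for illustration (the function name is mine):

```python
def displayed_to_reported(slider: float) -> float:
    """Shift the ABC/HR slider value down one point, as described above:
    a 'transparent' 6.0 becomes the usual 5.0, and 1.0 a shameful 0.0."""
    return round(slider - 1.0, 1)

for s in (6.0, 3.4, 1.0):
    print(s, "->", displayed_to_reported(s))  # 5.0, 2.4, 0.0
```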

Now, the pools!
post Jul 12 2005, 17:16
Post #2


Back to the upcoming MP3 96 kbps Pool.

I have encoded the second group of 35 various samples, and the bitrate was significantly higher than the average obtained with the classical group.

iTunes classical = 100 kbps
iTunes various = 104 kbps
=> with iTunes, the bitrate for the second group is within the +/- 10% tolerance I've fixed. VBR is possible, I'd say

Audition q30 classical = 96 kbps
Audition q30 various = 112 kbps
=> with Audition, the bitrate for the second group is ~17% higher than the target bitrate. Even if I admit that the average bitrate of these 35 short samples doesn't entirely correspond to the average bitrate of full albums, it's clearly too high.

As a consequence, I've lowered the setting to VBR q20. The lowpass is automatically adjusted, but doesn't drop significantly (from 14780 to 14440 Hz). Bitrate:
Audition q20 classical = 89 kbps
Audition q20 various = 102 kbps
=> the bitrate is now within the acceptable range for both groups. However, the bitrate for classical now approaches the critical limit of ~86 kbps (96 kbps − 10%) and is not fully comparable with the bitrate obtained with iTunes (100 kbps).

Then I tried VBR q25. It's a manual preset, and there's no default lowpass value for manual VBR settings. I therefore chose 14600 Hz. Bitrate:
Audition q25 classical = 92 kbps
Audition q25 various = 107 kbps
=> excessive deviation for the second group (+11…12%)
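For reference, the percentage deviations quoted in this post can be reproduced with a trivial helper (the name is mine, not from any tool used here):

```python
TARGET_KBPS = 96  # the target of this MP3 pool

def deviation_pct(avg_kbps, target=TARGET_KBPS):
    """Signed deviation of a group's average bitrate from the target, in %."""
    return 100 * (avg_kbps - target) / target

print(round(deviation_pct(92)))   # q25, classical -> -4
print(round(deviation_pct(107)))  # q25, various   -> 11
print(round(deviation_pct(112)))  # q30, various   -> 17
```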

At this stage, I have four possibilities:

1/ Use Q20: the bitrate is OK for group 2; it's OK for group 1 too, but too low compared to iTunes at 100 kbps.
2/ Use Q25: the bitrate is OK for classical, but too high for group 2.
3/ Try Q22…23 in order to obtain an unlikely better compromise.
=> in all these cases, the selected setting can't be optimal for either musical category.

4/ Use two different settings for the two different groups.
To me, this possibility makes sense. As someone planning to encode classical only, I wouldn't choose anything other than VBR Q30, which matches the desired bitrate. Someone planning to encode something different will probably not be happy with Q30 (~110 kbps) and will certainly go for Q20, or maybe even a slightly lower setting.
This dual-bitrate problem will also occur in other listening tests. No VBR encoder outputs the same bitrate for different kinds of samples: it can be experienced with FAAC, Nero AAC, LAME MP3, FhG MP3, MPC, WMA9 and WMA9Pro. In every case, I would have to make compromises which probably don't correspond to users' choices. Using two different settings – each one corresponding to the rational choice of someone listening to either "various music" (yes, the concept sucks) or "classical music" – avoids this compromise.

I could also play a dangerous game: test the iTunes and Audition VBR encodings at an excessive bitrate, cross my fingers, and hope to see LAME win. The scenario is possible, I'd say. iTunes clearly has no chance of passing the pool even with a winning bitrate; but I'm less confident about a contender such as Audition.

I feel that solution 2 (Q25 for both groups) and solution 4 (Q20 for various, Q30 for classical) are the two most pertinent. Which would be best, in your opinion?
post Jul 13 2005, 06:19
Post #3

Group: Members
Posts: 42
Joined: 19-June 05
From: Bergen, Norway
Member No.: 22841

QUOTE (guruboolez @ Jul 12 2005, 06:16 PM)
I feel that solution 2 (Q25 for both groups) and solution 4 (Q20 for various, Q30 for classical) are the two most pertinent. Which would be best, in your opinion?

The question is: do you want to compare encoders at a given bitrate, or do you want to compare encoders using given settings? In the first case, solution 4 comes closest to the intention; in the second case, solution 2 would be better.

Personally, I think it makes the most sense to compare at a given bitrate. In a perfect world, this would actually mean finding the perfect VBR setting for each sample to get the perfect bitrate – however, this would generate a lot of work prior to the testing (unless it could be done via a program/script?), and it would not be consistent with normal use or be very helpful to the normal user. I therefore find the idea of finding the perfect VBR setting for each group of samples to be a good compromise.
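The scripted approach hinted at in the parenthesis could, in its simplest form, look like this: among the discrete settings an encoder offers, pick the one whose measured group average lands closest to the target. The encode-and-measure step is stubbed out below with the Audition figures already posted; the function name is mine:

```python
def pick_setting(settings, measure_avg_kbps, target):
    """Choose the VBR setting whose measured average bitrate for a
    sample group is closest to the target bitrate."""
    return min(settings, key=lambda s: abs(measure_avg_kbps(s) - target))

# stand-in for 'encode the group and average': Audition, classical group
measured = {20: 89, 25: 92, 30: 96}
print(pick_setting([20, 25, 30], measured.get, 96))  # -> 30
```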

Conclusion: I'm a supporter of solution 4!

Edit: In brackets

This post has been edited by a_aa: Jul 13 2005, 06:29
post Jul 13 2005, 14:38
Post #4

Group: Members
Posts: 385
Joined: 25-June 04
Member No.: 14895

QUOTE (a_aa @ Jul 12 2005, 09:19 PM)
Personally I think that it makes the most sense to compare at a given bitrate. In a perfect world, this would actually mean that you would have to find the perfect VBR setting for each sample to get the perfect bitrate – however, this would generate a lot of work prior to the testing (unless it could be done via a program/script?), and would not be consistent with normal use or be very helpful for the normal user. I therefore find the idea of finding the perfect VBR setting for each group of samples to be a good compromise.

Conclusion: I'm a supporter of solution 4!

I am not too happy with solution 4 – after all, VBR modes target a certain quality, not a certain bitrate. If the VBR mode allocates too little bits for certain samples, and maybe thereby creates artefacts, it is a fault of the encoder/psymodel and should be treated and evaluated as such.
I know that there is no perfect solution to this problem, but I think that combining the two sample sets, calculating the average bitrate over all samples and using that to select the VBR mode might be the least bad solution. It would be even less bad if the numbers of "classical" and "various" samples were approximately equal.

Proverb for Paranoids: "If they can get you asking the wrong questions, they don't have to worry about answers."
-T. Pynchon (Gravity's Rainbow)
post Jul 13 2005, 15:25
Post #5


QUOTE (sTisTi @ Jul 13 2005, 02:38 PM)
I am not too happy with solution 4 - after all, VBR modes target a certain quality, and not a certain bitrate.

That's true. But everyone counts on an approximate bitrate with every VBR setting. MPC --standard is something like ~180 kbps, MP3 --standard close to ~200 kbps, etc. We all use this kind of correspondence, for our own purposes or for recommendations (someone using CBR 192 is advised to use VBR --standard instead).
In other words, we all think in terms of bitrate.

If the VBR mode allocates too little bits for certain samples and maybe thereby creates artefacts, it is a fault of the encoder/psymodel and should be treated and evaluated as such.

Why "too little"? The fact is that the bitrate is very different from one group to the other. Here, classical needs less bitrate than 'various'. Therefore, someone listening to classical and looking for a VBR setting which offers 96 kbps would be tempted to use Q30 – not Q20 or Q25. He will use the VBR preset matching the target.
But this setting, which works well with classical, won't work with other kinds of music. People looking for 96 kbps with various music won't use Q30 (-> 110 kbps), nor Q25 (105 kbps). There's no reason for either kind of listener to make a compromise, as long as they don't mix both kinds of music.

I know that there is no perfect solution to this problem, but I think combining the two sample sets and calculating the average bit rate of all samples and use this for selecting the VBR mode might be the least bad solution.

I understand the logic, but does it really correspond to real usage?
I'm tempted to make an analogy with video encoding. Take XviD as an example. There's a dedicated mode to improve quality on cartoon movies. If you want to test the quality of XviD with both live-action movies and cartoons, it would be senseless to use one and the same parameter set for both kinds of movie. It would also be surprising for the tester to look for an unlikely hybrid setting: he would certainly lower the quality for both genres, and thus handicap the contender with this compromise. If I remember correctly, Doom9 adapted the encoder's parameters to the kind of movie: Futurama was encoded with cartoon mode, but not The Matrix. Did people complain? I don't know.

Contrary to previous collective tests, I'm using a wide gallery of samples – wide enough to make a distinction between both kinds of music. The second group is maybe a "bonus" (I said so in my first post), but by "bonus" I don't mean something minor that should be neglected. I'd like to test the reaction of various encoders to different kinds of music, and see whether some of these encoders are unbalanced in favour of either classical or something more popular. From my own experience, some renowned encoders have built their reputation on one specific kind of music or sample – and one only. The problem is that these reputed encoders are sometimes suggested to people listening to something totally different. LAME -V5, for example, works very well with many kinds of music, but with classical at least, it's not trustworthy.
That's why I'm very interested in testing two separate kinds of samples. And I'm more and more convinced that testing both categories with one setting is 1/ not optimal and 2/ won't correspond to the real usage of potential listeners.

For example, with AAC at 128 kbps, it would be impossible for me to test Nero's VBR internet profile (which appeared to be the best AAC solution for classical in a previous test I made last December). Why: the bitrate is ~140 kbps. With the second group, the average bitrate doesn't have this problem. By discarding VBR over bitrate issues with one group, I'd be forced to use CBR with both groups, and I'll let you imagine the reaction of many people, who would probably shout about the usage of suboptimal settings, etc.

Same goes for MPC. While --radio seems to be close to 128 kbps with the second group, it's not the case for classical. I could try to find an average setting, and in the end I'd be forced to use something within the --thumb profile. I'll also let you imagine the reaction of some people (a couple of names immediately come to mind...).

For all these reasons, it seems preferable to me to evaluate both groups independently. You probably noticed that I didn't propose any mixed results for the final test, and kept both categories totally independent.

It would be even less bad if the number of "classical" and "various" samples would be approximately equal

I don't have enough material to build a coherent gallery similar to the classical one. And I don't plan to restrict the number of situations tested with classical. I can't solve the imbalance between the two categories, unless someone plans to build something similar with 'various' music.
