Public MP3 Listening Test @ 128 kbps - FINISHED
Reply #102 – 2008-11-26 09:04:00
...Unlike you, I don't see anybody defending LAME in this thread. ... I don't think HELIX is currently as trustworthy as LAME. A collective experience may help us get a better view of HELIX's quality and flaws. This experience will make the pudding bigger and the proof clearer.

Well, as you can learn from recent posts, some people here do feel there are posters defending Lame in an inadequate way (though there is nothing to defend). Chances are high they wouldn't do something similar if Lame had come out clearly on top. I am one of those who feel that way. And you are one of those Lame defenders, and you do it in a way I really dislike. What you say isn't wrong; it's just a set of killer statements which, taken seriously, make this test worthless.

It's true, and you can read it for instance in my posts in this thread, that such a test merely contributes to our experience with encoders. But it is one of the most objective contributions: a considerable number of participants with high demands on encoder quality spent a considerable amount of time evaluating this. It's the average judgement of active HA members (and comparable people) on the samples tested. Not more, not less.

You are trying to relativize Helix's result by casting doubt on how far we can trust Helix, while on the other hand granting special merit to Lame because you think we can trust Lame more. This simply isn't fair. It's also a bad argument, because Lame 3.98 isn't Lame 3.97, and looking back at Lame's history there have been significant changes in its technology over time. Moreover, what is this trust in Lame worth if, for instance, the 'sandpaper problem' came up with Lame 3.97? We should just stick to the real experience we have with encoders. Trust talk without hard facts is the non-audio variant of warm-fuzzy-feeling talk.

I like the way AlexB talked about his judgement of Helix's behavior on the 3 samples he didn't like.
He says what he felt, but in a way which respects the results of the test (which is the judgement of all the participants).

If we look at the test results, IMO we can conclude the following for practical purposes:

a) The overall outcome of the encoders, averaged over all the samples, doesn't give any hint as to which encoder to use.

b) The detailed outcome of the encoders on the individual samples gives some hints as to which encoder to use:

b1) iTunes and Lame 3.97 aren't attractive candidates for encoding (things can look different if the samples on which these encoders perform weakly aren't very relevant to the individual choosing the encoder).

b2) Lame 3.98, Helix, and FhG are all good candidates to use. Which encoder is 'best' is a personal matter and can partially be answered by figuring out which samples are individually most relevant and looking at these encoders' outcome on those samples. Best is to back this up with additional personal tests on favorite music.

Not to mention the non-audio topics which are also relevant for encoder choice, but in a very individual way.