Multiformat 128 kbps Listening Test, Pre-Test Discussion
user
post Nov 28 2005, 15:53
Post #626





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



another alternative:

1. test new vorbis, new aac, new lame, (eg. mpc or certain old lame version (3.90.x ?, formats/versions used already in previous 128k listening test) as cross-anchor) at 128k vbr,

2. select a format (of the above) as competitor (or some) at 104k vbr against wma-standard 50.
maybe this is meaningful.

This post has been edited by user: Nov 28 2005, 16:05


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
guruboolez
post Nov 28 2005, 16:03
Post #627





Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420



QUOTE (Dibrom @ Nov 28 2005, 03:35 PM)
For your information, the original criticisms were constructive (I notice you didn't even respond to my "constructive criticisms" in my last post -- how's that for selective reading?).  They became unconstructive once someone decided to take them personally
*

Your original criticisms were perhaps constructive (I doubt it), but they were totally off-topic. The way LAME developers work on their encoder is not something to be debated in this thread.

QUOTE
It was rude of Gambit to make the comment that he did. But on the other hand, he was making an observation based on actual behavior. Given all that has transpired in this thread and now this, I can't say that what he said was wrong (sorry), even if it wasn't very tactful to make it a public observation.

He made an observation?! He called Sebastian a "clueless person" (before abusing the moderating tools by removing the insult and cleaning up my quote rolleyes.gif ). "Clueless" means "someone who knows nothing at all". Is Sebastian a clueless guy? Does he really know nothing? How can you say that Gambit's words were just an observation?! It's offensive. And based on nothing.
Sebastian Mares
post Nov 28 2005, 16:05
Post #628





Group: Members
Posts: 3637
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (Dibrom @ Nov 28 2005, 03:35 PM)
It was rude of Gambit to make the comment that he did.  But on the other hand, he was making an observation based on actual behavior.  Given all that has transpired in this thread and now this, I can't say that what he said was wrong (sorry), even if it wasn't very tactful to make it a public observation.
*


Could you give me some precise and objective facts why you think I am the most clueless guy in this thread?

Edit: You can send me the reasons via PM so we don't have to continue the off-topicness in this thread.

QUOTE (Dibrom @ Nov 28 2005, 03:35 PM)
QUOTE
I doubt you would like to go to school and have the maths teacher insult you because you don't know how to determin the maxiumum and minumum points of a function.
*


I don't think it was even really about the math (for the most part), which just says another thing about communication...
*



It was an example. Gambit replied that I am the most clueless guy in this thread after I wrote that WMP encodes to WMA Standard, CBR 128 kbps, with DRM by default. And I wrote this because some other user suggested testing all codecs at their default values.

QUOTE (Dibrom @ Nov 28 2005, 03:35 PM)
QUOTE (Sebastian Mares @ Nov 28 2005, 04:13 AM)
WTF are you talking about? Do you want to say that Roberto influenced me to think you're a dick? Sorry to disappoint you, but I found that myself.
And pointing out flaws looks different that starting to bitch and flame IMO.
You simply came into this thread and started writing how clueless we all are and how God-like you are. I had to ask you and Dibrom multiple times to provide some constructive critism and not only "this is shit", "you are the most clueless guy in this thread", etc.


Ah, yes, of course. Another one of those "I like criticism, but only when it's not critical, and only when I don't take it personally."

For your information, the original criticisms were constructive (I notice you didn't even respond to my "constructive criticisms" in my last post -- how's that for selective reading?). They became unconstructive once someone decided to take them personally and use them as a launching point to rail on with his tirades about how damn evil HA and myself are, and how there are insidious conspiracies against LAME lurking in dark shadows and the hearts of HA admins alike. Or something like that, but probably worse, and definitely much scarier.

I notice that in both of your complaints, you didn't once fault this person, yet place the blame on Gambit and I. Again, how convenient for you!

You say you aren't influenced? Hrmm.. could have fooled me. You're being influenced by someone's fantasy, whether it's simply yours or someone elses.

*sigh* I'm really surprised to see this sort of nonsense coming from you, after all that has been gone through to end earlier problems in this thread. As test coordinator, I'd have thought you'd want to keep this kind of thing out of the discussion, and keep the discussion on topic.

But anyway, I guess it's illustrative. You and your buddies have made much of a point by now that you only want debate on your terms, within very specific constraints, otherwise you get very upset. That's not the way that HA works (granting exception for forum rules), and certainly shouldn't be the way a supposedly democractic process in test preparation should transpire. It's simply not worth it to me to continue to bother with this thread under those considerations, so I suppose you won't have to look forward to any more "unconstructive" criticisms from me at any rate. In hind sight, it was quite stupid of me to ever optimistically come back for seconds to begin with...
*



No, I don't want to debate on my terms. Whenever asked, I gave my opinion on why something should or shouldn't be done in one way or another. The opinions I had were based on arguments I thought were good enough. The only on-topic discussion we had in this thread was about codecs and settings.

For codecs, I clearly stated in the first post that Nero AAC, iTunes AAC, LAME and AoTuV are going to be tested. The only redundant format is AAC which is included twice, but that's because Nero released their new encoder and because Ivan and Garf wanted some results from more people and samples. iTunes AAC was included because it performed very well and it would be nice to compare it against the other formats.
What was left was the discussion about either two more competitors, or one competitor and a low anchor. I soon realized that a low anchor is needed because of the facts already mentioned. This left room for only one competitor. People suggested WMA Pro, WMA Std, MPC and ATRAC. I gave my opinion on ATRAC: the format is not very popular, since only Sony SW/HW players support it (or MDs, but only LP2 encodes at a bitrate similar to 128 kbps, and then again, hardware encoders don't produce the same bitstream as software encoders, especially since SonicStage was updated while MD units were not). Regarding MPC, please read the post that I linked to in the poll. That left WMA Pro and Std. I first wanted to test Pro, but then changed my mind and decided to test Std because it's more popular and therefore more meaningful. I also didn't exclude Pro from an extension test together with ATRAC, if people really want it. So, the codec collection was pretty much done.
Oh, the low anchor... I decided to use Shine because it simply produces low-quality files. Although Blade is crap too, it features a rudimentary psymodel that gives it a slight advantage over Shine - an advantage which is not desired for a low anchor.
Later, we found out that 2-pass VBR with WMA Standard cannot be used and that VBR quality mode produces files with too high or too low a bitrate. So we had to open the "discussion" about the fifth competitor ("discussion" in quotation marks because I only wanted a poll, so I wouldn't have to lose time reading arguments and whatever in between stuff like "John Doe: XYZ", "I'm with John Doe", "Yeah, XYZ" or "I think format ABC should be tested and not what you have in the poll"...).

The settings discussion also derailed because some people had nothing better to do than start discussing what LAME should use as its default, how disorganized the LAME developers are, and what bad experiences they had. That's why I wanted to test the formats at their best possible settings. If Gabriel (who should know what LAME does) suggests --vbr-new, I use --vbr-new. I do the same with Nero AAC: if Ivan recommends XYZ, I will use XYZ.

Anyway, I would really like to stop discussing these personal issues. If you think I am a jerk, fine, do so, I have no problem with that. smile.gif
If you think this test is huge bullshit, don't take part in it.

This post has been edited by Sebastian Mares: Nov 28 2005, 16:13


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
Sebastian Mares
post Nov 28 2005, 16:10
Post #629





Group: Members
Posts: 3637
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (naylor83 @ Nov 28 2005, 03:42 PM)
Sebastian, may I make a suggestion:

Why don't you start over again, in a new thread. We can leave all of this behind us. You can clearly state in the first post what the purpose/goal of the test is, and any discussion can go on from there.
*


And start another bash fest in such a short time? Nah. tongue.gif

The idea is not bad, but why not stick to the topic from now on?

QUOTE (user @ Nov 28 2005, 03:53 PM)
another alternative:

1. test new vorbis, new aac, new lame, (eg. mpc or certain old lame version (3.90.x ?, formats/versions used already in previous 128k listening test) as cross-anchor) at 128k vbr,

2. select a format (of the above) as competitor (or some) at 104k vbr against wma-standard 50.
maybe this is meaningful.
*


I didn't understand that. Could you rephrase it, please?


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
naylor83
post Nov 28 2005, 16:14
Post #630





Group: Members
Posts: 210
Joined: 19-June 05
From: Uppsala, Sweden
Member No.: 22842



QUOTE (Sebastian Mares @ Nov 28 2005, 05:10 PM)
QUOTE (naylor83 @ Nov 28 2005, 03:42 PM)
Sebastian, may I make a suggestion:

Why don't you start over again, in a new thread. We can leave all of this behind us. You can clearly state in the first post what the purpose/goal of the test is, and any discussion can go on from there.
*


And start another bash fest in such a short time? Nah. tongue.gif

The idea is not bad, but why not stick to the topic from now on?


Because, as you can see, this is going nowhere... sad.gif

Alternatively, you could just change the title for this topic/thread to "Multiformat 128 kbps Listening Test - pre-test flame war" laugh.gif rolleyes.gif wink.gif blink.gif

This post has been edited by naylor83: Nov 28 2005, 16:38


--------------------
davidnaylor.org
user
post Nov 28 2005, 16:14
Post #631





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



Nero AAC, iTunes AAC, LAME and AoTuV
wma-standard
--> problems, alternative solutions possible.

You are right with this selection; those are the popular formats.
But to get facts about the popular WMA (WMA is printed on nearly every hardware player today), the test should be configured so that either WMA Standard Q50 gets a fair comparison (at 104k), or the test is split: the other formats at 128k VBR, then a second test with WMA Standard Q50 at 104k VBR against one or a few competitors.

About the anchor:

If the anchor is too low, you could simply add noise to a file. The anchor should be low, but not too low; it's all relative. Shine, Blade, any will do.

But for cross comparisons with future and past tests, we should consider a "cross-anchor" concept: a fairly stable format/encoder version that has some meaning, past or present, but that doesn't perform too badly (some old Blade version, for instance, is too bad). MPC 1.14 or 1.15(v) would suit this concept, the specific MPC version depending, IMO, on which has been used in past tests; or a stable LAME, like 3.90.x, which was also well tested and performed quite well overall.

This post has been edited by user: Nov 28 2005, 16:20


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
Sebastian Mares
post Nov 28 2005, 16:18
Post #632





Group: Members
Posts: 3637
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (user @ Nov 28 2005, 04:14 PM)
Nero AAC, iTunes AAC, LAME and AoTuV
wma-standard --> problems, alternative solutions possible.

You are right with this selection, those are the popular formats.
But to get facts about the popular wma (wma is written at nearly every hardware player today), the test should be configured such, that either wma-standardq50 gets a fair comparison (at 104k), or the test is splitted, the other formats at 128k vbr, then a 2nd test with wma-standard q50 at 104k vbr, with 1 or  a few competitors.
*


Ah, got the point now. Well, such a test would have to be conducted in January at the earliest, since a parallel test would be too difficult. We can talk about that later.

QUOTE (user @ Nov 28 2005, 04:14 PM)
about the anchor:

if the anchor is too low, you could add simply noise in a file. The anchor should be low, but not too low, well all is relative. Shine, Blade, any will do.

But for cross comparisons to future and past tests, we should consider a "cross-anchor" concept.
A quite stable format/encoder version, which has meaning, maybe of the past or present, but which isn't too bad performing,
ie. some old Blade version is too bad.
MPC 1.14 or 1.15(v) would suit to this concept,
the specific mpc version depending on,imo, which has been in use in past tests,
or a stable lame, like 3.90.x, which was also well tested and quite well performing overall.
*


Something like that is only useful when the same samples and the same listeners are used (this sounds like we "use" listeners tongue.gif). When changing samples and/or listeners, we can't really do a 1:1 comparison.

This post has been edited by Sebastian Mares: Nov 28 2005, 16:24


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
guruboolez
post Nov 28 2005, 16:48
Post #633





Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420



QUOTE (user @ Nov 28 2005, 04:14 PM)
But for cross comparisons to future and past tests, we should consider a "cross-anchor" concept.
*

The idea sounds good, but with a low anchor plus a historical reference we have two additional formats featuring in each test. Comparisons between tests are also difficult to perform (Sebastian recalled the reasons).
ChiGung
post Nov 28 2005, 16:48
Post #634





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



I really believe the 2-pass is not a problem if used on the joined sample corpus with minimal gaps between samples and an accurate cue sheet. The same joined corpus and cue sheet could be used for all the codecs, then decoded, recut, and losslessly packed for distribution to testers. Distributing lossless decodes of all the test samples could make the test 'blinder' and avoid duplication of work and possible errors in the encoding/decoding process.
WMA's 2-pass VBR process is a bitrate targeting method only. It will distribute bitrate among the test samples in the same manner as it would if the samples were encountered inside their full tracks (it's all relative; there is no degradation of the 2-pass calibrated VBR performance when each sample's complexity is measured relative to the sample collection rather than relative to its source track).
It's only true that short samples would have slightly more VBR freedom than long ones, and a sample's relative shortness will differ depending on whether it is measured against its parent track or the bulk sample corpus. But the bitrates of all the other manually chosen VBR settings relate to the corpus only (they are related when they are totalled and checked against the test's target bitrate), and even if 2-pass VBR on full tracks were used, the achieved bitrates of the samples would still need to be summed and checked against the test's target bitrate.


--------------------
no conscience > no custom
guruboolez
post Nov 28 2005, 16:51
Post #635





Group: Members (Donating)
Posts: 3474
Joined: 7-November 01
From: Strasbourg (France)
Member No.: 420



QUOTE (ChiGung @ Nov 28 2005, 04:48 PM)
Wma's 2pass vbr process is a bitrate targeting method only. It will distribute bitrate among the test samples in the same manner as it would if the samples where encoutered inside their full tracks
*

Right. But a real track is not composed of 15 "difficult" short samples mixed into one big file. Therefore, the bitrate distribution of WMA 2-pass would differ between our test files and real encodings. That's the biggest concern. We can't test something that doesn't correspond to what users would get by using the same setting.
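To illustrate the concern with a toy model (my own sketch, not anything from this thread; the proportional-allocation rule and the complexity numbers are invented): a 2-pass encoder splits a fixed bit budget according to each segment's complexity relative to the whole file being encoded, so the same hard sample gets fewer bits inside an all-hard test corpus than inside its mostly easy parent track.

```python
# Toy 2-pass VBR model: allocate a fixed bitrate budget in proportion to
# each segment's complexity relative to the whole file being encoded.
# (Illustrative only; the numbers and the allocation rule are invented.)

def two_pass_bitrates(complexities, target_kbps):
    """Return per-segment bitrates that average to target_kbps overall."""
    budget = target_kbps * len(complexities)  # total kbps to hand out
    total = sum(complexities)
    return [budget * c / total for c in complexities]

# The same hard sample (complexity 3.0)...
parent_track = [1.0, 1.0, 1.0, 3.0]   # ...inside a mostly easy real track
test_corpus  = [3.0, 3.0, 3.0, 3.0]   # ...inside a corpus of only hard samples

print(two_pass_bitrates(parent_track, 128)[-1])  # 256.0: bits borrowed from easy parts
print(two_pass_bitrates(test_corpus, 128)[-1])   # 128.0: no easy parts to borrow from
```

In this model the test-file encode of the hard sample looks very different from the real-world encode, which is exactly the objection being raised.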
Sebastian Mares
post Nov 28 2005, 17:07
Post #636





Group: Members
Posts: 3637
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (ChiGung @ Nov 28 2005, 04:48 PM)
I really believe the 2pass is not a problem if used on the joined sample corpus with minimum gaps between samples and an accurate cue sheet - the same joined corpus and cue sheet could be used for all the codecs, then decoded, recut, and losslessly packed for distribution to testers. Distributing lossless decodes of the all the test samples could make the test 'blinder' and avoid duplication of work and possible errors in the encoding/decoding process.
Wma's 2pass vbr process is a bitrate targeting method only. It will distribute bitrate among the test samples in the same manner as it would if the samples where encoutered inside their full tracks (its all relative -there is no degradation of the 2pass calibrated vbr performance when the individual samples complexity is related to the sample collection - than related to the sample collection inside its source tracks)
Its only true that short duration samples would have slightly more vbr freedom than long samples, and samples relative shortness will differ relating to their parent track or the bulk sample corpus, but the bitrate of all the other manualy chosen vbr settings relate to the corpus only (they are related when they are totaled and checked against the test target bitrate) and* if 2pass vbr on full tracks was used the achieved bitrates of the samples would still need to be summed and checked against the tests target bitrate.
*

  1. Providing all samples losslessly => the total package size is over 300 MB. While most people might have DSL, why not make things as easy as possible for the testers?
  2. It has already been said that using such a synthetic scenario would not reflect real-world usage at all. The problem is that combining all the samples would result in one very complex file. Usually, audio files are not complex all the way through; they also have parts that are easier to encode.


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
ChiGung
post Nov 28 2005, 17:10
Post #637





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (guruboolez @ Nov 28 2005, 03:51 PM)
But a real track is not composed by 15 "difficult" short samples mixed into one big file. Therefore, the bitrate distribution of WMA 2-pass would differ between our test files and real encodings. It's the biggest concern. We can't test something that don't correspond to what users would get by using the same setting.
*

But the other codecs achieve the target bitrate using a prescribed setting; their setting likewise only produces the target bitrate for the 15 difficult files. In real-world usage their setting is likely to give a lower bitrate, in the same way that WMA's 2-pass-chosen (inaccessible) setting would result in a lower bitrate. It's the VBR quality being tested that produces the target bitrate for these samples. The 2-pass is just a rather accurate method of targeting bitrate; it simply doesn't return the precise settings needed to hit the bitrate, just the encoded file.


--------------------
no conscience > no custom
naylor83
post Nov 28 2005, 17:17
Post #638





Group: Members
Posts: 210
Joined: 19-June 05
From: Uppsala, Sweden
Member No.: 22842



QUOTE (ChiGung @ Nov 28 2005, 06:10 PM)
But the other codecs achieve the target bitrate using a prescribed setting - their setting also only produces the target bitrate for 15 difficult files, in real world usage their setting is likely to give lower bitrate, in the same way that wma's 2pass chosen (inaccessible) setting would result in a lower bitrate.


No, not if I've understood how normal (1-pass) VBR works:

The chosen samples will get the same bitrates whether included in the original song or not. This is because one-pass VBR (Vorbis, AAC, MP3) sets the bitrate for each frame based only on the complexity of that very frame, i.e. regardless of how easy or difficult the rest of the track is. This is quality-targeted encoding, as opposed to bitrate-targeted encoding.

Or maybe I just don't understand what you're trying to say.
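The contrast can be sketched in a few lines (my own toy model, not from the thread; the scale factor and complexity numbers are invented): in quality-targeted one-pass VBR, a frame's bitrate is a function of that frame alone, so cutting a sample out of its track changes nothing.

```python
# Toy 1-pass (quality-targeted) VBR model: each frame's bitrate is a
# function of that frame's complexity only, independent of its neighbours.
# (Illustrative only; the scale factor and numbers are invented.)

def one_pass_vbr(complexities, quality=1.0):
    """Return per-frame bitrates, one per complexity value."""
    return [quality * 64 * c for c in complexities]

hard_frame = 3.0
inside_track = one_pass_vbr([1.0, 1.0, hard_frame])[-1]  # with easy context
cut_out      = one_pass_vbr([hard_frame])[0]             # sample on its own
assert inside_track == cut_out  # context doesn't change the frame's bitrate
```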

This post has been edited by naylor83: Nov 28 2005, 17:20


--------------------
davidnaylor.org
ChiGung
post Nov 28 2005, 17:32
Post #639





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (naylor83 @ Nov 28 2005, 04:17 PM)
..The chosen samples will get the same bitrates, whether included in the original song or not. ...Or maybe I just don't understand what you're trying to say.
*

That's not necessarily your fault wink.gif

I'll try a different angle.
The test compares the performance of different VBR codecs running at optimal settings to produce a target average bitrate across a challenging sample corpus. The settings for most codecs are manually/expertly prescribed. WMA provides a 2-pass method to achieve the bitrate automatically, and no way to achieve it manually. I see no problem. The other codecs' achieved bitrate *also* relates only to the test corpus and does not reflect real-world performance in that respect.


--------------------
no conscience > no custom
stephanV
post Nov 28 2005, 17:41
Post #640





Group: Members
Posts: 394
Joined: 6-May 04
Member No.: 13932



QUOTE (ChiGung @ Nov 28 2005, 05:10 PM)
But the other codecs achieve the target bitrate using a prescribed setting - their setting also only produces the target bitrate for 15 difficult files, in real world usage their setting is likely to give lower bitrate
*

No, the achieved bitrates for the VBR presets are determined by using a large range of tracks.

http://www.hydrogenaudio.org/forums/index....showtopic=38955


Why using VBR 2-pass on just the samples is wrong was explained by Alex B a while ago.

This post has been edited by stephanV: Nov 28 2005, 17:42


--------------------
"We cannot win against obsession. They care, we don't. They win."
ChiGung
post Nov 28 2005, 17:45
Post #641





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE
No, the achieved bitrates for the VBR presets are determined by using a large range of tracks.
http://www.hydrogenaudio.org/forums/index....showtopic=38955

Actually, *estimated* using a large range of tracks; if the bitrate doesn't fit this sample corpus, the setting or the corpus would be changed to fit.

Edit: The VBR settings are determined (finally) by the sample corpus used, no more, no less.

This post has been edited by ChiGung: Nov 28 2005, 18:22


--------------------
no conscience > no custom
stephanV
post Nov 28 2005, 17:48
Post #642





Group: Members
Posts: 394
Joined: 6-May 04
Member No.: 13932



mellow.gif

Forgive me, I don't quite get your last comment...


--------------------
"We cannot win against obsession. They care, we don't. They win."
Synthetic Soul
post Nov 28 2005, 18:25
Post #643





Group: Super Moderator
Posts: 4887
Joined: 12-August 04
From: Exeter, UK
Member No.: 16217



QUOTE (rjamorim @ Nov 17 2005, 10:33 PM)
QUOTE (naylor83 @ Nov 17 2005, 07:50 PM)
Since I wasn't around when the last test was conducted - how exactly do you go about distributing a test like this? Are we given a package to download, which includes software and a load of samples and instructions?
You are given a package with all executable files: Java ABC/HR, command line decoders, and batch files to process the samples.

Then, you download each sample package at a time or the whole sample collection at once.

Download this, and read the readme:
http://pessoal.onda.com.br/rjamorim/abc-hr_bin.zip

You'll get an idea on how a listening test works.

Thank you for that Roberto. I have taken a look and am trying to get myself up to speed. The readme is excellent, and I have looked at ff123's page.

However, Sebastian, can you please confirm that this is the method that you will use? If I am to familiarise myself with ABC/HR Java I just want to check that I'm facing in the right direction! smile.gif

Will it be 0.5beta, the latest version available at Rarewares?

NB: I have a 95% chance of downloading the necessary files, but a 10% chance of actually returning any results. However I would very much like to participate, to both extend my knowledge of such things and contribute to the results of the test.


--------------------
I'm on a horse.
ChiGung
post Nov 28 2005, 18:26
Post #644





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (stephanV @ Nov 28 2005, 04:48 PM)
mellow.gif

Forgive me, I don't quite get your last comment...
*

It's basically not true, what you said, that the other codecs' settings are determined from a larger range of samples. Preliminary tests may include more samples, but the final achieved average bitrate is for the sample corpus used; it doesn't include parent tracks or other unused samples, and if the bitrate is off target, a new setting is required (ideally) or a corpus change (much less ideal).

Edit: well, it seems this detail is wrong, drat. Don't let it get in the way of the flow though; it shouldn't, it could've gone either way rolleyes.gif

This post has been edited by ChiGung: Nov 29 2005, 01:53


--------------------
no conscience > no custom
Sebastian Mares
post Nov 28 2005, 18:36
Post #645





Group: Members
Posts: 3637
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (Synthetic Soul @ Nov 28 2005, 06:25 PM)
QUOTE (rjamorim @ Nov 17 2005, 10:33 PM)
QUOTE (naylor83 @ Nov 17 2005, 07:50 PM)
Since I wasn't around when the last test was conducted - how exactly do you go about distributing a test like this? Are we given a package to download, which includes software and a load of samples and instructions?
You are given a package with all executable files: Java ABC/HR, command line decoders, and batch files to process the samples.

Then, you download each sample package at a time or the whole sample collection at once.

Download this, and read the readme:
http://pessoal.onda.com.br/rjamorim/abc-hr_bin.zip

You'll get an idea on how a listening test works.

Thank you for that Roberto. I have taken a look and am trying to get myself up to speed. The readme is excellent, and I have looked at ff123's page.

However, Sebastian, can you please confirm that this is the method that you will use? If I am to familiarise myself with ABC/HR Java I just want to check that I'm facing in the right direction! smile.gif

Will it be 0.5beta, the latest version available at Rarewares?

NB: I have a 95% chance of downloading the necessary files, but a 10% chance of actually returning any results. However I would very much like to participate, to both extend my knowledge of such things and contribute to the results of the test.
*



Yes, I can confirm that.

I hope you can find time to test some samples. smile.gif


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
stephanV
post Nov 28 2005, 19:58
Post #646





Group: Members
Posts: 394
Joined: 6-May 04
Member No.: 13932



QUOTE (ChiGung @ Nov 28 2005, 06:26 PM)
QUOTE (stephanV @ Nov 28 2005, 04:48 PM)
mellow.gif

Forgive me, I don't quite get your last comment...
*

Its basically not true what you said, that the other codecs settings are determined from a larger range of samples, prelim tests may include more samples but the final achieved average bitrate is for the sample corpus used, it doesnt include parent tracks or other unused samples, and if the bitrate is off target, new setting is required (ideally) or a corpus change (much less ideal)
*


No, this is not true.

It only makes sense to take the setting that achieves ~128 kbps on a large variety of samples, and as far as I understand this is also what is done. If not, the test would be pointless.


--------------------
"We cannot win against obsession. They care, we don't. They win."
ChiGung
post Nov 28 2005, 20:21
Post #647





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



If any codec-and-setting's actual average bitrate for the actual test samples fell outside the target bitrate, adjustments would be made to its setting or to the test samples to allow it to compete with the others while complying with the test's average bitrate requirement.
It's pure distraction, and false, to say that the other codecs target the bitrate requirement any more fairly than WMA would target it directly with 2-pass.
The other codecs' bitrates are effectively tuned for the sample corpus with a 'manual multipass' method that WMA can't use because its finer settings aren't available. WMA can tune its fine settings itself with an automatic 2-pass method. Those are the bare facts of this debate. A real-world usage consideration has been applied against WMA's 2-pass method that has not been applied to the others.
Real-world usage can't be tested without imitating real-world usage, but using plenty of different samples allows differences between the test focus and real-world usage to average out (the more times you roll a die, the closer the average of the summed rolls is likely to be to the middle).
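The die analogy is easy to check with a throwaway simulation of my own (nothing from the thread): the mean of many fair rolls drifts toward the expected value of 3.5.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def mean_roll(n):
    """Average of n fair six-sided die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The more rolls, the closer the average tends to sit to 3.5:
for n in (10, 1_000, 100_000):
    print(n, mean_roll(n))
```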


--------------------
no conscience > no custom
naylor83
post Nov 28 2005, 20:36
Post #648





Group: Members
Posts: 210
Joined: 19-June 05
From: Uppsala, Sweden
Member No.: 22842



QUOTE (ChiGung @ Nov 28 2005, 09:21 PM)
If any codec&setting's actual average bitrate for the actual test samples used fell outside target bitrate - adjustments would be made to its setting or the test samples to allow it compete with the others complying with the average bitrate requirement of the test.
Its pure distraction and false to say that the other codecs are targeting the bitrate requirement any more fairly than wma would target directly with 2pass.
The other codecs bitrates are tuned for the sample corpus effectively with a 'manual multipass' method, that wma cant use because its finer settings arent available. WMA can tune its fine settings itself with an automatic 2 pass method. Thats the bare facts of this debate. A real world usage consideration has been applied towards wma's 2pass method which has not been recognised against the others.
Real world usage cant be tested without imitating real world usage, but using plenty of different samples allows differences in test focus and real world usage to average out (the more times you role a dice, the more likely the average score of the summed dice rolls is to be close to the middle)
*


Sebastian - can you confirm that this is the method used? I.e. that the bitrate of the other contenders is tuned based on the sample corpus rather than a large batch of full songs and tracks.


--------------------
davidnaylor.org
Sebastian Mares
post Nov 28 2005, 20:46
Post #649





Group: Members
Posts: 3637
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



No, I cannot confirm that. The settings used are going to be settings that average 128 kbps on a large batch of full songs. If the codecs reach a bitrate that is too high with the given samples in this test, I will replace one or another sample.

If I could only find ff123's post regarding a similar decision in a test ran in the past...
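For what it's worth, verifying that a setting averages ~128 kbps over a batch of full tracks is simple arithmetic (the sizes and durations below are invented for illustration): kbps = total bits / total seconds / 1000.

```python
# Sketch of verifying a VBR setting's average bitrate over a batch of
# full tracks. The (size, duration) pairs are invented for illustration.

def avg_bitrate_kbps(files):
    """files: iterable of (size_in_bytes, duration_in_seconds) pairs."""
    total_bits = sum(size * 8 for size, _ in files)
    total_secs = sum(dur for _, dur in files)
    return total_bits / total_secs / 1000

batch = [(4_800_000, 300), (3_200_000, 200)]  # two hypothetical encodes
print(round(avg_bitrate_kbps(batch)))  # 128
```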

This post has been edited by Sebastian Mares: Nov 28 2005, 20:57


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
ChiGung
post Nov 28 2005, 21:07
Post #650





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (Sebastian Mares @ Nov 28 2005, 07:46 PM)
No, I cannot confirm that. The settings used are going to be settings that average 128 kbps on a large batch of full songs. If the codecs reach a bitrate too high with the given samples in this thread, I will replace the one or the other sample with a different one.

That doesn't contradict what I said...
QUOTE (Chigung)
If any codec&setting's actual average bitrate for the actual test samples used fell outside target bitrate - adjustments would be made to its setting or the test samples to allow it compete with the others complying with the average bitrate requirement of the test.


Either adjusting the setting or the samples is a deviation from real-world usage.
And tweaking the test material to raise or lower a codec's utilised bitrate for the challenging corpus is obviously not ideal; it could benefit or be detrimental to that codec's performance.

But if you decide to do it that way, and find settings averaging 128 kbps over a wide range, then just 2-pass the joined test corpus attached to the extra 'normal material' to produce your average 128 kbps WMA encode, and do the normal manual multipass on the same material to find/check the settings for the other codecs and get their encodes.

(If distributing all of the samples blindly and losslessly is too much, just distribute the WMA decodes losslessly, if they can't be cut well.)

This post has been edited by ChiGung: Nov 28 2005, 22:41


--------------------
no conscience > no custom
