Multiformat 128 kbps Listening Test, Pre-Test Discussion
user
post Nov 30 2005, 13:08
Post #701





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



Interpretation of the poll:

I am a little surprised that WMA Pro is such a clear winner with close to 50%. Apparently HA members are interested in WMA Pro.

None-votes:
Or did people choose WMA Pro because it wouldn't cause problems in the 128k test setup? That the test setup is a concern to HA members is shown by the strong result for adding no additional format to the test, i.e. votes for keeping the test simple so that there are no flaws.

HA members don't show much interest in WMA Standard, or are again concerned about the test setup, i.e. the difficulty of hitting the 128k target bitrate.

No surprise (for me) at HA: members are interested in seeing how MPC performs against the new encoders. Those voters may also have had the test setup in mind, with MPC as a comparable anchor.




Each vote could be driven by several factors, in no particular order:
- limitations of one's existing portable hardware
- personal interest in future hardware
- scientific curiosity (not influenced by the personal interests above) about a ranking of the various formats, so that even unpopular encoders were selected
- thoughts about a good test setup


So, since we are here out of a more general scientific curiosity and are not prejudiced against encoders in general (otherwise we would stand on the same low level as other groups: "Hifi-Wigwam" wink.gif), we should be open to testing all formats, of course considering quality and practical usability.
So I suggest testing Ogg, the two AAC encoders and LAME, perhaps together with the poll winner WMA Pro.
Considering the whole picture, however, I suggest something different: since we already have AAC in two modern variants in the test, there is no reason not to add a second LAME, the old but famous 3.90.3.

A second, consecutive test should then contain a comparable anchor encoder, plus WMA Standard, WMA Pro (or LAME 3.90.3 instead) and MPC, if possible at a bitrate lower than 128k, and not only because of the WMA-Standard-128k problem.

Of course, it is no big problem if WMA Pro and LAME 3.90.3 are swapped between those two tests.

This post has been edited by user: Nov 30 2005, 13:20


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
ChiGung
post Nov 30 2005, 13:11
Post #702





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (sehested @ Nov 30 2005, 07:44 AM)
QUOTE (ChiGung @ Nov 29 2005, 04:24 PM)
@Jaz - leave the condescension to those who know how to read a freekin' equation mad.gif
Could we please have proper tone...

I got frustrated because Jaz obviously just skimmed my post and decided to confront me with the first thing that confused him. After several pages of me patiently trying to get you guys to grapple with the reality of what you have previously rejected for indistinct (lack of) reasoning, I'm supposed to take Jaz's 'ahem'-ing supposedly nonsensical objections without complaint. Maybe I should, but I'm weak dry.gif
QUOTE
I too fail to see the point in the formulas you present. blink.gif

The point is to model the result of the bitrate-targeting process mathematically, to make it easier for those with the means to do so to conceptualise its nature. I've been doing a lot of mathematical modelling recently and am confident the model is analogous to the 2-pass encoder's actual performance. It could be tweaked, for example by further scaling Demandrate by a function of the target bitrate, but based on the axiom that the pre-pass discerns a single global VBR adjustment, its linearity, and thus the conclusion I drew from it, are secure.
It's true that the mathematical rendering of the result shouldn't add anything, but it might help someone with a chance of understanding the behaviour of the process to focus mathematically on the object in contention.
QUOTE
Adding engineering units to the formulas might help you to see what I mean:
phrase_Demandrate kbps = phrase_Bitrate kbps / target_Bitrate kbps
phrase_Bitrate kbps = phrase_Demandrate kbps * target_Bitrate kbps

Now according to my old math book:
kbps / kbps = factor without unit
kbps * kbps = kbps

It's like apples and pears, you are not supposed to compare them. wink.gif

I noticed the possible confusion over the units too, but decided to leave the names as they were, so I could waste the time of people who were only reading my posts to find fault rather than reason, to dismiss rather than correct, to fuzz the subject rather than clarify... with a little wild goose chase tongue.gif

DemandRate was never claimed to be in the same units as Bitrate; it is bitrate divided by bitrate, i.e. a dimensionless factor:

(phrase_Demandrate = phrase_Bitrate / target_Bitrate)

It's just a variable name; what the name refers to is defined mathematically and in the plainest English I could muster, for your comprehension of my post, if that is your goal.
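For what it's worth, the model as stated fits in a few lines of code. This is a sketch of the post's own axiom (one global linear VBR adjustment derived by the pre-pass), not Microsoft's documented algorithm; the variable names follow the post.

```python
# Sketch of the 2-pass targeting model described in this thread: the
# pre-pass measures per-phrase demand, then a single global linear
# scale factor maps demand onto the target bitrate.

def two_pass_model(phrase_bitrates, target_bitrate):
    """phrase_bitrates: unconstrained VBR bitrate of each phrase (kbps)."""
    # phrase_Demandrate = phrase_Bitrate / target_Bitrate (dimensionless)
    demand = [b / target_bitrate for b in phrase_bitrates]
    # Global adjustment discerned by the pre-pass: scale all demands so
    # that the average allocated bitrate equals the target.
    mean_demand = sum(demand) / len(demand)
    return [d / mean_demand * target_bitrate for d in demand]

allocated = two_pass_model([96.0, 160.0, 128.0], 128.0)
print(allocated)                        # relative allocation is preserved
print(sum(allocated) / len(allocated))  # average lands on the 128 kbps target
```

Linearity is the whole argument here: every phrase's allocation is its demand times one global constant, so relative bit distribution between phrases is unchanged by the targeting step.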
QUOTE
Nothing new here.

Agreed, just another social critique: skilful, but still completely missing the point.

People, this is supposed to be an objective forum, no? You've collectively rejected the 2-pass method for WMA without valid reason, applied no scrutiny at all to Nero's choice of ABR encoding, and every attempt I've made to help you come to terms with these things has been met with... windbush... silence... irrelevant nit-picking or insubstantial redirections. I don't know what else I can say now.

HA / HiFi Wigwam: what's the difference here?

I can guess the mob response: 'oh, he's getting all uppity now; the devs are saying nothing, so he must be wrong; I can get him on this point he slipped up on, even though I haven't a clue about most of what he wrote; Guru said this, ff123 said that, JohnV's a Nero engineer, he can't be mistaken.'

Want to knock me down a peg or two? Let's have more posts criticising my manner, then.
Want to fatigue the argument and carry on regardless once it's knackered? More posts finding faults, real or half-imagined.

I suggest you stop concentrating on demoting the sole detractor, put your talents into figuring the subject out yourselves, and then explain your understanding. How much counter-explanation has there been?

If Nero go ahead with submitting their ABR without admitting its acute differences from in-situ encoding (it could hardly be more different; that should not be hard to see, and if you can't see it, this is not your field of expertise and you can forget about understanding how WMA's 2-pass targeting can resolve its own differences*)...

...I'll be laughing like a man condemned.

*At least it is VBR! Even if my model is wrong, its individual bitrate choices can deviate only slightly from an in-situ encode. Nero's ABR bitrate distribution bears very little relation to the in-situ one other than in the average; that is an easily confirmable fact, dudes.

freekin' peace, headbang.gif


--------------------
no conscience > no custom
Sebastian Mares
post Nov 30 2005, 14:29
Post #703





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



Why do you keep comparing 2-pass VBR with ABR when they are not the same thing, as Guru and others pointed out? And using a synthetic sample made almost entirely of complex material has absolutely no real-world meaning.

This post has been edited by Sebastian Mares: Nov 30 2005, 14:29


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
naylor83
post Nov 30 2005, 14:44
Post #704





Group: Members
Posts: 210
Joined: 19-June 05
From: Uppsala, Sweden
Member No.: 22842



QUOTE (Sebastian Mares @ Nov 30 2005, 03:29 PM)
Why do you keep comparing 2-pass VBR with ABR when they are not the same thing, as Guru and others pointed out? And using a synthetic sample made almost entirely of complex material has absolutely no real-world meaning.
*


I think he's on to something that we can't grasp. (I can only speak for myself, anyway.) tongue.gif

Edit: ChiGung, why don't you encode a few tracks and samples and show us the results? Maybe we'd get your point wink.gif

This post has been edited by naylor83: Nov 30 2005, 14:48


--------------------
davidnaylor.org
ff123
post Nov 30 2005, 16:40
Post #705


ABC/HR developer, ff123.net admin


Group: Developer (Donating)
Posts: 1396
Joined: 24-September 01
Member No.: 12



ChiGung's point is that Nero ABR has a similar (although not identical) problem to WMA 2-pass when encoding short samples. That is, the bitrate you get when encoding just the sample is not the same as when you encode the entire track.

His suggestion (I think) is to use all of the samples pasted together for wma 2-pass to work on, which is not a bad idea as long as the samples are not biased towards being too complex (which I think previous sample sets have been).

ff123
ChiGung
post Nov 30 2005, 17:12
Post #706





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (naylor83 @ Nov 30 2005, 01:44 PM)
QUOTE (Sebastian Mares @ Nov 30 2005, 03:29 PM)
Why do you keep comparing 2-pass VBR with ABR when they are not the same thing, as Guru and others pointed out? And using a synthetic sample made almost entirely of complex material has absolutely no real-world meaning.

I think he's on to something that we can't grasp. (I can speak for myself anyway.) tongue.gif

Edit: ChiGung - why don't you encode a few tracks and samples and show us the results. Maybe we'd get your point wink.gif
*


Because my machines are in a mess with other stuff and I don't have either codec to hand; but I've been writing my own codec for the last few months and am very familiar with the technologies I'm describing. You only need to visualise the basic nature of the different processes to see 'how they compare'.

I'm comparing them because WMA Standard was deselected on criteria that the proposed Nero ABR method (of running only on samples) would not pass either, while WMA Standard, if examined and conducted properly, could pass.
(It's not even a criterion I would have thought essential to the test's veracity, but whatever~)

Nero's ABR reacts to bit demand as it encounters it and adjusts its VBR setting (raises or lowers its factor for allocating bits to demand) to stay within bitrate allocation limits (the ABR target) over a time window specified in its design, related to the expected minimum playback buffer size. If the audio preceding a sample has higher-than-average demand, the ABR process will start by allocating (significantly) fewer bits at the beginning of the sample than if the preceding audio had lower-than-average demand, and over the length of the sample the 'generosity of allocation' will reverse to counteract this. The effect will have a 'half-life' of several seconds at least. To be similar to in-situ encoding, ABR must be given a run-in section of the preceding audio as long as the span of its 'playback buffer' design, because its bitrate distribution can be very different if it starts straight into the sample.

Edit: starting an ABR encode on the sample without its preceding section would tend to hinder the codec's performance rather than boost it, but it is still a not-insignificant difference from in-situ performance.
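The claimed run-in effect can be illustrated with a toy rate controller: a generic feedback loop invented for illustration, not Nero's actual algorithm, with made-up gains and windows.

```python
# Toy ABR: a feedback loop nudges a quality scale so that a running
# average of the produced bitrate tracks the target. Purely
# illustrative; the constants are arbitrary, not Nero's.

def toy_abr(demands, target, gain=0.1):
    scale, avg, out = 1.0, target, []
    for d in demands:
        bits = d * scale
        out.append(bits)
        avg = 0.9 * avg + 0.1 * bits              # running-average bitrate
        scale -= gain * (avg - target) / target   # steer toward the target
    return out

sample = [160.0] * 20                              # a demanding clip
cold = toy_abr(sample, 128.0)                      # clip encoded in isolation
warm = toy_abr([96.0] * 50 + sample, 128.0)[50:]   # quiet lead-in first

# Identical frames receive different allocations depending on what
# preceded them, and the difference decays over the clip.
print(cold[0], warm[0])
```

After the quiet lead-in the controller has raised its scale, so the same first frame of the clip gets more bits than in the cold start: the direction of the bias depends entirely on what came before the sample.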

The Nero developers and others know this but are keeping quiet about it, because they're either not reading the thread or it suits their agenda to let me go on sounding like a lone madman in front of rightly bewildered and innocent testers.

Edit: (more cranky paranoid BS, no doubt rolleyes.gif)

It is a little disturbing to me that there has been no substantial feedback from experts about all this. But I'll stop taking it seriously, until I forget myself again rolleyes.gif I'm sorry for getting prickly with people; I shouldn't have, but I'm tired of re-explaining the situation. The thread is basically being let down by the usual experts it relies on to put these things straight.

I would like to duck out now.

user's last post was very well considered and I apologise for leapfrogging it.

This post has been edited by ChiGung: Nov 30 2005, 17:40


--------------------
no conscience > no custom
ChiGung
post Nov 30 2005, 17:21
Post #707





Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713



QUOTE (ff123 @ Nov 30 2005, 03:40 PM)
ChiGung's point is that Nero ABR has a similar (although not identical) problem to WMA 2-pass when encoding short samples. That is, the bitrate you get when encoding just the sample is not the same as when you encode the entire track.

His suggestion (I think) is to use all of the samples pasted together for wma 2-pass to work on, which is not a bad idea as long as the samples are not biased towards being too complex (which I think previous sample sets have been).

ff123
*

Ah, thank you ff123: sentient feedback. Sorry for all the hot air; I'll leave you guys to it now.

Regards,
andy


--------------------
no conscience > no custom
Ivan Dimkovic
post Nov 30 2005, 17:44
Post #708


Nero MPEG4 developer


Group: Developer
Posts: 1466
Joined: 22-September 01
Member No.: 8



QUOTE
Nero's ABR reacts to bit demand as it encounters it and adjusts its VBR setting (raises or lowers its factor for allocating bits to demand) to stay within bitrate allocation limits (the ABR target) over a time window specified in its design, related to the expected minimum playback buffer size. If the audio preceding a sample has higher-than-average demand, the ABR process will start by allocating (significantly) fewer bits at the beginning of the sample than if the preceding audio had lower-than-average demand, and over the length of the sample the 'generosity of allocation' will reverse to counteract this. The effect will have a 'half-life' of several seconds at least. To be similar to in-situ encoding, ABR must be given a run-in section of the preceding audio as long as the span of its 'playback buffer' design, because its bitrate distribution can be very different if it starts straight into the sample.


First of all, "ABR" is just CBR with a relatively larger bit reservoir. In the new encoder, this bit reservoir is proportional to the final size (2.5% of it), plus an additional "startup" bitres size that, by the way, every CBR encoder without a requirement for low startup decoding delay has.

A little theory: in the general case of a perceptual encoder with a bit reservoir, typical of most CBR and ABR encoders on the market, bit rate control works by trying to keep the bit reservoir half-full, adding and donating bit reservoir bits depending on frame complexity and bitres statistics.

Basically, I don't know what the big problem with this is; it is the way commercial encoders with bit reservoirs have worked since the invention of audio coding. Of course, if you drain the bit reservoir, the next "hard to encode" sample will have fewer bits, but I cannot recall anyone complaining about this in previous tests, including the MPEG ones. The only difference is the "run-in" time (the time to fill and empty the bit reservoir), which depends on the bitres size but also on the encoder strategy; it could be pretty quick or pretty long.

By the way, your "run in / run out" situation could in theory happen on a set of, say, 10-100 silent and pathological samples. So what do you propose? To eliminate all CBR encoders, or force them to use a bit reservoir of 0 bits, in order to make sure all frames are encoded with the same quality regardless of their order? That is just pure nonsense.

Basically, forcing encoders to behave as you wish does not really reflect typical consumer use: people encode music tracks and expect a certain constant quality in those tracks. Period. The goal of the ABR encoder is to maintain the average bit rate and constant quality on a music track, which is exactly what we do (or at least I think we do).
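The borrow/donate mechanism is easy to sketch. This is a generic illustration of a bit reservoir with arbitrary numbers, not the Nero encoder's actual rate control:

```python
# Generic bit-reservoir rate control: each frame has a fixed budget;
# easy frames donate their surplus to the reservoir, hard frames borrow
# from it, and the reservoir bounds limit both. Illustrative only.

def encode(demands, frame_budget, reservoir_size):
    reservoir = reservoir_size // 2          # start half-full
    allocations = []
    for demand in demands:
        if demand <= frame_budget:
            donated = min(frame_budget - demand, reservoir_size - reservoir)
            reservoir += donated             # easy frame: save bits
            allocations.append(frame_budget - donated)
        else:
            borrowed = min(demand - frame_budget, reservoir)
            reservoir -= borrowed            # hard frame: spend saved bits
            allocations.append(frame_budget + borrowed)
    return allocations, reservoir

# A drained reservoir leaves the next hard frames underfed: the
# "run in / run out" effect, here at frame granularity.
print(encode([200, 200, 200, 200], 100, 200))  # -> ([200, 100, 100, 100], 0)
```

Note the conservation property: total bits spent equals frames times budget plus whatever the reservoir level dropped by, which is why the average bitrate stays pinned to the target over any long stretch.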

If you start adding further demands, we might end up in the strange situation that nothing can be tested:

- VBR encoders sometimes do not generate files within the 128 kbps limits (hey, let's do Vorbis -q 4.25 with fatboy, velvet and castanets for starters). They might also undercode the sample set in a quality-vs-bitrate check, and thanks to this we would raise their quality level and pronounce it "average 128 kbps", depending entirely on the material used for checking the bitrate distribution.

- Some CBR encoders with bad bit buffer management would drain their whole bit reservoir asap and end up with poor quality (but that is not a problem of the test smile.gif )

- Some ABR encoders might encode material slightly differently depending on the order of the material; but, as I explained, that can happen even at the frame level (a few zero frames, and vice versa)

- 2-pass VBR encoders might do the same (actually, they would)

- CBR encoders would be handicapped if we fed them material whose PE distribution is clearly higher than the test requirements...

Etc. So, in the end, we would not be testing anything, as somebody would complain about something anyway.

QUOTE
The Nero developers and others know this but are keeping quiet about it, because theyre either not reading the thread or it suits their agenda to let me go on sounding like a lone madman in the faces of rightly bewildered and innocent testers.


Oh, please...

As far as I am concerned, WMA 2-pass should be tested. But, if I recall correctly, it was rejected because:

a) it is not widely used
b) 2-pass encoding is not widely available in encoding apps

This post has been edited by Ivan Dimkovic: Nov 30 2005, 17:47
sehested
post Nov 30 2005, 18:15
Post #709





Group: Members (Donating)
Posts: 325
Joined: 5-April 04
From: Copenhagen, Denmark
Member No.: 13246



QUOTE (Ivan Dimkovic @ Nov 30 2005, 08:44 AM)
As far as I am concerned, WMA 2-pass should be tested.  But, if I recall correctly - it was rejected because it is:

a) Not widely used
b) 2-pass encoding is not widely available in encoding apps
No, not really. WMA 2-pass was selected for this test, but was replaced by WMA Pro due to the following problems with WMA Standard:
- obtaining full-length samples for proper 2-pass encoding within the time frame of this test proved impracticable
- alternative settings did not result in bitrates within a comparable range for this test
Alex B
post Nov 30 2005, 18:22
Post #710





Group: Members
Posts: 1303
Joined: 14-September 05
From: Helsinki, Finland
Member No.: 24472



If I have understood ChiGung correctly (I'm not a mathematician), I proposed a quite similar method for VBR 2-pass about 20 pages ago (150 s average part - 30 s sample - 150 s average part), but later I realized that it would be
1) too complicated to explain to the general audience
2) not practical for the test conductor
3) open to the claim that it does not represent a real-life situation, and thus unfair
VBR 2-pass has limited usability in real life. I would rather encode e.g. a video soundtrack using unconstrained VBR unless the average bitrate must be exact. Often an exact average bitrate is not needed with 2-channel 128 kbps audio soundtracks, since the video file accounts for the major part of the bandwidth anyway and the overall bitrate can be adjusted slightly with the 2-pass video codec settings. The situation may be different with multi-channel audio formats, which generally need higher bitrates to sound good.

In my opinion, WMA Pro VBR (1-pass) is a somewhat more interesting contender for this test.

WMA Standard VBR 50 (1-pass) should be tested in a separate ~100 kbps test.

Ivan explained many things, but I hope the LAME developers can answer my question about LAME's ABR behavior, even though the ABR mode is not going to be tested this time. In the past it has been used instead of VBR at lower bitrates. It would be nice to know how it takes the preceding and following parts into account when it determines the bitrate allocation.


--------------------
http://listening-tests.freetzi.com
Halcyon
post Nov 30 2005, 18:23
Post #711





Group: Members
Posts: 244
Joined: 6-November 01
Member No.: 416



QUOTE (Ivan Dimkovic @ Nov 30 2005, 06:44 PM)
As far as I am concerned, WMA 2-pass should be tested.  But, if I recall correctly - it was rejected because it is:
a) Not widely used
b) 2-pass encoding is not widely available in encoding apps


Which is precisely why some of these issues (though not this particular one) are being debated.

If 2-pass is not OK for WMA because it's not "widely used", then why use LAME development versions (betas/alphas), which are not "widely used" either?

Why use aoTuV tunings, which are not "widely used" or "widely available in encoding apps"?

To Sebastian:

All this silliness about codec selection can/could have been solved by stating a solid research question.

Is it, for example, "widely used codecs in their most widely used versions"

or

"best-of-breed codecs in their best-tuned versions"?

If the rules are not the same for all contestants, it's not really a very fair test (and could be criticized in scientific terms).

I know this is beating a dead horse, and I don't want to advocate any changes at this point. Organizing these things is hard enough as it is smile.gif

Maybe this should serve as a reminder to future test organizers about how test methodologies are set up: proper research question first (formulate a null hypothesis, if you will), everything else after that and based on that.
Gabriel
post Nov 30 2005, 18:26
Post #712


LAME developer


Group: Developer
Posts: 2950
Joined: 1-October 01
From: Nanterre, France
Member No.: 138



QUOTE
It is a little disturbing to me that there has been no substantial feedback from experts about all this

I already suggested something that should solve this potential problem: let encoders adapt themselves to the content by not grading the first few seconds of the sample.
If you encode a few seconds at the beginning that are not judged by the listener, you solve both the psymodel adaptation and the bitrate-management adaptation problems.

Otherwise, of course, if an encoder defines a big enough bit reservoir, some very short samples might fit entirely into the remaining reservoir space, leading to a very high local bitrate, even though the encoder would still be respecting its bitrate/reservoir constraints.
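The short-sample effect described in that last paragraph is just arithmetic: over a clip of D seconds, an encoder may legally spend the clip's CBR budget plus whatever is sitting in the reservoir, so the achievable local bitrate grows as clips get shorter. The numbers below are illustrative, not any particular codec's reservoir size:

```python
# Upper bound on the local bitrate of a short clip when the encoder is
# allowed to dump its bit reservoir into it. Illustrative numbers only.

def max_local_bitrate(target_kbps, reservoir_kbit, duration_s):
    return target_kbps + reservoir_kbit / duration_s

for dur in (30.0, 10.0, 5.0):
    print(dur, max_local_bitrate(128.0, 160.0, dur))
```

With a hypothetical 160 kbit reservoir, a 30 s clip can only exceed 128 kbps by about 5 kbps, but a 5 s clip could reach 160 kbps while still "respecting" the constraint.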


QUOTE
bit rate control works by trying to maintain the bit reservoir half-full

Hint: targeting a half-full state is perhaps not the best choice wink.gif It seems to me that most people usually target a low value like 10 or 20% fullness...
Gabriel
post Nov 30 2005, 18:29
Post #713


LAME developer


Group: Developer
Posts: 2950
Joined: 1-October 01
From: Nanterre, France
Member No.: 138



QUOTE
I hope LAME developers can answer to my question about the LAME ABR behavior

Our current ABR is quite crude: we do bit allocation by lowering the target bitrate by 10% for long blocks, and on short blocks we allocate based on PE, without considering bitrate.
Overall it works, but it is quite basic.
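As a rough sketch of that rule (heavily simplified from the description above; the real LAME code is more involved, and `pe_ref` is an invented normalisation constant, not a LAME parameter):

```python
# Simplified version of the allocation rule described above: long
# blocks aim 10% under the ABR target, short blocks are allocated
# from perceptual entropy (PE) without considering bitrate.
# pe_ref is a made-up normalisation constant for illustration.

def abr_frame_bits(target_bits, short_block, pe, pe_ref=750.0):
    if short_block:
        return target_bits * pe / pe_ref   # PE-driven, bitrate ignored
    return target_bits * 0.9               # long block: 10% under target

print(abr_frame_bits(400.0, short_block=False, pe=0.0))
print(abr_frame_bits(400.0, short_block=True, pe=1500.0))
```

The point of the 10% headroom on long blocks is that the PE-driven short blocks can then overshoot (as in the second call) without blowing the average.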
Triza
post Nov 30 2005, 19:27
Post #714





Group: Members
Posts: 367
Joined: 16-November 03
Member No.: 9867



QUOTE (Halcyon @ Nov 30 2005, 09:23 AM)
QUOTE (Ivan Dimkovic @ Nov 30 2005, 06:44 PM)
As far as I am concerned, WMA 2-pass should be tested.  But, if I recall correctly - it was rejected because it is:
a) Not widely used
b) 2-pass encoding is not widely available in encoding apps


Which is precisely the point why some of the issues (not this particular though) are being debated.

If 2-pass is not ok for WMA, because it's not "widely used", then why use Lame developmental versions (betas/alphas), when they are not "widely used".

Why use Aotuv tunings, when they are not "widely used" or "widely available in encoding apps".

To Sebastian:

All this silliness about codec selection can/could've be(en) solved by stating a solid research question.

Is it for example "widely used codecs in their most widely used versions"

or

"Best of breed codecs in their best tuned versions"

If the rules are not the same for all the contestants, it's not really very fair test (and could be criticized in scientific terms).

I know this is beating a dead horse and I don't want to advocate any changes at this point. Organizing these things is hard enough as it is smile.gif

Maybe this should work as a reminder to future test organizers, about how test methodologies are set up. Proper research question first (formulate a null hypothesis, if you will), everything else after that and based on that.
*



For God's sake, guys. If I were Sebastian, I would abandon this whole listening test, because you guys just cannot move on.

Guru set out the goals quite clearly: we want to test the latest encoders (after all, we want to measure progress), but only the ones that have hardware support. That would mean WMA Standard in 2-pass mode. Sadly we cannot do 2-pass, because that would require encoding full tracks, and Sebastian rightly does not want to get embroiled in copyright issues. So there was a poll on what to do with WMA, and people chose WMA Pro.

Yes, the goals are not fully met, but we have to move on. Why do we need to do this navel-gazing all the time?

While I am at it, @Gambit and @Dibrom, regarding your recent criticisms:

Sage advice from the fence, however constructive it seems, is unnecessary. There is a point in any teamwork when people have to move on, and when constructive-looking criticism becomes a hindrance that can easily jeopardize the whole project, especially when the whole thing is voluntary work by Sebastian, Guru and the small minority who drive it. Very intelligent and rightly respected people like you should realize that and, for the greater benefit, should keep quiet.

We should be talking about which samples to use, etc., not which codec to use.

Triza

This post has been edited by Triza: Nov 30 2005, 19:40
Sebastian Mares
post Nov 30 2005, 20:36
Post #715





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



I will post the samples I have in a few minutes as HE-AAC or MP3 so you get an idea. What I am looking for are some orchestral samples. I hope Guru can post some, or I am going to use ones from Roberto's test. Also, I still hope that PoisonDan is going to post the Sash! sample I requested.

This post has been edited by Sebastian Mares: Nov 30 2005, 20:36


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
kwanbis
post Nov 30 2005, 21:18
Post #716





Group: Developer (Donating)
Posts: 2362
Joined: 28-June 02
From: Argentina
Member No.: 2425



QUOTE (Triza @ Nov 30 2005, 06:27 PM)
Sadly we cannot do 2-pass because that would required to be executed on full tracks and Sebastian rightly do not want to get embroiled on copyright issues.

Sorry if this has already been asked, but I don't think anybody would get into trouble for doing this. I mean, he won't keep the whole song, right? He would just encode the file, decode, cut, and delete. In fact, a 30-second clip is just as legal as this (I have seen no laws saying that 30-second clips are allowed). If we wanted to be strict about the law, we would be asking for permission even for 30-second clips. Crippling the test because of this is sad.

This post has been edited by kwanbis: Nov 30 2005, 21:25


--------------------
MAREO: http://www.webearce.com.ar
Sebastian Mares
post Nov 30 2005, 21:24
Post #717





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



Sorry, it is not an option. I know what happened to an HA administrator because of the great German laws. The discussion is over.


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
kwanbis
post Nov 30 2005, 21:26
Post #718





Group: Developer (Donating)
Posts: 2362
Joined: 28-June 02
From: Argentina
Member No.: 2425



Then the test won't be serving its purpose: as stated, an encoder adapts to a whole track differently than to a 30-second clip.

Edit: maybe we can buy CDs, with a donation.

This post has been edited by kwanbis: Nov 30 2005, 21:29


--------------------
MAREO: http://www.webearce.com.ar
Sebastian Mares
post Nov 30 2005, 21:32
Post #719





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (kwanbis @ Nov 30 2005, 09:26 PM)
then the test won't be serving its purpose. as stated, an econder would adapt to a whole track diferently than to a 30 secs clip.

edit: maybe we can buy CDs, with a donation.
*


Huh? Haven't we been running listening tests with samples for at least three years now? There was nothing wrong with them until now. blink.gif

And buying CDs now is too late. I don't want to postpone the test again.

This post has been edited by Sebastian Mares: Nov 30 2005, 21:33


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
Sebastian Mares
post Nov 30 2005, 21:42
Post #720





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



Samples: http://www.hydrogenaudio.org/forums/index....showtopic=39288


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
kwanbis
post Nov 30 2005, 21:42
Post #721





Group: Developer (Donating)
Posts: 2362
Joined: 28-June 02
From: Argentina
Member No.: 2425



QUOTE (Sebastian Mares @ Nov 30 2005, 08:32 PM)
Huh? Don't we run listening tests with samples for at least three years? There was nothing wrong with them until now. blink.gif
And buying CDs now is too late. I don't want to postpone the test again.

It's just my opinion; one tends to try to make it better each time. I understand your desire to get it done, but I think a postponed and correct test is better than a questioned but sooner one. Anyway, don't take offence, it's just a thought.


--------------------
MAREO: http://www.webearce.com.ar
Sebastian Mares
post Nov 30 2005, 21:46
Post #722





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (kwanbis @ Nov 30 2005, 09:42 PM)
QUOTE (Sebastian Mares @ Nov 30 2005, 08:32 PM)
Huh? Don't we run listening tests with samples for at least three years? There was nothing wrong with them until now. blink.gif
And buying CDs now is too late. I don't want to postpone the test again.

its just my opinion. one tends to try make it better each time. i understand you desire to do it. but i think a postponed and correct test is better than a questioned but sooner one. anyway, don't take ofence, is just a thought.
*



I still fail to see why you call it "questioned". Were Roberto's or Guru's tests questioned because samples were encoded instead of full tracks? blink.gif

This post has been edited by Sebastian Mares: Nov 30 2005, 21:48


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
naylor83
post Nov 30 2005, 21:51
Post #723





Group: Members
Posts: 210
Joined: 19-June 05
From: Uppsala, Sweden
Member No.: 22842



QUOTE (ChiGung @ Nov 30 2005, 06:21 PM)
QUOTE (ff123 @ Nov 30 2005, 03:40 PM)
ChiGung's point is that Nero ABR has a similar (although not identical) problem to wma 2-pass when using short samples to encode.  That is, the bitrate if encode the sample is not the same as if you were to encode the entire track.

His suggestion (I think) is to use all of the samples pasted together for wma 2-pass to work on, which is not a bad idea as long as the samples are not biased towards being too complex (which I think previous sample sets have been).

ff123
*

Ah thankyou ff123 - sentient feedback. Sorry for all the hot air, ill leave you guys to it now.

Regards,
andy
*



Thanks for the clean short version, ff123. I'm with you now, ChiGung.


--------------------
davidnaylor.org
Sebastian Mares
post Nov 30 2005, 21:55
Post #724





Group: Members
Posts: 3629
Joined: 14-May 03
From: Bad Herrenalb
Member No.: 6613



QUOTE (naylor83 @ Nov 30 2005, 09:51 PM)
QUOTE (ChiGung @ Nov 30 2005, 06:21 PM)
QUOTE (ff123 @ Nov 30 2005, 03:40 PM)
ChiGung's point is that Nero ABR has a similar (although not identical) problem to wma 2-pass when using short samples to encode.  That is, the bitrate if encode the sample is not the same as if you were to encode the entire track.

His suggestion (I think) is to use all of the samples pasted together for wma 2-pass to work on, which is not a bad idea as long as the samples are not biased towards being too complex (which I think previous sample sets have been).

ff123
*

Ah thankyou ff123 - sentient feedback. Sorry for all the hot air, ill leave you guys to it now.

Regards,
andy
*



Thanks for the clean short version, ff123. I'm with you now, ChiGung.
*



If the sample sets are less complex, we might have the problem that all encoders end up tied.

And again, the problem with such a synthetic track is that it has almost no real-world meaning. Also, I'd have to distribute more samples losslessly (or are there any lossless splitters for MP4 and Vorbis?), which is bad for testers.

This post has been edited by Sebastian Mares: Nov 30 2005, 21:57


--------------------
http://listening-tests.hydrogenaudio.org/sebastian/
sehested
post Nov 30 2005, 21:55
Post #725





Group: Members (Donating)
Posts: 325
Joined: 5-April 04
From: Copenhagen, Denmark
Member No.: 13246



Sebastian,

The samples are looking great!

The average bitrate between codecs is excellent. smile.gif

I am really looking forward to performing this test biggrin.gif

Keep up the good work!
