Is MPC better than mp3?
pepoluan
post Apr 4 2006, 23:38
Post #51





Group: Members
Posts: 1455
Joined: 22-November 05
From: Jakarta
Member No.: 25929



QUOTE (vinnie97 @ Apr 5 2006, 05:15 AM)
That provides an extra layer of psychological protection for those who worry excessively about transparency and problem samples wreaking havoc on said transparency. ;)

LOL :D If I were worried about transparency, I'd go lossless.

No, but seriously.

Suppose I use a codec "Z" (letter chosen to hopefully not denote any known lossy codec) which achieves transparency at level "50" out of "100", giving a bitrate of ... let's say 100 kbps.

If I need to be more sure of transparency, then I can bump up the setting to level "60", giving a bitrate of 110 kbps.

Now let's say there's another codec called "W" (again, hopefully not denoting any known lossy codec) that achieves transparency at level "Q" out of "Z", giving a bitrate of ... let's say 120 kbps. The next quality level is "R" at 130 kbps.

Why should I use "W" at level "R" instead of "Z" at level "60"?
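To put the same arithmetic in code form, here's a toy sketch; every number in it is invented, just like the codecs:

CODE
# Toy comparison of the hypothetical codecs "Z" and "W" above.
# Each maps a quality level to an invented bitrate; both codecs are
# assumed transparent at the first listed level, with one extra
# "insurance" notch above it.
codec_z = {50: 100, 60: 110}    # level -> kbps
codec_w = {"Q": 120, "R": 130}  # level -> kbps

z_insured = codec_z[60]   # one notch above Z's transparency threshold
w_insured = codec_w["R"]  # one notch above W's transparency threshold

print(f"Z at level 60: {z_insured} kbps")
print(f"W at level R:  {w_insured} kbps")
print(f"Same insurance either way, so W wastes {w_insured - z_insured} kbps")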


--------------------
Nobody is Perfect.
I am Nobody.

http://pandu.poluan.info
vinnie97
post Apr 5 2006, 04:50
Post #52





Group: Members
Posts: 472
Joined: 6-March 03
Member No.: 5360



You lost me at variable "Q." :lol:

Seriously, that does make sense... format bias comes into play as well. But indeed there's no reason to bother with "W" and throw away bits if ABXing has shown you've reached consistent transparency at a lower bitrate with codec "Z".
user
post Apr 10 2006, 12:14
Post #53





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



lol!
When I wrote this on the 4th of April, kwanbis afterwards repeated his post putting 2 different multiformat tests side by side...

Weird...
April jokes?

I knew (and posted) during the pre-discussion of the latest 128k multiformat test that cross-test references are necessary and interesting in general,
and that without them, sooner or later somebody would come along and connect the old tests (e.g. with MPC versions) and the new test (e.g. without MPC) to show, "somewhat", haha, how MPC as measured/ranked in the old tests compares with the formats and their new "values" in the new tests.
You made my day!


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
rjamorim
post Apr 10 2006, 12:34
Post #54


Rarewares admin


Group: Members
Posts: 7515
Joined: 30-September 01
From: Brazil
Member No.: 81



QUOTE (user @ Apr 10 2006, 08:14 AM) *
lol!
When I wrote this on the 4th of April, kwanbis afterwards repeated his post putting 2 different multiformat tests side by side...

Weird...
April jokes?

I knew (and posted) during the pre-discussion of the latest 128k multiformat test that cross-test references are necessary and interesting in general,
and that without them, sooner or later somebody would come along and connect the old tests (e.g. with MPC versions) and the new test (e.g. without MPC) to show, "somewhat", haha, how MPC as measured/ranked in the old tests compares with the formats and their new "values" in the new tests.
You made my day!


So, you're not aware that results can be extrapolated between tests?


--------------------
Get up-to-date binaries of Lame, AAC, Vorbis and much more at RareWares:
http://www.rarewares.org
kwanbis
post Apr 10 2006, 13:33
Post #55





Group: Developer (Donating)
Posts: 2390
Joined: 28-June 02
From: Argentina
Member No.: 2425



QUOTE (user @ Apr 10 2006, 11:14 AM) *
When I wrote this on the 4th of April, kwanbis afterwards repeated his post putting 2 different multiformat tests side by side...

As far as I understand it (or my logic understands it), the only problems with doing so were:

1) different samples
2) different people

But if we consider that

1) they were considered problem samples
2) statistically, it shouldn't matter

then the extrapolation between tests should be OK, as it concerns the perception of an encoded file's quality against the original.


--------------------
MAREO: http://www.webearce.com.ar
StewartR
post Apr 10 2006, 13:50
Post #56





Group: Members
Posts: 7
Joined: 27-February 06
From: Maidenhead, UK
Member No.: 28124



QUOTE (pepoluan @ Apr 4 2006, 10:56 PM) *
Sometimes I wonder...

... if a codec (name your favorite here, I have mine) already performs transparently at a lower bitrate...

... then why encode at a higher bitrate?


OK, I'm only relatively new here, but I'd like to offer two answers to the question posed by pepoluan. If I'm talking rubbish, I'd be very grateful if a more knowledgeable / experienced HA member could point out my errors.

Firstly, transparency isn't just a function of how good your ears are. It's also affected by how good your equipment is. If you upgrade your equipment, music that previously sounded transparent might no longer sound transparent. But if you encode at a higher bitrate than you might have thought necessary, you have a bit of insurance in that direction.

Secondly, if you want to transcode down to a lower bitrate (e.g. to use on a DAP) then a higher bitrate to start with will hopefully give a better end result.

So for example I tend to encode music to MP3 using LAME -V0. Given the quality of my audio equipment, and the fact that I don't often just sit and listen carefully to music, I strongly suspect -V2 would be transparent to me for most if not all practical purposes. (I haven't done any serious listening tests to confirm this, and I can't be bothered to. Life's too short.) But I can afford the extra 25% storage space required by -V0, and that reassures me that next time I do want to listen hard to something I won't be bothered by compression artefacts. Also, when I want to transcode down to -V5 for use on my Walkman, I expect to get better-sounding results transcoding down from -V0 than from -V2. (Again, I haven't tested this and I don't want to test it.)
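To make the two chains concrete, here's a rough sketch driving the lame command line from Python. The -V and --decode switches are lame's real ones; the file names are just placeholders, and the quality comparison itself is exactly the part I haven't tested:

CODE
# Sketch of the two transcoding chains discussed above.
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# Archive encode: lossless source -> MP3 at -V0.
run("lame", "-V0", "master.wav", "archive_v0.mp3")

# Chain A (lossy -> lossy): decode the -V0 file, re-encode at -V5.
run("lame", "--decode", "archive_v0.mp3", "roundtrip.wav")
run("lame", "-V5", "roundtrip.wav", "walkman_from_v0.mp3")

# Chain B (lossless -> lossy): encode -V5 straight from the master.
run("lame", "-V5", "master.wav", "walkman_from_master.mp3")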

Basically that extra 25% storage space buys me a form of insurance. Does this make any sense at all?
shadowking
post Apr 10 2006, 14:13
Post #57





Group: Members
Posts: 1529
Joined: 31-January 04
Member No.: 11664



It makes some sense, except that when V2 is clearly stuffing up, V0 is equally useless. For small differences V0 might be a little better. I've had numerous samples that had problems with V5 where V4 or V3 made little difference. I've had one that was ABXable at V2, not at V1, yet ABXable again at V0! And even one where V5 was better than V4.

What I am now sure of is that when the psymodel is doing funny things, quality isn't increasing and the bits are wasted. MPC is better, but it's the same in principle. Non-perceptual lossy codecs like WavPack lossy and OptimFROG DualStream don't have this problem at all, though the bitrate is more expensive.
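For anyone new to the "ABXable" shorthand: it means scoring well enough over repeated trials that guessing is unlikely. A quick sketch of the usual binomial arithmetic, with the trial counts picked purely for illustration:

CODE
# Probability of getting at least `correct` of `trials` ABX trials
# right by pure guessing (p = 0.5 per trial). A result is usually
# called significant when this is small, e.g. below 0.05.
from math import comb

def abx_p_value(correct, trials):
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

print(abx_p_value(12, 16))  # ~0.038: better than guessing at the 5% level
print(abx_p_value(10, 16))  # ~0.227: could easily be luck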

This post has been edited by shadowking: Apr 10 2006, 14:31


--------------------
Wavpack -b450s0.7
pepoluan
post Apr 10 2006, 21:59
Post #58





Group: Members
Posts: 1455
Joined: 22-November 05
From: Jakarta
Member No.: 25929



QUOTE (StewartR @ Apr 10 2006, 07:50 PM) *
Firstly, transparency isn't just a function of how good your ears are. It's also affected by how good your equipment is. If you upgrade your equipment, music that previously sounded transparent might no longer sound transparent. But if you encode at a higher bitrate than you might have thought necessary, you have a bit of insurance in that direction.
Well, then it actually boils down to your ears :) whether you can hear the difference between the lossy file and the lossless original. The equipment only helps.

But anyway, of course I am talking about the same equipment here. It would be absolutely pointless to compare the output of my iPaq 2210 (fed into an amp & speakers) with the output of my desktop computer...

QUOTE
Secondly, if you want to transcode down to a lower bitrate (e.g. to use on a DAP) then a higher bitrate to start with will hopefully give a better end result.
Repeat after me: transcoding from lossy to lossy - bad. Transcoding from lossless to lossy - good. :D


--------------------
Nobody is Perfect.
I am Nobody.

http://pandu.poluan.info
user
post Apr 11 2006, 11:16
Post #59





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



No,
extrapolation of those tests (not only of those; please study some theory of experimental design) is only funny, not scientific,
and not worth being posted.
Given all the scientific/theoretical and experimental approaches of HA, this "extrapolation" is not possible.

You could "extrapolate" between old and new tests if you had included a "comparable anchor format", i.e. a tested encoder from an old test alongside the new test's contenders.
Then you could say that e.g. a 4.7 rating in the new test matches a 4.5 rating in the old test, or whatever, and watch how the relative ranking of newer formats has developed against older formats/encoders.
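As a toy illustration of what I mean, with invented ratings apart from the 4.5/4.7 example above:

CODE
# Toy sketch of the "comparable anchor" idea. All ratings invented;
# "encoder_a" and "encoder_b" are hypothetical entries.
old_test = {"anchor": 4.5, "encoder_a": 4.6}
new_test = {"anchor": 4.7, "encoder_b": 4.74}

# The anchor appears in both tests; its score difference gives a crude
# offset for moving old ratings onto the new test's scale.
offset = new_test["anchor"] - old_test["anchor"]
a_rescaled = old_test["encoder_a"] + offset

print(f"encoder_a on the new scale: ~{a_rescaled:.2f} "
      f"vs encoder_b at {new_test['encoder_b']}")
# Without a shared anchor there is no principled way to compute
# "offset" at all, which is exactly what side-by-side graphs lack.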

Please reread my posts during the preparation of the 128k multiformat test: I asked for some "comparable anchor" to be included in the new test, but the conductors didn't take up the idea.
Nobody said (neither kwanbis nor Roberto) that a comparable anchor is unnecessary for comparing the new test with the old, for this or that fact, argument or reason.

So, coming now with a comparison between old and new tests reveals that those guys who crept into MPC threads in the past to argue against MPC, back when MPC still held the crown alone, are continuing their propaganda. Sorry, but fitting those old and new test graphs together sounds like cheap marketing.


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
rjamorim
post Apr 11 2006, 12:52
Post #60


Rarewares admin


Group: Members
Posts: 7515
Joined: 30-September 01
From: Brazil
Member No.: 81



QUOTE (user @ Apr 11 2006, 07:16 AM) *
No,
extrapolation of those tests (not only of those, please study some theory of experimentals) is only funny and not scientific,
not worth being posted.
With all the scientific/theoretical and experimental approaches of HA, this "extrapolation" is not possible.


It is perfectly possible, and it has been done several times before. The anchor indeed helps, but if anything, my tests show that rankings have been consistent across tests whether or not you use an anchor as a reference.

You are just nitpicking here. If you wanted to nitpick seriously, you could decree that tests can only be compared ("extrapolated") if you use at least one encoder in common, the same listeners, the same samples, and the same conditions. That's not only unfeasible, it's impossible, as your hearing isn't exactly the same from day to day, and unmeasurable factors like mood and tiredness play a very important role in your testing abilities.

We are trying to compromise here for the sake of information.

This post has been edited by rjamorim: Apr 11 2006, 12:53


--------------------
Get up-to-date binaries of Lame, AAC, Vorbis and much more at RareWares:
http://www.rarewares.org
m0rbidini
post Apr 11 2006, 17:29
Post #61





Group: Members
Posts: 213
Joined: 1-October 01
From: Lisbon, Portugal
Member No.: 127



I'm not a statistics wizard, but I don't think user is nitpicking. I really think that having different anchors makes extrapolation much harder than not having the same listeners, same samples and same conditions, provided those are representative of a real-world scenario in both tests.

Maybe extrapolation could be done, but not as casually as just putting both graphs side by side.
user
post Apr 11 2006, 18:01
Post #62





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



QUOTE (rjamorim @ Apr 11 2006, 01:52 PM) *
You are just nitpicking here. If you wanted to nitpick seriously, you could decree that tests can only be compared ("extrapolated") if you use at least one encoder in common, the same listeners, the same samples, and the same conditions. That's not only unfeasible, it's impossible, as your hearing isn't exactly the same from day to day, and unmeasurable factors like mood and tiredness play a very important role in your testing abilities.

We are trying to compromise here for the sake of information.


Thank you,
you have written down why cross comparisons (with absolute differences from 4.x to 4.y) between tests are difficult to impossible without my proposed "comparable anchor", which would allow a relative ranking of formats between old and new tests, with careful interpretation.
It is obvious who nitpicks.
Information based on a pseudo-scientific-looking graph? At least we now read that the goal of this obscure graph is "information" :)
The yellow press compromises its "information" sometimes, too. Not a serious way to inform people, is it?


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
rjamorim
post Apr 12 2006, 00:23
Post #63


Rarewares admin


Group: Members
Posts: 7515
Joined: 30-September 01
From: Brazil
Member No.: 81



QUOTE (user @ Apr 11 2006, 02:01 PM) *
Thank you,
you have written down why cross comparisons (with absolute differences from 4.x to 4.y) between tests are difficult to impossible without my proposed "comparable anchor"


Nope, I said it would be difficult to impossible if one nitpicked as badly as you do. Don't try to distort what I said.

QUOTE
Information based on a pseudo-scientific-looking graph?


Again, if you want to nitpick so badly (as you obviously do), even my tests were pseudo-scientific, as they weren't formally conducted per the ITU guidelines.

So, feel free to ignore all my tests and forget these things happened. Have a nice day.

This post has been edited by rjamorim: Apr 12 2006, 00:23


--------------------
Get up-to-date binaries of Lame, AAC, Vorbis and much more at RareWares:
http://www.rarewares.org
user
post Apr 12 2006, 10:12
Post #64





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



Dear friend,
you have conducted tests, as the conductor.
^^ That's nitpicking ;)

Logic tells us who is nitpicking here.

See what m0rbidini posted yesterday at 06:29 PM:

I'm not a statistics wizard, but I don't think user is nitpicking. I really think that having different anchors makes extrapolation much harder than not having the same listeners, same samples and same conditions, provided those are representative of a real-world scenario in both tests.

Maybe extrapolation could be done, but not as casually as just putting both graphs side by side.


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
stephanV
post Apr 12 2006, 11:17
Post #65





Group: Members
Posts: 394
Joined: 6-May 04
Member No.: 13932



Still, even the listening tests done by rjamorim do not justify the comment that "other formats are struggling to reach the same quality". MPC already tied with Vorbis and QT AAC back in 2003 at 128 kbps. And logic suggests that at higher bitrates, for most people, differences between formats will become smaller, not bigger.

So what point are you actually trying to defend?


--------------------
"We cannot win against obsession. They care, we don't. They win."
Garf
post Apr 12 2006, 12:11
Post #66


Server Admin


Group: Admin
Posts: 4886
Joined: 24-September 01
Member No.: 13



QUOTE (rjamorim @ Apr 11 2006, 01:52 PM) *
QUOTE (user @ Apr 11 2006, 07:16 AM) *
No,
extrapolation of those tests (not only of those; please study some theory of experimental design) is only funny, not scientific,
and not worth being posted.
Given all the scientific/theoretical and experimental approaches of HA, this "extrapolation" is not possible.


It is perfectly possible, and it has been done several times before. The anchor indeed helps, but if anything, my tests show that rankings have been consistent across tests whether or not you use an anchor as a reference.

You are just nitpicking here. If you wanted to nitpick seriously, you could decree that tests can only be compared ("extrapolated") if you use at least one encoder in common, the same listeners, the same samples, and the same conditions. That's not only unfeasible, it's impossible, as your hearing isn't exactly the same from day to day, and unmeasurable factors like mood and tiredness play a very important role in your testing abilities.

We are trying to compromise here for the sake of information.


Well, you've lost me there. Where have your tests shown this? (That's a request for information, not a rhetorical remark)

Extrapolating results between two tests without any common anchor looks pretty hazy to me, and it's not something I'd accept as solid in any way without some strong indication that, in the given circumstances, it's a valid compromise to make.
Garf
post Apr 12 2006, 12:58
Post #67


Server Admin


Group: Admin
Posts: 4886
Joined: 24-September 01
Member No.: 13



QUOTE (user @ Apr 11 2006, 12:16 PM) *
No,
extrapolation of those tests (not only of those; please study some theory of experimental design) is only funny, not scientific,
and not worth being posted.
Given all the scientific/theoretical and experimental approaches of HA, this "extrapolation" is not possible.

You could "extrapolate" between old and new tests if you had included a "comparable anchor format", i.e. a tested encoder from an old test alongside the new test's contenders.
Then you could say that e.g. a 4.7 rating in the new test matches a 4.5 rating in the old test, or whatever, and watch how the relative ranking of newer formats has developed against older formats/encoders.


I'm sorry, but I know of no formal proof that this or that extrapolation is a valid one and this or that one isn't. If you're looking for black and white, there won't be any.

The conditions for an extrapolation to be valid are pretty much the same as those required for the test itself to be valid. There must be no way to show how it could, in a manner with a reasonable likelihood of occurring, lead to wrong results. More abstractly and generally, what determines the goodness of a test is whether the results will lead to consistent improvement. And more specifically again: a test that is not solid won't be able to lead to improvement at some point, or at the very least, it can be shown that this would happen.

What people will consider a valid test is also based on the above; but the above is not a black-and-white issue: the likelihood that the results are flawed can vary, and so can the circumstances under which that can happen. By clearly stating the methodology, you allow everyone to decide for themselves whether they consider the flaws important. If you use a good methodology, most people will consider that they are not, and your results will be "accepted".

I wrote the above directly concerning this thread, but if you think about it, it's exactly what happens in science. If you call it unscientific and funny, you are wrong.

In a discussion, it's valid not to accept a conclusion, extrapolation or test result. But be aware that any data is still better than no data at all (and that's something different from "data so invalid you could just as well toss a coin"). Dismissing a result because of a minor issue is something you can do, but unless you're willing to come up with some results of your own, don't expect people to take you very seriously.

I'd like to see rjamorim's data and reasoning that leads him to believe an extrapolation would be valid. If we see it, we can think about what the flaws could be, how likely they are, and consequently, how much attention this extrapolation should get.

This post has been edited by Garf: Apr 12 2006, 13:04
kwanbis
post Apr 12 2006, 13:43
Post #68





Group: Developer (Donating)
Posts: 2390
Joined: 28-June 02
From: Argentina
Member No.: 2425



For me, it's like a race: you compare lap times from race 1 with those from race 20, and see that racer X in race 20 had a better time than racer H in race 1, so racer X gets the "record lap". But:

1) Different racers
2) Different cars
3) Different climate

Still, nobody argues about it. As Roberto said (oops, we agree once more), you can compare; if you want to be picky, you can probably find some statistical problem, as with everything done in life. I could just as well argue that you must test 100% of the world's population, or the test has no meaning.

People subjectively listened to some samples and rated MPC 4.47 against the originals. Then another group of people did the same and gave iTunes AAC 4.74, again against the originals.

EDIT: If both groups were statistical representations of the universe, then universe = universe, so we can assume the same group did both tests.

This post has been edited by kwanbis: Apr 12 2006, 13:48


--------------------
MAREO: http://www.webearce.com.ar
Garf
post Apr 12 2006, 13:45
Post #69


Server Admin


Group: Admin
Posts: 4886
Joined: 24-September 01
Member No.: 13



QUOTE (kwanbis @ Apr 12 2006, 02:43 PM) *
As Roberto said (oops, we agree once more), you can compare; if you want to be picky, you can probably find some statistical problem, as with everything done in life. I could just as well argue that you must test 100% of the world's population, or the test has no meaning.


I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.
user
post Apr 12 2006, 13:47
Post #70





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



Thanks, Garf, for questioning what leads rjamorim to the opinion that putting both graphs side by side could be a valid extrapolation.
If he had continued defending this assumption, I'd have asked it myself.

Given the long time between those listening tests,
different samples,
different encoders (i.e. no anchor encoder),
different people,
or maybe the same people, who aged between the tests,
it is very unlikely that one encoder's absolute rating ("4.x") from the old test can be compared with another encoder's rating ("4.y") from another test, as done in that side-by-side graph.

m0rbidini wrote:
Maybe extrapolation could be done, but not as casually as just putting both graphs side by side.

This could not have been put better.

Even kwanbis wrote, at Apr 2 2006, 03:02 AM, Post #12, about his (imo) unlucky 2-graph comparison:
it could be argued that different samples were used ... even different people probably submitted results ... anyway ....


edit addon:

Hm, a few posts above, kwanbis compared statistical listening tests to race laps and measured times.
Hmhm.
Any comments (necessary)?

The point of ABX and ABC/HR here has been, and is, that the results are valid for the samples, the tested encoders, the tested people, and the test situation as such, and no more.
The public multiformat tests, with a bigger group of testers, mirror the ranking of general listeners' impressions, but only within the actual test (conditions).

This post has been edited by user: Apr 12 2006, 13:56


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
kwanbis
post Apr 12 2006, 13:49
Post #71





Group: Developer (Donating)
Posts: 2390
Joined: 28-June 02
From: Argentina
Member No.: 2425



QUOTE (Garf @ Apr 12 2006, 12:45 PM) *
I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Well, statistics have been proven wrong many times. That's why, for example, nobody can predict anything with 100% certainty using statistics.


--------------------
MAREO: http://www.webearce.com.ar
Garf
post Apr 12 2006, 13:59
Post #72


Server Admin


Group: Admin
Posts: 4886
Joined: 24-September 01
Member No.: 13



QUOTE (user @ Apr 12 2006, 02:47 PM) *
Given the long time between those listening tests,
different samples,
different encoders (i.e. no anchor encoder),
different people,
or maybe the same people, who aged between the tests,


Some of these don't matter at all (different people for example), some may not matter, some may matter a lot.

My concern is that I believe people tend to rate the encoders against each other, rather than against the rating scale itself ("Perceptible, but not annoying", etc.). I know that I myself have this tendency, and I have participated in the tests.

*BUT* transparency is a hard anchor, since it's always 5.0 in any test. This may be enough to anchor the high bitrate tests together.

I'd just like to see more data so I can reach my own conclusion.


QUOTE (kwanbis @ Apr 12 2006, 02:49 PM) *
QUOTE (Garf @ Apr 12 2006, 12:45 PM) *

I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Well, statistics have been proven wrong many times. That's why, for example, nobody can predict anything with 100% certainty using statistics.


Statistics proven wrong? Eh?

If you say something is true with 95% confidence, you know you will be wrong 5% of the time.

How can you prove that wrong? As I already asked, are you going to rewrite mathematics?

Statistical sampling is a well-known method whose pitfalls and accuracy we understand very well. It tells us we don't need to ask the entire population of the world something in order to make a statement about it. You haven't come one inch closer to supporting your original, entirely wrong statement, and you won't ever get an inch closer, either.
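If anyone doubts that, sampling is easy to watch in action. A small simulation, not a proof, and the "population" of ratings here is synthetic:

CODE
# Draw a small sample from a large synthetic population of listener
# ratings and compare the sample mean with the population mean.
import random
import statistics

random.seed(1)
population = [random.gauss(4.5, 0.4) for _ in range(1_000_000)]  # "everyone"
true_mean = statistics.mean(population)

sample = random.sample(population, 40)  # 40 listeners, not a million
sample_mean = statistics.mean(sample)
stderr = statistics.stdev(sample) / len(sample) ** 0.5

print(f"population mean: {true_mean:.3f}")
print(f"sample mean:     {sample_mean:.3f} +/- {1.96 * stderr:.3f} (95% CI)")
# The interval covers the population mean about 95% of the time; there
# is no need to poll the whole world.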
user
post Apr 12 2006, 14:02
Post #73





Group: Members
Posts: 873
Joined: 12-October 01
From: the great wide open
Member No.: 277



QUOTE (kwanbis @ Apr 12 2006, 02:49 PM) *
QUOTE (Garf @ Apr 12 2006, 12:45 PM) *

I'm sorry, but this is a stupid comparison, because the last sentence is wrong, unless you have some evidence that statistical sampling theory is fatally flawed.

Well, statistics have been proven wrong many times. That's why, for example, nobody can predict anything with 100% certainty using statistics.


Err,
first kwanbis takes the results of statistics, i.e. the 2 graphs, and mixes them up,
and now he questions the principles of maths & HA?

Just a hint: statistics is not about predicting something with 100% certainty,
but about measuring something with some assurance of measuring correctly rather than guessing,
i.e. at a probability lower than 100%.
Statistics hasn't been proven wrong.
Maybe certain test setups, the statistics used, and the interpretations were flawed.
(As looks to be the case here, with high probability, in putting those 2 graphs side by side to demonstrate whatever. The 2 individual graphs are not questioned, by me, or HA, or anybody with sense, only the putting side by side.)

This post has been edited by user: Apr 12 2006, 14:05


--------------------
www.High-Quality.ch.vu -- High Quality Audio Archiving Tutorials
m0rbidini
post Apr 12 2006, 14:03
Post #74





Group: Members
Posts: 213
Joined: 1-October 01
From: Lisbon, Portugal
Member No.: 127



QUOTE (Garf)
I'd like to see rjamorim's data and reasoning that leads him to believe an extrapolation would be valid. If we see it, we can think about what the flaws could be, how likely they are, and consequently, how much attention this extrapolation should get.

I agree with this part. rjamorim was quick to write "So, you're not aware that results can be extrapolated between tests?" as if it were a given. My objection, however, is that this "extrapolation" (if you can call it that) is being made without any kind of explanation, just by overlapping the rating graphs.

QUOTE (Garf)
The conditions for an extrapolation to be valid are pretty much the same as those required for the test itself to be valid.

Can't you have two perfectly valid tests and still not be able to do a simple extrapolation between them (like the one being attempted here)? Aren't there more conditions, like having a valid way to relate the different anchors?
Garf
post Apr 12 2006, 14:06
Post #75


Server Admin


Group: Admin
Posts: 4886
Joined: 24-September 01
Member No.: 13



QUOTE (m0rbidini @ Apr 12 2006, 03:03 PM) *
QUOTE (Garf)
The conditions for an extrapolation to be valid are pretty much the same as those required for the test itself to be valid.

Can't you have two perfectly valid tests and still not be able to do a simple extrapolation between them (like the one being attempted here)? Aren't there more conditions, like having a valid way to relate the different anchors?


Yes, of course. Perhaps I didn't explain myself clearly there. I was talking about the conditions for a method of extrapolation to be valid, in the scientific sense; see the following sentence, for example. I don't mean the validity of the test is linked to the validity of an extrapolation (except in the obvious way that it would be hard to make a valid extrapolation out of an invalid test :) ). Just that the way of determining whether each is valid is the same.

This post has been edited by Garf: Apr 12 2006, 14:09
