Listening test using 2013-03-09 build
db1989
post Mar 14 2013, 22:00
Post #26





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



I definitely don’t disagree, and I can appreciate how useful that is for a developer. I see and agree with all your points about bit-comparison being used to determine that files are either identical or not, but that seems to be about as much as the technique can reveal, and I would like to think that this use should be easy to work out from first principles.

In contrast, as I said I was asking for examples “except from ‘this≠that’ ”, in reply to bawjaws’ comment about “comparative quality” and how bit-comparing can provide any other information.

Whether or not that was exactly what bawjaws meant, this tangent started because kabal4e attempted to comment on the performance of Opus by proffering statistics from a bit-comparator, albeit without specifying what was compared to what – and while claiming to acknowledge that such a method has no useful relation to hearing, or to the complex workings of lossy encoding, but feeling that posting it somehow remained appropriate anyway. As I’ve said before in reference to ‘I know this isn’t valid, but’–type arguments, things like that just seem like an attempt to ‘have your cake and eat it’: trying to make a point that might run contrary to the rules – or just basic principles – while securing immunity from this discord by acknowledging that it might exist… it doesn’t make sense, does it?

This post has been edited by db1989: Mar 14 2013, 22:01
jmvalin
post Mar 14 2013, 23:09
Post #27


Xiph.org Speex developer


Group: Developer
Posts: 475
Joined: 21-August 02
Member No.: 3134



QUOTE (db1989 @ Mar 14 2013, 17:00) *
I definitely don’t disagree, and I can appreciate how useful that is for a developer. I see and agree with all your points about bit-comparison being used to determine that files are either identical or not, but that seems to be about as much as the technique can reveal, and I would like to think that this use should be easy to work out from first principles.


What I'm trying to say is that kabal4e's comment that most samples were bit-identical *is* useful. It tells me that the change I made to fix a corner case indeed only impacts corner cases because the majority of the time it's not triggered at all. That *is* more useful than "no audible difference". There's comparing quality and there's "let's figure out what's going on here". Let's not confuse the two.
kabal4e
post Mar 15 2013, 00:15
Post #28





Group: Members
Posts: 8
Joined: 10-March 13
From: Waikato, NZ
Member No.: 107144



QUOTE (jmvalin @ Mar 15 2013, 11:09) *
What I'm trying to say is that kabal4e's comment that most samples were bit-identical *is* useful. It tells me that the change I made to fix a corner case indeed only impacts corner cases because the majority of the time it's not triggered at all. That *is* more useful than "no audible difference". There's comparing quality and there's "let's figure out what's going on here". Let's not confuse the two.

Hi all,

Yes. That's what I was trying to say. Thank you, Jean-Marc, for translating my ESOL into something clearer :)
What happened was that I tried ABX-ing one track encoded with the 2013-03-12 and 2013-03-13 builds of opus-tools at 64 kbps. I failed to spot the difference reliably and then didn't save the ABX log, which is not unusual for me. However, just out of curiosity, I ran the foobar2000 ReplayGain scanner on the lossless original and the two encoded files; all track gains were identical, and there was only a small difference between the two encoded files' peak values. Then I used the foobar2000 bit-compare tool, which showed a bit difference between the two encoded tracks of only 25-50%, and a maximum difference between samples of approx. 0.25.
To sum up: I couldn't ABX the difference, the track gains were identical, there was only a slight full-track peak difference, up to 50% of the sample values matched exactly, and the maximum difference between sample values was 0.25. That's why I said there was no difference.
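For reference, the kind of statistics such a bit-compare tool reports – the fraction of exactly matching samples and the largest per-sample difference – can be sketched in a few lines. This is a hypothetical illustration only, not the actual foobar2000 implementation, and the sample values are made up:

```python
# Hypothetical sketch of the statistics a bit-compare tool reports:
# the fraction of identical samples and the maximum per-sample difference.
# The two "decoded" signals below are made-up stand-ins for real PCM data.

def bit_compare(samples_a, samples_b):
    """Return (fraction of exactly matching samples, max absolute difference)."""
    assert len(samples_a) == len(samples_b)
    matches = sum(1 for a, b in zip(samples_a, samples_b) if a == b)
    max_diff = max(abs(a - b) for a, b in zip(samples_a, samples_b))
    return matches / len(samples_a), max_diff

a = [0.0, 0.5, -0.25, 0.1]
b = [0.0, 0.5, -0.5, 0.1]   # differs from a in one sample
frac, peak = bit_compare(a, b)
print(frac, peak)  # → 0.75 0.25
```

As the thread points out, these numbers say whether and where two encodes diverge, not how they sound.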

The main thing is that I never said I used only the bit-comparison tool to compare the tracks. I will try to attach the ABX report in the future; however, if a person desperately wants to prove some point, what stops him from faking the ABX log?

This post has been edited by kabal4e: Mar 15 2013, 01:05
db1989
post Mar 15 2013, 01:33
Post #29





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



I do apologise if I misread anything or underestimated the usefulness of such reports to a developer!

But here’s the inevitable however… :P

QUOTE (kabal4e @ Mar 14 2013, 23:15) *
To sum up: I couldn't ABX the difference, the track gains were identical, there was only a slight full-track peak difference, up to 50% of the sample values matched exactly, and the maximum difference between sample values was 0.25. That's why I said there was no difference.
You could, and probably should ;), have just stopped after the ABX test. Ranking encodings based upon the statistics output by a bit-comparator is only slightly informative at best and potentially misleading at worst. Besides, if you can’t tell the difference, does it matter how many small divergences might have been introduced by the lossy encoding process?

QUOTE
The main thing is that I never said I used only the bit-comparison tool to compare the tracks. I will try to attach the ABX report in the future; however, if a person desperately wants to prove some point, what stops him from faking the ABX log?
If someone wants to cheat, s/he’ll find a way. They always do. That doesn’t mean people who want to promote proper practices should abandon all their principles because some people might be dishonest. I could apply this to plenty of contexts in life, but then I’d be getting boring. :D Anyway, for reference, there has been discussion here about possible ways – and, for all I know, since I didn’t follow it, perhaps even the release of tools – to make ABX logs ‘cheat-proof’; so, whilst I don’t think the current vulnerability is any reason for the rest of us to stop promoting such testing using the presently available methods, you might find those previous posts interesting.
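For what it’s worth, one way such ‘cheat-proofing’ could work is for the ABX tool itself to sign its log, so that any later edit invalidates the signature. This is purely an illustrative sketch – the key, the log format, and the function names are hypothetical and not taken from any real ABX tool:

```python
# Hypothetical tamper-evident ABX log: the testing tool appends an HMAC of
# the log text, computed with a key known only to the tool. Anyone editing
# the log afterwards cannot recompute a valid tag without that key.
# Key, log format, and function names are illustrative assumptions.
import hmac
import hashlib

TOOL_KEY = b"secret-key-compiled-into-the-tool"  # hypothetical

def sign_log(log_text):
    """Append an HMAC-SHA256 signature line to the log text."""
    tag = hmac.new(TOOL_KEY, log_text.encode(), hashlib.sha256).hexdigest()
    return log_text + "\nSignature: " + tag

def verify_log(signed):
    """Check that the signature line matches the log body."""
    body, _, tag = signed.rpartition("\nSignature: ")
    expected = hmac.new(TOOL_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

log = sign_log("ABX trial 1: X = A\nABX trial 2: X = B\nScore: 7/16")
print(verify_log(log))                           # → True
print(verify_log(log.replace("7/16", "16/16")))  # → False (tampered)
```

The obvious weakness, as the post notes, is that a determined cheater could extract the key from the tool itself; schemes discussed on the board went further than this sketch.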