SoundExpert explained, Methodology issues
Serge Smirnoff
post Nov 24 2010, 13:27
Post #1








I found this thread among SoundExpert referrals and was a bit surprised by the almost complete misunderstanding of SE testing methodology, and particularly of how the diff signal is used in SE audio quality metrics. The discussion of the topic from 2006 actually seems more meaningful. So I decided to post some SE basics here for reference purposes. I will use a thought experiment, though one that is close to reality.

Suppose we have two sound signals: a main one and a side one. They could be, for example, a short piano passage and some noise. We can prepare several mixes of them in different proportions:
  • equal levels of main and side signals (0dB RMS)
  • half level of side signal (-6dB RMS)
  • quarter level of side signal (-12dB RMS)
  • 1/8 level of side signal (-18dB RMS)
  • 1/16 level of side signal (-24dB RMS)

After normalization all mixes have equal levels and we can evaluate the perceptibility of the side signal in each mix. Here at SE we found that this perceptibility is a monotonic function of side-signal level and looks like this:

Figure: Side signal perception
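For concreteness, a minimal Octave/Matlab sketch of the mixing procedure described above; the test signals and the equal-level normalization here are placeholders, not SE's actual tooling.

CODE
fs = 44100; t = (0:fs-1)'/fs;
main = 0.5*sin(2*pi*440*t);              % stand-in for the piano passage
side = randn(size(main));                % stand-in for the side (noise) signal
rmsv = @(x) sqrt(mean(x.^2));            % RMS level helper

for rel_dB = [0 -6 -12 -18 -24]          % side-signal level relative to main, in dB RMS
    mix = main + side * rmsv(main)/rmsv(side) * 10^(rel_dB/20);
    mix = mix * rmsv(main)/rmsv(mix);    % normalize so every mix plays at the same level
    % listeners then grade how perceptible the side signal is in this mix
end

(Octave/Matlab code)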

(1) In other words, there is a relationship between the objectively measured level of the side signal and its subjectively estimated perceptibility in the mix. What is more:
(a) this relationship is well described by a second-order curve (assuming levels are in dB);
(b) the relationship holds for any sound signals, whether correlated or not; the only differences are the position and curvature of the curve.

(2) These side-stimulus perceptibility curves are the core of the SE rating mechanism. Each device under test has its own curve, plotted on the basis of SE online listening tests.
(3) Side signals are the difference signals of the devices being tested. Levels of side signals are expressed in dB as the Difference Level parameter, which in our case is exactly equal to the RMS level of the side signal.
(4) Subjective grades of perceptibility are the anchor points of the 5-grade impairment scale.
(5) Audio quality beyond the threshold of audibility is determined by extrapolating those second-order curves, as sketched below. Virtual grades in the extrapolated area can be considered objective quality parameters that take the peculiarities of human hearing into account.
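As an illustration of points (1a), (2) and (5), here is a minimal Octave/Matlab sketch: fit a second-order curve to (Difference Level, grade) pairs and read off a virtual grade past the threshold of audibility. The data points and the device's Difference Level below are invented for illustration; they are not real SE results.

CODE
% Hypothetical listening-test results for one device under test:
% Difference Level of the amplified-artifact versions (dB) vs. mean subjective grade
diff_dB = [  0   -6  -12  -18  -24];   % artifact levels that were still audible
grade   = [1.4  2.1  3.0  3.9  4.6];   % 5-grade impairment scale (5 = imperceptible)

p = polyfit(diff_dB, grade, 2);        % second-order perceptibility curve
device_dB = -40;                       % device's actual Difference Level, below audibility
virtual_grade = polyval(p, device_dB)  % extrapolated "virtual" grade (may exceed 5.0)

(Octave/Matlab code)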

So yes, the difference signal is used in SE testing. We take into account both its level and how the human auditory system perceives it together with the reference signal. Some difference signals with fairly high levels still remain almost imperceptible against the background of the reference signal, and vice versa; the perceptibility curves reflect this.

This is the concept. Many parts of it still need thorough verification in carefully designed listening tests, which are beyond SE's capabilities. All we can do is analyze the grades returned by SE visitors. This will certainly be done, yet it cannot replace properly organized listening tests.

SE testing methodology is new and open to question, but all the assumptions look reasonable and the SE ratings promising, at least to me. Time will tell.


--------------------
keeping audio clear together - soundexpert.org
Serge Smirnoff
post Nov 30 2010, 09:09
Post #2








QUOTE (SebastianG @ Nov 30 2010, 01:04) *
It's not hard to imagine the possibility of signal pairs (main,side) where you can't hear any difference between main and main+side but you can easily hear a difference between main and main+0.5*side.

In practice, never. In all cases the perception of gradually unmasked artifacts is a monotonic function. That was also confirmed by B. Feiten in the already mentioned "Measuring the Coding Margin of Perceptual Codecs with the Difference Signal" (AES Preprint #4417). This is the main point of the SE metric, stated in the first post (above the graph). Once again: not a single case where the curve was not monotonic, and numerous cases of monotonic behavior. So I treat this as a fact.

QUOTE (SebastianG @ Nov 30 2010, 01:04) *
Hint: phase is a bitch. ;-) Your implicit assumption is that both signals are independent. But this is not necessarily the case with perceptual audio coders. Take for example the MPEG4 tool called PNS (perceptual noise substitution). It just replaces some high frequency noise with synthetically generated noise of the same level. This is done by transmitting the noise level only. Obviously, we can use this tool in cases when the main perceptual feature is the energy level and anything else is not important.

Then, we have the following properties: Noise level of original matches the noise level of the encoded result, so energy(main) = energy(main+side). Probability theory tells us that main and main+side are orthogonal. This implies a coherence between main and side of 0.7 -- ZERO POINT SEVEN. Hardly independent. This also implies that a 50/50 mix -- main+0.5*side -- would lose 3dB power. You can easily compute this via
CODE
main = [1 0];                    % toy "original": unit-energy vector
side = [0 1] - main;             % substituted noise minus the original
20*log10(norm(main+0.5*side))    % about -3 dB: the 50/50 mix loses 3 dB of power

(Matlab code)

So, by attenuating the sample-by-sample difference we actually amplify the perceived difference (since we lose power) in this case! What does that tell us? It tells us that you overrate sample-by-sample differences. Perceptual audio coders try to retain certain things so it sounds similar and tolerate other losses. And you're focusing on the "other losses" (as well).

What you're doing is basically violating some of a perceptual encoder's principles (like keeping energy levels similar no matter how large the sample-by-sample difference will be). By amplifying the difference you could destroy some signal properties the encoder and our HAS care about much more than you do. Sound perception is not as simple as you want us to believe. Sample-by-sample differences are not important. And "extrapolating artefacts" this way is nothing but a big waste of time. Even testing with "attenuated artefacts" doesn't tell you anything. Your methodology breaks down because you're assuming that the difference is independent of the original. It is not.

I didn't make such an assumption; quite the opposite - see point (1b) in the first post. Nevertheless, the case you describe is really interesting. Exaggerated and simplified a bit, it looks like the following:

We have a sound excerpt containing a time interval (between tonal parts) that consists purely of, say, white noise. We also have a coder that simply substitutes that noise with uncorrelated noise whenever it detects that there are no tonal parts during the interval. The diff signal over that interval will then consist of an amplified noise portion (being uncorrelated, the two noises add in power rather than cancel; see the quick check below). So the version of our excerpt with amplified differences will have a stronger noise part, which can be detected in listening tests even though in practice this is not important to the HAS.
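A quick numeric check of that last point, using toy white-noise vectors rather than an actual codec:

CODE
n = 1e6;
orig  = randn(n,1);     % original noise in the interval
subst = randn(n,1);     % coder's substituted, uncorrelated noise of the same level
d     = subst - orig;   % difference signal over that interval
lvl   = @(x) 20*log10(sqrt(mean(x.^2)));
lvl(d) - lvl(orig)      % about +3 dB: uncorrelated noise powers add instead of cancelling

(Octave/Matlab code)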

Is this the case you wanted to produce? If so, I will examine it more carefully. It is really interesting, as it helps to determine the limits of the metric.


--------------------
keeping audio clear together - soundexpert.org
2Bdecided
post Nov 30 2010, 16:24
Post #3





QUOTE (Serge Smirnoff @ Nov 30 2010, 08:09) *
In all cases the perception of gradually unmasked artifacts is a monotonic function.
How can you say this when SebG and Woodinville both gave you examples to the contrary?

I hit the exact problem Woodinville describes using the method I posted on the first page of this thread - a listener gets stuck in a "false" minimum of audibility because double the difference gives you the original signal back (with the part "removed" by the codec being inverted, but that difference is not usually audible). Hardly monotonic - the chance of hearing the artefact becomes zero at a single gain setting (+6dB), and (with the specific audio I used - YMMV!) leaps back to the "expected" function very quickly either side of that.

Cheers,
David.
Serge Smirnoff
post Nov 30 2010, 17:38
Post #4








QUOTE (2Bdecided @ Nov 30 2010, 19:24) *
How can you say this when SebG and Woodinville both gave you examples to the contrary?

I hit the exact problem Woodinville describes using the method I posted on the first page of this thread - a listener gets stuck in a "false" minimum of audibility because double the difference gives you the original signal back (with the part "removed" by the codec being inverted, but that difference is not usually audible). Hardly monotonic - the chance of hearing the artefact becomes zero at a single gain setting (+6dB), and (with the specific audio I used - YMMV!) leaps back to the "expected" function very quickly either side of that.

In many papers devoted to "coding margin" a special filtering is recommended to eliminate those "ghost" frequencies. We also use it.


--------------------
keeping audio clear together - soundexpert.org
Woodinville
post Dec 1 2010, 03:11
Post #5








QUOTE (Serge Smirnoff @ Nov 30 2010, 08:38) *
QUOTE (2Bdecided @ Nov 30 2010, 19:24) *
How can you say this when SebG and Woodinville both gave you examples to the contrary?

I hit the exact problem Woodinville describes using the method I posted on the first page of this thread - a listener gets stuck in a "false" minimum of audibility because double the difference gives you the original signal back (with the part "removed" by the codec being inverted, but that difference is not usually audible). Hardly monotonic - the chance of hearing the artefact becomes zero at a single gain setting (+6dB), and (with the specific audio I used - YMMV!) leaps back to the "expected" function very quickly either side of that.

In many papers devoted to "coding margin" a special filtering is recommended to eliminate those "ghost" frequencies. We also use it.



How do you know what "it" is? You have to work specifically to every bit rate, every bandwidth, every sampling rate, every different encoder?

This is not useful.


--------------------
-----
J. D. (jj) Johnston
Serge Smirnoff
post Dec 1 2010, 09:17
Post #6








QUOTE (Woodinville @ Dec 1 2010, 06:11) *
QUOTE (Serge Smirnoff @ Nov 30 2010, 08:38) *

In many papers devoted to "coding margin" a special filtering is recommended to eliminate those "ghost" frequencies. We also use it.



How do you know what "it" is? You have to work specifically to every bit rate, every bandwidth, every sampling rate, every different encoder?

This is not useful.

By subtracting a portion of the reference signal from the output signal it's not hard to figure out which frequencies are "ghosted" and remove them with an FIR filter. So, yes, we do it for every test sample with amplified artifacts. This helps to get smoother perception curves. Every item tested at SE has its own unique curve, plotted from the results of SE listening tests. Extrapolating that curve, we get the resulting quality rating for each tested item.
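For illustration only, one way such ghost-frequency removal could look. This is a rough sketch of the general idea, not SE's actual code; the FFT size, threshold and filter order are arbitrary. It assumes ref (reference), out (time-aligned device output) and fs are already loaded, and it needs pwelch/fir2 (Signal Processing Toolbox in Matlab, signal package in Octave).

CODE
d    = out - ref;                                 % raw difference signal
nfft = 4096;
[Pd, f] = pwelch(d,   hann(nfft), [], nfft, fs);  % spectrum of the difference
Po      = pwelch(out, hann(nfft), [], nfft, fs);  % spectrum of the output
ghost = Pd > 4*Po;                                % bins where the diff has energy the output lacks
h = fir2(512, f/(fs/2), double(~ghost));          % FIR that attenuates those "ghost" bands
d_clean = filter(h, 1, d);                        % filtered diff, then amplified as usual

(Octave/Matlab code)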


--------------------
keeping audio clear together - soundexpert.org
Woodinville
post Dec 1 2010, 22:03
Post #7








QUOTE (Serge Smirnoff @ Dec 1 2010, 00:17) *
QUOTE (Woodinville @ Dec 1 2010, 06:11) *
QUOTE (Serge Smirnoff @ Nov 30 2010, 08:38) *

In many papers devoted to "coding margin" a special filtering is recommended to eliminate those "ghost" frequencies. We also use it.



How do you know what "it" is? You have to work specifically to every bit rate, every bandwidth, every sampling rate, every different encoder?

This is not useful.

By subtracting a portion of the reference signal from the output signal it's not hard to figure out which frequencies are "ghosted" and remove them with an FIR filter. So, yes, we do it for every test sample with amplified artifacts. This helps to get smoother perception curves. Every item tested at SE has its own unique curve, plotted from the results of SE listening tests. Extrapolating that curve, we get the resulting quality rating for each tested item.


So, it's "by clip". This still seems useless.


--------------------
-----
J. D. (jj) Johnston
Kees de Visser
post Dec 1 2010, 23:47
Post #8








QUOTE (Woodinville @ Dec 1 2010, 23:03) *
This still seems useless.
So which options are available to reveal sub-threshold differences in a listening test ?
Woodinville
post Dec 1 2010, 23:55
Post #9








QUOTE (Kees de Visser @ Dec 1 2010, 14:47) *
QUOTE (Woodinville @ Dec 1 2010, 23:03) *
This still seems useless.
So which options are available to reveal sub-threshold differences in a listening test ?


This leads to a very simple question: What does "sub-threshold differences in a listening test" mean?

Therein lies, perhaps, the underlying philosophical problem here.


--------------------
-----
J. D. (jj) Johnston
Kees de Visser
post Dec 2 2010, 09:35
Post #10








QUOTE (Woodinville @ Dec 2 2010, 00:55) *
This leads to a very simple question: What does "sub-threshold differences in a listening test" mean?
Differences that can be proven to exist with technical means, but are undetectable with a standard listening test.

Let me try this analogy:
Someone has to leave the next day on a 6-month boat trip. He has to prepare canned food and can choose between two unlabeled lots that look identical. Someone told him that the lots have different "best before" dates: one expires in 1 month, the other in 10 months. He tastes a bit from each, but both taste absolutely identical. He knows that best before dates don't mean that the food will be bad the day after, but his chances to survive the trip are probably bigger when he picks the fresher one.
(btw, the boat is too small to take both)
2Bdecided
post Dec 2 2010, 11:25
Post #11





QUOTE (Kees de Visser @ Dec 2 2010, 08:35) *
QUOTE (Woodinville @ Dec 2 2010, 00:55) *
This leads to a very simple question: What does "sub-threshold differences in a listening test" mean?
Differences that can be proven to exist with technical means, but are undetectable with a standard listening test.

Let me try this analogy:
Someone has to leave the next day on a 6-month boat trip. He has to prepare canned food and can choose between two unlabeled lots that look identical. Someone told him that the lots have different "best before" dates: one expires in 1 month, the other in 10 months. He tastes a bit from each, but both taste absolutely identical. He knows that best before dates don't mean that the food will be bad the day after, but his chances to survive the trip are probably bigger when he picks the fresher one.
(btw, the boat is too small to take both)
The best before date is a simple function - an apples to apples comparison - you know that 6 months is better than 5 months. You also know that what you want to do (go out longer in the boat) relates to what you are measuring (how long the food will last).

Comparing codecs isn't like this at all. Comparing codecs is an apples to oranges comparison - you don't know that artefacts 6dB below threshold are better than artefacts 5dB below threshold - 1) because the characteristic of the artefacts could be different, and 2) you haven't said what "better" means. Better for what? Not for just listening (either is fine), so for what?

Cheers,
David.
Kees de Visser
post Dec 2 2010, 13:09
Post #12








QUOTE (2Bdecided @ Dec 2 2010, 12:25) *
Comparing codecs isn't like this at all. Comparing codecs is an apples to oranges comparison - you don't know that artefacts 6dB below threshold are better than artefacts 5dB below threshold - 1) because the characteristic of the artefacts could be different, and 2) you haven't said what "better" means. Better for what? Not for just listening (either is fine), so for what?
Do we agree that there are 3 types of quality levels, from better to worse:
1- artefacts are non-existent (-inf), like in lossless coding
2- artefacts are below the hearing threshold
3- artefacts are audible, by at least one listener for at least one (killer)sample

In my view the better codec is the one that will remain in category 2 in any situation (e.g. inserting an Orban in the monitoring chain).

Example: the original master is 24/96. Two lossy copies are made, one 16/44.1 and one 320 kbps mp3. Both sound identical to the master.
I would say the 16/44.1 is better than the mp3, but if you can give arguments to the contrary, I'm all ears.

QUOTE (2Bdecided @ Dec 2 2010, 12:32) *
If I sing the same thing twice, what do you do to these two files to present them on SoundExpert.com?
SoundExpert won't work for this, nor will ABX, since there's a huge risk of false positives. A lot depends on where you switch from A to B. Small tempo and pitch differences will remain unnoticed when heard in isolation, but as soon as you jump from one to the other they can become apparent. This is the daily job of an audio editor: finding the best spot to inaudibly switch from one take to another. (hint: it's not always easy and I'm glad to be paid per hour) :)
2Bdecided
post Dec 2 2010, 16:04
Post #13





QUOTE (Kees de Visser @ Dec 2 2010, 12:09) *
QUOTE (2Bdecided @ Dec 2 2010, 12:25) *
Comparing codecs isn't like this at all. Comparing codecs is an apples to oranges comparison - you don't know that artefacts 6dB below threshold are better than artefacts 5dB below threshold - 1) because the characteristic of the artefacts could be different, and 2) you haven't said what "better" means. Better for what? Not for just listening (either is fine), so for what?
Do we agree that there are 3 types of quality levels, from better to worse:
1- artefacts are non-existent (-inf), like in lossless coding
2- artefacts are below the hearing threshold
3- artefacts are audible, by at least one listener for at least one (killer)sample
You can certainly define 3 such categories. It also sounds like a thing that's theoretically true (whatever that means). I suspect your categories are completely useless though...

In practice, it's hard to find a codec in category 2 that gives a significant bitrate saving over those in category 1.

It's rather difficult to prove that the codec is in category 2 rather than 3. You've got to get everyone in the world to listen carefully to every possible audio signal.

QUOTE
In my view the better codec is the one that will remain in category 2 in any situation (e.g. inserting an Orban in the monitoring chain).
Ah, good, so now we have everyone in the world listening to every possible audio signal via every possible piece of audio processing. Excellent.

Now, seriously, even if we put the "every person" and "every audio signal" parts to one side, you must realise that for any codec which changes the signal (let's assume the change is inaudible), there must be some audio processing we can do to make that change audible. So no codec can remain in category 2 "in any situation".


QUOTE
Example: the original master is 24/96. Two lossy copies are made, one 16/44.1 and one 320 kbps mp3. Both sound identical to the master.
I would say the 16/44.1 is better than the mp3, but if you can give arguments to the contrary, I'm all ears.
If the mp3 is made from a 16/44.1 file (as is normal) then this is silly - of course it can't be better, since it's a copy of a copy.

However, if the mp3 is made from the 24/96 by resampling to 44.1 but maintaining 24-bits, then it's kind of trivial to find a situation where the mp3 is "better":
The original master contains a signal at -110dB
The mp3 is decoded to 24-bits
The "processing" applied to the 16/44.1 wav and the decoded 320kbps mp3 is... increasing the level by 80dB.

Oh look - both sounded identical to the master before processing, but with my highly advanced processing in place (well, OK, it was a volume control!) the mp3 is revealed to be far closer to the master than the 16/44.1 version.
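The arithmetic behind that example, as a toy sketch: a pure tone stands in for the -110 dB content and plain undithered quantization stands in for the two delivery paths (a real 16/44.1 master would normally be dithered, which changes the picture somewhat; the mp3 step is simulated by simply keeping 24-bit precision).

CODE
fs = 44100; t = (0:fs-1)'/fs;
x  = 10^(-110/20) * sin(2*pi*1000*t);             % content at -110 dBFS in the master
q  = @(x,bits) round(x*2^(bits-1)) / 2^(bits-1);  % plain (undithered) quantizer

x16 = q(x,16);       % 16/44.1 copy: below the 16-bit floor, rounds to digital silence
x24 = q(x,24);       % 24-bit copy (standing in for the mp3 decoded to 24 bits)

g   = 10^(80/20);    % the "processing": +80 dB of gain
lvl = @(x) 20*log10(max(sqrt(mean(x.^2)), eps));
[lvl(g*x16)  lvl(g*x24)]   % 16-bit copy: nothing left; 24-bit copy: tone at about -33 dBFS

(Octave/Matlab code)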


These are all silly examples, but I think they prove the point - there's far too much assumption in the SoundExpert methods, or the "this thing sounds the same but must be better" statements.


QUOTE
QUOTE (2Bdecided @ Dec 2 2010, 12:32) *
If I sing the same thing twice, what do you do to these two files to present them on SoundExpert.com?
SoundExpert won't work for this, nor will ABX, since there's a huge risk of false positives. A lot depends on where you switch from A to B. Small tempo and pitch differences will remain unnoticed when heard in isolation, but as soon as you jump from one to the other they can become apparent. This is the daily job of an audio editor: finding the best spot to inaudibly switch from one take to another. (hint: it's not always easy and I'm glad to be paid per hour) :)
But SBR is "singing along" with the music without tempo and pitch differences, yet re-creating it from scratch (the original waveform is discarded). ABX works fine. Amplifying the sample-by-sample differences is meaningless.


I don't see any explanation of why the SoundExpert approach works for SBR, or accurately quantifies the subjective quality of SBR wrt "traditional" coding.


It's funny - we've seen a second revolution in audio coding. The first was when basic psychoacoustics came in, and suddenly having a waveform that was "closest" to the original was no longer the way to judge quality. With two codecs, the one which had a greater error signal could sound better.

Now with SBR and PS we have another revolution, where the waveform isn't an (inaudibly) distorted version of the original but actually bears no resemblance to the original. So any measurements that include psychoacoustics while assuming that the waveform should be at least vaguely similar are also broken.

I'm not convinced that the SoundExpert method actually survived the first revolution, but it's difficult to see how it survived the second.

ABX will survive whatever happens.


I'll eat my words if someone can provide a detailed explanation of how SoundExpert works, and prove a correlation - but if it relies on sticking plasters to undo or account for each new coding trick, it's no good generally.

Cheers,
David.
Kees de Visser
post Dec 2 2010, 17:52
Post #14








QUOTE (2Bdecided @ Dec 2 2010, 17:04) *
QUOTE (Kees de Visser @ Dec 2 2010, 12:09) *
In my view the better codec is the one that will remain in category 2 in any situation (e.g. inserting an Orban in the monitoring chain).
Ah, good, so now we have everyone in the world listening to every possible audio signal via every possible piece of audio processing. Excellent.
Exactly, that's not very practical.
And that's the very reason why so many audio professionals prefer to offer lossless formats and let the customer decide how to process it for his/her personal use.
I remember numerous complaints from HA members about online music being only available in lossy formats. Deutsche Grammophon offers both FLAC and 320 kbps mp3, which makes a lot of sense IMO, even if they sound identical ;)
