SoundExpert explained, Methodology issues
Nov 24 2010, 13:27
Joined: 14-December 01
Member No.: 641
I found this thread among SoundExpert referrals and was a bit surprised by the almost complete misunderstanding of SE testing methodology, particularly of how the difference signal is used in SE audio quality metrics. The discussion of the topic from 2006 actually seems more meaningful. So I decided to post some SE basics here for reference purposes. I will use a thought experiment which is nevertheless close to reality.
Suppose we have two sound signals – the main one and the side one. They could be, for example, a short piano passage and some noise. We can prepare several mixes of the two in different proportions:
After normalization all mixes have equal levels, and we can evaluate the perceptibility of the side signal in each mix. Here at SE we found that this perceptibility is a monotonic function of side-signal level and looks like this:
Figure: Side signal perception
(1) In other words, there is a relationship between the objectively measured level of the side signal and its subjectively estimated perceptibility in the mix. And what is more:
(a) this relationship is well described by a second-order curve (assuming levels are expressed in dB)
(2) These side-stimulus perceptibility curves are the core of the SE rating mechanism. Each device under test has its own curve, plotted on the basis of SE online listening tests.
(3) The side signals are the difference signals of the devices being tested. Side-signal levels are expressed in dB by the Difference level parameter, which in our case is exactly equal to the RMS level of the side signal.
(4) The subjective grades of perceptibility are the anchor points of the 5-grade impairment scale.
(5) Audio quality beyond the threshold of audibility is determined by extrapolating these second-order curves. The virtual grades in the extrapolated area can be considered objective quality parameters that account for the peculiarities of human hearing.
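Points (1)–(5) can be sketched in a few lines of code. The grades and difference levels below are invented for illustration – they are not real SE results – but they show the mechanism: fit a quadratic to the audible-region grades, then read off a "virtual grade" beyond the audibility threshold.

```python
import numpy as np

# Hypothetical listening-test results for one device under test:
# x = Difference level in dB (RMS level of the difference signal),
# y = mean subjective grade on the 5-grade impairment scale.
diff_level_db = np.array([0.0, -6.0, -12.0, -18.0, -24.0])
grades = np.array([2.1, 3.0, 3.8, 4.4, 4.8])

# Fit the second-order perceptibility curve: grade = a*L^2 + b*L + c
a, b, c = np.polyfit(diff_level_db, grades, 2)
curve = np.poly1d([a, b, c])

# Extrapolate beyond the threshold of audibility: the "virtual grade"
# at the device's actual, unamplified difference level (say, -40 dB).
virtual_grade = curve(-40.0)
print(f"virtual grade at -40 dB: {virtual_grade:.2f}")
```

With this made-up data the extrapolated grade lands slightly above 5.0 – exactly the kind of "beyond imperceptible" value the rating mechanism assigns.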
So, yes, the difference signal is used in SE testing. We take into account both its level and how the human auditory system perceives it together with the reference signal. Some difference signals with fairly high levels still remain almost imperceptible against the background of the reference signal, and vice versa; the perceptibility curves reflect this.
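For concreteness, here is one way to compute such a Difference level. The post says it equals the RMS level of the difference signal; expressing it relative to the reference's RMS is my assumption about the normalization, and the tone-plus-noise example is invented.

```python
import numpy as np

def difference_level_db(reference, processed):
    """RMS level of the difference signal, in dB relative to the
    reference's RMS (the exact 0 dB reference point used by SE
    may differ; this is one plausible normalization)."""
    diff = processed - reference
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return 20 * np.log10(rms(diff) / rms(reference))

# Example: a 1 kHz tone "processed" by adding low-level white noise
fs = 44100
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 1000 * t)
rng = np.random.default_rng(0)
out = ref + 0.01 * rng.standard_normal(fs)

print(f"{difference_level_db(ref, out):.1f} dB")
```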
This is the concept. Many parts of it still need thorough verification in carefully designed listening tests, which are beyond SE's means. All we can do is analyze the collected grades returned by SE visitors. This will certainly be done, and yet it can't replace properly organized listening tests.
SE testing methodology is new and open to question, but all the assumptions look reasonable and the SE ratings look promising, at least to me. Time will tell.
keeping audio clear together - soundexpert.org
Nov 25 2010, 12:30
Joined: 5-November 01
From: Yorkshire, UK
Member No.: 409
Just to be clear, your graph example shows grades where the default noise level (0dB) is quite objectionable, and reducing the noise makes it less and less so - correct?
But with codec testing, you do kind of the opposite. The default noise level (0dB) is usually indistinguishable/transparent, or very nearly so, and to build the "worse quality" part of the curve (the part where people can hear the noise), you have to amplify the coding noise - correct?
People in this thread are saying the scale beyond "imperceptible" makes no sense. I'm not sure whether that's true. What you're "measuring" (I put that in quotes - see later) is how far the coding noise sits below the threshold of audibility (or above, if it's audible at the default level). If the second-order curve theory holds true, then you only need sufficient points on the curve where the difference is audible. Points on the curve where the difference is inaudible don't help, because the curve becomes a flat line there.
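That point - audible-region points alone determine the curve - can be sketched directly: fit the quadratic to grades gathered only where the amplified noise is audible, then solve for the gain at which the grade would reach the 5.0 "imperceptible" anchor. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical grades gathered only where the amplified coding noise
# is audible; gains are in dB relative to the codec's native noise level.
gain_db = np.array([18.0, 12.0, 6.0, 0.0])   # amplification applied to the noise
grade = np.array([2.4, 3.3, 4.1, 4.7])       # mean impairment-scale grades

# Second-order curve through the audible-region points
a, b, c = np.polyfit(gain_db, grade, 2)

# Solve curve(g) = 5.0 for the gain at which the noise would become
# just imperceptible; take the root nearest the measured data.
roots = np.roots([a, b, c - 5.0])
threshold_gain = roots[np.argmin(np.abs(roots - gain_db.min()))].real
print(f"estimated crossing at {threshold_gain:.1f} dB of added gain")
```

A negative result here would mean the codec's native noise sits that many dB above the point of imperceptibility; a positive one would be the "headroom" discussed below.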
There are several accepted ways to judge the threshold of audibility. I used this one...
Each masking threshold was determined by a 3-interval forced-choice task, using a one-up two-down transformed staircase tracking method. This procedure yields the threshold at which the listener will detect the target 70.7% of the time [Levitt, 1971]. The process is as follows.
For each individual measurement, the subject is played three stimuli, denoted A, B, and C. Two presentations consist of the masker only, whilst the third consists of the masker and target. The order of presentation is randomised, and the subject is required to identify the odd-one-out, thus determining whether A, B, or C contains the target. The subject is required to choose one of the three presentations in order to continue with the test, even if this choice is pure guesswork, hence the title “forced choice task.” If the subject fails to identify the target signal, the amplitude of the target is raised by 1 dB for the next presentation. If the subject correctly identifies the target signal twice in succession, then the amplitude of the target is reduced by 1 dB for the next presentation. Hence the amplitude of the target should oscillate about the threshold of detection, as shown in Figure 6.5. In practice, mistakes and lucky guesses by the listener typically cause the amplitude of the target to vary over a greater range than that shown. A reversal (denoted by an asterisk in Figure 6.5) indicates the first incorrect identification following a series of successes (upper asterisks), or the first pair of correct identifications following a series of failures (lower asterisks). The amplitudes at which these reversals occur are averaged to give the final masked threshold. An even number of reversals must be averaged, since an odd number would cause a +ve or –ve bias. Throughout these tests, the final six (out of eight) reversals were averaged to calculate each masked threshold.
The initial amplitude of the target is set such that it should be easily audible. Before the first reversal, whenever the subject correctly identifies the target twice, the amplitude is reduced by 6 dB. After the first reversal, whenever the subject fails to identify the target, the amplitude is increased by 4 dB. After the second reversal, whenever the subject correctly identifies the target twice, the amplitude is reduced by 2 dB. After the third reversal, the amplitude is always changed by 1 dB, and the following six reversals are averaged to calculate each masked threshold. This procedure allows the target amplitude to rapidly approach the masked threshold, and then finely track it. If the target amplitude were changed in 1 dB steps initially, then the descent to the masked threshold would take considerably longer, and add greatly to listener fatigue. In the case where the listener fails to identify the target initially, the target amplitude is increased by 6 dB for each failed identification, up to the maximum allowed by the replay system (90 dB peak SPL at the listener’s head).
This is normally used for simple noise-masking-tone experiments. It seems to work OK with coding noise, but repeating a moment of coded audio over and over again is quite mind-numbing and makes people listen in a very different way from normal music listening. Whether it pushes their thresholds up or down, I don't know. Quite a fascinating subject IMO!
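The tracking rule above is easy to simulate. The sketch below pits a one-up two-down staircase against an idealized listener whose detection probability follows an assumed logistic psychometric function; the step-size schedule is condensed to one step size per phase rather than the exact asymmetric schedule quoted, so this illustrates the convergence behaviour, not the precise protocol.

```python
import numpy as np

def run_staircase(true_threshold_db, start_db=30.0, n_reversals=8, seed=1):
    """Simulate a 1-up/2-down, 3-interval forced-choice staircase
    against an idealized listener (logistic psychometric function
    with a 1/3 guessing floor - both are modelling assumptions)."""
    rng = np.random.default_rng(seed)

    def detects(level_db):
        # P(correct): chance (1/3) rising toward 1 above threshold
        p = 1/3 + (2/3) / (1 + np.exp(-(level_db - true_threshold_db)))
        return rng.random() < p

    level, step = start_db, 6.0
    direction = -1              # descending to begin with
    reversals, correct_streak = [], 0
    while len(reversals) < n_reversals:
        if detects(level):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> down
                correct_streak = 0
                if direction == +1:          # up-to-down turn: a reversal
                    reversals.append(level)
                    direction = -1
                level -= step
        else:                                # one miss -> up
            correct_streak = 0
            if direction == -1:              # down-to-up turn: a reversal
                reversals.append(level)
                direction = +1
            level += step
        # Coarse steps early, fine (1 dB) tracking after the third reversal
        step = {0: 6.0, 1: 4.0, 2: 2.0}.get(len(reversals), 1.0)
    # Average the final six of eight reversals, as in the quoted procedure
    return np.mean(reversals[-6:])

print(f"estimated threshold: {run_staircase(10.0):.1f} dB")
```

The 2-down/1-up rule homes in on the level where the probability of two consecutive correct responses is 0.5, i.e. roughly 70.7% correct per trial - the figure quoted above from Levitt.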
It seems to me that your method is far kinder to listeners. If your second-order curve fitting can be justified, then it's a really neat way of finding the threshold of audibility (the crossover from 5.0 "imperceptible" to 4.9 "just perceptible but not annoying" on the usual scale) without even having to test at that (difficult) level.
So far so good. What I'm less convinced of is the implication that a given codec has so much "headroom", and that this is a "good thing".
e.g. on the range of content tested, at a given bitrate/setting, a given codec might be transparent even with the noise elevated by 12 dB. It scores well in your test. Fair enough. IMO it would be wrong to read too much into this result. e.g.
1. It's tempting to think this means it's suitable for transcoding, but it might not be - it might fall apart when transcoded.
2. It's tempting to think this means that audible artefacts will be rarer (and/or less bad) with this codec than with one where the noise becomes audible when elevated by 3dB, but this might be very wrong - this wonderful codec which keeps coding noise 12dB below the threshold of audibility on the content tested might fall apart horribly on some piece of content that hasn't been tested.
I'm sure you know all this! I'm just thinking aloud.
Anyway, I find it fascinating. Thanks for the explanation.