
I have "Golden Ears" Which AAC VBR Bitrate is acceptable?
post Aug 29 2012, 09:28
Post #1

Group: Members
Posts: 2
Joined: 29-August 12
Member No.: 102734

So, yes, I have the "golden ears" everyone refers to...

I can actually hear the difference between FLAC and 320 kbps MP3.

I can hear the difference between 1,536 kbps DTS, 800 kbps VBR AAC, and 384/640 kbps AC3 when I watch films, and my equipment isn't even very fancy.

I just finished transcoding my FLAC discography of Eminem to 128 kbps VBR AAC.

I am thinking of re-transcoding (again, from FLAC) to 192 kbps or 320 kbps VBR AAC,
but is it necessary? (This is going to be for playback on my iPod touch.)

My FLAC files look something like this...
Format/Info : Free Lossless Audio Codec
Duration : 6mn 42s
Bit rate mode : Variable
Bit rate : 2 849 Kbps
Channel(s) : 2 channels
Sampling rate : 96.0 KHz
Bit depth : 24 bits
Stream size : 137 MiB (100%)

This post has been edited by KmanKaiser: Aug 29 2012, 09:37
post Aug 29 2012, 22:01
Post #2

Group: Super Moderator
Posts: 11386
Joined: 1-April 04
From: Northern California
Member No.: 13167

A passed ABX only guarantees that one of them is not transparent from the original source.

Your eyes cannot hear.
post Aug 30 2012, 08:32
Post #3

Group: Members
Posts: 2402
Joined: 30-November 06
Member No.: 38207

QUOTE (greynol @ Aug 29 2012, 23:01) *
A passed ABX only guarantees that one of them is not transparent from the original source.

The more I think about that statement, the more interesting it gets (although not so much in practice, I guess, where I presume the two codecs would share some artifacts, making ABXing them against each other harder, or at least no easier, than ABXing original-to-lossy).

But in principle, the artifacts could be disjoint, giving you "twice as many" artifacts to detect; or even worse (though even less likely in practice), one codec's artifact could "double" the other's: say, one note gets a treble boost that is not noticeable until you compare against the other codec, which applies a treble cut to the same note. That is, +D versus 0 is inaudible, -d versus 0 is inaudible, but +D versus -d is audible.
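A toy numeric sketch of that disjoint-artifact case (the dB values and the threshold are hypothetical, purely illustrative; real audibility is nothing like a fixed-threshold model):

```python
THRESHOLD = 3.0  # dB; hypothetical just-noticeable difference

original = 0.0   # reference level of the note, in dB
codec_a = +2.0   # codec A boosts the note by D = 2 dB
codec_b = -2.0   # codec B cuts the same note by d = 2 dB

def audible(x, y):
    """In this toy model, a difference is 'audible' iff it exceeds the threshold."""
    return abs(x - y) > THRESHOLD

print(audible(codec_a, original))  # False: +D vs 0 stays under the threshold
print(audible(codec_b, original))  # False: -d vs 0 stays under the threshold
print(audible(codec_a, codec_b))   # True: +D vs -d spans 4 dB and crosses it
```

So each codec individually ABXes as transparent against the original, yet the two codecs ABX against each other.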

Conditioned upon this being the situation (not too likely, I'd say, but for the sake of the argument), claiming "one of them is not transparent" is the inference from the observation "in this signal, +D versus -d is audible" to the conclusion "in this signal, +D versus 0 is audible, or 0 versus -d is audible". Or, simplifying to absolute values: inferring "max{D, d} is audible" from "D + d is audible". What is the type I/type II trade-off there, and how does it compare to the likely low N used in ABXing each codec against the original?

(In the more realistic case where artifacts are shared, the inference would be “in this signal, max{D,d} is audible” from “in this signal, |D-d| is audible”, and that is ... not too objectionable.)
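On the low-N point: the chance of "passing" an ABX run purely by guessing is a standard one-sided binomial tail, computable in a few lines (nothing here is specific to these codecs):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` right out of `trials`
    ABX trials by guessing alone (one-sided binomial tail, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With a short run, even a perfect score leaves a non-trivial guessing probability:
print(round(abx_p_value(6, 6), 4))    # 6/6 correct:  p ~ 0.0156
print(round(abx_p_value(8, 10), 4))   # 8/10 correct: p ~ 0.0547
print(round(abx_p_value(14, 16), 4))  # 14/16 correct: p ~ 0.0021
```

Which is why a 6-trial run barely clears the usual 5% bar, and 8/10 does not.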