Does AAC encoding quality depend on absolute sound level?
gholzmann
post Jul 4 2013, 15:05
Post #1

Hello!

I have a very specific question about the AAC codec (it would, of course, also be interesting for other lossy codecs):
does the encoding/decoding quality depend on the absolute level of the input audio?

For example:
Suppose I have a recording with a loudness of -35 dB RMS (or LUFS) and the same recording at -20 dB RMS.
I encode both, decode them again, and normalize both to a standard value (say, -23 dB RMS).
Does the louder one end up with better quality than the quieter one?

I think both should be equivalent, but I don't know the AAC internals ...

Thanks for any hints,
Best regards,
Georg
Dynamic
post Jul 4 2013, 16:58
Post #2

There would be a specific answer for each encoder (e.g. QAAC, Nero aacenc, FhG in Winamp, faac, ffmpeg).

AAC encoders operate on normalized floating-point values, so unless an encoder applies a fixed Absolute Threshold of Hearing level below which it cuts content off, there should be little repeatable difference in favour of one signal level or the other. If there were a difference, it would probably favour the louder one. There WILL be some differences between the two encodings, though, just as there will be differences if you add a few samples of silence to the start, causing the signal to line up differently against the transform windows used internally.
gholzmann
post Jul 4 2013, 18:07
Post #3

QUOTE (Dynamic @ Jul 4 2013, 17:58)
AAC encoders operate on normalized floating-point values, so unless an encoder applies a fixed Absolute Threshold of Hearing level below which it cuts content off, there should be little repeatable difference in favour of one signal level or the other. If there were a difference, it would probably favour the louder one.


Thanks for your answer!
That's exactly the question: whether one should make the signal louder before encoding, or whether it doesn't matter.
We have to encode the signal and send it over a network (afterwards we decode it again), and we don't want to manipulate the dynamic range if it's not necessary ...

Best regards,
Georg
nu774
post Jul 4 2013, 18:14
Post #4

It also depends on the encoding strategy (VBR/CBR).
VBR is more affected by the input sound level: quieter input -> lower bitrate.
As for the QuickTime encoder, if you lower the input level by 50 dB or so, the resulting bitrate will be dramatically different in TVBR mode, and you will get a poor result.
You can use CBR to force the encoder to allocate bits to such quiet input.

gholzmann
post Jul 5 2013, 13:29
Post #5

QUOTE (nu774 @ Jul 4 2013, 19:14)
VBR is more affected by the input sound level: quieter input -> lower bitrate.
...
You can use CBR to force the encoder to allocate bits to such quiet input.


Hm, that makes sense, especially as we want to amplify quiet segments of an audio file (after encoding -> network -> decoding).

Thanks a lot!

C.R.Helmrich
post Jul 5 2013, 19:51
Post #6

Which AAC encoder are you using, Georg, and at which bit-rate?

Chris


gholzmann
post Jul 6 2013, 10:24
Post #7

Hello Chris!

QUOTE (C.R.Helmrich @ Jul 5 2013, 20:51)
Which AAC encoder are you using, Georg, and at which bit-rate?


This is done on an iPhone, so we are using Apple's AAC encoder (or whatever is used on an iPhone). At the moment the bitrate is 150 kbps (but that's just a tradeoff between network traffic and audio quality).

Have a nice weekend!
Best regards,
Georg
C.R.Helmrich
post Jul 6 2013, 10:45
Post #8

Hello,

I don't know the internals of the Apple encoder, but assuming you are talking about 150 kbps stereo, quality should not be an issue, so I suggest you just feed the encoder whatever PCM input you have and do the loudness normalization after decoding. You could even use a VBR coder at that bit-rate to make sure you get somewhat consistent quality. Apple's Constrained VBR (CVBR) should do just fine.

Chris


