Stereo and Dual Channel at VBR
Steve Forte Rio
post Dec 8 2013, 11:01
Post #1

Group: Members
Posts: 443
Joined: 4-October 08
From: Ukraine
Member No.: 59301

Hi.

I have a sample with very high stereo separation, and I was experimenting with it in different stereo modes.

Default Joint Stereo gave me 191 kbps at VBR V2 (LAME 3.99.5), and forced Joint gave 290 kbps (which is entirely explainable), but with Dual Channel I got a slightly lower bitrate: 186 kbps. And that's very interesting. We save bitrate with L/R compared to Mid/Side mode (as shown by forcing M/S), but we also save with Dual Channel compared to Simple Stereo, so I can assume that Dual Channel mode is more optimal for this sample. Am I right? How can you explain the bitrate reduction with -m d? AFAIK Dual Channel uses a separate bit reservoir for each channel, but I don't fully understand how that could affect the bitrate in VBR mode. The description of Simple Stereo says it can give more bits to one channel if its content is more complex, while Dual Channel encodes both channels at exactly the same bitrate. I understand that in terms of CBR encoding, but in VBR, where we have no bitrate limitation... I don't.
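(Not LAME's actual decision logic, just a toy numpy sketch of why a high-separation sample favors L/R over M/S: when the channels are nearly uncorrelated, the side channel carries about as much energy as the mid channel, so the M/S rotation concentrates nothing and buys no coding gain. The signals and the 0.1 noise level are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44100

# Correlated channels (a typical mix): M/S packs most energy into mid.
common = rng.standard_normal(n)
l_corr = common + 0.1 * rng.standard_normal(n)
r_corr = common + 0.1 * rng.standard_normal(n)

# Nearly independent channels (the "high separation" case): no such packing.
l_sep = rng.standard_normal(n)
r_sep = rng.standard_normal(n)

def side_fraction(l, r):
    """Fraction of total energy that ends up in the side channel."""
    mid, side = (l + r) / 2, (l - r) / 2
    return float(np.sum(side**2) / (np.sum(mid**2) + np.sum(side**2)))

print(side_fraction(l_corr, r_corr))  # tiny -> M/S worthwhile
print(side_fraction(l_sep, r_sep))    # about 0.5 -> M/S no better than L/R
```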

This post has been edited by Steve Forte Rio: Dec 8 2013, 11:04
[JAZ]
post Dec 8 2013, 11:53
Post #2

Group: Members
Posts: 1778
Joined: 24-June 02
From: Catalunya (Spain)
Member No.: 2383

Remember that MP3 has only a reduced set of packet sizes, and in VBR it simply switches from one size to another to accommodate the needs of the moment.
This is not the same as saying there are no bitrate limitations. What remains constant in dual channel is the share of each packet's bits that is spent on each channel.
Since dual channel was intended for streams with two languages (and VBR wasn't even a mode in the first MP3 encoders), of which only one would be played at a time, it was probably thought that keeping each channel's bitrate constant would allow optimizations on the decoder side (but this is speculation).
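That reduced set of packet sizes can be made concrete. These are the nominal MPEG-1 Layer III bitrates and the standard frame-size formula (1152 samples per frame); a VBR stream picks one of these sizes per frame rather than any arbitrary value:

```python
# MPEG-1 Layer III allows only these per-frame bitrates (kbps); VBR
# switches between them frame by frame.
LAYER3_BITRATES = [32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]

def frame_bytes(bitrate_kbps: int, samplerate: int = 44100, padding: int = 0) -> int:
    # Standard MPEG-1 Layer III frame-size formula.
    return 144 * bitrate_kbps * 1000 // samplerate + padding

for br in (128, 192, 320):
    print(br, "kbps ->", frame_bytes(br), "bytes per frame")
```

At 44.1 kHz this yields the familiar quantized ladder of frame sizes (417, 626, 1044 bytes for the examples above, plus one optional padding byte), which is why a VBR average like 186 or 191 kbps is really a mix of these discrete steps.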


As for why, in this particular example, Dual Channel uses less bitrate than Joint Stereo... well, a simple look at the output of the command line can explain it:
CODE
   kbps        LR    MS  %     long switch short %
-V2  (-mj)
  193.2       94.8   5.2        85.9   6.5   7.5
-V2 -md
  188.0      100.0              92.9   3.4   3.7
-V2 -ms
  193.5      100.0              86.1   6.5   7.5
-V2 -mf
  294.1            100.0        85.9   6.5   7.5


It uses more long blocks (which are more efficient). Why does it decide it can use them, and does that have an effect on quality? A detailed inspection (plus an ABX test) would be needed to answer that.
pdq
post Dec 8 2013, 16:49
Post #3

Group: Members
Posts: 3394
Joined: 1-September 05
From: SE Pennsylvania
Member No.: 24233

Forced stereo still allows distributing the bits unequally, so if one channel needs more bits it can get more than half of them when needed.

Dual channel is strictly two streams of at most 160 kbit each, so sometimes one channel needs more but doesn't get it, making the average bitrate lower.
robert
post Dec 8 2013, 23:29
Post #4
LAME developer

Group: Developer
Posts: 788
Joined: 22-September 01
Member No.: 5

Unlike the stereo modes, dual channel mono allows block switching to be used independently on each channel. This may result in lower bitrates, but IIRC there were a number of test samples that showed audible artefacts when both mono channels were played at the same time.
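The effect described above can be sketched with a toy model (not LAME's code; the transient flags below are invented). With coupled block switching, a transient in either channel forces short blocks in both; with independent switching, each channel pays only for its own transients, so fewer channel-frames end up in the less efficient short-block mode:

```python
# Toy model: True = this channel has a transient in this frame and
# needs short blocks there.
def short_block_channel_frames(transients_l, transients_r, coupled):
    """Count channel-frames that must use short blocks."""
    if coupled:
        # Stereo modes: either channel's transient forces both channels short.
        per_frame = [l or r for l, r in zip(transients_l, transients_r)]
        return sum(per_frame) * 2
    # Dual channel: each channel decides on its own.
    return sum(transients_l) + sum(transients_r)

tl = [True, False, False, True, False]
tr = [False, True, False, True, False]
print(short_block_channel_frames(tl, tr, coupled=True))   # coupled cost
print(short_block_channel_frames(tl, tr, coupled=False))  # independent cost
```

This matches the -m d row of the table in post #2, which shows a markedly higher percentage of long blocks than the stereo modes.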
lvqcl
post Dec 9 2013, 00:39
Post #5

Group: Developer
Posts: 3362
Joined: 2-December 07
Member No.: 49183

Unfortunately the -d option is disabled, so it's not possible to test simple stereo (-m s) with independent block switching.
greynol
post Dec 9 2013, 18:56
Post #6

Group: Super Moderator
Posts: 10000
Joined: 1-April 04
From: San Francisco
Member No.: 13167

QUOTE (pdq @ Dec 8 2013, 07:49) *
Dual channel is strictly two 160 kbit maximum streams, so sometimes one channel needs more but doesn't get it, making the average bitrate lower.

For any given frame, if one channel used 160 kbit then that frame was encoded at 320 kbit. Or am I missing something?


--------------------
I should publish a list of forum idiots.
robert
post Dec 9 2013, 19:19
Post #7
LAME developer

Group: Developer
Posts: 788
Joined: 22-September 01
Member No.: 5

QUOTE (pdq @ Dec 8 2013, 16:49) *
Forced stereo still allows distributing the bits unequally so if one channel needs more bits it can get more than half of the bits when needed.

Dual channel is strictly two 160 kbit maximum streams, so sometimes one channel needs more but doesn't get it, making the average bitrate lower.

There's no such constraint in dual channel mode, not that I remember.
pdq
post Dec 9 2013, 19:28
Post #8

Group: Members
Posts: 3394
Joined: 1-September 05
From: SE Pennsylvania
Member No.: 24233

QUOTE (robert @ Dec 9 2013, 13:19) *
QUOTE (pdq @ Dec 8 2013, 16:49) *
Forced stereo still allows distributing the bits unequally so if one channel needs more bits it can get more than half of the bits when needed.

Dual channel is strictly two 160 kbit maximum streams, so sometimes one channel needs more but doesn't get it, making the average bitrate lower.

There's no such constraint in dual channel mode, not that I remember.

I stand corrected.
