Topic: Let's talk WMA 10 Pro (Read 28611 times)

Is there any way to encode VBR 1-pass at a 16-bit depth? I just feel that I am wasting space if I choose 24 bits. Why did they not include a 16-bit mode?

Reply #1

Quote:
> Is there any way to encode VBR 1-pass at 16-bit samples? I just feel that I am wasting space if I choose 24 bits. Why did they not include a 16-bit mode?

You can't choose a bit depth because lossy files don't have one. They just call it 24-bit because it looks cool.

Reply #2

Quote:
> You can't choose a bit depth because lossy files don't have one. They just call it 24-bit because it looks cool.

So why can I use 16-bit with VBR 2-pass and CBR?

Reply #3

Quote:
> So why can I use 16-bit with VBR 2-pass and CBR?

I think it may be the decoder: the 24-bit setting may allow the decoder to output at 24 bits. But yes, lossy formats cannot have a bit depth, which means you don't waste space by choosing 24 bits.

Reply #4

Quote:
> I think it may be the decoder: the 24-bit setting may allow the decoder to output at 24 bits. But yes, lossy formats cannot have a bit depth, which means you don't waste space by choosing 24 bits.

Well, if it doesn't matter, why do LAME MP3 and WMA both let you choose? Wouldn't they just remove the option if it didn't matter?

Reply #5

Which LAME switch allows you to set the bit depth?

Reply #6

Quote:
> Which LAME switch allows you to set the bit depth?

--bitwidth w / input bit width is w (default 16)




Reply #9
Lossy files have no bit depth, so whatever you're reading is either wrong or being misunderstood.


Reply #10
Although I (literally) have no idea how it would relate to the OP, I can easily imagine using knowledge of the input PCM resolution to set things like minimum quantization thresholds and the like, in an encoder.

But I have no information on the OP, sorry. I'll poke somebody who might, but don't count on it.
-----
J. D. (jj) Johnston

Reply #11

Do you guys know what I'm talking about? I'll show you an example:

If I choose CBR, I can choose 64 kbps, 44.1 kHz, 16-bit, 2 channels.

But if I change it to Quality VBR, I have to choose something like:

Quality 25, 44.1 kHz, 24-bit, 2 channels.

See how the 16-bit changes to 24?

Oh, and I'm talking about bits per sample, not bit depth.


Reply #13
Most of the people here at HA have little use for WMA, so you may not get a satisfactory answer.


Reply #14
Just take a wave editor like Audacity and zoom in so you can see the individual samples. The bit depth tells you how many values a sample can take (2^16 = 65536 for 16-bit). Does that make sense? OK, then: almost all lossy formats use a DCT transform, which converts a waveform into a sum of trigonometric functions. So during the DCT you lose any notion of bit depth; the data are stored as sine waves.

You can (or must) define the bit depth of the input data, but the compressed file itself doesn't have a bit depth. So if WMA uses a DCT (and I suppose it does), it can't have a bit depth. It is a kind of misinformation.
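The point above can be seen concretely in a toy sketch (a naive DCT, not WMA's actual filterbank, which is more elaborate): once integer PCM samples go through the transform, the coefficients are arbitrary real numbers, and "bit depth" no longer describes anything.

```python
import math

def dct_ii(samples):
    """Naive DCT-II: maps N time-domain samples to N real-valued
    frequency coefficients."""
    n = len(samples)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(samples))
            for k in range(n)]

# A 16-bit signal: integer samples in [-32768, 32767].
pcm = [round(20000 * math.sin(2 * math.pi * 3 * i / 32)) for i in range(32)]
coeffs = dct_ii(pcm)

# The input had a bit depth (every sample is an integer); the transform
# coefficients are just real numbers, so the concept no longer applies.
assert all(isinstance(s, int) for s in pcm)
assert any(abs(c - round(c)) > 1e-6 for c in coeffs)
```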

Reply #15

New Topic Title: Let's talk about bit depth

Reply #16

Quote:
> Although I (literally) have no idea how it would relate to the OP, I can easily imagine using knowledge of the input PCM resolution to set things like minimum quantization thresholds and the like, in an encoder.

Sure, but there are scale factors and all that, so while the average sample might only get 4-6 bits or so, you can't actually say what the bit depth of any individual sample will be. IMO it's less misleading to say there's no actual bit depth.

Reply #17

Quote:
> Sure, but there are scale factors and all that, so while the average sample might only get 4-6 bits or so, you can't actually say what the bit depth of any individual sample will be. IMO it's less misleading to say there's no actual bit depth.

I think you miss the point. If I know that the input is 16 bits, there's no point in having scale factors or quantizers that describe something 18 bits down, now, is there?

I'm saying that knowing the original resolution can easily lead to coding efficiency, which is, I'd think, enough incentive to ask for it.

There is no way on planet Earth to assign any "bit depth" to a perceptual coder's output that means much of anything useful.
-----
J. D. (jj) Johnston

Reply #18

Quote:
> I think you miss the point. If I know that the input is 16 bits, there's no point in having scale factors or quantizers that describe something 18 bits down, now, is there?

I'm not a coding expert, I just know some mathematics. IMO, once the input data are converted to the frequency domain using a DCT, there's absolutely no use for the bit-depth information. Even a single sine wave has infinite bit depth. That's the important point: the bit depth of WMA, MP3, AAC, Ogg Vorbis, etc. is infinite.

Reply #19

It may be that the 16-bit option exists in CBR mode as a legacy option.
I doubt there is any real reason to require 24-bit input for it to work; I just assume they worked with 24-bit data in order to improve the encoder, and *maybe* for a marketing campaign along the lines of "See! We encode at 24 bits!"

Edit: Remember that the codec was developed with the VC-1 video codec in mind, not as an audio-CD encoder. So assuming the masters are at 24 bits, it could seem a natural option.

Reply #20

Quote:
> Which LAME switch allows you to set the bit depth?
> --bitwidth w / input bit width is w (default 16)

This is to specify the bit depth of the input data when the input is raw (i.e., just so that the input data can be interpreted correctly).
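A quick illustration of why such a switch is needed for headerless input (a generic sketch, not LAME's internal code): the same raw bytes decode to entirely different sample values depending on the declared width.

```python
import struct

raw = bytes([0x00, 0x40, 0x00, 0x40])  # four bytes of headerless PCM

# Interpreted as two 16-bit little-endian signed samples:
as_16bit = struct.unpack("<2h", raw)
# Interpreted as four 8-bit signed samples:
as_8bit = struct.unpack("<4b", raw)

print(as_16bit)  # (16384, 16384)
print(as_8bit)   # (0, 64, 0, 64)
```

Without the declared bit width, a raw-PCM reader has no way to know which interpretation was intended.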




Quote:
> I think you miss the point. If I know that the input is 16 bits, there's no point in having scale factors or quantizers that describe something 18 bits down, now, is there?
>
> I'm saying that knowing the original resolution can easily lead to coding efficiency, which is, I'd think, enough incentive to ask for it.

But on the other hand, we probably all have some adaptive noise floor in our encoders (don't we?), so knowledge of the input bit depth is not that interesting anymore.

Reply #21

Quote:
> But on the other hand, we probably all have some adaptive noise floor in our encoders (don't we?), so knowledge of the input bit depth is not that interesting anymore.

So you'd let your system adapt below the actual noise floor? Why?

Quote:
> I'm not a coding expert, I just know some mathematics. IMO, once the input data are converted to the frequency domain using a DCT, there's absolutely no use for the bit-depth information. Even a single sine wave has infinite bit depth. That's the important point: the bit depth of WMA, MP3, AAC, Ogg Vorbis, etc. is infinite.

Err, no, it does not. If you start with a 2-bit sine wave, you'll see the quantization noise in one form or another (let's hope you dithered), not "infinite bit depth". When you quantize to some bit depth, you irrevocably add noise (not necessarily with the same power at all frequencies; there is indeed noise shaping, but you can't eliminate the noise).

When you quantize to any fixed (integer) bit depth, you add noise. You don't ever get rid of it. Them's the roolz. Ask Dr. Shannon.
-----
J. D. (jj) Johnston
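The "quantizing adds noise" point is easy to check numerically. A minimal sketch (uniform quantization of a full-scale sine, no dither) shows the familiar rule of roughly 6 dB of signal-to-noise ratio per bit:

```python
import math

def quantize(x, bits):
    # Round x in [-1.0, 1.0] onto a signed grid of 2**bits levels.
    scale = 2 ** (bits - 1)
    return max(-scale, min(scale - 1, round(x * scale))) / scale

def snr_db(signal, quantized):
    # Ratio of signal power to quantization-error power, in dB.
    p_sig = sum(s * s for s in signal)
    p_err = sum((s - q) ** 2 for s, q in zip(signal, quantized))
    return 10 * math.log10(p_sig / p_err)

# One second of a 440 Hz sine at 44.1 kHz.
sine = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(44100)]

snr_2bit = snr_db(sine, [quantize(s, 2) for s in sine])
snr_16bit = snr_db(sine, [quantize(s, 16) for s in sine])

# The 2-bit version is dominated by quantization noise; the 16-bit
# version pushes it far down. Either way the noise is added at
# quantization time and is never removed afterwards.
print(round(snr_2bit, 1), round(snr_16bit, 1))
```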

Reply #22

Quote:
> So you'd let your system adapt below the actual noise floor? Why?

Most of the time, the instantaneous dynamic range handled by the encoder is quite a bit lower than that of the initial data (usually a human cannot make use of a 90+ dB dynamic range instantaneously).
But I think you've got a point there: in long silent parts, we (LAME) might reduce our noise floor to well below the noise floor of the original data.
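For reference, the "90+ dB" figure comes from the standard rule of thumb for an ideal full-scale sine quantized to N bits, SNR = 6.02*N + 1.76 dB; a quick check:

```python
import math

def dynamic_range_db(bits):
    # SNR of an ideal full-scale sine quantized to `bits` bits:
    # 10 * log10(1.5 * 4**bits), which equals 6.02*bits + 1.76 dB.
    return 10 * math.log10(1.5 * 4 ** bits)

print(round(dynamic_range_db(16), 1))  # ~98.1 dB for 16-bit PCM
print(round(dynamic_range_db(24), 1))  # ~146.3 dB for 24-bit PCM
```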

Reply #23

Quote:
> Most of the time, the instantaneous dynamic range handled by the encoder is quite a bit lower than that of the initial data (usually a human cannot make use of a 90+ dB dynamic range instantaneously).

Boy, do I have a synthetic signal or two for you.

Seriously, look at things like chimes, glockenspiels, triangles...
-----
J. D. (jj) Johnston

Reply #24

Quote (May 24 2007, 23:05):
> It may be that the 16-bit option exists in CBR mode as a legacy option.
> I doubt there is any real reason to require 24-bit input for it to work; I just assume they worked with 24-bit data in order to improve the encoder, and *maybe* for a marketing campaign along the lines of "See! We encode at 24 bits!"

Since the WMA Pro decoder supports 24-bit/sample output, there must be an option to encode with 24-bit input samples... as easy as that. And since the platform is usually a PC, there is also no issue with processor word-length restrictions (it's floating point anyway). So, assuming all the internal processing is adapted to support 24-bit resolution (like the psychoacoustic model etc., which I don't know, of course), it would make good sense.
Which doesn't mean that it will "sound" better, then... marketing sure sounds like a good reason to claim support for 24 bit.