Is it possible to stretch 16 bits symmetrically, instead of padding with zeros?
fluzzknock
post Apr 18 2014, 02:46
Post #26





Group: Members
Posts: 17
Joined: 18-April 14
Member No.: 115570



QUOTE (giro1991 @ Apr 15 2014, 13:08) *
The way I picture it,
bit depth is vertical definition and
sample rate is horizontal definition.
Surely, stretching bit depth (however achieved) would allow more definition... and you could technically keep adding definition (not from the original source, of course), but definition in the quantum realm.

EDIT: as opposed to padding with zeros, which I'm certain is what pipelines do.


What you are getting at (I think) can be achieved with existing digital audio editors. In Adobe Audition, for example, you can take a 16-bit/44kHz FLAC file and "upconvert" it to 24-bit/96kHz or 32-bit/192kHz or whichever combination you want. This will give you an increased number of samples per second over a larger range of potential quantization values. Depending on what you are trying to achieve, however, there may be no benefit to doing any of this digital conversion from an audio playback perspective. If your goal is the most accurate reproduction of the original analog waveform, this process can actually degrade it.
xnor
post Apr 18 2014, 13:30
Post #27





Group: Developer
Posts: 487
Joined: 29-April 11
From: Austria
Member No.: 90198



I really like the money analogy because it is so simple.

If you start with integer dollars and you go to cents you have to multiply by 10^2 = 100 (decimal is base 10).
$1 = 100c (decimal 1 followed by 2 zeros)
$2 = 200c
...

With 16 to 32 bits it is exactly the same, you have to multiply by 2^16 = 65536 (binary is base 2).
1 = 65536 (binary 1 followed by 16 zeros)
2 = 131072 (binary 10 followed by 16 zeros)
...
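The multiplication above can be sketched in a few lines of Python (the function name is mine, purely illustrative):

```python
# Sketch of the point above: widening a 16-bit sample to 32 bits is a
# multiplication by 2^16, i.e. appending 16 zero bits.
def widen_16_to_32(sample_16):
    """Scale a 16-bit integer sample to the 32-bit range (zero padding)."""
    return sample_16 << 16  # identical to sample_16 * 65536

print(widen_16_to_32(1))  # 65536: binary 1 followed by 16 zeros
print(widen_16_to_32(2))  # 131072: binary 10 followed by 16 zeros
```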


If you upsample by 2x, all you are basically doing is interpolation (linear in the example below purely for simplicity; linear interpolation shouldn't be used for real audio!):
1, 2, 4, 5 => 1, 1.5, 2, 3, 4, 4.5, 5

If you stayed at 16 bits you'd just introduce additional errors, because 1.5 would have to become either 1 or 2.
Upsampling doesn't add information to the signal.
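A minimal Python sketch of that 2x linear interpolation (illustrative only; as noted, linear interpolation is not suitable for real audio):

```python
# 2x "upsampling" by inserting the midpoint between each pair of samples.
def upsample_2x_linear(samples):
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)  # midpoint between neighbours
    out.append(samples[-1])
    return out

print(upsample_2x_linear([1, 2, 4, 5]))  # [1, 1.5, 2, 3, 4, 4.5, 5]
# At 16 bits, 1.5 and 4.5 would have to be rounded to an integer,
# introducing new errors; either way, no information is added.
```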

This post has been edited by xnor: Apr 18 2014, 13:30
KMD
post Apr 18 2014, 16:17
Post #28





Group: Members
Posts: 97
Joined: 21-January 09
From: UK
Member No.: 65825



The output from a DAC in the fundamental architecture of digital audio is not a smooth curve. The reconstruction filter only removes the steps above 10 kHz. Low-frequency square-wave steps are still there. Modern DACs are delta-sigma interpolating oversampling designs, so the output is smooth. Converting to 32 bits and interpolating would be beneficial, but obsolete considering that.
db1989
post Apr 18 2014, 18:35
Post #29





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



QUOTE (fluzzknock @ Apr 18 2014, 02:46) *
What you are getting at (I think) can be achieved with existing digital audio editors. In Adobe Audition, for example, you can take a 16-bit/44kHz FLAC file and "upconvert" it to 24-bit/96kHz or 32-bit/192kHz or whichever combination you want. This will give you an increased number of samples per second over a larger range of potential quantization values. Depending on what you are trying to achieve, however, there may be no benefit to doing any of this digital conversion from an audio playback perspective. If your goal is the most accurate reproduction of the original analog waveform, this process can actually degrade it.
Yes, this was all already known, but the OP refused to believe that upconverting bit depth really stretches the possible samples/codes equally. It does. Upconverting is pointless unless it is done to process at higher precision.

QUOTE (KMD @ Apr 18 2014, 16:17) *
The output from a DAC in the fundamental architecture of digital audio is not a smooth curve. The reconstruction filter only removes the steps above 10 kHz. Low-frequency square-wave steps are still there. Modern DACs are delta-sigma interpolating oversampling designs, so the output is smooth. Converting to 32 bits and interpolating would be beneficial, but obsolete considering that.
Notwithstanding the fact that I have no idea why you think this is relevant: {citation needed}, at least six times over.

This post has been edited by db1989: Apr 18 2014, 18:38
fluzzknock
post Apr 18 2014, 19:20
Post #30





Group: Members
Posts: 17
Joined: 18-April 14
Member No.: 115570



QUOTE (db1989 @ Apr 18 2014, 13:35) *
QUOTE (fluzzknock @ Apr 18 2014, 02:46) *
What you are getting at (I think) can be achieved with existing digital audio editors. In Adobe Audition, for example, you can take a 16-bit/44kHz FLAC file and "upconvert" it to 24-bit/96kHz or 32-bit/192kHz or whichever combination you want. This will give you an increased number of samples per second over a larger range of potential quantization values. Depending on what you are trying to achieve, however, there may be no benefit to doing any of this digital conversion from an audio playback perspective. If your goal is the most accurate reproduction of the original analog waveform, this process can actually degrade it.
Yes, this was all already known, but the OP refused to believe that upconverting bit depth really stretches the possible samples/codes equally. It does. Upconverting is pointless unless it is done to process at higher precision.


Agreed. It just seemed that what he was actually looking for wasn't the same as what he said he was looking for.
KMD
post Apr 18 2014, 23:07
Post #31





Group: Members
Posts: 97
Joined: 21-January 09
From: UK
Member No.: 65825



The relevance is that the thread is pontificating about the purpose of converting a data stream to a higher bit depth. Is it really necessary to confirm that a 22 kHz brick-wall filter does not remove artefacts below 22 kHz? Or do you have a question about that?
Nimos
post Apr 18 2014, 23:11
Post #32





Group: Members
Posts: 19
Joined: 20-June 10
Member No.: 81659



To make it easier for you, you can compare audio to a single pixel of graphics, or a subpixel to be exact (red, green, or blue).

For simplicity, we can restrict ourselves to a grayscale pixel where only one subpixel is present. For an 8-bit image, you have 256 shades from 0 (deep black) to 255 (bright white). The 32-bit images you generally use are 8 bits each for red, green, blue, and transparency.

Now if you have an 8-bit pixel at 128 (half bright), upscale it to 16 bit, and view it on a 16-bit display, you should get the same shade at 32768. Change the 8-bit value up or down by 1 and the 16-bit value goes up or down by 256. You won't have a value in between.
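The shade arithmetic can be checked with a tiny Python sketch (names are mine, purely illustrative):

```python
# Widening an 8-bit grayscale shade to 16 bits multiplies by 2^8 = 256;
# no in-between shades appear, neighbouring codes just spread apart.
def widen_8_to_16(shade):
    return shade << 8  # identical to shade * 256

print(widen_8_to_16(128))                       # 32768, the same half-bright grey
print(widen_8_to_16(129) - widen_8_to_16(128))  # adjacent codes stay 256 apart
```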

But if you are increasing the frame rate, analogous to resampling audio, you will get values in between the two levels of gray at the frames between the original frames. For example, if you are doubling the frame rate, you will get a value up or down by 128 at the new frames created in between the existing frames (assuming a linear sweep).

Okay, now will the new frame rate make the video appear smooth? Not necessarily. In the analog domain, your CRT, LCD, or LED monitor will not jump abruptly from one brightness level to another. It may appear like a slow, smooth transition due to the characteristics of the element producing the light. Ever seen a light bulb or tube when the power goes off? Similarly, the inertia of the speaker diaphragm smooths out any rapid movement.

Also, you cannot go on increasing the frame rate and expect an ever smoother transition. Your eye has a limitation: it can sense only a signal that persists for a specific time period. Any image that persists for less than this period will not be sensed and will get mixed with the next image. For audio we have already reached this point and can go no further. Your ear low-passes and thus smooths the audio, like your eye.

This is based on my knowledge and may not be 100% correct. But don't bother about data that is not there or problems that do not exist.

NOTE: since audio data is signed, you will have a range of one bit less for audio levels than for image levels.
saratoga
post Apr 18 2014, 23:58
Post #33





Group: Members
Posts: 4844
Joined: 2-September 02
Member No.: 3264



QUOTE (KMD @ Apr 18 2014, 18:07) *
Is it really necessary to confirm that a 22 kHz brick-wall filter does not remove artefacts below 22 kHz? Or do you have a question about that?


I think he means the nonsense you wrote about a stairstep below 10 kHz.
giro1991
post Apr 19 2014, 00:07
Post #34





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



QUOTE (Nimos @ Apr 18 2014, 23:11) *
But if you are increasing the frame rate, analogous to resampling audio, you will get values in between the two levels of gray at the frames between the original frames. For example, if you are doubling the frame rate, you will get a value up or down by 128 at the new frames created in between the existing frames (assuming a linear sweep).

Okay, now will the new frame rate make the video appear smooth? Not necessarily. In the analog domain, your CRT, LCD, or LED monitor will not jump abruptly from one brightness level to another. It may appear like a slow, smooth transition due to the characteristics of the element producing the light. Ever seen a light bulb or tube when the power goes off? Similarly, the inertia of the speaker diaphragm smooths out any rapid movement.

Also, you cannot go on increasing the frame rate and expect an ever smoother transition.

I understand we've reached the optimum for recording and listening, and that our perception won't notice. But I'm talking strictly processing; forget output.
If I read it right, you've proved that combining upsampling with an increase in depth allows more definition.

QUOTE
Edit2: It just occurred to me that the other concept you might think of when talking about stretching isn't what you expect. If you expand the 16 bits so that they are evenly distributed over a 24-bit or 32-bit signal, what you have achieved is volume-boosting it.

So there is an alternative method? Please excuse my persistence, but I've seen Adobe Audition show up before posting, and I also noted that some DAWs reduce volume by -6 dB when doing 16>24; some don't.
So there are discrepancies.

What sparked my curiosity initially is how Philips achieved "16-bit performance using a 14-bit DAC" in the 80s.
Noise-shaped dither applied to an oversampled signal is clear, so let's not discuss this, but rather the 16>28>14-bit stage.
I wanted to know what exactly is going on, and whether or not there is a strict way of going about it if I were to replicate it in a DAW/player, remaining in the computational realm for the time being, when converting up/down.
See this thread, the 3rd post. My curiosity surrounds the mysterious 'upconverting' (another wild term let loose).
http://forums.stevehoffman.tv/threads/upsa...-dither.304816/

I can't see how recording analog at 24 bit and then compressing with dither to 16 bit yields good results, but you can't do it the other way?
EDIT: though you have probably told me and it's just not getting through.

Sound cards today have ridiculous 24-bit SNR yet are subject to the noise present on 16-bit material. Can you not emulate 24 bit with 16? Perhaps a stupid question?

Something else: older hardware with a low SNR specification.
Surely that is a form of dynamic compression, psychoacoustically, no? Likewise truncating/rounding a 16-bit file to 8 bit?

I appreciate the input (:

I should really take up a course on this.

This post has been edited by giro1991: Apr 19 2014, 00:14
saratoga
post Apr 19 2014, 00:16
Post #35





Group: Members
Posts: 4844
Joined: 2-September 02
Member No.: 3264



QUOTE (giro1991 @ Apr 18 2014, 19:07) *
What sparked my curiosity initially is how Philips achieved "16-bit performance using a 14-bit DAC" in the 80s.
Noise-shaped dither applied to an oversampled signal is clear, so let's not discuss this, but rather the 16>28>14-bit stage.


http://en.wikipedia.org/wiki/Oversampling

QUOTE (giro1991 @ Apr 18 2014, 19:07) *
I wanted to know what exactly is going on, and whether or not there is a strict way of going about it if I were to replicate it in a DAW/player, remaining in the computational realm for the time being, when converting up/down.


Oversampling is used to improve accuracy when converting from analog to digital, or digital to analog. If you aren't actually doing anything, then this question doesn't even make sense. A loss of SNR is only defined when you are actually doing something.

QUOTE (giro1991 @ Apr 18 2014, 19:07) *
Sound cards today have ridiculous 24-bit SNR yet are subject to the noise present on 16-bit material. Can you not emulate 24 bit with 16? Perhaps a stupid question?


The question doesn't really make sense. Actually, I don't understand why you are asking complex questions when you have not understood the fundamentals. It is a waste of time.
giro1991
post Apr 19 2014, 01:21
Post #36





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



I am trying to replicate this, in foobar...

more specifically this


This post has been edited by giro1991: Apr 19 2014, 01:26
saratoga
post Apr 19 2014, 01:35
Post #37





Group: Members
Posts: 4844
Joined: 2-September 02
Member No.: 3264



QUOTE (giro1991 @ Apr 18 2014, 20:21) *
I am trying to replicate this, in foobar...


By using the EQ?
xnor
post Apr 19 2014, 01:43
Post #38





Group: Developer
Posts: 487
Joined: 29-April 11
From: Austria
Member No.: 90198



QUOTE (giro1991 @ Apr 19 2014, 00:07) *
I understand we've reached the optimum for recording and listening, and that our perception won't notice. But I'm talking strictly processing; forget output.
If I read it right, you've proved that combining upsampling with an increase in depth allows more definition.

A higher sampling rate allows for storing higher frequencies. This helps especially with nonlinear effects that would otherwise create a lot of aliasing.
A higher bit depth allows for higher dynamic range and reduces the quantization error at each processing step.


QUOTE
Please excuse my persistence, but I've seen Adobe Audition show up before posting, and I also noted that some DAWs reduce volume by -6 dB when doing 16>24; some don't.

Just increasing bit depth should not change the levels at all. I've never seen a DAW do this.


QUOTE
Noise-shaped dither applied to an oversampled signal is clear, so let's not discuss this, but rather the 16>28>14-bit stage.

If you have a higher sampling rate you can get away with lower bit depth, because you can move the noise out of the audible range.
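A hedged sketch of this idea in Python: a first-order error-feedback quantizer, a simplified stand-in for real noise shaping (not Philips' actual circuit). The total error power is unchanged, but it is pushed toward high frequencies, which a high sample rate places above the audible band:

```python
# First-order noise shaping: quantize to a coarse step while feeding the
# quantization error of each sample back into the next one.
def noise_shape(samples, step):
    out = []
    err = 0.0
    for x in samples:
        v = x + err                 # add back the previous error
        q = round(v / step) * step  # coarse quantization
        err = v - q                 # error feeds the next sample
        out.append(q)
    return out

coarse = noise_shape([0.3] * 10, step=1.0)
print(coarse)                       # a pattern of 0s and 1s...
print(sum(coarse) / len(coarse))    # ...whose average tracks 0.3
```

Each output sample is a full step away from the input, but the running average preserves the sub-step signal, which is why the noise can be filtered out later.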


QUOTE
I wanted to know what exactly is going on, and whether or not there is a strict way of going about it if I were to replicate it in a DAW/player, remaining in the computational realm for the time being, when converting up/down.
See this thread, the 3rd post. My curiosity surrounds the mysterious 'upconverting' (another wild term let loose).

That is nonsense. Increasing bit depth is a simple matter of multiplying by 2^(increase in bits), which is just zero padding.
The sample values do not change nonlinearly, so there is no need to attenuate anything.


QUOTE
Sound cards today have ridiculous 24-bit SNR yet are subject to the noise present on 16-bit material. Can you not emulate 24 bit with 16? Perhaps a stupid question?

Yes it is. Did you not get the previous examples?

A 24-bit value of 384 would be a 16-bit value of 1.5 (divide by 2^8 = 256), but since we're only storing integers we have to use 1 or 2 (so we introduce an error, i.e. lose information).
If you have a 16-bit value of 1, how on earth would you restore the lost information to find out what the original 24-bit value was?
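The same loss can be demonstrated in a few lines of Python (helper names are mine):

```python
# Dropping 8 bits is a division by 256; any value not on a multiple of 256
# must be rounded, so the original 24-bit value cannot be recovered.
def to_16_bit(sample_24):
    return round(sample_24 / 256)  # quantize: the fractional part is lost

def back_to_24_bit(sample_16):
    return sample_16 * 256         # zero padding restores the scale, not the data

print(to_16_bit(384))                  # 384/256 = 1.5, which rounds to 2
print(back_to_24_bit(to_16_bit(384)))  # 512, not the original 384
```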


QUOTE
Surely that is form of dynamic compression, pschoacoustically no? likewise truncating/rounding 16bit file to 8bit?

No, low SNR just means that: a high noise floor.

Think of it as the information in the lower bits being destroyed by setting the bits to random values.
xnor
post Apr 19 2014, 01:49
Post #39





Group: Developer
Posts: 487
Joined: 29-April 11
From: Austria
Member No.: 90198



QUOTE (giro1991 @ Apr 19 2014, 01:21) *
I am trying to replicate this, in foobar...

This doesn't make any sense.

The noise is already in your recordings.
giro1991
post Apr 19 2014, 02:24
Post #40





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



I have a 16-bit multibit DAC, which I will be using with its digital filter bypassed to compare the sound.
Then it makes sense...

QUOTE
If you have a higher sampling rate you can get away with lower bit depth, because you can move the noise out of the audible range.


Ok thank you that one explains it all.

EQ won't work because almost all VSTs, yes, do operate at high bit depths and large sample rates, but they always apply effects below 22.05 kHz, never beyond.

This post has been edited by giro1991: Apr 19 2014, 02:24
Arnold B. Kruege...
post Apr 19 2014, 04:37
Post #41





Group: Members
Posts: 3646
Joined: 29-October 08
From: USA, 48236
Member No.: 61311



QUOTE (KMD @ Apr 18 2014, 11:17) *
The output from a DAC in the fundamental architecture of digital audio is not a smooth curve.


Yes it is.

QUOTE
The reconstruction filter only removes the steps above 10 kHz.



Not a problem, because the steps are all above 22.05 kHz (44.1 kHz sampling).


QUOTE
Low-frequency square-wave steps are still there.


What low-frequency square waves?

QUOTE
Modern DACs are delta-sigma interpolating oversampling designs, so the output is smooth.


No, the steps are there, they are just at a higher frequency because of the oversampling.


QUOTE
Converting to 32 bits and interpolating would be beneficial but obsolete considering that.


Beneficial, why?
giro1991
post Apr 19 2014, 11:24
Post #42





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



The main process I'm going about is quantization. If only I had known this term earlier.
As said, increasing the rate can allow a reduction in bit depth without affecting output quality, but what is often overlooked is dither (specifically noise-shaped) as a method of reducing bit depth (to 8, 12, or 14 bits), and that the three stages can work together to "quantize" the signal.

When you look at the way quantization is used in older hardware, which is where this method first appears (always do your history / look at the timeline!), it is clear that the ultrasonic dither that was/is present at the DAC input, at a high level, in the out-of-interest band (noise-shaped, not TPDF), was an intentional feature that kept the DAC linear. Ultrasonic content actually aids the linearity because, in reality, if you reverse-engineer the process, ADCs saw ultrasonic content above 22.05 kHz all the time, present on studio tape. Quantizing simply reverses the process, and the output appears to have less of the distortion mentioned by JAZ.

What I find really flipping odd is that mastering engineers today (analog tape never used, mind you) seem to use noise-shaped dither plugins, most of which only apply the "dither shape" below 22050 Hz.

The shape above is a "psychoacoustic shape" based on the Fletcher-Munson curve, similar in application method (regarding bit depth), but it is not the same as noise-shaped dither rendered in the ultrasonic range, a computation that exists in the D/A conversion process (hardware DSP) on older hardware and, IIRC, on today's hardware too (I think I read somewhere that noise shaping is an inherent feature of delta-sigma converters).

EDIT: Please see this, another gem from the web

QUOTE
This topic has come up again and again for years. The analog guys generally knew about the importance of sound +20kHz, but with the digital takeover, much of this arcane knowledge is becoming lost.
I'd like to point to a thread from recording.org:
The following is an excerpt from the much-talked-about online chat with Mr. Rupert Neve, Fletcher of Mercenary Audio fame moderating... (as it pertains to this thread).
QUOTE
Fletcher: "There has been some measure of debate about bandwidth including frequencies above 20kHz, can we hear them, do they make a difference, etc.?"

Rupert: "OK, Fletch, pin your ears back... back in 1977, just after I had sold the company, George Martin called me to say that Air Studios had taken delivery of a Neve console which did not seem to be giving satisfaction to Geoff Emerick. In fact, he said that Geoff is unhappy... engineers from the company (bear in mind that at this point I was not primarily involved) had visited the studio and reported that nothing was wrong. They said that the customer is mad and that the problem will go away if we ignore it long enough.

Well, I visited the studio and after careful listening with Geoff, I agreed with him that three panels on this 48-panel console sounded slightly different. We discovered that there was a 3 dB peak at 54 kHz; Geoff's golden ears had perceived that there was a difference. We found that 3 transformers had been incorrectly wired and it was a matter of minutes to correct this. After which Geoff was happy. And I mean that he relaxed and there was a big smile on his face.

As you can imagine, a lot of theories were put forward, but even today I couldn't tell you how an experienced listener can perceive frequencies outside the normal range of hearing. And following on from this, I was visiting Japan and was invited to the laboratories of Professor Oohashi. He had discovered that when filters were applied to an audio signal cutting off frequencies above 20 kHz, the brain started to emit electric signals which can be measured and quantified.

These signals were at the frequencies, and of the pattern, which are associated with frustration and anger. Clearly we discussed this at some length, and I also put forward the idea that any frequencies which were not part of the original music, such as quantizing noise produced by compact discs and other digital sources, also produced similar brain waves."

Notice how they ridicule the OP at first.
Source

This post has been edited by giro1991: Apr 19 2014, 11:48
giro1991
post Apr 19 2014, 11:50
Post #43





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



And note that iZotope MBIT+, among others, is the industry standard for 24 > 16 downconversion, with that dither peaking at 22050 Hz. It's no wonder music today sounds like garbage.

This post has been edited by giro1991: Apr 19 2014, 11:50
lvqcl
post Apr 19 2014, 12:02
Post #44





Group: Developer
Posts: 3325
Joined: 2-December 07
Member No.: 49183



So you don't understand that frequencies above 22050 Hz don't exist in a signal sampled at 44100 Hz?

Then I agree with saratoga: "I don't understand why you are asking complex questions when you have not understood the fundamentals. It is a waste of time."
giro1991
post Apr 19 2014, 12:05
Post #45





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



I do, but during the conversion process it appears to be essential.
KMD
post Apr 19 2014, 12:25
Post #46





Group: Members
Posts: 97
Joined: 21-January 09
From: UK
Member No.: 65825



OK. I did temporarily overlook the fact that the use of dither keeps the input moving between quantization levels at a rate of at least 10 kHz.

Here is a relevant reference: "Resolution Below the Least Significant Bit in Digital Systems with Dither" by John Vanderkooy and Stanley P. Lipshitz.
xnor
post Apr 19 2014, 12:30
Post #47





Group: Developer
Posts: 487
Joined: 29-April 11
From: Austria
Member No.: 90198



???

I feel so confused.
giro1991
post Apr 19 2014, 12:35
Post #48





Group: Members
Posts: 104
Joined: 27-February 13
Member No.: 106916



"The work was motivated by some common misunderstandings about digital systems. It is commonly believed that small signals or signal details are lost if they are smaller than the quantizing step. Expanding on previous arguments, it is shown that this is not true when the signal to be quantized contains a wide-band noise dither with an amplitude of approximately the step size."

I like it already, will give it a read.

http://www.drewdaniels.com/dither.pdf
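A minimal Python sketch of the paper's claim, using rectangular (uniform) dither of one step rather than the paper's general wide-band dither (so a simplified, illustrative setup):

```python
# A signal smaller than one quantization step survives quantization if
# dither of roughly one step is added first: it lives on in the average.
import random

random.seed(0)
step = 1.0
signal = 0.25 * step            # deliberately below the step size

# Without dither, the sub-step signal rounds to the same code every time.
undithered = [round(signal / step) for _ in range(10000)]

# With one step of rectangular dither, the code toggles between 0 and 1,
# and its mean tracks the sub-LSB signal.
dithered = [round((signal + random.uniform(-0.5, 0.5)) / step)
            for _ in range(10000)]

print(sum(undithered) / 10000)  # 0.0: the signal is lost entirely
print(sum(dithered) / 10000)    # close to 0.25: preserved on average
```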

This post has been edited by giro1991: Apr 19 2014, 12:37
drfisheye
post Apr 19 2014, 12:59
Post #49





Group: Members
Posts: 18
Joined: 4-January 14
Member No.: 113780



QUOTE (giro1991 @ Apr 19 2014, 11:24) *
QUOTE
I was visiting Japan and was invited to the laboratories of Professor Oohashi. He had discovered that when filters were applied to an audio signal cutting off frequencies above 20 kHz, the brain started to emit electric signals which can be measured and quantified.

These signals were at the frequencies, and of the pattern, which are associated with frustration and anger. Clearly we discussed this at some length, and I also put forward the idea that any frequencies which were not part of the original music, such as quantizing noise produced by compact discs and other digital sources, also produced similar brain waves."

That's because the mosquitoes stopped biting when the high frequencies were present.

Quoting a guy who tells a story about a Japanese professor who I can't find on the internet isn't going to convince anyone.

Power cables can also make a hell of a difference in sound by the way. Without them, you don't hear anything. That's what a Korean professor told me.
db1989
post Apr 19 2014, 13:23
Post #50





Group: Super Moderator
Posts: 5275
Joined: 23-June 06
Member No.: 32180



This will be a strong contender for worst thread of the year.
