"Optimal" Predictor, help me understand adaptive covariance estimation
pest
post Nov 15 2006, 21:16
Post #1

As some of you may have noticed, I'm also working on a lossless audio codec.
The core prediction uses a joint-channel Cholesky decomposition every k samples.
The decomposition is applied to a covariance matrix which is updated every sample in the following way:

CODE
  // decay the old statistics, then accumulate the newest outer product;
  // only the upper triangle is stored since the matrix is symmetric
  for (int i = 0; i <= maxorder; i++)
  {
    for (int j = i; j <= maxorder; j++)
      covar[i][j] = decay * covar[i][j] + history[i] * history[j];
  }


You can see there's a learning factor 'decay' involved in the update above. The problem is
that for some samples old data seems to be more important than recent data,
leading to a large encoding cost if you use a relatively high factor that works well
on most samples. I've tried to exploit statistical properties such as the prediction gain
or the variance of the input to estimate an optimal learning factor, but this seems
to be more complicated than I first thought. Does anybody know of a good
way to estimate this parameter?

Any help is appreciated.

pest

edit: typo in the codebox

This post has been edited by pest: Nov 16 2006, 14:13
SebastianG
post Nov 15 2006, 22:21
Post #2

Looks like you want to do backward adaptive prediction. Are you sure you want a decoder to perform the same calculations (a Cholesky decomposition every k samples)? This is pretty time consuming, isn't it? Also, if you want your compressed files to be machine-independent, you need to make sure that the calculations you perform give the exact same results on every machine. (This is not the case if you rely on an FPU for these calculations.)

Backward adaptive prediction is not my specialty, sorry. Perhaps a sliding window approach helps.

This post has been edited by SebastianG: Nov 15 2006, 22:25
pest
post Nov 15 2006, 23:23
Post #3

QUOTE (SebastianG @ Nov 15 2006, 22:21)
Looks like you want to do backward adaptive prediction. Are you sure you want a decoder to perform the same calculations (a Cholesky decomposition every k samples)? This is pretty time consuming, isn't it?


I'm using a relatively low order (8 intra-channel / 4 inter-channel). On top of that predictor is a
cascaded LMS structure with a maximum order of 1280. This runs at about 1x realtime on my Athlon 1200,
so yes, very slow indeed.
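
For reference, one stage of such a cascade behaves roughly like a normalized LMS filter whose residual feeds the next stage. This is only a sketch with illustrative names, not the actual code:

CODE
  #include <vector>

  // one normalized-LMS stage: predicts the input from its own history and
  // adapts its weights from the prediction error; in a cascade the error
  // (residual) of one stage becomes the input of the next stage
  struct NLMSStage
  {
      std::vector<double> w;     // filter weights
      std::vector<double> hist;  // past inputs, hist[0] = most recent
      double mu;                 // adaptation step size
      double eps;                // avoids division by zero

      NLMSStage(int order, double step)            // order >= 1
          : w(order, 0.0), hist(order, 0.0), mu(step), eps(1e-6) {}

      // returns the prediction error (residual) for one input sample
      double process(double x)
      {
          double pred = 0.0, energy = eps;
          for (size_t i = 0; i < w.size(); ++i)
          {
              pred   += w[i] * hist[i];
              energy += hist[i] * hist[i];
          }
          double err = x - pred;

          // normalized LMS weight update
          double g = mu * err / energy;
          for (size_t i = 0; i < w.size(); ++i) w[i] += g * hist[i];

          // shift the history and insert the new sample
          for (size_t i = hist.size(); i-- > 1; ) hist[i] = hist[i - 1];
          hist[0] = x;
          return err;
      }
  };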

QUOTE
Also, if you want your compressed files to be machine-independent, you need to make sure that the calculations you perform give the exact same results on every machine. (This is not the case if you rely on an FPU for these calculations.)


The current version works flawlessly, but it's not machine-independent, you're right. It's mainly a proof
of concept and it already surpasses LA. Do you know a way to make the FPU code machine-independent,
or is this simply not possible?

QUOTE
Perhaps a sliding window approach helps.


I think the current approach already acts like a window with exponentially decaying weights.
Or do you have something different in mind, e.g. a rectangular window of fixed length?
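
If you mean a rectangular window over the last W samples, the update would look roughly like this instead (same style as the loop above, just as a sketch; 'newest' and 'oldest' are the history vectors entering and leaving the window):

CODE
  // sliding (rectangular) window instead of exponential forgetting:
  // add the contribution of the newest history vector and remove the
  // one that just dropped out of the window of length W
  for (int i = 0; i <= maxorder; i++)
  {
    for (int j = i; j <= maxorder; j++)
      covar[i][j] += newest[i] * newest[j] - oldest[i] * oldest[j];
  }
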
AstralStorm
post Nov 15 2006, 23:41
Post #4

QUOTE (pest @ Nov 15 2006, 23:23)
The current version works flawlessly, but it's not machine-independent, you're right. It's mainly a proof
of concept and it already surpasses LA. Do you know a way to make the FPU code machine-independent,
or is this simply not possible?


Well, you could try to detect various FPU problems at compile time. I mean things like a non-IEEE floating-point unit with excess precision, testing the various trig functions you're using, and so on.
You could also strengthen it with an install-time losslessness test.
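
For the losslessness test, even a simple encode/decode round-trip on a fixed reference signal at install time would already catch a lot. A sketch, where encode_block and decode_block are hypothetical stand-ins for the codec's real entry points:

CODE
  #include <cstdint>
  #include <vector>

  // hypothetical codec entry points (stand-ins for the real ones)
  std::vector<uint8_t> encode_block(const std::vector<int32_t> &pcm);
  std::vector<int32_t> decode_block(const std::vector<uint8_t> &bits);

  // install-time check: encode a deterministic reference signal, decode it
  // again and verify the samples match bit-exactly; comparing the encoded
  // bytes against a reference bitstream produced on another machine would
  // additionally check machine independence
  bool losslessness_selftest()
  {
      std::vector<int32_t> ref(4096);
      uint32_t state = 12345;
      for (size_t i = 0; i < ref.size(); ++i)
      {
          state = state * 1664525u + 1013904223u;                 // integer-only LCG
          ref[i] = static_cast<int32_t>(state >> 8) - (1 << 23);  // ~24-bit samples
      }
      return decode_block(encode_block(ref)) == ref;
  }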


pest
post Nov 21 2006, 17:53
Post #5

I've recently realised that a proper solution to this problem leads to complex
algorithms involving learning functions and neural networks.
My brute-force solution is to iterate with a bisection-style method
towards a local minimum of the quadratic error and use that learning rate
for the complete block. This should (!) be better than nothing.
Good to know I'm only in my first semester.
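
Something along these lines, just as a sketch (a ternary-style variant of the interval search; block_error is a hypothetical helper that returns the squared prediction error of the current block for a given decay, and the interval shrinking assumes the error is reasonably unimodal over that range):

CODE
  // hypothetical helper: squared prediction error of the current block
  // when the covariance update uses the given forgetting factor
  double block_error(double decay);

  // brute-force per-block search: shrink [lo, hi] towards a local
  // minimum of the quadratic error and return the midpoint
  double find_decay(double lo, double hi, int iters)
  {
      for (int it = 0; it < iters; ++it)
      {
          double m1 = lo + (hi - lo) / 3.0;
          double m2 = hi - (hi - lo) / 3.0;
          if (block_error(m1) < block_error(m2))
              hi = m2;   // minimum lies in [lo, m2]
          else
              lo = m1;   // minimum lies in [m1, hi]
      }
      return 0.5 * (lo + hi);
  }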

This post has been edited by pest: Nov 21 2006, 17:54