Yalac - Comparisons, How the evaluation release compares to other compressors

SebastianG · posted Sep 17 2006, 13:31 · Post #301
Group: Developer · Posts: 1318 · Joined: 20-March 04 · From: Göttingen (DE) · Member No.: 12875

QUOTE (TBeck @ Sep 17 2006, 03:34) *
"sh_2444" contains 5 files with 24 bit and 44 or 48 kHz.

And these are compressed to around 60% of the original file sizes?
Unbelievable!
Are the lower bits of the samples constant?
Is the audio signal very quiet?
Doesn't 24/44 usually compress to around 80%?
(taking typical recording levels into account)

TBeck (TAK Developer) · posted Sep 17 2006, 14:52 · Post #302
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

QUOTE (SebastianG @ Sep 17 2006, 14:31) *
QUOTE (TBeck @ Sep 17 2006, 03:34) *

"sh_2444" contains 5 files with 24 bit and 44 or 48 kHz.

And these are compressed to around 60% of the original file sizes?
Unbelievable!
Are the lower bits of the samples constant?
Is the audio signal very quiet?

The lower bits are not constant, but the amplitudes are quite small. For me it was a bit difficult to find files with 24 bit and only 44 or 48 kHz. I am not too happy with such a small and possibly unusual set.

But I found it OK to post the results, because the comparison with FLAC should make it clear that Yalac isn't doing wonders on such files. Possibly I will add some other compressors to the comparison.

QUOTE (SebastianG @ Sep 17 2006, 14:31) *
Doesn't 24/44 usually compress to around 80%?
(taking typical recording levels into account)

As a rule of thumb I would say:

If music is sampled at 44/48 kHz, you can expect compressors similar to mine to remove about the same number of bits from the same music sampled at 16 or 24 bit. If it can remove 8 bits, you will achieve a 50 percent compression ratio for 16-bit files and 67 percent for 24-bit files.

But this statement is based on only a small selection of 24-bit files; I may be wrong.
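The rule of thumb above is just a one-line calculation. A purely illustrative sketch (the helper `expected_ratio` is hypothetical, not part of Yalac):

```c
#include <assert.h>

/* Illustrative model of the rule of thumb: if the coder can remove
 * removed_bits bits of predictable information per sample, the
 * compressed size is the remaining bits over the original width. */
double expected_ratio(int sample_bits, int removed_bits)
{
    return (double)(sample_bits - removed_bits) / (double)sample_bits;
}
```

Removing 8 bits thus predicts 8/16 = 50 percent for 16-bit input but 16/24 ≈ 67 percent for 24-bit input, matching the figures above.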

If someone knows a place to download some more 48K/24-bit samples of high quality (not sampled with a cheap and noisy soundcard), please let me know.

Thomas

pest · posted Sep 17 2006, 15:06 · Post #303
Group: Members · Posts: 208 · Joined: 12-March 04 · From: Germany · Member No.: 12686

QUOTE
If someone knows a place to download some more 48K/24-bit samples of high quality (not sampled with a cheap and noisy soundcard), please let me know.


This isn't very useful for your situation, but I hope posting this link is still OK.
It contains samples of different and non-standard WAVE files...
Perhaps it helps you improve the wave reader.

http://www-mmsp.ece.mcgill.ca/Documents/Au...VE/Samples.html

This post has been edited by pest: Sep 17 2006, 19:44

SebastianG · posted Sep 17 2006, 15:18 · Post #304
Group: Developer · Posts: 1318 · Joined: 20-March 04 · From: Göttingen (DE) · Member No.: 12875

QUOTE (TBeck @ Sep 17 2006, 15:52) *
As a rule of thumb I would say:

If music is sampled at 44/48 kHz, you can expect compressors similar to mine to remove about the same number of bits from the same music sampled at 16 or 24 bit. If it can remove 8 bits, you will achieve a 50 percent compression ratio for 16-bit files and 67 percent for 24-bit files.

Yeah, this is what I had in mind (apart from my bad guesswork), since the LSBs are very unpredictable at higher resolutions.

QUOTE (TBeck @ Sep 17 2006, 15:52) *
If someone knows a place to download some more 48K/24-bit samples of high quality (not sampled with a cheap and noisy soundcard), please let me know.

You could decode a high-quality MP3 to 24 bits ;)

madorangepanda · posted Sep 17 2006, 15:27 · Post #305
Group: Members · Posts: 60 · Joined: 11-May 05 · Member No.: 21998

QUOTE (TBeck @ Sep 17 2006, 14:52) *
If someone knows a place to download some more 48K/24-bit samples of high quality (not sampled with a cheap and noisy soundcard), please let me know.

You may be able to find stuff at archive.org. A lot of it has been converted to 44.1 kHz and 16 bit, though.
Archive.org 24-bit FLACs (some of these may be 48 kHz).

TBeck (TAK Developer) · posted Sep 17 2006, 16:28 · Post #306
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

QUOTE (pest @ Sep 17 2006, 16:06) *
This isn't very useful for your situation, but I hope posting this link is still OK.
It contains samples of different and non-standard WAVE files...
Perhaps it helps you improve the wave reader.

http://www-mmsp.ece.mcgill.ca/Documents/Au...VE/Samples.html


QUOTE (madorangepanda @ Sep 17 2006, 16:27) *
You may be able to find stuff at archive.org. A lot of it has been converted to 44.1 kHz and 16 bit, though.
Archive.org 24-bit FLACs (some of these may be 48 kHz).

Thanks!


Here is an update of my high-resolution comparison, with more participants:

Yalac V0.11a, slightly improved over V0.11
FLAC 1.1.2
MPEG-4 ALS RM17. Parameters: -7
Monkey's Audio 3.99. Mode: High
OptimFROG 4.600ex. Parameters: --mode high --optimize fast

I am only testing the compression modes I am currently interested in.

CODE
          Yalac V0.11a                                    |  FLAC  | Monkey | Mpeg4  | Ofr    |
          Turbo  Fast   Light  Normal High   Extra  Insane|  -8    |  High  |  -7    | High   |
----------------------------------------------------------+--------+--------+--------+--------+
sh_2444                                                   |        |        |        |        |
Ratio:    58.53  58.23  57.98  57.87  57.76  57.64  57.57 |  60.28 |  57.77 |  57.45 |  57.64 |
EncoTime:  6.33   8.48  10.88  15.05  19.65  27.70  86.32 |  96.54 |  20.96 | 877.74 |  63.29 |
----------------------------------------------------------+--------+--------+--------+--------+
sh_2496                                                   |        |        |        |        |
Ratio:    54.58  54.33  54.21  54.18  53.84  53.79  53.76 |  57.92 |  53.88 |  53.65 |  53.50 |
EncoTime: 12.66  17.34  22.04  29.22  38.18  48.59 107.73 | 184.58 |  40.78 |2458.40 | 121.57 |
----------------------------------------------------------+--------+--------+--------+--------+

TBeck (TAK Developer) · posted Sep 21 2006, 04:41 · Post #307
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

Current progress (V0.12)

I hope I am not boring you with my reports. It's just a kind of reward for me to post about some progress after many lonely hours of work.

Done:

- Another final reconfiguration of the presets. Turbo now achieves about 0.1 percent better compression at the same speed. Some tiny improvements to some other presets.
- Removed evaluation level Extra, because it was too irritating to have so many options. Only evaluation level Max has been kept, and it is significantly faster than before. Simple to use: specify for instance -p0 for preset 0 (Turbo) with standard evaluation, and add 'm' for max evaluation: -p0m.
- Optimized the encoder for lower sampling rates from 8 to 32 kHz and verified its function.

To do (for V0.12):

- Add seek table. Might decrease compression by about 0.02 percent.

Probably not too exciting news. Just some steps further on my way to a public release.

BTW: How important is 8-bit support? I will implement it sooner or later, but is it really needed for a first release? The tuning would take some time.

Thomas

Shade[ST] · posted Sep 21 2006, 05:30 · Post #308
Group: Members · Posts: 1189 · Joined: 19-May 05 · From: Montreal, Canada · Member No.: 22144

8-bit is not important, IMO. I have yet to see 8-bit files on a computer capable of running TAK, anyway.

Gnerma · posted Sep 21 2006, 06:20 · Post #309
Group: Members (Donating) · Posts: 89 · Joined: 6-August 03 · From: Bakersfield, CA · Member No.: 8203

QUOTE (TBeck @ Sep 20 2006, 20:41) *
I hope I am not boring you with my reports. It's just a kind of reward for me to post about some progress after many lonely hours of work.

This certainly isn't the case for me, and I'm sure for many others like me who are very interested in what you're up to but might not be actively participating in the thread :) In fact, many coders could learn a thing or two from you about running a useful, open dialogue about what they have in development.

This post has been edited by Gnerma: Sep 21 2006, 06:21

Synthetic Soul · posted Sep 24 2006, 10:48 · Post #310
Group: Super Moderator · Posts: 4887 · Joined: 12-August 04 · From: Exeter, UK · Member No.: 16217

I have added FLAC 1.1.2_CVS, Flake SVN Rev.42 and WavPack 4.4a3 to my comparison.

The settings are not part of the core encoder set, so to view them you need to append "All=1" to the URL, i.e.:

http://www.synthetic-soul.co.uk/comparison...sp?All=1

IMHO the table is just too busy to easily make deductions. If I can get some time I may add a hack to limit the output to certain settings.

Until then, you may do better to download the table as CSV, open it in Excel, cut out the rows that don't interest you, and use "Data" > "Sort" to sort the table to your liking.


--------------------
I'm on a horse.

Destroid · posted Sep 24 2006, 21:15 · Post #311
Group: Members · Posts: 555 · Joined: 4-June 02 · Member No.: 2220

Yalac TURBO & TURBO extra :drool: Fastest encoding speeds and great size reduction.

Thanks for the multiple-sort ability of your table.


--------------------
"Something bothering you, Mister Spock?"

pest · posted Sep 27 2006, 12:16 · Post #312
Group: Members · Posts: 208 · Joined: 12-March 04 · From: Germany · Member No.: 12686

@TBeck

I have read that you cascade different predictors (up to 5, if I remember correctly).
Do you use a special weighting between the different stages (such as a sign-sign LMS filter)?

edit: spelling as always

This post has been edited by pest: Sep 27 2006, 12:17

TBeck (TAK Developer) · posted Sep 27 2006, 13:16 · Post #313
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

QUOTE (pest @ Sep 27 2006, 13:16) *
I have read that you cascade different predictors (up to 5, if I remember correctly).
Do you use a special weighting between the different stages (such as a sign-sign LMS filter)?

Sorry, I can't remember having said anything about 5 predictors.

Do you mean this statement: "...Possibly it works better if only one LPC filter is being used. But Yalac currently sends the signal through up to 4 different filters...."?

The signal may go through up to 4 filters:

1) Initial filtering
2) PreFilter (optional)
3) Channel decorrelation (optional)
4) Linear prediction

Each filter works on the output of the previous one. They do not work in parallel: no outputs of different filters are weighted and summed, and there is no feedback from the output of later filters to the input of earlier filters.

I would expect a weighting approach with different parallel filters to be most efficient if it adapts very fast to changes in the signal characteristics. Because it isn't efficient to store updated filter parameters and weights too frequently in the bit stream, you would have to perform the adaptation on both the encoder and decoder side to avoid the need to store the parameters.

But I don't want the decoder to perform any (continuous) adaptation, because 1) it would be slower and 2) I suppose it is far more probable to run into patent issues with such adaptive approaches. Hence absolutely no adaptation in the decoder, and only block-based (subframe-based) adaptation in the encoder.

pest · posted Sep 27 2006, 14:55 · Post #314
Group: Members · Posts: 208 · Joined: 12-March 04 · From: Germany · Member No.: 12686

QUOTE
Do you mean this statement: "...Possibly it works better if only one LPC filter is being used. But Yalac currently sends the signal through up to 4 different filters...."?

Yeah, I thought of predictors where you meant different filters; next time I'll read more carefully!

QUOTE
The signal may go through up to 4 filters:

1) Initial filtering
2) PreFilter (optional)
3) Channel decorrelation (optional)
4) Linear prediction


OK, then a cascaded weighting is not really useful if you only do one stage of linear prediction.


QUOTE
I would expect a weighting approach with different parallel filters to be most efficient if it adapts very fast to changes in the signal characteristics. Because it isn't efficient to store updated filter parameters and weights too frequently in the bit stream, you would have to perform the adaptation on both the encoder and decoder side to avoid the need to store the parameters.

I think that the weighting of a cascade is better, because it's more stable against sudden changes in signal characteristics. You can look at a cascade as a kind of parallel filter set too:

P0 = predictor of the first stage
P1 = predictor of the next stage

PW = weight0 * P0 + weight1 * P1        // weight the cascade
or
PW = weight0 * P0 + weight1 * (P0 + P1) // weight them in parallel

QUOTE
But I don't want the decoder to perform any (continuous) adaptation, because 1) it would be slower and 2) I suppose it is far more probable to run into patent issues with such adaptive approaches. Hence absolutely no adaptation in the decoder, and only block-based (subframe-based) adaptation in the encoder.

The algorithm I mentioned is extremely simple; it's used in MP4ALS, BTW.
Here's some C pseudo code:
CODE
weight0 = weight1 = weight2 = init_weight;

for (i = 0; i < NumSamples; i++)
{
  P0 = first stage predictor;
  P1 = second stage predictor;
  P2 = third stage predictor;

  /* fixed-point weighted sum of the stage outputs */
  PW = (weight0 * P0 + weight1 * P1 + weight2 * P2) >> Shift;

  Error = Input - PW;

  /* sign-sign update: move each weight one step in the
     direction that reduces the error */
  if (Error > 0)
  {
    (P0 < 0) ? weight0-- : weight0++;
    (P1 < 0) ? weight1-- : weight1++;
    (P2 < 0) ? weight2-- : weight2++;
  }
  else if (Error < 0)
  {
    (P0 < 0) ? weight0++ : weight0--;
    (P1 < 0) ? weight1++ : weight1--;
    (P2 < 0) ? weight2++ : weight2--;
  }
}

SebastianG · posted Sep 27 2006, 14:59 · Post #315
Group: Developer · Posts: 1318 · Joined: 20-March 04 · From: Göttingen (DE) · Member No.: 12875

QUOTE (TBeck @ Sep 27 2006, 14:16) *
1) Initial filtering
2) PreFilter (optional)
3) Channel decorrelation (optional)
4) Linear prediction
Each filter works on the output of the previous one.

And with what precision do you work here? Are the samples quantized to the original precision after each stage? If so, have you considered that the quantization noise adds up and is likely to decrease compression performance?

QUOTE (TBeck @ Sep 27 2006, 14:16) *
Hence absolutely no adaptation in the decoder, and only block-based (subframe-based) adaptation in the encoder.

I assume you're talking about forward-adaptive versus backward-adaptive prediction and that you're using a pure forward-adaptive scheme. Correct?

TBeck (TAK Developer) · posted Sep 27 2006, 15:20 · Post #316
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

QUOTE (pest @ Sep 27 2006, 15:55) *
I think that the weighting of a cascade is better, because it's more stable against sudden changes in signal characteristics. ...

Surely.

Possibly the best approach would be: use this backward adaptation, but also check against a forward predictor and use it if the backward predictor fails because of a too-fast change of the signal:

(backward prediction error) > (forward prediction error + size of the forward predictor coefficients)

To have the best of both worlds...
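As a sketch, this criterion is just a per-block cost comparison. All names and the bit-cost inputs here are hypothetical, not TAK's actual decision logic:

```c
#include <assert.h>

typedef enum { BACKWARD, FORWARD } pred_mode;

/* Pick the predictor whose coded cost for this block is smaller;
 * the forward predictor must also pay for transmitting its
 * coefficients in the bit stream. */
pred_mode choose_mode(long backward_error_bits,
                      long forward_error_bits,
                      long forward_coeff_bits)
{
    if (backward_error_bits > forward_error_bits + forward_coeff_bits)
        return FORWARD;   /* backward adaptation failed to track the signal */
    return BACKWARD;
}
```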

I know that the lack of fast adaptation sometimes makes Yalac perform worse than the compressors with backward adaptation. But I wanted to know what can be achieved with forward prediction (without continuous adaptation) only. And not to forget my preference for speed and my worries about patent trouble.

Possibly I will work on another (adaptive) encoder once the current encoder has been released...


QUOTE (pest @ Sep 27 2006, 15:55) *
The algorithm I mentioned is extremely simple; it's used in MP4ALS, BTW.

Thanks. I already looked at it.



QUOTE (SebastianG @ Sep 27 2006, 15:59) *
And with what precision do you work here? Are the samples quantized to the original precision after each stage? If so, have you considered that the quantization noise adds up and is likely to decrease compression performance?

I'm always using 14 bits. I could increase it to 15 bits, but this would make the encoder slower and would only give less than 0.05 percent better compression.

The output of each filter has the original precision. If necessary, it is again scaled down to 14 bits before being sent to the next filter.

QUOTE (SebastianG @ Sep 27 2006, 15:59) *
QUOTE (TBeck @ Sep 27 2006, 14:16) *

Hence absolutely no adaptation in the decoder, and only block-based (subframe-based) adaptation in the encoder.

I assume you're talking about forward-adaptive versus backward-adaptive prediction and that you're using a pure forward-adaptive scheme. Correct?

Exactly.

This post has been edited by TBeck: Sep 27 2006, 15:24

pest · posted Sep 27 2006, 15:26 · Post #317
Group: Members · Posts: 208 · Joined: 12-March 04 · From: Germany · Member No.: 12686

QUOTE (TBeck @ Sep 27 2006, 16:20)
Possibly I will work on another (adaptive) encoder once the current encoder has been released...

That's nice to know. I've hacked something together in one month and it's on par with Monkey's Audio compression-wise. Perhaps you can use some of my ideas in your future codec, or integrate adaptive predictors into Yalac for people who only want to archive their music.

SebastianG · posted Sep 27 2006, 15:27 · Post #318
Group: Developer · Posts: 1318 · Joined: 20-March 04 · From: Göttingen (DE) · Member No.: 12875

QUOTE (pest @ Sep 27 2006, 15:55) *
The algorithm I mentioned is extremely simple; it's used in MP4ALS, BTW.
Here's some C pseudo code:

That's news to me. I haven't found any information on the net regarding ALS doing backward-adaptive prediction. From what I know, it's strictly forward-adaptive, and nonlinearly quantized PARCOR coefficients are transmitted per subblock. At least this was the case a year ago. :huh:
QUOTE (Tilman Liebchen @ May 2005)
The MPEG-4 ALS codec uses forward-adaptive Linear
Predictive Coding (LPC) to reduce bit rates compared
to PCM, leaving the optimization entirely to
the encoder.


Cheers!

This post has been edited by SebastianG: Sep 27 2006, 15:34

pest · posted Sep 27 2006, 15:31 · Post #319
Group: Members · Posts: 208 · Joined: 12-March 04 · From: Germany · Member No.: 12686

QUOTE (SebastianG @ Sep 27 2006, 16:27)
That's news to me. I haven't found any information on the net regarding ALS doing backward-adaptive prediction. From what I know, it's strictly forward-adaptive, and nonlinearly quantized PARCOR coefficients are transmitted per subblock. At least this was the case a year ago. :huh:

Cheers!

You're right, the paper is only about forward-adaptive prediction, but the -z modes use a cascade of DPCM, RLS and LMS filters.

TBeck (TAK Developer) · posted Sep 27 2006, 16:27 · Post #320
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

QUOTE (pest @ Sep 27 2006, 16:26) *
That's nice to know. I've hacked something together in one month and it's on par with Monkey's Audio compression-wise.
...

You shouldn't tell me this! It's cruel! :huh: I have spent years getting on par with at least Monkey High!

pest · posted Sep 27 2006, 16:49 · Post #321
Group: Members · Posts: 208 · Joined: 12-March 04 · From: Germany · Member No.: 12686

QUOTE (TBeck @ Sep 27 2006, 17:27)
You shouldn't tell me this! It's cruel! :huh: I have spent years getting on par with at least Monkey High!

That was not my intention. That you've achieved such a high compression ratio with a forward-only predictor is really awesome.
And since I've been working in the field of compression for about 10 years, I'm able to code very fast :lol:
But as always, I'm too shy to publish something...

Shade[ST] · posted Sep 27 2006, 17:49 · Post #322
Group: Members · Posts: 1189 · Joined: 19-May 05 · From: Montreal, Canada · Member No.: 22144

QUOTE (pest @ Sep 27 2006, 11:49) *
QUOTE (TBeck @ Sep 27 2006, 17:27)
You shouldn't tell me this! It's cruel! :huh: I have spent years getting on par with at least Monkey High!

That was not my intention. That you've achieved such a high compression ratio with a forward-only predictor is really awesome.
And since I've been working in the field of compression for about 10 years, I'm able to code very fast :lol:
But as always, I'm too shy to publish something...

It's funny that we discover the best members / developers only after, like, 10 years of lurking :P

TBeck (TAK Developer) · posted Sep 27 2006, 23:58 · Post #323
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

QUOTE (pest @ Sep 27 2006, 17:49) *
QUOTE (TBeck @ Sep 27 2006, 17:27)
You shouldn't tell me this! It's cruel! :huh: I have spent years getting on par with at least Monkey High!

That was not my intention. That you've achieved such a high compression ratio with a forward-only predictor is really awesome.

Thanks. And I know that it's a lot easier with backward prediction.

But sometimes I ask myself whether my choice (forward prediction) was wrong... But that's OK; the same would happen now and then if I had used backward prediction...

QUOTE (pest @ Sep 27 2006, 17:49) *
But as always, I'm too shy to publish something...

Someone should push you a bit.

This post has been edited by TBeck: Sep 27 2006, 23:59

TBeck (TAK Developer) · posted Oct 1 2006, 18:02 · Post #324
Group: Developer · Posts: 1098 · Joined: 1-April 06 · Member No.: 29051

V0.12 is done

(but still needs some testing performed by myself)

Names:

Yalac is now called TAK! And that's also the file extension: ".tak".

Encoder:

- Speed up of some common functions.
- Removed frame partition resolution 256, which had been reintroduced in V0.11 for evaluation purposes. It wasn't able to achieve at least 0.05 percent better compression, which is my criterion for the inclusion of (very) slow encoder options.
- Removed frame partition resolution 32, because 64 is now nearly as fast.
- Removed frame partition search level normal, because it was quite useless. Now fast is the default; the old high can be selected by checking the option "Validate".

Presets:

(I call this the really, really, really... final configuration!)

- TURBO is using frame partition resolution 64 instead of 32 and a frame duration of 94 instead of 102 ms. Should compress up to 0.1 percent better without a significant speed penalty.
- FAST is using a frame duration of 125 instead of 102 ms. Should compress up to 0.05 percent better without a significant speed penalty.
- NORMAL activates channel decorrelation method Mid-Side, which is very useful for some files. Possible slowdown of about 5 percent.
- EXTRA is using PreFilter sensitivity medium instead of high. Shouldn't compress worse but decode a bit faster.
- Removed evaluation level EXTRA to avoid confusion caused by too many options. Only evaluation level MAX has been kept. On the command line you may append 'm' to activate it: -p1m for preset FAST with evaluation MAX (old syntax was -p12).

Functionality:

- It's now possible to copy the original wave file header and other non-audio information located at the end of the wave file into the compressed file. It can be restored by the decompressor to get a file which is totally bit-identical to the source (not only regarding the audio data, which is always identical). Currently the size of the non-audio data is limited to 1 MByte (the file format itself supports up to 16 MByte).

File format:

- Added meta data container.
- Added a seek table for fast random access to audio positions. Select a seek point distance of 2, 1 (default) or 0.5 seconds, or set one for each frame. Because of a very compact representation, you will not lose more than 0.02 percent compression with the highest setting (a seek point for each frame) when compressing CD audio. Support for user-defined seek positions (markers, possibly text-labeled) to save specific positions of interest will be implemented later.
- Added a new frame header type, "Seek info frame", which can optionally be inserted into the file to improve seeking on playback devices which cannot use the seek table. Otherwise they have to use the "Format info frames", which by default are inserted only every 2 seconds. Because "Format info frames" are considerably bigger than the new "Seek info frames", the latter should be used for this purpose.
- Switched to stronger checksums (CRCs) for some sensitive data.

Release:

I hope to send the new version to the following testers within the next 3 days (this time I may need a bit more time to verify the program's function, because there are so many modifications of the file format):

Destroid (welcome back!)
Josef Pohm
Robby (new tester)
Synthetic Soul
(Skymmer, if he sends me an email address...)

The only reason for this selection: I haven't heard from the other testers within the last 10 days, and I don't want to fill their mailboxes with new versions they currently may not need.

Any of them can send me an email anytime, and I will send them the current version.

What should be tested:

- Comparison with V0.11: Speed and compression performance of the presets. Probably no big surprises here, but nevertheless interesting, because V0.12 should show the same performance as the later first public release.
- Try the new option -wx ("wave extra data") to save and restore non-audio data from the original wave file.
- Not urgent: possibly perform another damage test. V0.12 may lose a bit more data than V0.11 if the first frame has been damaged, because my error handling code needs some more adaptations to the new file format.

Plans for V 0.13:

- You should be able to cancel encoding and decoding.
- Option to control the overwriting of existing files.
- Possibly better command line interface (more and clearer options).
- Remove bugs in the handling of the new file format. I'm quite sure you will find some in V0.12...
- Better (there is nearly none...) handling of file I/O errors.

Thomas

Shade[ST] · posted Oct 1 2006, 18:28 · Post #325
Group: Members · Posts: 1189 · Joined: 19-May 05 · From: Montreal, Canada · Member No.: 22144

QUOTE (TBeck @ Oct 1 2006, 13:02) *
V0.12 is done
I have only one thing I can say: WICKED