TAK 2.1.0 - Beta release
TBeck
post Nov 30 2010, 14:41
Post #1


TAK Developer





Beta release 1 of TAK 2.1.0 ((T)om's lossless (A)udio (K)ompressor)

It consists of:

- TAK Applications 2.1.0 Beta 1
- TAK Winamp plugin 2.1.0 Beta 1
- TAK Decoding library 2.1.0 Beta 1

The final release will additionally contain the SDK.

Download:

Download link removed. Beta 2 has been released.

What's new

This release adds a new user-selectable codec, which significantly improves the compression efficiency of LossyWav-processed files. Files compressed with this codec cannot be decoded by earlier versions of Tak, Takc, in_tak and tak_deco_lib! The default codec remains unchanged and is therefore backward compatible with TAK V2.0.0.

Improvements:

- New additional codec that improves the compression efficiency of LossyWav-processed files by up to about 2 percent (relative to the original file size) for the quality setting -q5.0 (less or more for other settings). It supports any block size that is an integer multiple of 256 samples. Please don't specify the -fsl512 option at the command line: while this was required for the standard codec, it will severely hurt the performance of the new dedicated LossyWav codec. Another advantage of the new codec: you will not lose much compression if LossyWav decides to remove no bits, as can happen with, for instance, some low-amplitude files with little signal complexity. Simply specify -cLW at the command line to activate the new codec (see the example below). Previously it wasn't advantageous to use presets higher than -p2m when encoding LossyWav files. That's no longer true; you may even benefit from -p4m.
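
A typical two-step invocation might look like the following sketch. Only -q5.0, -cLW and -p4m are taken from this post; the Takc encode switch -e, the .lossy.wav output name and the argument order are assumptions based on the tools' usual command-line conventions, so check the documentation of the versions you use:

CODE
lossyWAV album.wav -q 5.0
Takc -e -p4m -cLW album.lossy.wav

Note that -fsl512 is deliberately omitted, as recommended above.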
Modifications:
- The file info function now also shows the name of the codec used to compress the file. The new codec is called "3 LossyWav (TAK 2.1)".
- Moved the verify option from the details dialog to the general compression options dialog.

Known issues:

- If you use pipe decoding and the application reading the pipe is terminated before the whole file has been read, TAKC may get into an endless loop and has to be killed manually with the Task Manager. I don't think this is a big issue, but I will try to fix it in one of the next versions. BTW: Big thanks to shnutils for testing the pipe decoding!
- There seem to be some compatibility issues with pipe decoding to some other applications ("crc1632.exe" has been reported). I will try to fix this in the next release.

Results

Here are some compression results for my primary test corpus. First a comparison of different codecs and LossyWav quality settings.

Comparison of Codecs
CODE
         FLAC 1.2.1 TAK 2.0    TAK 2.1    Advantage over
         -8         -p2m       -p4m       FLAC       TAK 2.0

-q0.0    20.61      19.07      17.25       3.36       1.82
-q2.5    27.43      25.95      23.93       3.50       2.02
-q5.0    33.26      31.78      29.62       3.64       2.16
-q7.5    38.79      37.28      35.03       3.76       2.25

Compression in percent relative to the original file size.

Sometimes LossyWav decides not to remove any bits from a file, or only very few. Then it can happen that the LossyWav mode of the codec is less efficient than the standard mode. To test this I used a worst-case scenario: I compressed my test corpus with TAK's standard (-cStd) and LossyWav (-cLW) codecs, but without prior processing by LossyWav.

Compression of unprocessed files
CODE
         TAK 2.1    TAK 2.1
         -cStd      -cLW       Loss

-p0      58.74      58.99      -0.25
-p1      57.84      57.73       0.11
-p2      56.90      57.00      -0.10
-p3      56.36      56.44      -0.08
-p4      56.02      56.06      -0.04
-p4m     55.88      55.97      -0.09

Since the presets of the two codecs are constructed slightly differently, they are not directly comparable. But I think it is safe to say that the average loss is usually not bigger than about 0.1 percent.

Encoding and decoding speeds are close to those of the standard codec, therefore I conducted no tests.

Beta testing

The beta version has already gone through extensive testing performed by my automatic scripts. Please try the beta release and report any bugs in this thread.

Thanks for testing and have fun

Thomas

This post has been edited by TBeck: Dec 4 2010, 01:42
_mē_
post Nov 30 2010, 19:24
Post #2








Nice that TAK is still alive.
Any plans for a Linux version?
viktor
post Nov 30 2010, 21:40
Post #3








How about the planned FLAC merger?
Nick.C
post Nov 30 2010, 22:31
Post #4


lossyWAV Developer





Thomas,

Many thanks for your efforts to make lossyWAV processed audio compress more efficiently!

n.b. It should be noted that these tests were carried out using an unreleased beta of lossyWAV which allows the duration of the codec-block to be changed from the default 10msec. This beta (or the one after it....) will be released in due course.


--------------------
lossyWAV -q X -a 4 --feedback 4| FLAC -8 ~= 320kbps
Destroid
post Dec 1 2010, 13:18
Post #5








A good release; TAK is still very strong in efficiency and uniquely devoted to its own agenda.

UNIX question: I was led to believe TAK runs uneventfully under WINE; please correct me if that assumption is wrong.

FLAC merger question: The main argument for a merger was to make the 'ultimate' lossless format (the better-known FLAC combined with TAK's proficiencies). It seems a universal lossless format fails to garner much public interest, as shown by WMA Lossless, ALAC and (ultimately) MPEG-4 ALS.

I support TAK having the ability to handle the area between lossless and lossy. It makes sense to compromise for space-savings with minimal artifact-induced side-effects.

I do have one question of my own for TAK:

The OpenCL FLAC (previously CUDA FLAC) totally rules in compression speed. It was intimated that other lossless codecs could not be parallelized by design. Is this true of TAK as well?

Thanks (again) Thomas for making the holidays merrier! smile.gif


--------------------
"Something bothering you, Mister Spock?"
TBeck
post Dec 2 2010, 23:45
Post #6


TAK Developer





QUOTE (_mē_ @ Nov 30 2010, 19:24) *
Any plans for a Linux version?

QUOTE (viktor @ Nov 30 2010, 21:40) *
how about the planned flac merger?

Well, I have learned from the past...

I will not promise anything and will only announce new features which I have already begun to work on. Development will proceed silently unless I need support from the users.

QUOTE (Nick.C @ Nov 30 2010, 22:31) *
n.b. It should be noted that these tests were carried out using an unreleased beta of lossyWAV which allows the duration of the codec-block to be changed from the default 10msec. This beta (or the one after it....) will be released in due course.

The results above are for the default block size of 512 samples. 256 surely would further improve the compression. smile.gif

QUOTE (Destroid @ Dec 1 2010, 13:18) *
The OpenCL FLAC (previously CUDA FLAC) totally rules in compression speed. It was intimated that other lossless codecs could not become parallel by design. Is this true with TAK as well?

Thanks (again) Thomas for making the holidays merrier! smile.gif

Nice to hear from you! rolleyes.gif

TAK's design is equally suited for parallel processing. When it comes to the processing of 24-bit audio, it's even better suited than FLAC, because it will still work with 16*16=32-bit multiplications while FLAC usually has to switch to a higher resolution, which currently isn't supported very well by CUDA.
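
To illustrate the bit-width point, here is a hypothetical C sketch (not TAK's or FLAC's actual predictor code): a product of two 16-bit values always fits into 32 bits, while a 24-bit sample times a 16-bit coefficient can need up to about 40 bits and therefore forces wider arithmetic.

CODE
/* Hypothetical illustration only -- not actual TAK or FLAC code. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int32_t coeff    = -32768;     /* worst-case 16-bit coefficient */
    const int32_t sample16 = -32768;     /* worst-case 16-bit sample      */
    const int32_t sample24 = -8388608;   /* worst-case 24-bit sample      */

    int64_t p16 = (int64_t)coeff * sample16;  /* 2^30: fits in 32 bits    */
    int64_t p24 = (int64_t)coeff * sample24;  /* 2^38: needs wider math   */

    printf("16x16 product %lld fits in int32: %s\n", (long long)p16,
           (p16 >= INT32_MIN && p16 <= INT32_MAX) ? "yes" : "no");
    printf("24x16 product %lld fits in int32: %s\n", (long long)p24,
           (p24 >= INT32_MIN && p24 <= INT32_MAX) ? "yes" : "no");
    return 0;
}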

Currently I prefer to improve TAK's encoding speed through multi-core support and some more assembler optimizations. Multi-core support is planned for the next release (I just bought a second-hand CPU to have better testing opportunities).

I have spent the last days working on assembler optimizations. Since TAK is already extremely optimized, there is not much to gain this way. Only the SSSE3 instruction set yielded a notable improvement of about 10 percent. Since SSSE3 (note the three 'S') isn't supported by AMD, only Intel users will benefit from those optimizations.

Possibly I will release another beta with those optimizations. This way TAK 2.1 will also be of interest to a lot more users (who don't use LossyWav).
alvaro84
post Dec 3 2010, 10:15
Post #7








QUOTE (TBeck @ Dec 2 2010, 23:45) *
...Only the SSSE3 instruction set yielded some notable improvement of about 10 percent. Since SSSE3 (note the three 'S') isn't supported by AMD, only intel users will benefit from those optimizations.


Does this affect decoding too?

For encoding I like the idea of using multiple cores to encode the same file; this way HDD seeking can't be as much of a bottleneck as when I use several instances of the same encoder to utilize more threads.

(And thanks for thinking of us who don't use lossywav smile.gif)
TBeck
post Dec 3 2010, 13:12
Post #8


TAK Developer





QUOTE (alvaro84 @ Dec 3 2010, 10:15) *
Does this affect decoding too?

No. From my experience with the encoder tuning I don't think SSE2/SSE3/SSSE3 would help much, but maybe I will try it sometime.

QUOTE (alvaro84 @ Dec 3 2010, 10:15) *
(And thanks for thinking of us who don't use lossywav smile.gif)

Sorry, I was very insensitive... blush.gif

Currently the new tuned encoder is undergoing my comprehensive validation procedure. This will take several hours. Then there will be a Beta 2 release.
Zarggg
post Dec 3 2010, 16:41
Post #9








QUOTE (TBeck @ Dec 3 2010, 07:12) *
QUOTE (alvaro84 @ Dec 3 2010, 10:15) *
Does this affect decoding too?

No. From my experience with the encoder tuning I don't think SSE2/SSE3/SSSE3 would help much, but maybe I will try it sometime.

There was discussion a couple of months back, either here or on XMPlay's forums, that using SSE etc. optimizations may actually add a lossy aspect to the decoding process (since they take processor-specific "shortcuts"). Is there any truth to this?

This post has been edited by Zarggg: Dec 3 2010, 16:42
alvaro84
post Dec 3 2010, 17:23
Post #10








QUOTE (TBeck @ Dec 3 2010, 13:12) *
No. From my experience with the encoder tuning I don't think SSE2/SSE3/SSSE3 would help much, but maybe I will try it sometime.


Fine with me, decoder is fast enough smile.gif
I'm still happy that SSSE3 in my CPU gets some use. I'm such a geek blink.gif laugh.gif
A bit too many S' though laugh.gif

And I don't feel hurt smile.gif I'm just afraid of lossywav, as usual (even though it's just another lossy thing and I actually listen lossy w/o any problem) laugh.gif
Nick.C
post Dec 3 2010, 18:45
Post #11


lossyWAV Developer





QUOTE (alvaro84 @ Dec 3 2010, 16:23) *
I'm just afraid of lossywav, as usual (even though it's just another lossy thing and I actually listen lossy w/o any problem) laugh.gif
.... could you please clarify that statement? unsure.gif


--------------------
lossyWAV -q X -a 4 --feedback 4| FLAC -8 ~= 320kbps
TBeck
post Dec 4 2010, 01:54
Post #12


TAK Developer





QUOTE (Zarggg @ Dec 3 2010, 16:41) *
There was discussion a couple months back either here or on XMPlay's forums that using SSE/etc optimizations may actually add a lossy aspect to the decoding process (since it takes processor-specific "shortcuts"). Is there any truth to this?

This depends on the specific implementation. In most cases it is possible to obtain the same accuracy with an SSEx implementation if you use the same algorithm and data types. There is one exception: while SSE2 and above support the same 64-bit floating point format as the FPU, an FPU implementation can yield more accurate results, because it internally works with 80 bits. But whether this matters depends on the algorithm.
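
A minimal C sketch of the precision difference (an illustration only; it assumes a compiler where long double maps to the 80-bit x87 format with its 64-bit mantissa -- on compilers where long double is just a 64-bit double, e.g. MSVC, both lines print 1):

CODE
#include <stdio.h>

int main(void)
{
    const double tiny = 1.0 / (1ULL << 60);  /* 2^-60, below double's 53-bit mantissa */

    /* volatile forces each value through its in-memory format */
    volatile double      d  = 1.0;
    volatile long double ld = 1.0L;

    d  = d  + tiny;   /* 53-bit mantissa: rounds back to exactly 1.0       */
    ld = ld + tiny;   /* 64-bit x87 mantissa: keeps the tiny contribution  */

    printf("double:      %.20g\n",  (double)d);
    printf("long double: %.20Lg\n", (long double)ld);
    return 0;
}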

This post has been edited by TBeck: Dec 4 2010, 01:54
alvaro84
post Dec 4 2010, 09:22
Post #13








QUOTE (Nick.C @ Dec 3 2010, 18:45) *
QUOTE (alvaro84 @ Dec 3 2010, 16:23) *
I'm just afraid of lossywav, as usual (even though it's just another lossy thing and I actually listen lossy w/o any problem) laugh.gif
.... could you please clarify that statement? unsure.gif


Oh, we exchanged privs about it a year ago or so. I fear getting lossy things disguised as lossless. I hope their size will make it apparent smile.gif
Other than that, even I'd use it, except that I'd be afraid of mixing the files with true lossless ones. My memory is not perfect, you know huh.gif
(On top of that, you also wrote how easy it is to spot these files. Thanks for reminding me again!)

This post has been edited by alvaro84: Dec 4 2010, 09:26
