--blocksize: info on its impact on strength and how to use correctly?, Was: --blocksize (TOS #6)
_m_
post Feb 6 2012, 21:02
Post #1





Group: Members
Posts: 231
Joined: 6-April 09
Member No.: 68706



Recently I discovered --blocksize.
Bryant, could you tell some more about its impact on strength and how to use it correctly?
I saw some topics mentioning that setting it very low can have positive impact on some sources.
I tried the opposite, setting it to the maximum on my quick-and-dirty high-resolution, multichannel dataset, and it turned out slightly better than the default: 0.05% on average, winning on 22 of 29 samples. Still, it was 0.71% worse than Flake. Taking the best result of default vs. --blocksize reduced Flake's advantage to 0.63%.
bryant
post Feb 7 2012, 21:01
Post #2


WavPack Developer


Group: Developer (Donating)
Posts: 1297
Joined: 3-January 02
From: San Francisco CA
Member No.: 900



Increasing the block size from the default will normally improve compression slightly (as you've seen) because the overhead of the block header becomes a smaller percentage of the total data size. However, there are negative implications, so I would not recommend it unless you are just archiving and going for the best possible compression ratio. The problem is that the resulting files require more memory to play (especially multichannel files), and some players (like FFmpeg) might refuse to play them at all. I have even been considering reducing the default block size for some modes to shrink the playback memory footprint, even though it might reduce compression slightly.
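To see why the header overhead shrinks with larger blocks, here is a rough back-of-the-envelope sketch. The 32-byte header size and 16-bit stereo framing are assumptions used for illustration, not exact figures from this thread:

```shell
# Rough header-overhead estimate for various block sizes (in samples).
# Assumes a 32-byte block header and 16-bit stereo (4 bytes per sample frame);
# the numbers are illustrative, not exact WavPack internals.
for n in 1024 4096 22050 131072; do
  awk -v n="$n" 'BEGIN {
    data = n * 4                       # bytes of raw audio in the block
    printf "%6d samples: header is %.4f%% of the block\n", n, 100 * 32 / (32 + data)
  }'
done
```

The overhead falls from fractions of a percent to essentially nothing as the block grows, which is consistent with the modest ~0.05% gain reported above for maximum block size.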

Setting a low block size improves compression only in situations where there is redundancy in the LSBs of the audio samples. The only two cases of this that I know of are files from LossyWAV and decoded HDCD files, but there might be others. Where the amount of redundancy changes often, using smaller blocks helps WavPack take advantage of it, and of course with LossyWAV you simply want the block size to match the block size that LossyWAV is using. Note that the --merge-blocks option must be used with --blocksize to get this.
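As a concrete sketch of that pairing (file names are placeholders, and 512 samples is only an assumption about what your LossyWAV settings use; match whatever block length your LossyWAV run actually produced):

```shell
# Process with lossyWAV first, then compress the result with a matching
# WavPack block size. --merge-blocks lets WavPack recombine adjacent blocks
# whose LSB redundancy turns out to be the same.
lossywav input.wav -q standard
wavpack --blocksize=512 --merge-blocks input.lossy.wav
```

The key point is that the WavPack block boundaries line up with LossyWAV's, so each WavPack block sees a uniform amount of LSB redundancy.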

The easiest way to get better compression is to use the -x switch. Although this can make encoding very slow, there is no cost during decoding. I would start with -x4 to get an idea of how much improvement you might get. Of course, -h and -hh improve compression too, but those will result in somewhat slower decoding.
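For example, using the flags discussed in this thread (file names are placeholders; pick one invocation rather than running all three on the same file):

```shell
wavpack -x4 input.wav        # default mode + extra processing; decode speed unchanged
wavpack -h -x4 input.wav     # high mode: better ratio, somewhat slower decoding
wavpack -hh -x6 input.wav    # very-high mode + maximum extra effort (slow to encode)
```

The -x levels trade encoding time for ratio only, while -h/-hh also cost decoding speed.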
_m_
post Feb 7 2012, 21:40
Post #3





Thanks for the answer. All the numbers that I gave were with -hhx6, which is what I call 'rather fast' but 'weak/very weak'.
I'll have to consider increasing the block size, do some testing, etc. While the main use is archival, portability is worth something too, and the gains seem tiny.

This post has been edited by _m_: Feb 7 2012, 21:41
bryant
post Feb 8 2012, 20:50
Post #4





QUOTE (_m_ @ Feb 7 2012, 12:40)
All the numbers that I gave were with -hhx6.

Ah, okay. I'm surprised that Flake gives better compression than that. I'm on Flake 0.10-3 (with -12) and it's consistently worse than WavPack's best, but perhaps there were compression improvements in later versions.

Anyway, I'm glad WavPack is working out for you. :)
