Yalac Evaluation and optimization: The effect of parameter variations on specific files
TBeck
post Apr 10 2006, 07:27
Post #1


TAK Developer


Group: Developer
Posts: 1098
Joined: 1-April 06
Member No.: 29051



Content

In this thread the testers should post results for specific variations of individual encoder options of Yalac (Working name for "Yet another lossless audio compressor"), which will be helpful for debugging and optimization purposes.

General comparisons of the preset modes with other compressors should go into the thread "Yalac - Comparisons".

What happened

Bad timing of my introduction (April 1st) forced an early publication of an evaluation release of Yalac, to prove that it really works. These are results from 8 forum members, who were so kind as to test the experimental release for me.

Many thanks!
Destroid
post May 22 2006, 21:41
Post #2

OK, this test uses the command-line Yalac encoder (YALACC) version 0.06 to test the -c1 switch (SSE). Once again I used the original 1985 pressing (DDD), and I timed using Igor Pavlov's TIMER.EXE (mentioned in another discussion), which indicates where the hard disk can bottleneck encoding/decoding speeds.
CODE
Dire Straits - Brothers in Arms, 584,178,044 bytes, duration 55:11
==================================================================
name/params            Ratio   EncTime/CPU%    DecTime/CPU%
---------------------  ------  --------------  --------------
Yalacc 0.06 -p0        46.15%  63.92x / 62%    79.90x / 46%
Yalacc 0.06 -p0 -c1    46.15%  67.31x / 66%    84.69x / 48%

Yalacc 0.06 -p1        45.70%  33.38x / 95%    84.46x / 55%
Yalacc 0.06 -p1 -c1    45.70%  33.60x / 95%    83.59x / 55%

Yalacc 0.06 -p2        45.41%  11.45x / 99%    82.29x / 61%
Yalacc 0.06 -p2 -c1    45.41%  11.84x / 99%    84.19x / 60%

Yalacc 0.06 -p3        45.34%   4.37x / 99%    82.61x / 59%
Yalacc 0.06 -p3 -c1    45.34%   4.47x / 99%    81.72x / 60%
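As a side note on how these x-rate figures are computed: the multiple is just seconds of audio divided by the wall-clock time TIMER.EXE reports. A minimal sketch, using the album duration above and a hypothetical wall-clock time of 51.8 s (back-derived for illustration, not a measured value):

```python
# Speed multiple = seconds of audio processed per second of wall time.
duration_s = 55 * 60 + 11   # "55:11" of audio -> 3311 seconds
wall_s = 51.8               # hypothetical wall-clock encode time
rate = duration_s / wall_s
print(f"{rate:.2f}x")       # -> 63.92x, matching the -p0 row above
```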

I found that using YALACC with the -c1 switch had an impact on encoding for all profiles; most interesting was that using -c1 on the command line for decoding usually had a positive effect as well. I'm not sure whether -c1 is even valid for decoding, but I re-ran the decoding process twice and the differences are measurable.

System = A64 3000+, 512MB, Caviar 80GB, Win2K

edit: Simplified the table, added specs

This post has been edited by Destroid: May 22 2006, 23:22


Synthetic Soul
post May 22 2006, 22:32
Post #3

QUOTE (Destroid @ May 22 2006, 21:41) *
@Synthetic Soul: I may simplify this table to make it more readable. Perhaps the kernel and user percentages are unnecessary for this kind of testing?
It's my understanding that Process = Kernel + User, and Global is what TimeThis would report. I was intending to just look at the Process (CPU only) and Global (CPU + IO) values.

Would the 99% Process Time (suggesting a 1% IO time) in the High/Insane encodes be due to disk caching or something?

Edit: Sorry, just realised. I suppose the increased encoding time just means that the time spent accessing the file becomes negligible compared to the raw processing of the data, i.e. the more work goes into compressing the data, the more negligible the actual IO time becomes.

This post has been edited by Synthetic Soul: May 22 2006, 22:46


Destroid
post May 22 2006, 23:23
Post #4

QUOTE (Synthetic Soul @ May 22 2006, 21:32) *
I was intending to just look at Process (CPU only) and Global (CPU + IO) values.


Sounds good to me.

Drat, I forgot my specs again! Also fixed.


TBeck
post May 23 2006, 02:21
Post #5

QUOTE (Destroid @ May 22 2006, 22:41) *
I found that using YALACC with the -c1 switch had an impact on encoding for all profiles; most interesting was that using -c1 on the command line for decoding usually had a positive effect as well. I'm not sure whether -c1 is even valid for decoding, but I re-ran the decoding process twice and the differences are measurable.

Thanks for this evaluation. I find it absolutely interesting, especially the low CPU usage when decoding.

About the effect of SSE: it can't be the reason for the speed difference between the two fast encoding passes:
QUOTE
CODE
Yalacc 0.06 -p0        46.15%    63.92x / 62%    79.90x / 46%
Yalacc 0.06 -p0 -c1    46.15%    67.31x / 66%    84.69x / 48%

Look at the CPU usage: it's higher in the second pass. That means that less time has been spent on disk IO: 100 - 66 = 34 vs. 100 - 62 = 38 percent. If encoding with SSE were faster, then the CPU portion of the processing time should be reduced (less than the 62% without SSE), assuming disk IO time was constant.
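A tiny sketch of the arithmetic (assuming, as above, that wall time not spent on the CPU is dominated by disk IO; the CPU percentages come from the quoted table):

```python
# TIMER's CPU% is the share of wall time the process spent on the CPU;
# treat the remainder as disk IO and other waits.
def io_share(cpu_percent):
    return 100 - cpu_percent

print(io_share(62))  # -p0 encode without -c1 -> 38 (percent waiting)
print(io_share(66))  # -p0 encode with -c1    -> 34 (percent waiting)
# The -c1 run waited less on IO, so its wall-clock advantage points at
# disk caching rather than an SSE speed-up.
```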

I am quite sure that the speed difference was caused by caching or other drive issues.

And to reveal a secret: the effect of SSE on preset FAST should be absolutely minimal, maybe a tenth of the effect on HIGH, which is already very small. And decoding does not use SSE at all...
TBeck
post Jun 5 2006, 03:18
Post #6

Decisions for V0.09

This time I need some advice, because I am not sure what would be best for the next version.

V0.08 brought the prefilter:

- Activated on presets HIGH and EXTRA.
- Gives on average about 0.20 to 0.25 percent better compression, up to 4 percent on specific files.
- Slows down encoding a bit, and decoding on average by 10 to 15 percent, up to 35 percent on specific files.

In my previous post I already wrote that the advantage of the PreFilter is probably caused by at least two separate effects on the signal.

The first effect produces the huge advantage (up to 4 percent) on some specific files.

The second effect is mostly responsible for the average advantage (0.20 to 0.25 percent) that the PreFilter achieves on most file sets. It helps here because it reduces the precision required for accurate storage of the predictor coefficients, which therefore need less space in the file stream when the PreFilter is applied.

To verify this, I implemented another, usually more efficient, way to store the predictor coefficients and tried it with the PreFilter turned off. And indeed I achieved about the same average compression advantage as with the PreFilter.

Details: previous versions stored the predictor coefficients directly, while the current implementation stores the parcor coefficients. Usually one would expect a bigger advantage for the parcor representation, but because of an initial transformation that my codec performs on the signal, the storage requirement for the direct coefficients is only slightly bigger.
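For readers unfamiliar with the two representations: parcor (reflection) coefficients can be converted to direct-form predictor coefficients with the textbook step-up recursion. The sketch below is a generic illustration of that relation, not Yalac's actual code, and sign conventions differ between texts:

```python
def parcor_to_direct(k):
    """Step-up recursion: reflection (parcor) coefficients k[0..p-1] ->
    direct-form predictor coefficients, using the convention
    a_m[i] = a_{m-1}[i] + k_m * a_{m-1}[m-i]."""
    a = []
    for m in range(1, len(k) + 1):
        km = k[m - 1]
        # Update lower-order coefficients, then append the new one.
        a = [a[i] + km * a[m - 2 - i] for i in range(m - 1)] + [km]
    return a

# Parcor values of a stable predictor are bounded in (-1, 1), which is
# one reason they usually quantize more gracefully than the unbounded
# direct-form coefficients.
print(parcor_to_direct([0.5, 0.2]))  # -> [0.6, 0.2]
```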

Advantages of the parcor representation over the PreFilter:

- While it doesn't encode faster than the PreFilter on the higher presets, it is much faster with lower predictor orders and can therefore be used for presets FAST and NORMAL to increase their compression.
- HIGH performs very well without the Optimize Quantization option and is therefore about 20 percent faster.

General disadvantages:

- The decoding speed of NORMAL, HIGH and EXTRA is reduced by about the same amount as with the PreFilter.

Time for some data of my two main test file sets:
CODE
rw           Fast    Normal  High    Extra
Compression   0.12    0.20    0.12    0.17 %
Encode       -6.59   -1.96   20.77  -27.65 %
Decode       -0.60   -6.68  -13.70  -10.25 %

songs        Fast    Normal  High    Extra
Compression   0.15    0.24    0.15    0.22 %
Encode       -3.00   -1.91   25.54  -29.06 %
Decode       -2.54   -7.35  -12.63  -11.44 %

The table compares the current implementation of V0.09 with V0.07 (which has no PreFilter). Compression is the improvement of the compression ratio; Encode and Decode show the change in encoding and decoding speed. Decoding has been performed without file output.

Short summary for V0.08 with PreFilter (I was too lazy to build another table): no difference (fields would contain 0.00) to V0.07 for presets FAST and NORMAL; HIGH would compress only 0.03 percent better than V0.09, but with 30 percent less encoding speed. Decoding speed on HIGH is identical to V0.09.

Preset HIGH of V0.09 uses neither the PreFilter nor the Optimize Quantization option. Both options have been moved to EXTRA.

What i like:

- The compression advantage for FAST and especially NORMAL.
- The 20 percent higher encoding speed for HIGH (30 percent better compared to V0.08).

Current preference for V0.09:

Use the configuration described above: the new way to store the coefficients, with the PreFilter moved to preset EXTRA. I don't want to remove it, because it has such a huge effect on some files (for instance Joseph Pohm's).

Question:

The main advantage of Yalac is its high decoding performance. But V0.08 and V0.09 both reduce the decoding speed. Is the compression advantage worth the speed penalty? Contrary to the PreFilter, there will be no option to disable the new way of storing the predictor coefficients.

Things to remember:

- The decoding performance has been measured without file output. With output turned on, the speed difference would be smaller on most systems.
- The reduction of the decoding performance is directly related to the predictor order the encoder has chosen. If you want better decoding performance, you can always reduce the maximum predictor order the encoder is allowed to use.

This post has been edited by TBeck: Jun 5 2006, 04:04
Shade[ST]
post Jun 5 2006, 03:28
Post #7

I would change the way the coefficients are stored only for Fast and Normal. For High, Extra and Insane, I would activate the prefilter.

These would be my choices because, in my experience, the prefilter hurt my files at Normal compression.
TBeck
post Jun 5 2006, 03:42
Post #8

QUOTE (Shade[ST] @ Jun 5 2006, 04:28) *
I would change the way the coefficients are stored only for Fast and Normal. For High, Extra and Insane, I would activate the prefilter.

Why? The new way to store the coefficients (parcor) helps any preset, and the PreFilter can always be applied additionally. And with the parcor coefficients in use, the PreFilter is only advantageous on very special and rare files, so it seems well placed in preset EXTRA. Most important: it would make my code far more complex if I used both representations of the predictor coefficients.
Shade[ST]
post Jun 5 2006, 03:50
Post #9

Ah! I didn't think about the more complex code issue; in that case, maybe you could forget the prefilter altogether? Or put it only in Extra and Insane, yes. Or, in any case, make it toggleable in the other modes (if you ever need to restrict certain options...)

I'm sure you will find the right solution -- you seem to be so dedicated and well-thought-through.

Good luck,
Tristan.
TBeck
post Jun 5 2006, 03:58
Post #10

QUOTE (Shade[ST] @ Jun 5 2006, 04:50) *
Ah! I didn't think about the more complex code issue; in that case, maybe you could forget the prefilter altogether? Or put it only in Extra and Insane, yes. Or, in any case, make it toggleable in the other modes (if you ever need to restrict certain options...)

I'm sure you will find the right solution -- you seem to be so dedicated and well-thought-through.

Thanks!

That's what I tried to describe above: the PreFilter toggleable, and activated by default only for presets EXTRA and INSANE.
Synthetic Soul
post Jun 5 2006, 12:17
Post #11

I've written numerous responses to this now, Thomas, but keep contradicting myself and going around in circles.

In my understanding the pre-filter was providing enough benefit to warrant its inclusion. It seems, on first impression, that the parcor method has benefits over using the pre-filter, but little to no disadvantage. With these two points in mind it would make sense to remove the pre-filter and implement the parcor method.

My main concern is the drop in speed for Fast and Normal, especially encoding. As it stands Yalac is in the top league for decompression and encoding speed, while providing better compression than its competitors. However, Yalac cannot compete with the codecs for which compression is paramount, e.g. Monkey's Audio and OptimFROG.

By my results (although I am all too aware that they are affected by IO issues) 0.08 Fast decompressed significantly faster than 0.07, but it seems 0.09 would now be slower than 0.07. I would like to see some more non-IO-affected figures on this.

There is a compression difference of 0.728% between Yalac 0.08 Insane and Fast (less than 15MiB on my 2GiB corpus). The difference between FLAC -0 and -8 is 4.706%; WavPack -hx to -f is 2.758%. I haven't quite worked out my point here, but it seems to be that there is less reason for Yalac users to opt for a slower compression mode, knowing that they may only achieve a small saving. It's the same reason that I stick with WavPack -h and don't use -x (a difference of 0.657% for my corpus). That said, many people seem to...
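The 15MiB figure is easy to verify; a one-line sanity check using the corpus size and spread quoted above:

```python
corpus_mib = 2 * 1024           # 2 GiB corpus expressed in MiB
spread_pct = 0.728              # Insane vs Fast compression spread
saving_mib = corpus_mib * spread_pct / 100
print(round(saving_mib, 1))     # -> 14.9, i.e. "less than 15MiB"
```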

Without all this tweaking even 0.03's Fastest (64.690%) was getting much better compression than FLAC -8 (66.028%), and was even better than WavPack default (65.750%). The way I see it you've already achieved what you need to with regard to compression.

I think it is great to squeeze out some more MB for High to Insane, but I think Yalac could begin to lose some of the appeal of Fast, and perhaps Normal. The fact that Fast could potentially decode and encode faster than FLAC -0 (its main "competitor" in this realm), while providing noticeably better compression, is very tempting.

I don't know, it's so difficult to call, but I guess my point is that I wouldn't want to see Fast or Normal losing any more speed.

As it stands I think the faster presets (Fast, Normal, maybe High) are Yalac's best selling points. I'm not sure whether 0.2-0.3% better compression matters as much to the target audience as the fact that it is the fastest, with considerably better compression.

I guess the benefit of the pre-filter is that it can be applied to some presets only, whereas a switch to parcor, one assumes, would have to be implemented for all presets. I think, if I could have a Fastest preset that just used Yalac's core/fastest techniques to compress (no doubt still compressing better than its direct competitors) I would be happiest.

I'm not overly happy with this response either (I see I'm still contradicting myself), but I'll leave it here at least as a starting point for further discussion.

Argh! I've just remembered the 20-25% improvement in encoding speed with High, which can't really be ignored; that is very impressive and extremely tempting... way too tempting. I'm also still trying to work out how bad a 6% reduction in encoding speed is for Fast...

Edit: OK, after re-reading the text above I suppose I would have to conclude that I would prefer the pre-filter was kept (if my understanding is correct). I would also like to see Normal worrying more about encoding and decoding speed than squeezing out more compression, even if it means making it compress slightly less well than currently... now there's a statement that should promote some discussion.

QUOTE (TBeck @ Jun 1 2006, 13:30) *
Edit: Just learned, that there is a difference between "lightens" and "enlightens"...
Yes, very much so, although I assumed "lightens" to mean that it took a weight off you, as in "Phew! That's a weight off my mind".

This post has been edited by Synthetic Soul: Jun 5 2006, 12:40


TBeck
post Jun 5 2006, 17:58
Post #12

QUOTE (Synthetic Soul @ Jun 5 2006, 13:17) *
I've written numerous responses to this now, Thomas, but keep contradicting myself and going around in circles.

This seems to reflect my own trouble with this decision...

But your and Shade[ST]'s posts have already enlightened me a bit!

I have to collect some more, and more accurate, data before I answer your post.
TBeck
post Jun 5 2006, 19:43
Post #13

Here is the updated data I promised:

- Encoding speed measured without file output.
- Corrected some errors in V0.09 data (especially preset HIGH).
- Only test set rw, because it seems more representative.
- More comparisons.

CODE
Absolute values for test set rw

Enco-Rate    Fast    Normal  High    Extra
V0.07        37.75   15.74    6.45    3.95
V0.08a       37.93   15.63    6.01    2.69
V0.09        37.76   15.41    7.87    3.94

Deco-Rate    Fast    Normal  High    Extra
V0.07        68.61   58.37   51.81   53.67
V0.08a       69.46   57.83   43.79   46.63
V0.09        67.62   53.47   45.43   41.10

Compression  Fast    Normal  High    Extra
V0.07        57.31   56.71   56.36   56.27
V0.08a       57.31   56.71   56.20   56.10
V0.09        57.19   56.51   56.24   56.02


Comparisons for test set rw (in percent)

V0.08 vs V0.07
             Fast    Normal  High    Extra
Compression   0.00    0.00    0.16    0.17
Encode        0.48   -0.70   -6.82  -31.90
Decode        1.24   -0.93  -15.48  -13.12

V0.09 vs V0.07
             Fast    Normal  High    Extra
Compression   0.12    0.20    0.12    0.25
Encode        0.03   -2.10   22.02   -0.25
Decode       -1.44   -8.39  -12.31  -23.42

V0.09 vs V0.08
             Fast    Normal  High    Extra
Compression   0.12    0.20   -0.04    0.08
Encode       -0.45   -1.40   28.84   31.65
Decode       -2.65   -7.54    3.75  -11.86
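The comparison tables appear to follow a simple convention: speed changes are relative (in percent), while compression changes are absolute differences of the ratios (a smaller ratio means a smaller file, so old minus new is the improvement). A sketch reproducing the "V0.09 vs V0.07" Fast column from the absolute values:

```python
def speed_change(new, old):
    """Relative speed change in percent (positive = faster)."""
    return (new / old - 1) * 100

def compression_change(old_ratio, new_ratio):
    """Ratio improvement in percentage points (positive = better)."""
    return old_ratio - new_ratio

comp = compression_change(57.31, 57.19)  # Compression, Fast
enc = speed_change(37.76, 37.75)         # Enco-Rate, Fast
dec = speed_change(67.62, 68.61)         # Deco-Rate, Fast

print(round(comp, 2), round(enc, 2), round(dec, 2))  # -> 0.12 0.03 -1.44
```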


I will be back for some comments soon...


Edit: I forgot: Test system is a Pentium III with 800 MHz...

This post has been edited by TBeck: Jun 5 2006, 19:50
Shade[ST]
post Jun 5 2006, 20:09
Post #14

Somehow, I feel the speed loss is large compared to the compression gain; the efficiency trade-off might not be worthwhile here.

Could you also post results with IO speeds taken into account? (antivirus disabled, if you have one)

I think IO results will be necessary before we are able to judge the validity of the modifications and decide whether they are worth it or not...

Keep up the great work,
Tristan.
TBeck
post Jun 5 2006, 20:29
Post #15

QUOTE (Shade[ST] @ Jun 5 2006, 21:09) *
Somehow, I feel the speed loss is large compared to the compression gain; the efficiency trade-off might not be worthwhile here.

In general I would not agree!

Here are my short comments:

- The difference in encoding and decoding speed on FAST between V0.09 and the other versions is not significant.
- Encoding and decoding on HIGH are faster for V0.09 than for V0.08! Since both decode slower than V0.07, it would be more appropriate to ask ourselves whether we want to go back to V0.07: remove the parcor coefficients and the PreFilter!
- The most important disadvantage of V0.09 may be the 8.39 percent penalty for decoding on NORMAL.
- Preset EXTRA does not seem too important to me.

QUOTE (Shade[ST] @ Jun 5 2006, 21:09) *
Could you also post results with IO speeds taken into account? (antivirus disabled, if you have one)

I think IO results will be necessary before we are able to judge the validity of the modifications and decide whether they are worth it or not...

With IO turned on, the differences would be far smaller for FAST, a bit smaller for NORMAL, and equal for HIGH and EXTRA. I don't think that this would help us...
Synthetic Soul
post Jun 6 2006, 10:01
Post #16

QUOTE (TBeck @ Jun 5 2006, 20:29) *
- The most important disadvantage of V0.09 may be the 8.39 percent penalty for decoding on NORMAL.
Agreed.

Your figures are very persuasive.

My main concerns at the moment are Fast and Normal - especially Normal, considering that it should strive to be the best balance of speed and compression that Yalac can offer, IM(H)O.

I don't have an issue with the changes to Fast since 0.07; as you say, they are negligible. Normal's drop in decoding speed is a slight concern.

I guess all I can do is reiterate that I would gladly see Normal and Fast lose some compression in order to speed up encoding and decoding (well, depending on the benefit gained).

It seems moving to parcor has too many benefits to ignore, but it is possible that you could now make other changes that would create a better spread between Fast and Insane.

I would be very interested to know what you could do if given license to drop Normal's compression ratio by 0.2-0.3%, and Fast's by 0.5-0.7%. I don't know whether you have enough variables to achieve this, i.e. whether past improvements have been code improvements that simply can't be undone for speed gains. If you do have some switches that you could simply turn off to favour speed over compression, I would be interested to hear your thoughts on the possibilities.

QUOTE (Destroid @ Jun 5 2006, 20:51) *
Now, returning to your previous discussion...
I would be very interested to hear your opinions, and Josef's. Anyone, in fact!

NB: Thomas, if you would like any of this split to a new thread just let me know. This thread does seem to sway off-topic, and some of this is more relevant to the thread "Yalac - Evaluation and optimization", which is sadly neglected... (poor thing).

Edit: Made text amendment as below...

This post has been edited by Synthetic Soul: Jun 6 2006, 17:01


TBeck
post Jun 6 2006, 16:20
Post #17

QUOTE (Synthetic Soul @ Jun 6 2006, 11:01) *
I would be very interested to know what you could do, if given license to drop Normal's encoding speed by 0.2-0.3%, and Fast's by 0.5-0.7%. I don't know whether you have enough variables to achieve this, i.e.: whether past improvements have been code improvements that simply can't be undone for speed gains. If you did have some switches that you could simply turn off to favour speed over compression I would be interested to hear your thoughts on the possibilities though.

I will need a bit of time to answer, but I am not sure if I understand this right: shouldn't this be "if given license to drop Normal's compression ratio by 0.2-0.3%, and Fast's by 0.5-0.7%"?
Synthetic Soul
post Jun 6 2006, 17:00
Post #18

Yes, sorry.

Please don't spend time on it if you think it is futile.

However, if you think the exercise may enlighten either/any of us, I would be very interested to hear your thoughts.


TBeck
post Jun 6 2006, 18:14
Post #19


QUOTE (Synthetic Soul @ Jun 6 2006, 11:01) *
My main concerns at the moment are Fast and Normal - especially Normal, considering that it should strive to be the best balance of speed and compression that Yalac can offer, IM(H)O.

Yes. And it probably makes the first impression on new users. And some users will possibly never try any other preset. Therefore it should clearly show Yalac's strength (speed) and compare well to the default mode of other compressors.

In this context I am not too happy that Yalac's NORMAL often compresses a bit worse than Monkey's NORMAL. Therefore I would like the 0.20 percent improvement provided by the parcor coefficients. Possibly this is the most important (psychological) advantage of the parcor coefficients for me...


QUOTE (Synthetic Soul @ Jun 6 2006, 11:01) *
...I guess all I can do is reiterate that I would gladly see Normal and Fast lose some compression in order to speed up encoding and decoding (well, depending on the benefit gained).

...It seems moving to parcor has too many benefits to ignore, but it is possible that you could now make other changes that would create a better spread between Fast and Insane.

...I would be very interested to know what you could do, if given license to drop Normal's compression ratio by 0.2-0.3%, and Fast's by 0.5-0.7%.

...If you did have some switches that you could simply turn off to favour speed over compression I would be interested to hear your thoughts on the possibilities though.

There is not much that can be done to speed up FAST (at least not without big changes to my code): a decrease of the predictor order from 32 to 8 reduces compression by about 0.80 percent and provides about 25 percent faster encoding.

But a quick check for NORMAL looks more promising: setting the partition search level from normal to fast reduces compression by about 0.06 percent and provides about 25 percent faster encoding. But don't forget my statements above: I would prefer NORMAL to compress a bit better! OK, one more decision...

Another option would be the reduction of the maximum predictor order from 128 to 96. V0.09 will give access to 8, 96 and 192 predictors. Probably HIGH will use only 192 predictors in the future.

QUOTE (Synthetic Soul @ Jun 6 2006, 11:01) *
NB: Thomas, if you would like any of this split to a new thread just let me know. This thread does seem to sway off-topic, and some of this is more relevant to the thread "Yalac Evaluation and optimization", which is sadly neglected... (poor thing).

I thought about it too. But is it really practicable? We need the comparison results for our discussion.

Another approach: we open another thread where only the comparisons (of any version, or only of the latest) are posted.
jido
post Jun 6 2006, 19:59
Post #20

QUOTE (TBeck @ Jun 4 2006, 18:42) *
It would make my code far more complex if I used both representations of the predictor coefficients.

Could you use a representation compatible with both normal and parcor coefficients?
Synthetic Soul
post Jun 6 2006, 20:22
Post #21

QUOTE (TBeck @ Jun 6 2006, 18:14) *
In this context I am not too happy that Yalac's NORMAL often compresses a bit worse than Monkey's NORMAL. Therefore I would like the 0.20 percent improvement provided by the parcor coefficients. Possibly this is the most important (psychological) advantage of the parcor coefficients for me...
Ah, perhaps this is the crux of the matter. I must admit part of my logic revolves around Yalac Normal compressing at roughly the same speed as WavPack default, which is currently approximately 140% of the speed of Yalac Normal, according to both my results and Josef's. If Fast could match FLAC, and Normal could match WavPack, it would be quite a feat.

I must admit I don't consider Monkey's Audio that much. The benefit gained by using parcor would close this gap nicely.

QUOTE (TBeck @ Jun 6 2006, 18:14) *
There is not much that can be done to speed up FAST (at least not without big changes to my code): a decrease of the predictor order from 32 to 8 reduces compression by about 0.80 percent and provides about 25 percent faster encoding.
OK, at least I know not to pursue that any further.

QUOTE (TBeck @ Jun 6 2006, 18:14) *
But a quick check for NORMAL looks more promising: setting the partition search level from normal to fast reduces compression by about 0.06 percent and provides about 25 percent faster encoding. But don't forget my statements above: I would prefer NORMAL to compress a bit better! OK, one more decision...

Another option would be the reduction of the maximum predictor order from 128 to 96. V0.09 will give access to 8, 96 and 192 predictors. Probably HIGH will use only 192 predictors in the future.
That is promising. However, as you say, I suppose the decision needs to be made whether you aim for the compression of Monkey's Audio, which is very achievable (perhaps achieved), or the speed of WavPack, which is a lot of ground to make up. I suppose it may be prudent to make the decision and then go all out for one or the other, and it's looking like Monkey's Audio is the goal to aim for.

QUOTE (TBeck @ Jun 6 2006, 18:14) *
I thought about it too. But is it really practicable? We need the comparison results for our discussion.

Another approach: we open another thread where only the comparisons (of any version, or only of the latest) are posted.
It's no real effort for me to move posts to another thread, or split posts to another thread. Whatever you see fit, but I'll leave it for the moment.

Thanks for your time and patience Thomas.


TBeck
post Jun 6 2006, 20:54
Post #22


QUOTE (Synthetic Soul @ Jun 6 2006, 21:22) *
Thanks for your time and patience Thomas.

The truth is: the discussion with you did help me become more aware of my (sometimes hidden) motivations! It had not been clear to me that the comparison with Monkey's NORMAL was the most important reason for my (earlier...) preference for the parcor coefficients. And sometimes I lose the bigger picture when I am too involved in optimization details. This thread can correct this... and often did.

This post has been edited by TBeck: Jun 6 2006, 20:55
TBeck
post Jun 6 2006, 22:35
Post #23

QUOTE (jido @ Jun 6 2006, 20:59) *
QUOTE (TBeck @ Jun 4 2006, 18:42) *

It would make my code far more complex if I used both representations of the predictor coefficients.

Could you use a representation compatible with both normal and parcor coefficients?

No. It's the difference between the representations which makes one work better than the other.
TBeck
post Jun 8 2006, 20:44
Post #24

Current Progress (V0.09)

I had to come to a decision: use parcor coefficients or not. I have decided against them.

The speed penalty for NORMAL and the higher presets was mostly responsible for the final decision. OK, the parcor coefficients would not be slower than the PreFilter, but I don't want to drop the PreFilter, because it has such a huge advantage for some files. But keeping both, PreFilter and parcor coefficients, would have slowed encoding and decoding down by an unacceptable amount. And the PreFilter can be turned off (for higher speeds), which would not have been possible for the parcor coefficients.

Nice to get rid of this difficult decision... But somehow I wasn't too happy afterwards.

I could not forget about the encoding speed-up with the parcor coefficients: about 25 percent for HIGH.

Again time for a reconfiguration of the presets. NORMAL now encodes 17 percent faster and loses 0.05 percent compression on my primary test file set.

Unfortunately this increases the distance to Monkey's NORMAL... But Synthetic Soul is right: speed is the main advantage of Yalac! Hence I can sacrifice a tiny bit of compression.

Now for HIGH: the parcor coefficients provided a speed-up because they did not need the "Optimize Quantization" option. Time to speed up this option. It seems as if we will see HIGH become about 25 percent faster without a significant compression penalty! And the method I used for the speed-up can be applied to other compression-efficiency optimizations which I previously did not want to implement because they would have been too slow!

There will be more changes for V0.09, but i will talk later about them.

BTW: preset FASTEST is back. Not too important for me, but why not provide both extremes: INSANE for the upper end, FASTEST for the lower.

Thomas
Shade[ST]
post Jun 8 2006, 21:01
Post #25

Sounds good, though I wonder how large the difference between Fast and Fastest will be -- will there not be throttling by the processor / RAM / hard drive?

It would be nice if you could make YALAC decompress faster than WAV can be read (which is approx 200x on my system, I think).