
Yalac – Evaluation and optimization

Reply #25
I had to come to a decision: use parcor coefficients or not. I have decided against it.
  I didn't see that coming.  I think I'm pleased; although I too find it difficult to say no to the benefits of parcor coefficients.

And the PreFilter can be turned off (for higher speeds), which would not have been possible for the parcor coefficients.
I think this is a huge factor.  It allows you more flexibility for each preset.

It seems as if we will see HIGH become about 25 percent faster without a significant compression penalty! And the method I used for the speed-up can be applied to other optimizations of the compression efficiency, which I actually did not want to implement because they would have been too slow!
That all sounds like very good news indeed. 

BTW: Preset FASTEST is back. Not too important for me, but why not provide both extremes: INSANE for the upper, FASTEST for the lower end.
Exactly, why not?  I think the bigger the range the better.  Even users who normally use Extra may want Fastest occasionally if they are in a hurry to transport a file. I also suspect Fastest will still have ridiculously competitive compression rates...

Sounds good, though I wonder how large the difference between Fast and Fastest will be -- won't the speed be limited by the processor / RAM / hard drive anyway?
This is definitely a factor.  However, with the idea that IO issues will decrease over the coming years, it is well worth ensuring that Yalac doesn't fall foul of merely attaining current IO speeds.  There's also decoding to RAM to consider.

Well, I'm really looking forward to taking a look at 0.09.  A new Fastest preset, and faster encoding for Normal, sound great to me.
I'm on a horse.

Yalac – Evaluation and optimization

Reply #26
I had to come to a decision: use parcor coefficients or not. I have decided against it.
  I didn't see that coming.  I think I'm pleased; although I too find it difficult to say no to the benefits of parcor coefficients.

Yes, it's difficult, and I cannot swear that they will never come back...

Sounds good, though I wonder how large the difference between Fast and Fastest will be -- won't the speed be limited by the processor / RAM / hard drive anyway?
This is definitely a factor.  However, with the idea that IO issues will decrease over the coming years, it is well worth ensuring that Yalac doesn't fall foul of merely attaining current IO speeds.  There's also decoding to RAM to consider.

Yes, IO speeds will increase. But usually CPU speeds increase faster! Therefore the gap between theoretical and practical (with IO) decoding speed will only grow!

One may sometimes get a different impression, because CPU speed increases smoothly (OK, this will change with multi-core CPUs), while IO speed makes big jumps whenever manufacturers double the density of the disk surfaces. And Serial ATA has brought some nice improvements on the interface side.
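A rough back-of-the-envelope sketch of this (purely illustrative; the starting speeds and growth rates are assumptions, not measurements of Yalac or any real drive): if practical decoding runs at whichever of CPU and disk is slower, the theoretical figure keeps climbing while the practical one stays pinned to IO, so the gap widens.

Code:
#include <stdio.h>

int main(void)
{
    double cpu_x = 40.0;   /* CPU-bound decoding speed, multiples of real time (assumed) */
    double io_x  = 25.0;   /* disk-limited speed, multiples of real time (assumed) */

    for (int year = 0; year <= 5; ++year) {
        double practical = (cpu_x < io_x) ? cpu_x : io_x;  /* the slower side wins */
        printf("year %d: theoretical %.0fx, practical %.0fx, gap %.0fx\n",
               year, cpu_x, practical, cpu_x - practical);
        cpu_x *= 1.40;  /* assume CPU throughput grows ~40% per year */
        io_x  *= 1.15;  /* assume IO throughput grows ~15% per year */
    }
    return 0;
}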

I cannot get this out of my mind: does it make sense to drop stronger compression methods only because they slow down decoding by 20 to 30 percent? Isn't decoding 2 to 3 times faster than Monkey's Audio sufficient?

OK, it may be a different matter if portable devices with very limited CPU power are being used for decoding. But here another point becomes important: we are always evaluating the average decoding performance, but what about the worst case?

For each subframe the encoder chooses the maximum predictor order (up to the limit defined by the preset), which increases compression. Your file may, for instance, use 20 predictors on average, but 256 for some subframes. These high-order subframes may need up to 3 times more CPU power! If your playback device is slow, you may have to limit the maximum predictor order to, for instance, 32 or 64 to avoid stuttering!

And this is important: if you limit the predictor order to such low values, the decoding speed penalty for the parcor coefficients is not significant!
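To make the worst case concrete, here is a small hypothetical cost model (not Yalac's actual decoder; the constants are made up): per-sample decode cost is some fixed overhead plus one multiply-add per predictor tap, so a few order-256 subframes dominate the peak load, and capping the order bounds it.

Code:
#include <stdio.h>

#define FIXED_COST   30.0   /* assumed per-sample overhead (entropy decoding etc.), arbitrary units */
#define PER_TAP_COST  0.5   /* assumed cost of one predictor tap */

static double decode_cost(int order)
{
    return FIXED_COST + PER_TAP_COST * order;
}

int main(void)
{
    int avg_order  = 20;    /* typical subframe, per the example above */
    int peak_order = 256;   /* occasional worst-case subframe */
    int cap        = 64;    /* hypothetical limit for a slow player */

    printf("average subframe : %.1f units\n", decode_cost(avg_order));
    printf("worst subframe   : %.1f units (%.1fx the average)\n",
           decode_cost(peak_order),
           decode_cost(peak_order) / decode_cost(avg_order));
    printf("capped at %d     : %.1f units (%.1fx the average)\n",
           cap, decode_cost(cap),
           decode_cost(cap) / decode_cost(avg_order));
    return 0;
}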

Yalac – Evaluation and optimization

Reply #27
I think that as long as you can encode an album from a CD at real-time speed versus extraction (which places the speed at approx. 50x, tops), then it shouldn't really pose a problem to implement less computationally efficient methods.  The thing is, though, that I consider 50-60x encoding really important -- I quite dislike having to wait for the encoder when I extract a CD (though I don't extract at that speed either, don't worry).

The faster the better, though.  I _am_ willing to sacrifice compression levels for speed and vice versa, but a compromise is always necessary.  I think that for now, the thing to do would be to find the compromise that most people would choose to have, and match the results to that.  Maybe starting a poll could help.  If so, I'd be pleased to compile some questions and start that poll.

Keep us in touch, Master Tom

Have a nice day,
Tristan.

Yalac – Evaluation and optimization

Reply #28
Quote (Jun 9 2006, 17:34, post 401194)
I think that as long as you can encode an album from a CD at real-time speed versus extraction (which places the speed at approx. 50x, tops), then it shouldn't really pose a problem to implement less computationally efficient methods.  The thing is, though, that I consider 50-60x encoding really important -- I quite dislike having to wait for the encoder when I extract a CD (though I don't extract at that speed either, don't worry).

Well, I have been talking about decoding performance, but most of it is relevant for encoding too.

For instance, the encoding speed for individual frames is variable, though the variation is smaller than for decoding.

Would it matter for your purposes (extracting a CD) if, for instance, the average encoding speed is 50x but only 25x on some frames? Would this disturb the extraction process?
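For what it's worth, here is a purely illustrative sketch of that question (no real ripper or Yalac code involved; the buffer size and the length of the slow run are assumptions): as long as the ripper feeds the encoder through a buffer, a temporary dip to 25x only hurts if the backlog outgrows the buffer.

Code:
#include <stdio.h>

int main(void)
{
    double rip_speed    = 50.0;  /* extraction speed, multiples of real time */
    double slow_speed   = 25.0;  /* encoder speed during a run of slow frames */
    double slow_run_sec = 4.0;   /* length of the slow run, seconds of audio (assumed) */
    double buffer_sec   = 10.0;  /* buffer between ripper and encoder, seconds of audio (assumed) */

    /* Wall-clock time to rip vs. to encode the slow run */
    double rip_time    = slow_run_sec / rip_speed;
    double encode_time = slow_run_sec / slow_speed;

    /* Audio that piles up in the buffer while the encoder lags behind */
    double backlog_sec = (encode_time - rip_time) * rip_speed;

    printf("backlog after the slow run: %.1f s of audio (buffer holds %.1f s)\n",
           backlog_sec, buffer_sec);
    puts(backlog_sec <= buffer_sec
         ? "the rip never has to wait for the encoder"
         : "the ripper would have to pause");
    return 0;
}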

Quote (Jun 9 2006, 17:34, post 401194)
...
most people would choose to have, and match the results to that.  Maybe starting a poll could help.  If so, I'd be pleased to compile some questions and start that poll.

I will think about a poll. Thanks.

  Thomas

Yalac – Evaluation and optimization

Reply #29
Some very good points, and ones that I can't disagree with.

It's just so difficult to call!  It's great to have the compression of Monkey's Audio Normal, but it's also great to encode as fast as FLAC and WavPack.  Yalac is in a strange, perhaps difficult, place between fast codecs and high-compression codecs.  Yalac has some superb qualities (I was looking at Josef's results earlier and noticed that Yalac is "only" 7th fastest at encoding, but it compresses 13% better than its nearest rival!), and it is a case of trying to make best use of those qualities, to find the right niche.

I suppose the decision has to be made regarding which users would opt for Yalac.  Is it FLAC users wanting better compression, or Monkey's Audio users wanting faster decoding?  That said, will a little variation in either direction stop the opposing faction from switching?  Yalac could be very appealing to both factions, irrespective of which it tends slightly toward.

If parcor coefficients are less affected by limiting predictor orders, then that does seem an important point.  If Yalac goes for speed, the possibility of hardware support is higher, and this factor will be an issue.

I would dearly love to hear the opinions of some other, more knowledgeable, members.
I'm on a horse.

Yalac – Evaluation and optimization

Reply #30
It would be a pity to sacrifice encoding speed for extra reduction, but this seems inevitable given the increasing CPU speeds. However, it would be best if decoding speed remained as optimal as possible. The reasons for this are obvious when you see the hardware support for FLAC because of its fast decoding and finalized file format (container). I know that finalization for Yalac is further along than the current discussion, but it is worthwhile to consider.

During my testing I've enjoyed seeing how Yalac performs; it has accomplished quite a feat in file size reduction, encoding speed and decoding speed. In my opinion, I would not want to give up more than 10% encoding speed for a mere 2% file size reduction.
I would dearly love to hear the opinions of some other, more knowledgeable, members.

Oops, I just read this part. My bad.

The current discussion might fit better under the "Optimizing Yalac" thread, not that I'm militant about these things.
"Something bothering you, Mister Spock?"

Yalac – Evaluation and optimization

Reply #31
...high-compression codecs.  Yalac has some superb qualities (I was looking at Josef's results earlier and noticed that Yalac is "only" 7th fastest at encoding, but it compresses 13% better than its nearest rival!), and it is

...It would be a pity to sacrifice encoding speed for extra reduction, but this seems inevitable given the increasing CPU speeds. However, it would be best if decoding speed remained as optimal as possible. The reasons for this are obvious when you see the hardware support for FLAC because of its fast decoding and finalized file format (container). I know that finalization for Yalac is further along than the current discussion, but it is worthwhile to consider.

Because speed seems to be so important, I could not resist: it looks as if V0.09 will include a FASTEST preset with at least 50 percent higher speed than FAST! It may compress 1.5 to 2.0 percent worse than FAST.

This should be sufficient to get a better rank in (not only) Josef Pohm's table...

Well, for me that is enough to pay for one night without sleep.