
Big-label mastering engineers don’t understand lossy formats

Reply #50
care to back that statement up with a widely accepted definition of "clipping" that would exclude an aggressive dynamic-range limiter?
Urf. You don't need a limiter or anything special to clip. Applying f(x) = max(min(x, 1.0), -1.0) to a signal that exceeds [-1, 1] will cause clipping wherever the signal falls outside that interval. This is conceptually identical to working with samples at a fixed word length and dealing with values that fall outside the range that word length can represent.
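
A minimal sketch of that clip function in Python (numpy is used for convenience; the ±1.0 ceiling is just the conventional full-scale float range, not anything standardised):

Code
import numpy as np

def hard_clip(x, full_scale=1.0):
    # Clamp every sample to [-full_scale, +full_scale]; anything outside gets flattened.
    return np.clip(x, -full_scale, full_scale)

# A 440 Hz sine boosted 6 dB over full scale: the tops and bottoms are cut off flat.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
too_hot = 2.0 * np.sin(2.0 * np.pi * 440.0 * t)
print(too_hot.max(), hard_clip(too_hot).max())  # ~2.0 vs. 1.0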


That's not the question. The question is "Can a dynamic range limiter cause clipping?"

The answer is "Duh."

Big-label mastering engineers don’t understand lossy formats

Reply #51
Perhaps interviewing someone with a background in lossy compression codec development would be helpful in explaining exactly what sorts of audible changes/artifacts might occur?

Big-label mastering engineers don’t understand lossy formats

Reply #52
Compression is clipping - In a very strict and literal sense. Whenever a compressor or limiter acts on peaks, it is inducing "clipping" by definition.

That's not the question. The question is "Can a dynamic range limiter cause clipping?"

Moving the goal posts just a little?


Big-label mastering engineers don’t understand lossy formats

Reply #54
Compression is clipping - In a very strict and literal sense. Whenever a compressor or limiter acts on peaks, it is inducing "clipping" by definition.
Oh for goodness sake - you haven't a clue what you're talking about with this one!

(there's no nicer way of putting it - sorry!).

Cheers,
David.


David, care to back that statement up with a widely accepted definition of "clipping" that would exclude an aggressive dynamic-range limiter?


http://en.wikipedia.org/wiki/Clipping_(aud...igital_clipping

Quote
In digital signal processing, clipping occurs when the signal is restricted by the range of a chosen representation. For example in a system using 16-bit signed integers, 32767 is the largest positive value that can be represented, and if during processing the amplitude of the signal is doubled, sample values of, for instance, 32000 should become 64000, but instead they are truncated to the maximum, 32767.
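
The integer case from that quote, in code (a sketch; it assumes saturating arithmetic, i.e. values are clamped at the 16-bit limits rather than wrapping around):

Code
import numpy as np

INT16_MIN, INT16_MAX = -32768, 32767

def double_gain_saturating(samples_int16):
    # Widen to 32-bit first so the multiplication itself can't overflow,
    # then clamp back into the 16-bit range.
    doubled = samples_int16.astype(np.int32) * 2
    return np.clip(doubled, INT16_MIN, INT16_MAX).astype(np.int16)

print(double_gain_saturating(np.array([32000, -16000], dtype=np.int16)))
# -> [ 32767 -32000]  (32000 "should" become 64000, but it clips at 32767)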


To be honest, I'm really surprised and disappointed by the few posters who are taking extreme positions and resorting to anonymous ad hominem attacks, especially when we already agree on so much.

Disagreement I'm fine with. The childishness of a few of your peers, not so much.


As far as I can tell, you seem to be the primary person launching ad hom attacks.  If you stopped doing that, perhaps this would be less of a problem. 

But without direct before-and-after evidence, saying that their claims are conclusively false would not only be illegal and unethical, but factually incorrect.


This is false for a number of reasons.  Most directly, since you are an American, libel is not actually illegal.

That's all I have to say on that. I'm sure the vast majority of readers here are smart enough and unbiased enough to understand that. For the few who aren't - I'm sorry to hear that. Good luck and be well. I wish you a long life of yelling at strangers on the interwebs.


Heh, so basically you came back to post some personal attacks against people who are trying to help explain things to you.  I wonder what an unbiased person would think of that.

Big-label mastering engineers don’t understand lossy formats

Reply #55
Compression is clipping - In a very strict and literal sense. Whenever a compressor or limiter acts on peaks, it is inducing "clipping" by definition.
Oh for goodness sake - you haven't a clue what you're talking about with this one!

(there's no nicer way of putting it - sorry!).

Cheers,
David.


David, care to back that statement up with a widely accepted definition of "clipping" that would exclude an aggressive dynamic-range limiter?
You are really confused. The whole point of compressing or limiting peaks is to avoid clipping. If you increase the amplitude without squashing the peaks down, they will be clipped once their amplitude exceeds digital full scale. If you "squash" them (e.g. by momentarily reducing the amplitude), you can keep their shape, avoid clipping distortion, raise the overall level of the rest of the signal - and (up to a point) the momentary reduction in amplitude is inaudible. That's exactly what classic peak limiting does. It's almost the opposite of clipping!

It's true that some DRC processors can also introduce clipping. Sometimes clipping is introduced intentionally - good old Orban has used a module called a "clipper" for decades (though FM processing means this acts in a slightly different way to what we're discussing here). Sometimes clipping is introduced as a by-product of poorly chosen settings. Usually you find clipping on CDs because, even after the various stages including multi-band compression and peak limiting, the engineer still wants it louder, and at some point simply clipping the signal sounds less bad than even more peak limiting.
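
To make the contrast concrete, here's a minimal Python sketch of both behaviours. The limiter below is deliberately naive (instant attack, one-pole release, no look-ahead) and isn't any real product's algorithm; it's just enough to show gain-riding versus waveform-flattening:

Code
import numpy as np

def hard_clipper(x, ceiling=1.0):
    # Flattens everything beyond the ceiling: this is clipping distortion.
    return np.clip(x, -ceiling, ceiling)

def naive_peak_limiter(x, ceiling=1.0, release=0.999):
    # Momentarily turns the *whole* signal down so peaks stay under the ceiling,
    # then lets the gain recover slowly. Waveform shape is preserved, not flattened.
    out = np.empty_like(x)
    gain = 1.0
    for n, sample in enumerate(x):
        needed = min(1.0, ceiling / max(abs(sample), 1e-12))
        if needed < gain:
            gain = needed                                     # instant attack
        else:
            gain = gain * release + needed * (1.0 - release)  # slow release
        out[n] = sample * gain
    return out

t = np.arange(48000) / 48000.0
loud = 1.5 * np.sin(2 * np.pi * 220 * t)
print(hard_clipper(loud).max(), naive_peak_limiter(loud).max())  # both <= 1.0, very different shapes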

I hope this clears things up.

btw, I don't think the 84260 members of HA count as each other's peers.

Cheers,
David.

Big-label mastering engineers don’t understand lossy formats

Reply #56
When I am in a position to improve something of mine that I think is important, and doing so is fairly easy, I generally take action and improve it.  Maybe it's too late for Justin Colletti to amend or pull his article; if it isn't and he still doesn't, then that's unfortunate, IMHO.

I wouldn't present information about the effects of the process of psychoacoustic coding primarily from mastering specialists any more than I would present information about the effects of food on the human body primarily from chefs (if at all!), no matter how reputable they are.

Also, how does reporting that someone has won Grammy awards or has a very successful career have anything to do with their ability to understand the intricate details of something when this knowledge is not necessary to do their job well, let alone at all?

Big-label mastering engineers don’t understand lossy formats

Reply #57
It's true that some DRC processors can also introduce clipping. Sometimes clipping is introduced intentionally.


Thank you David. That was exactly the point I was trying to make when I wrote that Dave Fridmann's work is 'not clipped because it's loud. It's loud because it's clipped. That's a world of difference.'

The rest of the discussion on the definition of clipping seems a little ego-driven to me. The main point remains that extreme compression and limiting are forms of clipping, and that they are often used in that way today for aesthetic effect.

I don't see why we should be arguing about that, unless our desire is to show off for an audience by battling over semantics and further derailing the conversation.

As far as I can tell, you seem to be the primary person launching ad hom attacks.


Please find a quote where I made an ad hominem attack directed at any specific individual. If I did, I'd be happy to apologize.

I know that I expressed my disappointment with the tone of the discussion after two anonymous ad hominem attacks were made against me. I'm doubtful that any objective reader would construe that as an ad hominem attack on my part.

...since you are an American, libel is not actually illegal.


That's technically correct. Libel is not a criminal offense - Only something you can be sued for. My sincere apologies for the semantic misstep.

Once again, the central point still stands: I am ethically obligated to state that related ABX tests invite skepticism of some mastering engineers' claims. So I did.

I am also ethically obligated to state that no conclusive proof has been presented to show that these particular mastering engineers were unable to hear differences between two similar audio files and adjust accordingly. So I did that too.

If you feel that I failed at expressing these two points clearly, I'll accept that as your personal critique. However, if you feel I have some other obligation that runs contrary to this, or that I made no concerted effort to fulfill both of these obligations, then we'll just have to disagree.

Heh, so basically you came back to post some personal attacks against people who are trying to help explain things to you.  I wonder what an unbiased person would think of that.


Please see above!

Unfortunately, I'm non-anonymous, so it's necessary for me to correct the record when inaccurate portrayals of my positions are made in a public conversation.

At this point, I'd ask that the moderators lock this thread, so I won't have to monitor it for additional inaccurate portrayals like the one you made in the quote above.

I think that all essential points have been presented on both sides of this "conversation", and at this time, any further discussion in this thread is likely to be pretty darn silly.

On a more positive note, a few HA members wrote me personally to express their regret about the direction this conversation took.

I'm very happy about that, so thanks. I'll try not to let the manners and off-topic arguments of a few anonymous posters color my feelings about the site too much.

I'm especially surprised by the comments of the few, because I've already made it clear that I'm a big fan of the philosophy behind HA and of ABX tests in general. As mentioned, I've even sung their praises to thousands upon thousands of readers. I guess I'll just have to accept that some people don't get that alienating your allies is lousy strategy.

To the rest of you, thanks for reading and have a good one,

Justin

Big-label mastering engineers don’t understand lossy formats

Reply #58
When I am in a position to improve something of mine that I think is important, and doing so is fairly easy, I generally take action and improve it.  Maybe it's too late for Justin Colletti to amend or pull his article; if it isn't and he still doesn't, then that's unfortunate, IMHO.

I wouldn't present information about the effects of the process of psychoacoustic coding primarily from mastering specialists any more than I would present information about the effects of food on the human body primarily from chefs (if at all!), no matter how reputable they are.

Also, how does reporting that someone has won Grammy awards or has a very successful career have anything to do with their ability to understand the intricate details of something when this knowledge is not necessary to do their job well, let alone at all?


What do you suggest would be a good way to amend the article? When you do so, please bear in mind that:
  • It already states that related ABX tests invite skepticism of some of the mastering engineers' claims.
  • It links to articles on the merits of ABX tests in evaluating audio claims.
  • I've offered to write a follow-up if anyone cares to present good evidence that refutes these claims.
  • I'm under an obligation to avoid libel, defamation, or any unproven statements on my part that these particular engineers can't hear the differences between these particular files.
  • And that this is a news article about new procedures and tools being used by mastering engineers - not an article about ABX tests, sample rates, bit rates, perception, bias, the Loch Ness Monster, or anything else.


If you care to make a specific recommendation, I'd be happy to hear it. But please be sure to read the article in full before making your critique. I'm certain that you're an intelligent person, and based on your current analysis, it's hard for me to believe that you have done more than skim it. Don't sweat it. It happens to the best of us.

For the future, please also be aware that moderators are generally tasked with helping make forums more civil and their discussions more focused. If you feel you've done a lot to foster a cordial and meaningful conversation in this particular thread, I'll have to respectfully disagree.

At this point, I'd again ask that you make amends by locking the thread down, and voicing any of your specific critiques to me directly via email - after reading the story in full, of course.

Thanks,

Justin

Big-label mastering engineers don’t understand lossy formats

Reply #59
As far as I can tell, you seem to be the primary person launching ad hom attacks.


Please find a quote where I made an ad hominem attack directed at any specific individual. If I did, I'd be happy to apologize.


Perhaps you missed it, but I pointed out an example earlier, right here:

http://www.hydrogenaudio.org/forums/index....st&p=790942

Trying to discredit an argument by (falsely) claiming the author is "resorting to knee-jerk ad hominem attacks" is an ad hom attack, because it attempts to discredit the author and not the idea.  Most tellingly, you then failed to respond to the actual substance of the argument (e.g. that you were negligent in presenting information that was not true), which IMO implies bad faith.



I know that I expressed my disappointment with the tone of the discussion after two anonymous ad hominem attacks were made against me. I'm doubtful that any objective reader would construe that as an ad hominem attack on my part.


Hmm, it seems to me that you're a little unclear on what an ad hom attack actually is.

Check wikipedia:

An ad hominem (Latin for "to the man" or "to the person"), short for argumentum ad hominem, is an attempt to negate the truth of a claim by pointing out a negative characteristic or belief of the person supporting it.

So as far as I can tell, no one here has done this to you.  It's pretty much just been you doing it to other people (e.g. greynol).

I am also ethically obligated to state that no conclusive proof has been presented to show that these particular mastering engineers were unable to hear differences between two similar audio files and adjust accordingly. So I did that too.

If you feel that I failed at expressing these two points clearly, I'll accept that as your personal critique. However, if you feel I have some other obligation that runs contrary to this, or that I made no concerted effort to fulfill both of these obligations, then we'll just have to disagree.


I see that you have corrected the false statements greynol pointed out, and the incorrect stuff about frequency changes from lossy encoding.  Failing to admit that you had made mistakes and to correct them was my main complaint.  That said, I'm still not impressed with your handling of this, nor with the fact that you edited the article without making it clear that you were retracting quite a bit of it.



On a more positive note, a few HA members wrote me personally to express their regret about the direction this conversation took.


As am I.  However, seeing as you eventually made the edits you needed to, perhaps you could have saved all of us this grief by being more open to feedback in the first place.

I'm especially surprised by the comments of the few, because I've already made it clear that I'm a big fan of the philosophy behind HA and of ABX tests in general. As mentioned, I've even sung their praises to thousands upon thousands of readers. I guess I'll just have to accept that some people don't get that alienating your allies is lousy strategy.


You shouldn't be surprised.  ABX isn't a cult.  We don't worship it.  The reason we push ABX is that it's a good way to avoid being wrong.  It's basically a means to that end.  But the goal is to be correct and avoid spreading misinformation.  If you put up an article implying that AAC encoding involves "notching" out frequencies, people are going to be angry.  Using ABX here would have been great, since it would have prevented someone from mistakenly believing it, but that's not really the point.

Does that make sense to you?

Big-label mastering engineers don’t understand lossy formats

Reply #60
What do you suggest would be a good way to amend the article? When you do so, please bear in mind that:
  • It already states that related ABX tests invite skepticism of some of the mastering engineers' claims.
  • It links to articles on the merits of ABX tests in evaluating audio claims.
  • I've offered to write a follow-up if anyone cares to present good evidence that refutes these claims.
  • I'm under an obligation to avoid libel, defamation, or any unproven statements on my part that these particular engineers can't hear the differences between these particular files.
  • And that this is a news article about new procedures and tools being used by mastering engineers - not an article about ABX tests, sample rates, bit rates, perception, bias, the Loch Ness Monster, or anything else.


If you care to make a specific recommendation, I'd be happy to hear it. But please be sure to read the article in full before making your critique. I'm certain that you're an intelligent person, and based on your current analysis, it's hard for me to believe that you have done more than skim it. Don't sweat it. It happens to the best of us.


Sorry to double post, but I missed this.  Yes, I think your edits so far are very good.  You've removed almost all of the stuff that was misleading or irrelevant.  One minor point: in some of the quotes, you've removed clauses and sentences from the original in a way that significantly changes what the author is implying.  In those cases you may wish to add a [...] to make it clear that the quote has been edited to remove material the author originally included.  Usually this isn't a big deal, but since the meaning is changed a little, it's a good idea here.

For the future, please also be aware that moderators are generally tasked with helping make forums more civil and their discussions more focused. If you feel you've done a lot to foster a cordial and meaningful conversation in this particular thread, I'll have to respectfully disagree.


I think the improvements suggested by greynol that you've made to your article are quite an accomplishment.

At this point, I'd again ask that you make amends by locking the thread down, and voicing any of your specific critiques to me directly via email - after reading the story in full, of course.


I think making them in public makes much more sense, since a much wider range of people will be able to comment on them.

Big-label mastering engineers don’t understand lossy formats

Reply #61
That's true. I did make a couple clarifying edits and fixed a typo too.

I took out the part of Mr. Meller's quote where he said that the "highs, lows and mids" were "filtered out completely". Once I realized that some of you took him literally rather than figuratively, I agreed it was a valid critique and felt his statements were clearer without it.

There were also two minor edits for style:

1) I got some good feedback in an email saying that the section about Shepherd's tests and the alternative tests was wordy and confusing in parts. I rephrased a couple of sentences to make it clearer and less verbose. The substance of the points did not change.

2) I acknowledged the criticism that it was unclear whether Apple's whitepaper explicitly recommends the use of equalizers in the mastering process. I still feel that's a semantic distinction, but I decided to add a couple words to make misinterpretation impossible. Again, the substance of the statement remains the same.

I'll take any valid criticisms to heart if they're specific. I also routinely clean up the copy in older articles if I'm alerted to punctuation errors or unclear sentences. If something is factually inaccurate, I'll even issue retractions and do follow-ups in future issues.

Since the above were not factual changes, I have no plans to do that in this case. Of course, I'm always happy to write more on the subject in the future, and I'm always looking for contributing writers with real expertise.

Chances are that I'm not going to look to anonymous axe-grinders to find valuable sources, or to publicly thank curmudgeons for their copy-editing recommendations and spell-checking services. That's just human nature. C'mon, you guys know all about "selection bias"* 

If you have an issue with a specific sentence, statement, or see a typo, feel free to shoot me an email. All that I ask is that you be human about it.

And as I said in the beginning, thanks for reading, and for the feedback.

-Justin

(* Yes, I know what "selection bias" actually means. This is what normal people call a "pun".)

Big-label mastering engineers don’t understand lossy formats

Reply #62
I'll try not to make this a long post as I've got a lot of things on my plate at the moment.

First, I'm delighted to see that the author is actively making an effort to improve his article.  I regret that I felt that he needed to be shamed into doing so.  Clearly there are better ways, even when a valid critique has been snubbed.

Regarding the article as a whole (which I've read at least three times in full, and had done so at least once before the author joined in on the discussion), I guess I have trouble with its purpose.  It appears as though it's about the new "Mastered for iTunes" process and tools and how they're supposed to make delivering high-quality downloads easier.  This is fine.  That the article is divided up to present two opposing viewpoints is fine too.  However, I have trouble with the way the sides are divided.  On one side are people who seem to be happy with the tools but clearly don't know how to use them (or exactly why they were even provided in the first place; that Rick Rubin is either taking credit or is being given credit can't be taken seriously).  On the other side is another person who also doesn't understand the point of the tools or how to use them properly.  I applaud that the author believes in double-blind testing methodology, but it's being presented as if it is for the end-listener and beneath mastering specialists.  It seems as if it's really not much more than an aside: an editorial by the author.

For me the issue here is not whether I'm for or against "Mastered for iTunes"; it is about presenting the topic in a meaningful and informative way.  My concern over whom I may count as "allies" or "enemies" is not about mastering, delivery formats, or even the state of audio in general; it has to do with the presentation of factual information.

A stronger article, more beneficial to the iTunes consumer, would dive into the precise reason for why Mastered for iTunes was created and would present information provided by Apple and those who developed the specific tools or those who develop codecs in general.  It would point out how the tools were intended to be used and whether the benefit of them will be realized based on the way the tools are actually being used.

As it is right now I would title the article, "Mastered for iTunes, a wasted opportunity."  Whether it was intended or not, it points out the very sorry state of how music is produced for digital consumption and that there appears to be little hope that it will get better.  Until Meller and his peers understand that equalizing to prep for lossy conversion is an unnecessary and therefore harmful process, they will continue to polish cars in total darkness.

Big-label mastering engineers don’t understand lossy formats

Reply #63
At least from my point of view (and as I mentioned in my first post), the real story is just how poorly Apple has managed to communicate the goals of the "Mastered For iTunes" program.  Earlier I mocked their description for its almost condescending tone toward the reader, but really I think even much of what they have tried to explain has been lost.  For example, in the original quote from Muller he attempts to determine what the frequency response of an AAC encoder is:

“I can tell you what doesn’t work,” he said. “One of my initial trial-and-error methods was to take digital fingerprints of both versions of the song, and then try to apply a [compensating EQ] to the CD version and pump that back through the AAC encoder.”

Now this is a perfectly logical thing to do to a speaker or a tube amplifier.  But it's just plain insane to expect that to work on a perceptual audio encoder.  No one with the slightest clue of what's going on would expect that to work.  But to someone still living in the pre-digital era, it's all they would have to go on.  Now, to Muller's credit, he does correctly deduce through experimental means that a perceptual audio encoder is very, very different from an amplifier:

"Well, the problem with that is that input does not equal output. It’s highly program dependent, and you rarely get the same thing twice in a row."

Unfortunately, while he has (thankfully!) given up trying to compensate for perceptual encoding with EQ, he has no idea why AAC encoding works differently than an amplifier.  The notion that it's actually analyzing a signal and converting it to a new form in a nonlinear, adaptive manner without actually altering the intensity per frequency is lost on him.  Thus, while Apple has given him tools, they have failed to explain to him adequately what it is those tools are meant to do or why he should be using them.  And so he (and most of the people quoted) seem to be grasping in the dark, trying to relate an ill-explained set of objectives handed to them by Apple to (irrelevant) experience using more conventional audio equipment like microphones and amplifiers.
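
As a crude illustration of why the compensating-EQ experiment was doomed, here's a toy Python sketch. The block-adaptive quantiser below is my own stand-in, not AAC, but it shares the property that matters: it's nonlinear and signal-dependent, while a compensating EQ is a linear, time-invariant filter, so it cannot undo such a process:

Code
import numpy as np

def toy_adaptive_coder(x, block=256, bits=4):
    # Quantise each block with a step size derived from that block's own peak level.
    # Not a real codec -- just signal-dependent and time-variant, like one.
    y = np.copy(x)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        step = (np.abs(seg).max() + 1e-12) / (2 ** (bits - 1))
        y[start:start + block] = np.round(seg / step) * step
    return y

t = np.arange(48000) / 48000.0
a = 0.3 * np.sin(2 * np.pi * 440 * t)
b = 0.3 * np.sin(2 * np.pi * 3000 * t)

# Linearity check: coder(a + b) vs coder(a) + coder(b).
diff = toy_adaptive_coder(a + b) - (toy_adaptive_coder(a) + toy_adaptive_coder(b))
print(np.max(np.abs(diff)))  # clearly non-zero: the process isn't linear, so no fixed EQ can "null" it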

I suppose that while we have criticized Rubin, Muller, Shepherd and company for not understanding their trade, in some sense this isn't really their fault.  They've been handed a complicated goal by Apple with very little explanation or guidance.  And while, yes, we would hope they would take the time to learn their tools even without guidance, perhaps given their time constraints, their age, and their other responsibilities this is unrealistic.

Big-label mastering engineers don’t understand lossy formats

Reply #64
Aside from db1989's point in the next post which I overlooked, I agree with you completely.

Unfortunately articles like these left by themselves will likely not accomplish much good until they focus on presenting the true intention behind the process and whether the goals are being met from a technical (rather than monetary) standpoint.

Big-label mastering engineers don’t understand lossy formats

Reply #65
And while, yes, we would hope they would take the time to learn their tools even without guidance, perhaps given their time constraints, their age, and their other responsibilities this is unrealistic.
their age

What does that have to do with it?

Actually, maybe it’s best just to pretend that bit isn’t there. A mud-fest will solve nothing, and I certainly don’t want to be the one to have started it.

Big-label mastering engineers don’t understand lossy formats

Reply #66
And while, yes, we would hope they would take the time to learn their tools even without guidance, perhaps given their time constraints, their age, and their other responsibilities this is unrealistic.
their age

What does that have to do with it?


Quite a lot.  Experience is a double-edged sword. It is a resource of potential contexts to illuminate a new situation, but it also constrains one's thinking by out-competing new ideas with past experience.  The more one specializes in a specific field, the harder it is to look at things from a new perspective and the more tempting it is to use ideas that have previously proven successful.  This isn't a dig at someone; it's just common sense.  Ideas that have worked before are likely to work again.

This problem is particularly acute in my line of work where we see the graying of scientists and the shutting out of new ideas:  http://online.wsj.com/article/SB1000142405...3334216604.html

But it's actually a more general trait of human existence.  I actually have never met Meller, nor do I know anything about his age.  But given that his go-to concept was linear, time-invariant systems, he almost certainly started his career more than 20 years ago, when time-variant processing was more difficult to implement (and thus less likely to be a significant factor in a new technique).  This is not to say that a younger person would have done better (and in fact I suspect that the newer generation is in some ways worse than the old...).  However, I don't think it's unfair to suggest that someone's experience could (among other factors) guide his or her choice of intellectual tools when confronting a new problem.  Actually, I would be surprised if it did not.

Big-label mastering engineers don’t understand lossy formats

Reply #67
It's true that some DRC processors can also introduce clipping. Sometimes clipping is introduced intentionally.


Thank you David. That was exactly the point I was trying to make when I wrote that Dave Fridmann's work is 'not clipped because it's loud. It's loud because it's clipped. That's a world of difference.'

The rest of the discussion on the definition of clipping seems a little ego-driven to me. The main point remains that extreme compression and limiting are forms of clipping, and that they are often used in that way today for aesthetic effect.
Ego? No. Just irritated when someone attempts to re-define words that have perfectly good definitions already. Please try to understand: you can have extreme compression and limiting without any clipping at all. Engineers have been over-using compression to get a particular "sound" for decades (at least five of them!). Clipping the mix was (until recently) very rare. Compare the "Any Time At All" mix vs. the "Revolution" guitar, if you're a Beatles fan.

Anyway, it's a side point to your article. Sorry to drag the thread off topic.

cheers,
David.

Big-label mastering engineers don’t understand lossy formats

Reply #68
PS: The Dynamic Range Meter results for Ms. Kadry's latest work are worse than Frances the Mute (DR7), Amputechture (DR6) and The Bedlam in Goliath (DR6).


She blames it on the artists:

Quote
an artist's vision for their album is sometimes not my decision even if it ends up going against what you think is right

Big-label mastering engineers don’t understand lossy formats

Reply #69
It's true that some DRC processors can also introduce clipping. Sometimes clipping is introduced intentionally.


Thank you David. That was exactly the point I was trying to make when I wrote that Dave Fridmann's work is 'not clipped because it's loud. It's loud because it's clipped. That's a world of difference.'


That may have been the point you were trying to make, but what you actually wrote was this:

Quote
Compression is clipping - In a very strict and literal sense. Whenever a compressor or limiter acts on peaks, it is inducing "clipping" by definition.


...which simply IS NOT THE CASE, and does not make the point you were trying to make.  Compression *may* cause clipping, or it may not.  It's all in how it's applied.  It does not cause clipping by definition, as you appear to say.

How about you just concede that?

As for what induces clipping: when the signal amplitude exceeds the digital 'container' capacity, clipping distortion results.  Alas for Spinal Tap, you can't 'go to 11' when the format only goes to '10'.  Put another way (and ignoring for a moment the more accurate, psychoacoustic definition of 'loud'), in the digital world a signal gets clipped *because it is too 'loud'* at some point in the production chain.  After that point you can't un-clip it, even if you reduce the peak amplitude.  The clipping distortion remains.
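
A quick Python sketch of that last point (a toy example using a simple hard clip): turning the level down after clipping leaves the flat tops in place, so the damage is permanent:

Code
import numpy as np

t = np.arange(48000) / 48000.0
original = 2.0 * np.sin(2 * np.pi * 440 * t)               # peaks 6 dB over full scale
clipped_then_lowered = np.clip(original, -1.0, 1.0) * 0.5  # clip, then turn it down afterwards

# The peaks are now well under full scale, but the waveform is still flat-topped:
# it no longer matches any scaled copy of the original.
print(np.max(np.abs(clipped_then_lowered - 0.5 * original)))  # ~0.5, not 0.0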

 
Quote
The rest of the discussion on the definition of clipping seems a little ego-driven to me. The main point remains that extreme compression and limiting are forms of clipping, and that they are often used in that way today for aesthetic effect.


Except, they aren't necessarily so.


[EDIT:  I see 2bdecided made much the same point much more succinctly.  Mods, feel free to delete this if it seems too redundant.]

Big-label mastering engineers don’t understand lossy formats

Reply #70
Since others are piling on while admittedly not adding to the discussion, I think it's more than appropriate to leave your post, since it does make an attempt to add to the discussion.

How about you just concede that?

It might ease the irony in the ego-driven comment.

Big-label mastering engineers don’t understand lossy formats

Reply #71
Perhaps TrustScience would have found our reactions more understandable if we just told him up-front how little many of us tend to respect mastering engineers, as a matter of engineering (as opposed to artistic) technical competence. Mastering engineers don't necessarily need a firm grasp of signal processing fundamentals to do their jobs well, yet this topic is probably more about signal processing than it is about artistic or subjective notions of sound quality.

The incorrect statements that appear to be made by respected mastering engineers in the article -- the premise of EQing a master to compensate for encoder coloration, using nulling comparisons as a metric for encoder quality, etc. -- are something a lot of us have come to expect. (cf: widespread professional misconceptions about loudness equalization, the audio properties of vinyl, high-res digital audio formats.....) Stated baldly, that's why a bunch of us non-mastering-engineers seem quite at ease critiquing mastering engineers about how they do their jobs.

In that context, I find TrustScience's situation as a journalist rather unenviable. If he were going to placate us, he would probably need to interview mastering engineers while operating under the premise that he might know more about the topic than they do...

Big-label mastering engineers don’t understand lossy formats

Reply #72
In that context, I find TrustScience's situation as a journalist rather unenviable. If he were going to placate us, he would probably need to interview mastering engineers while operating under the premise that he might know more about the topic than they do...

The problem is that some of the "engineers" simply perpetuate FUD about lossy formats/encoders. I guess most have read the article in the OP by now; I found another piece by Scott Hull, the owner of Masterdisk:
Quote
The bottom line is that experienced engineers can -- and do -- make noticeably better sounding AAC files than you get using the standard, automatic encoder.

Warranted, "better sounding" is a loose term. But the point of AAC is not "better sound", but transparency. And regarding that no amount of mastering engineering tricks can improve on AAC, especially not ones by those who don't even understand that digital clipping of CDs at full scale is bad (which isn't an artistic choice, but bad engineering). They probably are talking about creating a new master specifically for AAC encoding. With that they can manage to create a "better sounding" AAC file, but sure as hell the master they derived that from will also "sound better" than the previous master.

It's one thing if you don't understand something and shut up about it. It's far worse not to understand something yet have strong opinions about it, and to spout your misconceptions to the public - all the while trading on the weight of your apparent knowledge in the field because you mastered some CDs before and have a big sign up front.

Big-label mastering engineers don’t understand lossy formats

Reply #73
Warranted, "better sounding" is a loose term. But the point of AAC is not "better sound", but transparency. And regarding that no amount of mastering engineering tricks can improve on AAC

If you are the painter and decide what scene and style to paint in, surely your choices can serve to hide flaws in your selection of colors?

-k

Big-label mastering engineers don’t understand lossy formats

Reply #74
Warranted, "better sounding" is a loose term. But the point of AAC is not "better sound", but transparency. And regarding that no amount of mastering engineering tricks can improve on AAC

If you are the painter and decide what scene and style to paint in, surely your choices can serve to hide flaws in your selection of colors?

My point is that if master A sounds better than master B when encoded to AAC, A will sound better than B when left lossless too, unless you encounter AAC artifacts and you are able to defeat them with your new master. You could then also try to diminish that problem by choosing a higher encoding bitrate. I doubt the mastering engineers consistently find AAC artifacts at common bitrates.