So I've become involved in a rather colorful argument
(I'm Publius in the thread) with somebody on stevehoffman.tv. The original thread revolved around shooting down an old audiophile canard: the claim that subsample delays cannot be represented in PCM. In the course of that debate, I've begun to question a couple of things.
- Is it ever accurate to use the term "time resolution" in any sort of technical context? To the best of my knowledge, it has no universally agreed-upon technical definition. Most of the uses I've seen are either SACD/DVD-A marketing fluff or descriptions of FFT window lengths. I'm tempted to just go quasi-logical-positivist on everybody and say that it is a completely meaningless phrase.
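To illustrate the FFT-window sense of the term: in that usage, "time resolution" is set by the analysis window length, not by the sampling period. Here's a minimal sketch (plain Python, with a hypothetical click signal and hypothetical window sizes) showing that a click can only be localized to within one window's duration, regardless of the 44.1 kHz sample rate:

```python
import math

fs = 44100
impulse_at = 5000              # a click at sample 5000 (hypothetical test signal)
x = [0.0] * fs
x[impulse_at] = 1.0

def frame_energies(sig, win):
    # Non-overlapping rectangular windows; each frame spans win/fs seconds.
    return [sum(s * s for s in sig[i:i + win]) for i in range(0, len(sig), win)]

for win in (256, 4096):
    e = frame_energies(x, win)
    frame = max(range(len(e)), key=e.__getitem__)
    # All the frame-level analysis can say is that the click lies
    # somewhere inside this frame:
    lo, hi = frame * win / fs, (frame + 1) * win / fs
    print(f"win={win}: click located between {lo*1000:.2f} and {hi*1000:.2f} ms "
          f"(uncertainty {win/fs*1000:.2f} ms)")
```

The "resolution" here is win/fs seconds, so it improves by shortening the window, not by raising the sample rate; which is why this sense of the phrase says nothing about PCM at a given rate being timing-limited.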
- Is there any meaningful time-domain constraint on audio quality that is directly related to the sampling period? Subsample delays (as I've shown above) are not meaningfully related. Bandwidth is a frequency-domain attribute. Pre-echo potentially gets more audible at lower sampling rates, but it is not a concern with sigma-delta ADCs, and its audibility at 44.1 kHz is debatable to begin with. Some DSP operations may be harder to implement at lower sample rates, but most of the issues involved seem implementation-related rather than fundamental. I suspect there are no clear general limits on what can and cannot be accomplished in PCM, except in very domain-specific or system-specific situations; so any claim that 44.1 kHz is inherently limited in some way other than bandwidth should be regarded with skepticism.
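To make the subsample-delay point concrete, here's a minimal sketch in plain Python (the tone frequency, delay, and duration are my own arbitrary choices): two PCM captures of the same 1 kHz sine, one delayed by 0.3 of a sample period, and the delay recovered from the samples alone by comparing phase at the tone frequency with a single-bin DFT.

```python
import cmath
import math

fs = 44100.0           # sample rate (Hz)
f = 1000.0             # test tone (Hz), well below Nyquist
tau = 0.3 / fs         # delay of 0.3 samples -- far below one sample period
N = 4410               # 100 ms of audio (an integer number of tone periods)

# Two PCM captures of the same sine, one delayed by a fraction of a sample.
x = [math.sin(2 * math.pi * f * (n / fs)) for n in range(N)]
y = [math.sin(2 * math.pi * f * (n / fs - tau)) for n in range(N)]

def phase_at(sig, freq):
    # Single-bin DFT: correlate the signal with a complex exponential at `freq`.
    acc = sum(s * cmath.exp(-2j * math.pi * freq * n / fs)
              for n, s in enumerate(sig))
    return cmath.phase(acc)

# The delay survives sampling: recover it from the phase difference.
est = (phase_at(x, f) - phase_at(y, f)) / (2 * math.pi * f)
print(f"true delay {tau*1e6:.3f} us, recovered {est*1e6:.3f} us")
```

The recovered delay matches the true one to within floating-point error, even though it is a small fraction of the ~22.7 µs sample period. This is just the usual point that a band-limited signal's timing is encoded continuously in the sample values, not quantized to the sample grid.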