This viewpoint is common and intuitive, but unfortunately is a complete fallacy, for the following reason.
While analogue appears at first glance to be of infinite resolution, it is susceptible to noise which, by definition, is unpredictable. The amount of real information present in an analogue signal is fundamentally limited by the amount of noise that is present. Analogue may be considered infinite-resolution in the sense that the noise will average out over a long period of time. So, say, if you record a continuous analogue sine wave over a huge number of cycles, and then average all those cycles into a single cycle, the sine component will remain while the noise will average out. Doing this over an infinite length of time will remove the noise entirely, and I think this is what you are alluding to.
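The averaging argument above can be sketched numerically. The following is a minimal illustration (all names and parameters are my own, not from the post): a sine is buried in uniform noise, many noisy cycles are averaged into one, and the residual noise shrinks roughly as 1/sqrt(number of cycles).

```python
import math
import random

random.seed(0)
SAMPLES_PER_CYCLE = 64
N_CYCLES = 10_000
NOISE_AMPLITUDE = 0.5  # peak noise, comparable to the signal itself

# Accumulate many noisy cycles sample-by-sample.
acc = [0.0] * SAMPLES_PER_CYCLE
for _ in range(N_CYCLES):
    for i in range(SAMPLES_PER_CYCLE):
        clean = math.sin(2 * math.pi * i / SAMPLES_PER_CYCLE)
        noisy = clean + random.uniform(-NOISE_AMPLITUDE, NOISE_AMPLITUDE)
        acc[i] += noisy

averaged = [s / N_CYCLES for s in acc]

# Peak deviation of the averaged cycle from the clean sine.
residual = max(abs(averaged[i] - math.sin(2 * math.pi * i / SAMPLES_PER_CYCLE))
               for i in range(SAMPLES_PER_CYCLE))
print(f"peak residual after averaging: {residual:.5f}")
```

With 10,000 cycles the residual drops from ~0.3 (one cycle) to a few thousandths; only infinite averaging would remove it entirely.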
But, this is true of digital signals too -- if they have been properly dithered. This is, indeed, the very purpose of dither, because it means that quantisation noise will become, for all practical purposes, random noise. Just like in an analogue system. Because of this, dithered digital processing is infinite-resolution in the same sense as analogue processing. At the same time, it retains all of its other advantages over analogue, such as error-free storage and transmission, the ability to be processed any number of times without adding any distortion, etc etc. Analogue has literally no advantages over digital if the digital is done correctly.
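To make the dither point concrete, here is a small sketch (quantiser and names are illustrative assumptions of mine): a DC signal at a quarter of an LSB is erased entirely by plain quantisation, but with TPDF dither the quantiser output averages to the true sub-LSB value, i.e. the quantisation error behaves like random noise rather than deterministic distortion.

```python
import random

random.seed(1)
LSB = 1.0  # quantiser step size

def quantise(x):
    # plain round-to-nearest quantiser
    return round(x / LSB) * LSB

def quantise_dithered(x):
    # TPDF dither: sum of two uniform sources, each +/- 0.5 LSB
    d = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return quantise(x + d)

# A DC signal at 0.25 LSB -- below one quantiser step.
signal = 0.25
N = 100_000

undithered_mean = sum(quantise(signal) for _ in range(N)) / N
dithered_mean = sum(quantise_dithered(signal) for _ in range(N)) / N

print(undithered_mean)  # 0.0: the sub-LSB signal is simply gone
print(dithered_mean)    # close to 0.25: dither linearises the average
```

The undithered quantiser outputs 0.0 every single time, so the signal vanishes; the dithered one recovers it in the mean, which is exactly the "infinite resolution on average" property described above.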
Which leads me to my point regarding fixed vs floating point. In my view, both Paul and Datenwolf make correct assertions. Floating-point's advantage over fixed is that the programmer never needs to worry about clipping during mixing. No disputing that. The exponent will just grow as it needs to. And, since x86 processors have already paid the cost of the extra silicon required, floating-point isn't even slower than fixed-point on x86. Also true. But what I think has been missed is that floating-point cannot be linearised with dither, and 32-bit floating-point doesn't have enough resolution that we can simply ignore the nonlinearity. If you sum, say, 256 24-bit signals into a 32-bit float, you are going to lose approximately the bottom 8 bits of the precision present in the originals. And not in a benign way -- it will be truncation error and will manifest as distortion. It is probably acceptable insofar as it is probably inaudible -- but a single instance of 32-bit floating-point quantisation distortion is eminently measurable at the 24-bit level (I have measured it), so 256 accumulations of it is a quite scary proposition, in my view, for anything purporting to be "high-end audio".
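The truncating nature of float32 accumulation can be shown directly. This sketch (my own illustration, using Python's `struct` to emulate IEEE 754 single-precision rounding) pushes an accumulator to 2**24, the point where the 24-bit mantissa runs out, and then shows that small additions are simply discarded rather than averaged in -- whereas 64-bit floats handle the same sum exactly.

```python
import struct

def f32(x):
    # round a Python double to IEEE 754 single precision and back
    return struct.unpack('f', struct.pack('f', x))[0]

# float32 has a 24-bit mantissa. Once the accumulator reaches 2**24,
# adding 1.0 lands exactly halfway between representable values, and
# round-to-nearest-even sends it straight back down every time: the
# small contributions are truncated away, not averaged out.
acc = f32(float(2**24))
for _ in range(256):
    acc = f32(acc + 1.0)
print(int(acc))    # 16777216: all 256 increments were lost

# The same loop in 64-bit floats (Python's native float) is exact.
acc64 = float(2**24)
for _ in range(256):
    acc64 += 1.0
print(int(acc64))  # 16777472 = 2**24 + 256
```

This is the deterministic, signal-correlated error being described: it cannot be dithered away, because the rounding decision is locked to the data rather than randomised.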
FWIW, this is also the reason that the majority of professional DSP engineers (or, at least, all the ones I've spoken to) think that converting fixed-point to floating-point with anything other than an exact power-of-two multiplication is a dreadful hack.
In the end, 32-bit floating-point is a risky proposition for precision audio purposes. It is very definitely unsuitable for certain types of laboratory environment, is probably fine for material that will only be listened to, but is, in my opinion, somewhere in between for recording/mixing/mastering purposes, which need a little extra performance. 64-bit floating-point would be fine, because the distortion would be at a vanishing level, and on x86 it is possibly not even that slow. 32-bit fixed-point is generally great for audio, but Paul has raised a real case where it won't have sufficient headroom.