I hope I'm wrong, but I think Linux kernel developers are forgetting to maintain the quality of the code. Adding features to the system is important, but quality should be the top priority.
My feeling is that the extent of hotplugging we have had has been "hacked" on there incrementally, and this is the reason it is less-than-stellar. The current implementation rewrite is a consequence of this and probably will make the quality go up, not down.
Forgive my ignorance, but what exactly is CPU hot-plugging? Is it, as the name implies, installing and removing a CPU while the machine is powered on? If so, I guess this is server-level stuff; I can't see myself needing to (or indeed being able to) swap out the CPU on my desktop.
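For what it's worth, "hotplug" in the kernel also covers *logical* hotplug: taking a CPU offline and back online in software, without touching the hardware, which is used for power management and for isolating a misbehaving core. A rough sketch using the standard sysfs interface (the paths below are the documented Linux interface; actually offlining a CPU requires root, so those commands are shown as comments only):

```shell
# Show which logical CPUs are currently online, e.g. "0-3"
cat /sys/devices/system/cpu/online

# Taking cpu1 offline and bringing it back would look like
# (needs root, and cpu0 is often not offlinable):
#   echo 0 > /sys/devices/system/cpu/cpu1/online
#   echo 1 > /sys/devices/system/cpu/cpu1/online
```

So even on a desktop the hotplug code paths get exercised, for instance during suspend/resume, which offlines all secondary CPUs.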
What strikes me is this: if, as you say, it's aimed at servers (and that does make sense), then why was the current system such a botched, messy, untidy job? I mean, if there's one setting where multi-CPU support and hotplugging might be useful, it's servers, supercomputer clusters, and that sort of thing. And that's the kind of territory where Linux has traditionally been the go-to choice, if I recall correctly. It's a tad ironic, is all I'm saying.
- subsystem starts simple and grows incrementally
- the design abstraction that was probably OK for the initial code proves insufficient to deal with the growing complexity
- developers get together at conference and agree on how it should be done
- one developer writes first pass of code following new model
- old code tossed over nearest clump of cactus and badmouthed even by the people who wrote it
The difference here seems to be that the author of the new code also authored a particularly colourful description of the old code.