You mean we should move init into the kernel? Linux isn't a microkernel, after all. :P
Originally Posted by WorBlux
A daemon/server is the wrong way to handle hardware events, IMHO. Linux actually provides a much cleaner and simpler approach which udev ignores:
a hotplug "helper" command, set up by (for example):
echo /sbin/mdev >/proc/sys/kernel/hotplug
This is what mdev does.
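For illustration, a minimal helper could look something like this (just a sketch -- the real mdev does far more -- but the kernel really does pass the subsystem as $1 and the event details in environment variables like ACTION and MODALIAS):

#!/bin/sh
# Minimal hotplug helper sketch -- run by the kernel for every uevent.
case "$ACTION" in
    add)
        # Try to load a driver for newly added hardware.
        [ -n "$MODALIAS" ] && modprobe "$MODALIAS" 2>/dev/null
        ;;
    remove)
        : # device-node cleanup would go here
        ;;
esac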
System logger -- current system loggers don't really log all that well, and improvements can be made. By integrating the interface into the init system, you can verify which process a message is actually from. Rotation is integral rather than an afterthought handled by a cron job. Both things are hard to do well unless you have an interface built into the init system for them.
Session manager. Probably extraneous, but also modular and optional. I can see the advantage of having the session manager talk to init. On systems set up for a single user or to autologin, this crosstalk can prioritize the services needed to get to a usable session first.
I can't speak for "the average developer", but I can write a working init.d script in a few minutes and have a pretty good expectation of it working right. You can also do some testing in regular userland, depending on the daemons involved...
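Something along these lines, say (a sketch only -- "exampled" is a made-up daemon, and start-stop-daemon is the Debian-style helper; plain kill/nohup works elsewhere):

#!/bin/sh
# /etc/init.d/exampled -- sketch of a classic init script
DAEMON=/usr/sbin/exampled
PIDFILE=/var/run/exampled.pid

case "$1" in
    start)   start-stop-daemon --start --pidfile "$PIDFILE" --exec "$DAEMON" ;;
    stop)    start-stop-daemon --stop --pidfile "$PIDFILE" ;;
    restart) "$0" stop; "$0" start ;;
    *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 1 ;;
esac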
Linux, however, especially on the desktop, is moving away from Unix. The kernel is growing some really cool features; let's make use of them to solve problems traditionally associated with Unix.
From a developer's perspective, a single unit file is a lot easier to write than a dozen different System V scripts.
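For comparison, a unit file covering the same ground is a handful of declarative lines (again a sketch, with the same made-up "exampled" daemon):

# /etc/systemd/system/exampled.service -- sketch
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target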
No experience with unit files.
Then why is KMS so highly esteemed?
In addition, some of the most quintessentially Unix programs weren't designed that way. Take X, for instance. It used to do GPU mode-setting, input handling, graphics memory management, and a whole lot of other stuff. It simply made sense to do it that way, as it solved a lot of problems.
And what's the point of Modular Xorg?
Unix also has several other approaches to IPC (for example, sockets...). Some of those are well-suited to this scenario.
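For instance, two processes can talk in both directions over a Unix-domain socket, with no pipeline involved (a sketch assuming socat is installed; the socket path is made up):

# Server: echo back whatever clients send.
socat UNIX-LISTEN:/tmp/demo.sock,fork EXEC:/bin/cat &
# Client: a request/response round trip, not a one-way pipe.
echo hello | socat - UNIX-CONNECT:/tmp/demo.sock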
The primary UNIX philosophy is to do what works. You don't necessarily want small programs piped together where the inter-process communication is not linear and unidirectional.
A single large program with 5 functions and 5 options per function will have 25 options. The number of ways those can be combined is pretty large (several million if you use up to 10 options at once). A single binary (one address space) has the potential for various problems that multiple binaries (multiple address spaces) may not face, so you have a chance of hitting an untested corner case.
If each program uses a defined interface (the point of standardization), you have ~30 possible combinations of flags per program, giving about 160 scenarios to test, as long as each program properly handles the interfaces. (Note: if buffer overflows and similar bugs were impossible, the same would be true of a suitably modular single binary. Unfortunately, they aren't impossible.)
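A quick sanity check of those figures (summing C(25,k) for k=0..10; any awk will do):

awk 'function C(n,k, r,i){r=1; for(i=1;i<=k;i++) r=r*(n-i+1)/i; return r}
     BEGIN{s=0; for(k=0;k<=10;k++) s+=C(25,k); printf "%.0f\n", s}'
# prints 7119516 -- about 7 million, versus 5 * 2^5 = 160 for five modular programs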
If you don't use a defined interface, you're doing something wrong.
Now, there are some things systemd is doing right/modularly. But when init might get its own copy of fsck (quote not handy, but ISTR something about plans for that on Poettering's website), there is something that's _not_ modular. And with 10+ different filesystems on Linux, that's simply a bad idea.
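(For reference, the stock fsck is itself just a modular front-end: it looks up the filesystem type and execs a per-filesystem checker. The device names below are only examples.)

fsck -t ext4 /dev/sda1   # effectively runs fsck.ext4 /dev/sda1
fsck -t vfat /dev/sdb1   # effectively runs fsck.vfat (from dosfstools)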
In fact, I consider any project where bundling fsck into init could even be considered too misguided to trust with critical system components.