On a desktop system, a slightly reduced sustained transfer rate is a small price to pay for having the system always respond quickly to user input, even when under heavy I/O load. For this reason, I use BFQ on my workstation.
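For anyone who wants to try it, here's a minimal sketch of checking and switching the scheduler on a modern kernel (assuming a device named sda; the kernel shows the active scheduler in brackets):

```shell
#!/bin/sh
# On a real system you would read /sys/block/sda/queue/scheduler, which
# lists the available schedulers with the active one in brackets, e.g.
# "mq-deadline kyber [bfq] none". Here we parse a sample of that string.
line="mq-deadline kyber [bfq] none"
active=$(printf '%s\n' "$line" | grep -o '\[[^]]*\]' | tr -d '[]')
printf '%s\n' "$active"    # prints "bfq"

# To switch at runtime (as root):
#   echo bfq > /sys/block/sda/queue/scheduler
```

Note that BFQ has to be built into (or loaded as a module for) your kernel, and the change doesn't persist across reboots unless you add a udev rule or boot parameter.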
As far as I can tell, there's really not much to see here, except for one thing.
One thing I noticed, though, was the difference between NGINX and Apache: 27,000 vs. 17,000 pages served.
I'm no web server expert, but is there really that much of a difference in normal use cases?
Usually benchmarks are not all that comparable to real world performance, but this difference seems huge to me.
Serving a web page is serving a web page, right? So they should be quite equal, performance-wise?
If anyone can enlighten me on this, it would be much appreciated, as I don't see why Apache would have such a large market share if the performance gap were this big.
Yes, there is a big performance difference between default Apache and other servers' defaults, due to their process models (fork vs. event vs. threads). You may be able to configure Apache to do better, but by default it's quite bad. I haven't checked in a while, but IIRC the more performant Apache modes were build-time options, and incompatible with some often-used plugins.
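For what it's worth, on Apache 2.4 the MPM is a loadable module, so switching no longer requires a rebuild. A sketch of checking and switching it (the a2dismod/a2enmod helpers are Debian/Ubuntu conventions; mod_php in particular still forces the slower prefork model because it isn't thread-safe):

```shell
# Show which MPM this Apache build/config is currently using
apachectl -V | grep -i 'mpm'

# On Apache 2.4 with Debian-style config helpers, switch from the
# process-per-connection prefork MPM to the event MPM:
a2dismod mpm_prefork
a2enmod mpm_event
systemctl restart apache2
```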
Market share depends on a lot more than performance: inertia, "what everyone else uses", support for some plugin that others don't have. Also, performance on dynamic pages is often quite different from performance on static pages.
The desktop also freezes in Windows 7 under heavy I/O. I see this all the time when running Blender or Firefox on Windows; you get the typical "not responding" message on the app's window. There's nothing you can do about it besides buying an SSD, then it flies.
I see Linux struggling when copying more than one file at a time. The strangest thing, though, is that Windows seems just as bad, but OpenIndiana seems fine, on the same hard disk! It's a Seagate 3 TB disk (1 TB/platter) which can do about 170 MB/s, and it's over a gigabit network, so the disk should be able to keep up if it's not doing too many random seeks (mostly sequential transfers).
I've also noticed that Samba on Linux can't keep up with high-speed networks (over 1 gigabit/sec) and can't actually match SSD performance, as it uses a single thread with heaps of CPU. I've tried playing with async options to no avail so far. (Using InfiniBand, max 16 gigabit network; NFS does around 1.4 gigabytes/sec cached, it's just Samba that's broken. It'd probably be faster if I swapped slots with the video card, which is running at 4x PCIe on one host atm.)
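For reference, the async knobs I've been playing with are real smb.conf options, though in my experience the gains vary a lot; SMB3 multichannel (Samba 4.4+) is the one most likely to help, since it lets a client spread the load over multiple connections. A sketch of the relevant config fragment:

```
[global]
    # Let SMB3 clients open multiple connections (Samba >= 4.4)
    server multi channel support = yes
    # Hand reads/writes of any size to the async I/O path
    aio read size = 1
    aio write size = 1
```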