Bufferbloat To Be Fought In Linux 3.3 With BQL
Another feature coming to the Linux 3.3 kernel is Byte Queue Limits (BQL), which attempts to fight "bufferbloat" in networking.
Byte Queue Limits is reported to bring significant performance improvements across nearly all Linux packet schedulers and AQMs (Active Queue Management algorithms). Byte Queue Limits is a way to cap a network controller's hardware transmit queues by number of bytes rather than number of packets, which can reduce bufferbloat. A much more detailed description of BQL can be found on the 2011 Linux Plumbers Conference page. This work was merged into the Linux 3.3 kernel with the "net-next" pull.
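For the curious, BQL exposes its per-queue state and limits through sysfs. A minimal sketch of poking at those knobs, assuming a BQL-capable driver and an interface named eth0 (substitute your own NIC; the paths are per transmit queue):

```shell
# List BQL's per-queue knobs for the first hardware TX queue of eth0
ls /sys/class/net/eth0/queues/tx-0/byte_queue_limits/

# Current dynamically-computed byte limit for that queue
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit

# Bytes currently queued to the hardware ("inflight")
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/inflight

# Upper bound that the dynamic limit is allowed to reach; lowering it
# (as root) forces the hardware queue to stay shallower
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max
```

By default BQL sizes the limit dynamically, so most users never need to touch these files; limit_max is mainly interesting for experimentation.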
Dave Taht, a Linux kernel network developer working to fight the bufferbloat problem, has written in to Phoronix about BQL in the Linux 3.3 kernel. Since networking isn't one of my areas of interest, I'll leave it at this and let those interested read more about Linux Byte Queue Limits from his provided resources.
You may or may not be aware of the latest efforts towards defeating 'bufferbloat' [1] that have gone into the linux 3.3 kernel.
They are - Byte Queue Limits [2], and huge improvements across nearly all the packet schedulers and AQMs in Linux.
The test results thus far for 'latency under load' are *compelling*.
http://www.teklibre.com/~d/bloat/pfifo_fast_vs_sfq_qfq_linear.png
Someone doing more extensive testing of the various subsystems affected... network I/O, network filesystems, web performance, VoIP... would be *comforting*. I'm curious whether it would be possible to sign you up to evaluate some scenarios using this new stuff?
(after this test, SFQ was fixed to perform equivalently to QFQ [3], and in either case, now, we see about a 65x improvement vs the default pfifo_fast qdisc... and there is more in the loop for 3.3, notably SFQRED [5])
While I am writing some tests of my own, my efforts are focused on fixing the home router disaster [4]. The beneficial effects of all the new stuff extend out to servers and desktops as well, but I lack the hardware and time to play with your test suite all that much, although I'm going to try a few things in the coming weeks.
I can, however, suggest several useful test scenarios for you... this is the simplest:
Build a couple 3.3 kernels, test for a baseline, then in order:
1) switch the qdiscs to 'sfq' (tc qdisc add dev whatever root sfq) run your network related tests
2) Turn off GSO and TSO (via ethtool) and with sfq on, run your network related tests.
3) Hammer down on BQL's controller some with max_limit
There are many other options available to play with such as qfq, and sfq can be tuned up to handle bigger workloads soon [5] but that requires somewhat more setup than what I describe above.
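The three steps above can be sketched roughly as follows. This is an illustrative sequence, not Taht's exact commands: the interface name eth0 and the 30000-byte cap are assumptions you would substitute for your own hardware, and note that the BQL sysfs knob the email calls "max_limit" is actually spelled limit_max:

```shell
# Assumed interface name; replace with your NIC
IF=eth0

# Step 1: replace the default pfifo_fast qdisc with SFQ,
# then re-run the network-related benchmarks
tc qdisc add dev $IF root sfq

# Step 2: with SFQ still in place, turn off generic and TCP
# segmentation offload, then re-run the benchmarks
ethtool -K $IF gso off tso off

# Step 3: clamp BQL's upper byte limit on the hardware TX queue
# (must run as root; 30000 bytes is an arbitrary example value)
echo 30000 > /sys/class/net/$IF/queues/tx-0/byte_queue_limits/limit_max
```

To undo step 1 later, `tc qdisc del dev $IF root` restores the default qdisc; multi-queue NICs have one tx-N directory per hardware queue, so step 3 would be repeated for each.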
I figure fixing bufferbloat would be of deep interest to your readers, particularly the network gamers.
Footnotes:
1: two bloat articles in acm queue
http://queue.acm.org/detail.cfm?id=2076798
http://queue.acm.org/detail.cfm?id=2071893
2: http://linuxplumbersconf.org/2011/ocw/proposals/171
3: http://www.spinics.net/lists/netdev/msg184613.html
4: http://www.bufferbloat.net/projects/cerowrt
5: http://www.spinics.net/lists/netdev/msg185147.html
If any Phoronix readers do some further exploring of Byte Queue Limits, feel free to share your findings within the forums.