
Thread: BFS Scheduler Benchmarks

  1. #11
    Join Date
    Sep 2009
    Posts
    1

    Default

As an RT kernel user for audio work (under Ubuntu Studio), I'm interested in lower latencies under heavy load. But like other people here, I wonder whether the benchmarks presented here really evaluate the latency benefits of a new scheduler.
    Solitary's suggestion seems better (and fun), and as an additional latency performance indicator, it could be interesting to compare the number of buffer underruns between BFS and CFS for a standardized recording session.
    I'll inform the ubuntustudio-devel mailing list about this article, maybe something interesting can arise.

    Is there anyone here with an RT kernel who can comment on general responsiveness?
    Thanks.

  2. #12
    Join Date
    Jun 2009
    Posts
    14

    Default

    --- a/kernel/kthread.c
    +++ b/kernel/kthread.c
    @@ -16,7 +16,7 @@
    #include <linux/mutex.h>
    #include <trace/events/sched.h>

    -#define KTHREAD_NICE_LEVEL (-5)
    +#define KTHREAD_NICE_LEVEL (0)


    In the BFS patch, replace the 0 with -5 and benchmark again.
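
For intuition about what this one-line change means (my illustration, not part of the patch): under mainline CFS, a runnable task's CPU share is proportional to its nice weight, and the kernel's prio_to_weight table assigns nice -5 a weight of 3121 versus 1024 at nice 0. A rough sketch of two tasks competing for one CPU:

```python
# CFS nice-to-weight values from the kernel's prio_to_weight table
# (each nice step changes the weight by roughly 1.25x).
WEIGHT = {-5: 3121, 0: 1024}

def cpu_share(my_nice, other_nice):
    """Fraction of CPU one runnable task gets against one competitor."""
    w = WEIGHT[my_nice]
    return w / (w + WEIGHT[other_nice])

# A nice -5 kthread competing with a nice 0 user task:
print(round(cpu_share(-5, 0), 2))   # the kthread gets about 3/4 of the CPU
# With KTHREAD_NICE_LEVEL set to 0, both compete equally:
print(round(cpu_share(0, 0), 2))    # 0.5
```

So reverting the 0 back to -5 gives kernel threads roughly three times the weight of ordinary tasks, which could plausibly shift the benchmark results.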

  3. #13
    Join Date
    Jan 2009
    Location
    UK
    Posts
    331

    Default

    Quote Originally Posted by sega01 View Post
    Nice post, but it would have been nice to see BFS compared to, say, deadline (and maybe Anticipatory). I've been using deadline for ages and it has been quite good to me (it sounds like a possibly similar design to BFS, too), and I wonder how close in performance it is to BFS. But thanks for letting me know about BFS; it might be useful (I just want to know if there is much difference from deadline to BFS).
    I don't see how you can create a meaningful benchmark for comparing a process scheduler to a disk IO scheduler.

  4. #14
    Join Date
    Mar 2007
    Location
    West Australia
    Posts
    356

    Default

    I don't believe this was a thorough comparison of 3D application performance, like games. Maybe a few more would have helped? Also, why the high resolution for pandman? It should be run at the lowest resolution so the CPU is the bottleneck, not the GPU.

  5. #15
    Join Date
    Sep 2009
    Posts
    2

    Default

    Wow, I'm sorry.

    I misunderstood the article and misremembered a few things. I was thinking of CFQ; I thought BFS was an I/O scheduler. Sorry!

    Much more interested in BFS now.

    Thanks for pointing that out, but I'm sorry for my error.

    Thanks,

  6. #16
    Join Date
    Dec 2008
    Posts
    17

    Exclamation This test reinforces the wrong way of doing things

    Many have hinted at the problem with this benchmark and with Ingo Molnar's treatment of Con's contribution: Con is not interested in pushing disk throughput or CPU performance. The point of BFS is a system that does not skip, lock up, or drop frames under high CPU load. Despite great advances in scheduling and ever-faster CPUs, even multi-core systems, we still experience latency when several things happen at once. These could be cron jobs like man-db or updatedb blocking the disk, compilations running in the background, or many other uses of the CPU that should be reasonably fast but _never_ block the user interface. So far, optimizations have aimed to make the system fast in absolute terms and fair in the sense that all processes get equal access to the CPU. What we need instead is for certain processes to take a (short) break whenever we use the mouse or keyboard, watch a video, or listen to music.

    Benchmarks measuring throughput are detrimental to Con's effort, as they reinforce the way things have been done before: squeezing the last bit out of the CPU at the expense of the user experience. Being fast in absolute terms makes my experience of the system worse: I am annoyed by unresponsive UIs or dropped frames, while I am unable to perceive whether a certain process took a few more seconds to complete. I am really surprised that BFS did so well in this test. Nonetheless, this speaks neither for nor against BFS, as its goal is completely unrelated.

  7. #17
    Join Date
    Aug 2008
    Posts
    99

    Default

    I wish I had time to do it myself, but I'd like to see some PTS targets that try to measure scheduler performance and latency. The RT Linux wiki has some realtime performance tests (like cyclictest), and various mailing list discussions of BFS and the old SD revealed efforts by some others. Also, PTS could run a compilation test in the background while measuring 3D performance, and if any games report full statistics on their frame times (i.e. minimum frame time (=maxfps), maximum frame time (=minfps), mean, standard deviation, maybe even a histogram), add those results to a graph.
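
The frame-time statistics this post asks for are straightforward to compute once a game reports per-frame times. A minimal sketch (function and field names are my own, not anything PTS defines):

```python
import statistics
from collections import Counter

def frame_time_stats(frame_times_ms):
    """Summarize per-frame render times, in milliseconds.

    The minimum frame time corresponds to the maximum FPS,
    and the maximum frame time to the minimum FPS.
    """
    return {
        "min_ms": min(frame_times_ms),          # best frame  -> max fps
        "max_ms": max(frame_times_ms),          # worst frame -> min fps
        "mean_ms": statistics.mean(frame_times_ms),
        "stdev_ms": statistics.stdev(frame_times_ms),
        # crude histogram: count of frames per whole-millisecond bucket
        "hist": Counter(round(t) for t in frame_times_ms),
    }

# Example: a mostly-60fps run (~16.7 ms/frame) with one 50 ms stall.
# The mean barely moves, but max_ms exposes the stall a plain FPS
# average would hide.
print(frame_time_stats([16.7, 16.9, 16.5, 50.0, 16.8])["max_ms"])  # → 50.0
```

A single stutter like this is exactly what a scheduler comparison should surface, and it vanishes entirely in an average-FPS number.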

  8. #18
    Join Date
    Sep 2009
    Posts
    41

    Default

    @mutlu_inek: great post.

    It's really depressing how all benchmark suites do nothing but measure throughput, without a single remark about responsiveness.
    I've tried various distros over the years, and while the support and setup have surely come a long way, Linux still stands no chance against Windows when it comes to UI responsiveness. Heck, I would go so far as to claim that WinXP on an 800 MHz CPU runs way better (as in never having to wait for the system to accept your input or skip audio) than any Linux setup I've tried (like my 2.5 GHz dual-core). I went through various variables, like ATI/NVIDIA hardware and open-source/proprietary drivers, but this has always proven true for me.

    I'm aware that benchmarks for responsiveness are hard to come by or create. But for a quick comparison you could lower the audio buffer and see when it starts skipping under CPU load, then compare the values for both schedulers. Having a >0.5 second delay when changing the volume (Ubuntu, and it still skips without noteworthy CPU load) sure is a telltale sign of an unresponsive system.

    Looking at pure CPU tests surely is the wrong idea...
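
The audio-buffer idea above can be prototyped without any real audio stack: treat each buffer refill as a deadline of buffer_frames / sample_rate seconds, and count every missed deadline as an underrun. A simulation sketch (the busy loop is a stand-in for real DSP work; names are mine):

```python
import time

def count_underruns(buffer_frames=256, sample_rate=48000,
                    periods=100, work_s=0.0):
    """Simulate an audio callback: one buffer must be refilled per period.

    A period whose work takes longer than buffer_frames / sample_rate
    seconds counts as an underrun (an audible dropout).
    """
    period = buffer_frames / sample_rate   # e.g. 256/48000 ~= 5.3 ms
    underruns = 0
    for _ in range(periods):
        t0 = time.perf_counter()
        # stand-in for the real DSP work done in the callback
        while time.perf_counter() - t0 < work_s:
            pass
        if time.perf_counter() - t0 > period:
            underruns += 1
    return underruns

# Shrink the buffer (or add background load) until underruns appear,
# then compare the breaking point under BFS vs. CFS.
print(count_underruns(buffer_frames=64))
```

In a real comparison you would run this pinned alongside a compile job or other CPU hogs; the scheduler that sustains the smaller buffer without underruns is the more responsive one for audio.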

  9. #19

    Default

    @Mutlu_inek

    It would be good to see some benchmarks that measure responsiveness, but what benchmark will show you this? Maybe such tests exist, however. For some people BFS is more responsive and for others it's not. It would be great to have some problems fixed in both schedulers.

    @Discordian

    UI responsiveness is a different thing. It's sad, but even Win95 had very good 2D acceleration. When it comes to audio skipping, I have a much better experience with Linux than with Windows (it seems an improperly configured PulseAudio can introduce some problems, but I don't use it).

    @Unix_epoch

    Also, PTS could run a compilation test in the background while measuring 3D performance
    Yes, this will be interesting.
    Last edited by kraftman; 09-14-2009 at 03:50 PM.

  10. #20
    Join Date
    Sep 2007
    Posts
    51

    Default

    I found it interesting that BFS outperformed CFS at serving Apache pages; that result is sure to interest some people, especially if it holds up with more processors.

    Quote Originally Posted by ssam View Post
    can someone make a benchmark that times how long it takes for a gtk button to respond to a click. then the same test but with N CPU eating processes running at the same time, so we can see what happens when N approaches the number of CPU cores.
    That's quite an interesting benchmark; it should be added to the next version of PTS.
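
A scheduler-agnostic way to approximate the "button click under N CPU hogs" test, without involving GTK at all: measure how late a sleeping process wakes up while N busy-loop processes compete for the cores. The sleeping task stands in for the UI thread waiting on the click event. A sketch under that assumption:

```python
import multiprocessing
import time

def cpu_hog(stop):
    # Burn CPU until told to stop.
    while not stop.is_set():
        pass

def wakeup_latency_ms(samples=50, interval=0.01):
    """Sleep `interval` seconds repeatedly; return the worst oversleep in ms.

    The oversleep approximates how long a newly woken "UI" task
    had to wait before the scheduler gave it a CPU.
    """
    worst = 0.0
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(interval)
        worst = max(worst, ((time.perf_counter() - t0) - interval) * 1000.0)
    return worst

def latency_under_load(n_hogs):
    """Worst wakeup latency while n_hogs busy processes compete for CPUs."""
    stop = multiprocessing.Event()
    hogs = [multiprocessing.Process(target=cpu_hog, args=(stop,))
            for _ in range(n_hogs)]
    for h in hogs:
        h.start()
    try:
        return wakeup_latency_ms()
    finally:
        stop.set()
        for h in hogs:
            h.join()

if __name__ == "__main__":
    ncpu = multiprocessing.cpu_count()
    # Let N approach and pass the core count, as the quoted post suggests.
    for n in (0, ncpu, 2 * ncpu):
        print(f"{n} hogs: worst wakeup latency {latency_under_load(n):.2f} ms")
```

Running this under both schedulers and comparing the worst-case numbers as N passes the core count would give a rough version of the proposed test.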
