
Thread: BFS Scheduler Benchmarks

  1. #101
    Join Date
    Oct 2007
    Location
    Under the bridge
    Posts
    2,146

    Default

    Quote Originally Posted by Svartalf View Post
    No, more like a pissing match between Con and the LKML bunch.
    Or maybe the kernel devs refuse to implement pluggable schedulers, so the only solution is to rip the existing one out.

  2. #102
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,798

    Default

    Quote Originally Posted by Apopas View Post
    Well, is there any way to check that indeed the BFS scheduler has been applied?
    I don't see any difference in my system in comparison with the previous kernel.
    No boosts, no slowdowns, no hangs, nothing... while I expected serious problems since I use reiserfs which causes problems with BFS according to Kolivas.
    Well, if you had no interactivity problems before, then there's nothing to "improve" in the first place. If you don't have problems like those described here:

    http://article.gmane.org/gmane.linux.kernel/886413

    Then your machine is not affected by the CFS interactivity problems.

  3. #103
    Join Date
    Mar 2009
    Location
    Hellas
    Posts
    1,098

    Default

    Quote Originally Posted by RealNC View Post
    Well, if you had no interactivity problems before, then there's nothing to "improve" in the first place. If you don't have problems like those described here:

    http://article.gmane.org/gmane.linux.kernel/886413

    Then your machine is not affected by the CFS interactivity problems.
    No, I didn't have any of these problems. Also, I have a single-core processor, while BFS shines with multicores according to its creator. But not even a single regression or problem with the new scheduler? Strange...

  4. #104

    Default

    Quote Originally Posted by Apopas View Post
    No, I didn't have any of these problems. Also, I have a single-core processor, while BFS shines with multicores according to its creator. But not even a single regression or problem with the new scheduler? Strange...
    Some bugs are probably SMP-related.

  5. #105
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,798

    Default

    From other reports I saw, single-processor systems don't seem to suffer from this. So all is fine there.

  6. #106
    Join Date
    Aug 2008
    Posts
    99

    Default

    Five pages of posts since I was last here and only ten of them are BFS-related?

    Anyway, I've been thinking more about a benchmark for responsiveness. Using cyclictest from the RT Linux Wiki, create threads that sleep for some number of milliseconds that's not an even multiple of HZ, and measure how long it takes for them to actually wake up. Several threads would be created with different SCHED_FIFO priority levels, plus several threads at SCHED_ISO on BFS and SCHED_OTHER on both schedulers. Gather all the delay statistics from the threads (including a histogram of latencies), and plot them on a 3D bar graph with the x axis being thread priority grouped by scheduling queue (i.e. FIFO, RR, ISO, OTHER), y axis being latency, and z axis being frequency of that latency for that thread. Each scheduler would have a graph plotted for no load, medium load, and heavy load, resulting in six graphs which could be visually compared. Then, the minimum, mean, maximum, and standard deviation of latencies would be plotted for the two schedulers and three loads, giving another graph with three lines on it, with a shaded stripe indicating standard deviation around the mean line.

    I don't have time to implement this, but it would be really helpful to have something like this in PTS. Any takers? Please?


    P.S. I mostly disagree with the way I was quoted by kebabbert. I only think it would be a Good Thing if there was a single point release devoted to optimization, kind of like Snow Leopard. Instead of the usual commit window where everyone is bombarding LKML with new features and drivers, there's a shorter release cycle where all the subsystem maintainers engage on a virtuous and heroic quest to seek out latencies and hidden bugs in their respective domains. Yes, I know it's just a romantic way of describing a code audit, but marketing works, you know? I didn't intend to suggest that bug fixing doesn't happen.

    Plus, as a kernel developer* I would like to have a subset of the kernel API that I know won't change for X years, to reduce my maintenance costs and allow me to focus on cool new ideas.

    *I'm a kernel developer in the sense that I write code that runs in the kernel, not in the sense that I participate in LKML and influence mainline.
    Last edited by unix_epoch; 10-01-2009 at 07:32 PM.
