
Thread: ULatencyD Enters The Linux World

  1. #1
    Join Date
    Jan 2007
    Posts
    14,335

    Default ULatencyD Enters The Linux World

    Phoronix: ULatencyD Enters The Linux World

    Daniel Poelzleithner has announced to the Linux kernel world his new project named ulatencyd. The focus of ulatencyd is to provide a scriptable daemon to dynamically adjust Linux scheduling parameters and other aspects of the Linux kernel...

    http://www.phoronix.com/vr.php?view=OTAwNQ

  2. #2
    Join Date
    May 2010
    Posts
    190

    Default

    I'm keen to see this adopted into the kernel.

  3. #3
    Join Date
    Jan 2009
    Posts
    191

    Default

    damn, something like this should have been made long ago. not part of the kernel maybe (though that wouldn't hurt), but anyway. i hope, however, that it will be tied closer to systemd, or that the systemd author will reimplement it in an even cleaner way (continuing his userspace alternative to the famous "cgroup latency ~200 liner").

  4. #4
    Join Date
    Jan 2010
    Location
    Portugal
    Posts
    944

    Default

    After so many years of nobody caring about latency, suddenly we're surrounded by projects aiming to improve it. What the hell is going on?

  5. #5
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,205

    Default

    so... what exactly is different about this compared to changing the nice level of programs? i don't understand what the benefit of this is.

  6. #6
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,777

    Default

    Quote Originally Posted by devius View Post
    After so many years of nobody caring about latency, suddenly we're surrounded by projects aiming to improve it. What the hell is going on?
    They got a taste of BFS and liked it

  7. #7
    Join Date
    Jan 2011
    Posts
    10

    Default why ulatencyd

    Hi,

    The first reason nothing like this happened before is that there was just nothing you could do. Cgroups is the first kernel interface that gives userspace enough power to control kernel behavior in a way that gives good results. In the good old days there was a renice daemon, but that can't protect you enough in rough cases.
    Heuristic analysis of the system is something that should never be in the kernel. In fact, everything that you can put in userspace without too much runtime cost should be put there.
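
    To make that concrete: a userspace daemon can reshape scheduling just by writing to cgroup files. Below is a minimal sketch in Python, illustrative only and not ulatencyd's actual code; it assumes the cgroup-v1 cpu controller is mounted at /sys/fs/cgroup/cpu, the group name is made up, and it needs root:

    Code:
    import os
    from pathlib import Path

    # Hypothetical example: demote one process's CPU weight via a cgroup.
    CGROUP = Path("/sys/fs/cgroup/cpu/background")

    def demote(pid, shares=128):
        CGROUP.mkdir(exist_ok=True)
        # The default weight is 1024; 128 gives roughly 1/8 of the CPU under contention.
        (CGROUP / "cpu.shares").write_text(str(shares))
        # Writing a PID to 'tasks' moves that task into the group.
        (CGROUP / "tasks").write_text(str(pid))

    demote(os.getpid())  # e.g. demote the calling process itself
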
    The reason I don't want it in init (systemd, upstart, etc.) is that init is the most important program on the system. In my opinion it should be as slick as possible; heuristics in particular really don't belong there.
    But of course, I agree that a good interface between the init daemon and ulatencyd would be beneficial. I just haven't implemented a D-Bus interface yet, and have no good ideas about what it should look like.

    About systemd: I'm a little unsure about their use of cgroups; the main purpose seems to be making sure they can kill a daemon completely, which strikes me as a little awkward.

    BTW: I was able to write a rule in one evening that protects the computer from the swap of death, at least when a single process is eating all your memory. For the case where a group of small processes tears you down it doesn't work yet, but rules for that are in the pipeline :-)
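
    For the curious, the shape of such a rule might look like the sketch below. This is a Python illustration, not one of the daemon's actual rules; the threshold, group name, and use of MemAvailable are all assumptions:

    Code:
    import os, time
    from pathlib import Path

    JAIL = Path("/sys/fs/cgroup/memory/jail")  # assumes the v1 memory controller

    def rss_kb(pid):
        # Resident set size in kB; 0 if unreadable (e.g. kernel threads).
        try:
            with open("/proc/%d/status" % pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])
        except OSError:
            pass
        return 0

    def available_kb():
        with open("/proc/meminfo") as f:
            fields = dict(line.split(":", 1) for line in f)
        return int(fields["MemAvailable"].split()[0])

    while True:
        if available_kb() < 100 * 1024:  # under ~100 MB left: clamp the biggest hog
            pids = [int(d) for d in os.listdir("/proc") if d.isdigit()]
            hog = max(pids, key=rss_kb)
            JAIL.mkdir(exist_ok=True)
            (JAIL / "memory.limit_in_bytes").write_text(str(512 * 1024 * 1024))
            (JAIL / "tasks").write_text(str(hog))
        time.sleep(5)
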

  8. #8
    Join Date
    Jan 2009
    Location
    Columbus, OH, USA
    Posts
    323

    Default

    Quote Originally Posted by schmidtbag View Post
    so... what exactly is different about this compared to changing the nice level of programs? i don't understand what the benefit of this is.
    The benefit is this has the potential to actually work.

    Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice. As timeslices approach zero, cache thrashing and scheduling overhead approach infinity.

    Also, Linus chimes in:

    "There really isn't anything to fix. 'nice' is what it is. It's a
    simple legacy interface to scheduler priority. The fact that it's also
    almost totally useless is irrelevant. It's like male nipples. We
    wouldn't be better off lactating, and they look like some odd wart
    that doesn't do much good. But it would be worse to remove it."
    -http://article.gmane.org/gmane.linux.kernel/1071951

    "But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
    - http://lwn.net/Articles/418739/
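
    To see how thin that legacy interface is, here is essentially all of it (a sketch; the target PID is hypothetical, and lowering a process's nice value requires privileges):

    Code:
    import os

    # nice is a single integer per process, -20 (favored) to 19 (starved).
    pid = 12345  # hypothetical target
    os.setpriority(os.PRIO_PROCESS, pid, 19)     # same effect as `renice 19 -p 12345`
    print(os.getpriority(os.PRIO_PROCESS, pid))  # -> 19
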

  9. #9
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,205

    Default

    Quote Originally Posted by Wyatt View Post
    The benefit is this has the potential to actually work.

    Nice level probably doesn't do what you're thinking of, if you're talking about it in the context of user-visible latency. Adjusting timeslice lengths to be longer for preferred programs creates a situation where other threads get a shorter slice. As timeslices approach zero, cache thrashing and scheduling overhead approach infinity.

    Also, Linus chimes in:

    "There really isn't anything to fix. 'nice' is what it is. It's a
    simple legacy interface to scheduler priority. The fact that it's also
    almost totally useless is irrelevant. It's like male nipples. We
    wouldn't be better off lactating, and they look like some odd wart
    that doesn't do much good. But it would be worse to remove it."
    -http://article.gmane.org/gmane.linux.kernel/1071951

    "But the fundamental issue is that 'nice' is broken. It's very much broken at a conceptual and technical design angle (absolute priority levels, no fairness), but it's broken also from a psychological and practical angle (ie expecting people to manually do extra work is ridiculous and totally unrealistic)."
    - http://lwn.net/Articles/418739/
    lmao, that metaphor linus used was genius. but really, if nice is THAT useless, why is it there? personally, i've used nice before and it works GREAT for me. for example, there was a year when i used screensavers as my background. screensavers can be somewhat cpu intensive, so i set the nice level to 19 (or maybe it was -19? i forget at this point). then, whenever another program demanded cpu power, the screensaver would get really choppy and unresponsive while the program had little to no slowdown at all.

    based on my experience, nice isn't broken at all; it works great. that's why this new thing is confusing to me: if i were to use it in my example, i don't see how anything would change at all.

  10. #10
    Join Date
    Oct 2010
    Posts
    5

    Default

    Quote Originally Posted by poelzi View Post
    BTW: I was able to write a rule in one evening that protects the computer form swap of death, at least when a process is eating all your memory. For the case that a group of small processes is tearing you down, it does not work yet, but rules for that are in the pipeline :-)
    Wow... sounds great! Can you tell us how long until we can try it?
