
Thread: Another Major Linux Power Regression Spotted

  1. #1
    Join Date
    Jan 2007
    Posts
    14,356

    Phoronix: Another Major Linux Power Regression Spotted

    Since Friday there have been a number of Phoronix articles about a very bad power regression in the mainline Linux kernel. The regression is widespread, Ubuntu 11.04 is among the affected distributions, and it has been deemed a bug of high importance. This yet-to-be-resolved issue affects the Linux 2.6.38 and 2.6.39 kernels and is causing a 10~30% increase in power consumption on many desktop and notebook systems. It is not the only major outstanding power regression in the mainline tree, either: another dramatic regression has now been spotted that is also yet to be fixed.

    http://www.phoronix.com/vr.php?view=15943

  2. #2
    Join Date
    Jan 2008
    Location
    Seattle
    Posts
    120

    Any word on AMD processors?

    And would that be a Dunkles Weißbier?

  3. #3
    Join Date
    Aug 2007
    Posts
    437

    Jeez, the difference between 2.6.28 and 2.6.38 could easily mean an extra hour or two of battery power. We are indeed going backwards.

  4. #4
    Join Date
    Sep 2008
    Posts
    989

    Quote Originally Posted by FunkyRider View Post
    Jeez, the difference between 2.6.28 and 2.6.38 could easily mean an extra hour or two of battery power. We are indeed going backwards.
    But performance, flexibility and features are going up at the same time. Hmmm.

    Sounds like we need to option-out the stuff that sucks power the most, and maybe over time we'll have a linux-lowpower package in Ubuntu.

    Normally, using more power, to the extent that it's correlated with increased performance, is a "good thing" for servers and other computers running on A/C power. This usually means that you are seeing increased utilization of your hardware. A very hot chip is a busy chip, and if you get corresponding performance increases, that just confirms that you're getting what you paid for.

    But I don't think we need to have any kind of stand-off between those who want performance (energy be damned) and those who want to get 8+ hours of battery life on a ThinkPad X-Series. Instead, we should just isolate those particular things that are most pivotal in determining the power consumption, and then: if they are performance-enhancing things, we should keep them (generally speaking), but provide an option to disable them, ideally at runtime, for power savings. But if it turns out that we're burning extra energy without any performance benefit, that's bad for everyone, so that needs to be fixed, of course.
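
    One way to find those pivot points empirically is simply to measure: on most laptops the battery exposes its instantaneous discharge rate through sysfs, so you can sample it with a given feature enabled and again with it disabled. Here's a minimal Python sketch; the sysfs path, sample count, and interval are all assumptions, and some batteries report current_now/voltage_now instead of power_now:

    #!/usr/bin/env python3
    # Sketch: average battery discharge rate (watts) over a sampling window.
    # Run once with the feature under test enabled, once with it disabled.
    # Assumes a laptop on battery exposing power_now (microwatts) at this
    # path; adjust for your hardware.
    import time

    BATTERY = "/sys/class/power_supply/BAT0/power_now"  # assumed path

    def sample_watts(samples=30, interval=2.0):
        readings = []
        for _ in range(samples):
            with open(BATTERY) as f:
                readings.append(int(f.read()) / 1e6)  # uW -> W
            time.sleep(interval)
        return sum(readings) / len(readings)

    if __name__ == "__main__":
        print(f"Average draw: {sample_watts():.2f} W")

    Run it twice and the difference between the two averages is roughly the power cost of the feature.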

    Simple non-kernel example: if you're running a composited desktop, it's going to use way more energy than a non-composited desktop, because not only will the display output be awake all the time, but the GPU will always be awake, processing the frames of the composited pipeline. On most laptops with a discrete GPU, keeping this beast awake constantly is a huge drain on battery life. With a pure 2D environment, you use less energy because the GPU (if you even have one) can go to sleep since it has no OpenGL contexts open against it. You just rasterize off the CPU and go to sleep. And since we usually expect much less (in terms of rendering complexity) from a rasterized scene, we significantly save on the number of computations overall, so of course we'll use less power!

    But does that mean we should get rid of composited desktops? No -- because they provide much-desired features and eye candy, and (sometimes) increased performance on certain drivers/workloads. They do consume more power, and the computations that go into compositing are more complex than rasterizing a simple desktop, but we think the cost is worth it. Or at least, we think the cost is worth providing the user with the option of running with or without a compositor, rather than just going one way.
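
    If you want to check whether your discrete GPU actually goes to sleep without a compositor, newer kernels expose a device's runtime power-management state in sysfs. A rough Python sketch; the card0 path is an assumption, and many drivers of this era will simply report "unsupported":

    # Sketch: read the GPU's runtime PM state from sysfs.
    # Assumes the discrete GPU is card0; drivers without runtime PM
    # support will report "unsupported" rather than "suspended".
    STATUS = "/sys/class/drm/card0/device/power/runtime_status"

    with open(STATUS) as f:
        print("GPU runtime PM state:", f.read().strip())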

    Recognizing the pivot points in the kernel that most dramatically affect power usage will help the kernel more closely resemble the desktop in terms of allowing users to trade off performance/features against power. We just have to make sure that, at each pivot, we're actually getting some kind of benefit -- otherwise it's a bug and should be fixed, no option needed.

  5. #5
    Join Date
    Nov 2009
    Posts
    328

    Quote Originally Posted by allquixotic View Post
    Normally, using more power, to the extent that it's correlated with increased performance, is a "good thing" for servers and other computers running on A/C power...
    Most probably that is not the case here; we are dealing with a regression that affects power consumption. If power consumption has gone up, it is because your computer's resources are being used more often, unnecessarily, so fewer resources are left for user applications.

    My opinion here is that correctly solving those regressions will give a "little" more performance to user applications.

  6. #6
    Join Date
    Dec 2007
    Location
    Edinburgh, Scotland
    Posts
    574

    Are we absolutely sure this is a kernel bug, and not a piece of user space that's not talking to the kernel properly, like udev or the like?

    I compile my own minimal kernels (about 2MB) with everything I don't use switched off and everything I do use compiled in (rather than compiled as a module).

    I'm tempted to see if I'm as affected by this bug on my laptop as everyone else.
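
    For anyone comparing notes, a quick way to see which power-management options a kernel was built with is to grep its config. A small Python sketch, assuming your distro ships /boot/config-<release> or you built with CONFIG_IKCONFIG_PROC; the option list is just a starting point:

    #!/usr/bin/env python3
    # Sketch: print power-management-related options from the running
    # kernel's config. Paths and the option list are assumptions.
    import gzip, os, platform

    CANDIDATES = ["/boot/config-" + platform.release(), "/proc/config.gz"]
    INTERESTING = ("CONFIG_CPU_FREQ", "CONFIG_CPU_IDLE", "CONFIG_PM",
                   "CONFIG_NO_HZ", "CONFIG_HZ")

    for path in CANDIDATES:
        if os.path.exists(path):
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as f:
                for line in f:
                    if line.startswith(INTERESTING):
                        print(line.rstrip())
            break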

  7. #7
    Join Date
    Apr 2011
    Posts
    2

    Quote Originally Posted by allquixotic View Post
    Normally, using more power, to the extent that it's correlated with increased performance, is a "good thing" for servers and other computers running on A/C power.
    That might be true of workstations, but it is certainly not the case for rack-dense servers, where THE problem is maximizing performance per watt, both for energy cost and for heat dissipation. Perhaps I'm missing something, but how is this bug not driving the server people nuts? 15% greater power consumption at idle would be a massive problem in a server farm, let alone in cloud server containers.
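
    Just to put a rough number on it, a back-of-the-envelope Python sketch; every input here (server count, idle draw, electricity price, PUE) is a made-up assumption for illustration:

    # Sketch: yearly cost of a 15% idle-power regression in a server farm.
    # All inputs below are invented for illustration only.
    servers = 1000
    idle_watts = 150        # assumed idle draw per server
    regression = 0.15       # 15% increase at idle
    price_per_kwh = 0.10    # USD per kWh, assumed
    pue = 1.5               # cooling/distribution overhead, assumed

    extra_kw = servers * idle_watts * regression * pue / 1000.0
    extra_cost = extra_kw * 24 * 365 * price_per_kwh
    print(f"Extra load: {extra_kw:.1f} kW, roughly ${extra_cost:,.0f}/year")

    With those made-up numbers it works out to about 34 kW of extra load and nearly $30,000 a year, and that's before counting the regression's impact under load.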

  8. #8
    Join Date
    Apr 2010
    Posts
    271

    Quote Originally Posted by mthome View Post
    That might be true of workstations, but it is certainly not the case for rack-dense servers, where THE problem is maximizing performance per watt, both for energy cost and for heat dissipation. Perhaps I'm missing something, but how is this bug not driving the server people nuts? 15% greater power consumption at idle would be a massive problem in a server farm, let alone in cloud server containers.
    Because most server people aren't running a beta OS, let alone a bleeding-edge kernel, on production hardware. You're seeing why.

  9. #9

    Quote Originally Posted by amphigory View Post
    And would that be a Dunkles Weißbier?
    No, just some Hacker-Pschorr. I was low on beer and busy with this testing, so I just walked to a store that's a block away to pick up some Hacker, whereas the beer I normally drink is about six kilometers away. So I was just being efficient and drinking slightly-less-good-but-still-amazing Munich beer.

  10. #10
    Join Date
    Jan 2008
    Location
    Seattle
    Posts
    120

    Hmmm... seat of the pants, I am getting about 20% less battery life using the 2.6.38 & 2.6.39-rc kernels. My netbook is an Acer AO721-3574 with an AMD Neo K125 (single core). I'm running Debian Unstable (Sid) with drm/mesa/xf86-video-ati from git.

    The only things that I have changed in kernel config over the last couple of months are the addition of CONFIG_CGROUP_SCHED, CONFIG_FAIR_GROUP_SCHED, & CONFIG_TRANSPARENT_HUGEPAGE.

    It will be interesting to see what the cause is.
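
    Since only a handful of options changed, a quick diff of the old and new .config files would help narrow down the suspects. A throwaway Python sketch; the file names are assumptions:

    #!/usr/bin/env python3
    # Sketch: report kernel config options that differ between two builds.
    # File names are placeholders; point them at your actual .configs.
    def load(path):
        opts = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("CONFIG_"):
                    key, _, val = line.partition("=")
                    opts[key] = val
        return opts

    old, new = load("config-2.6.37"), load("config-2.6.38")
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            print(f"{key}: {old.get(key, 'unset')} -> {new.get(key, 'unset')}")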
