
Thread: Making A Code Compiler Energy-Aware


  1. #1
    Join Date
    Jan 2007
    Posts
    15,646

    Default Making A Code Compiler Energy-Aware

    Phoronix: Making A Code Compiler Energy-Aware

    There's a discussion on the LLVM development mailing list about making the compiler become energy-aware to provide an optimization level that would provide the most power-efficient binaries. However, it isn't clear whether this would make sense over simply trying to assemble the fastest binary...

    http://www.phoronix.com/vr.php?view=MTM1MzE

  2. #2
    Join Date
    Feb 2012
    Posts
    71

    Default

    Couldn't agree more that "faster = more energy efficient" on all of today's, and a lot of yesterday's, hardware.

  3. #3
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,485

    Default

    Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

    Overall, I'm sure the end result of this will be minor. But other articles on Phoronix have shown that, for example, updates to GPU drivers can increase both performance and power efficiency on the same hardware. Who says a compiler can't do the same?
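    To put a number on it, here's a rough sketch (plain C on Linux; the 100 ms tick is made up for the example) of a paced 24/7 task. Finishing each tick faster only buys a longer sleep, so the CPU gets to idle either way:

    /* Paced worker: a fixed amount of work per tick, then sleep until
     * the next absolute deadline. Finishing faster just lengthens the
     * sleep, and the CPU can drop into low-power states in between. */
    #include <time.h>

    static void do_tick_work(void) {
        /* placeholder for the real per-tick workload */
    }

    int main(void) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            do_tick_work();

            /* next deadline: 100 ms after the previous one */
            next.tv_nsec += 100 * 1000000L;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            /* absolute sleep: no drift, and idle time stays idle */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }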

  4. #4
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,939

    Default

    Quote Originally Posted by schmidtbag View Post
    Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

    Overall, I'm sure the end result of this is minor. But, other articles on Phoronix have shown that, for example, updates to GPU drivers can both increase performance and power efficiency on the same hardware. Who says a compiler can't do the same?
    Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
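    Rough sketch of what I'm wishing for, at least in userspace - this assumes the Linux /sys/class/thermal interface, and the zone number and the 70C threshold are just examples:

    /* Sketch: back off the frame rate when a thermal zone runs hot.
     * Reads millidegrees C from sysfs; path and threshold are examples. */
    #include <stdio.h>
    #include <unistd.h>

    static int read_temp_mc(void) {
        FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
        int mc = -1;
        if (f) {
            if (fscanf(f, "%d", &mc) != 1)
                mc = -1;
            fclose(f);
        }
        return mc;  /* millidegrees C, or -1 on failure */
    }

    int main(void) {
        for (;;) {
            /* render_frame(); - the game's normal frame goes here */
            if (read_temp_mc() > 70000)   /* hotter than 70 C? */
                usleep(50000);            /* throttle to ~20 FPS */
            else
                usleep(16667);            /* normal ~60 FPS cap */
        }
    }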

  5. #5
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,485

    Default

    Quote Originally Posted by Ericg View Post
    Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
    That is a good point, although I suppose the easiest way to do it is for the game to detect its own frame rate and realize it doesn't need to go beyond the refresh rate of the monitor. Detecting temperatures would be too much of a headache, and besides, some systems idle at 70C. That could make a game run at 1 FPS when it could otherwise be beyond 100.
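    Something like this, say - a minimal frame-cap sketch, where the 60 Hz figure is an assumption (a real game would query the monitor):

    /* Sketch: cap the frame rate at the monitor's refresh (60 Hz assumed)
     * by sleeping off whatever is left of each frame's time budget. */
    #include <time.h>

    static long elapsed_ns(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
    }

    void frame_loop(void (*render)(void)) {
        const long budget = 1000000000L / 60;   /* ns per frame at 60 Hz */
        struct timespec start, end;

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &start);
            render();
            clock_gettime(CLOCK_MONOTONIC, &end);

            long left = budget - elapsed_ns(start, end);
            if (left > 0) {
                struct timespec ts = { 0, left };
                nanosleep(&ts, NULL);   /* CPU and GPU get to idle here */
            }
        }
    }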

  6. #6
    Join Date
    Jan 2012
    Posts
    86

    Default

    Quote Originally Posted by Ericg View Post
    Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
    There is, and it is called underclocking. You can tune it to a temperature by running something like Prime95 and adjusting the clock until the thermals are in the right place.

    My opinion is that if your hardware is overheating then you didn't put enough cooling on it. It is never the program's fault because it is actually using the available hardware. If you have a six core CPU clocked at 4 GHz there should not be any problem running six computation threads at 100% CPU. If there is, then you need to slow it down to 3.6 GHz or give it a bigger heatsink.
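    For what it's worth, the clock cap itself is a single sysfs write on Linux - a sketch assuming the cpufreq interface and root access, with the 3.6 GHz from above as the example value:

    /* Sketch: cap cpu0's maximum frequency via cpufreq sysfs (needs root).
     * The value is in kHz; repeat per core for a system-wide cap. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq", "w");
        if (!f) {
            perror("scaling_max_freq");
            return 1;
        }
        fprintf(f, "3600000\n");   /* 3.6 GHz */
        fclose(f);
        return 0;
    }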

    It always annoys me when I read people complaining about some game making their system shut down. It isn't the complainer's fault, oh no, because it runs World of Warcraft just fine, so their system must be perfect. As if WoW were the ultimate game.

  7. #7
    Join Date
    Apr 2013
    Posts
    9

    Default

    I'm still waiting for -Olinus. BTW, an -O3 that also takes cache size into account would be nice.
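    Until then, the cache awareness can be done by hand - a toy sketch of loop blocking, where the 32 KiB L1 figure is my assumption, not something the compiler worked out:

    /* Toy sketch of cache blocking: walk a large matrix in tiles sized
     * to fit in L1 (32 KiB assumed) instead of striding across whole rows. */
    #include <stddef.h>

    #define N    4096
    #define TILE 64              /* 64*64 doubles = 32 KiB, one L1's worth */

    void transpose(double dst[N][N], const double src[N][N]) {
        for (size_t ii = 0; ii < N; ii += TILE)
            for (size_t jj = 0; jj < N; jj += TILE)
                for (size_t i = ii; i < ii + TILE; i++)
                    for (size_t j = jj; j < jj + TILE; j++)
                        dst[j][i] = src[i][j];   /* tile stays cache-hot */
    }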

  8. #8
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,666

    Default

    Quote Originally Posted by Ericg View Post
    Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
    Interesting thing about that - I have made a game (a pretty simple 2D card game based on Might and Magic Arcomage) which uses OpenGL, but I only update the window when the mouse is moving (or when something else is happening, of course, like animations playing). For a simple 2D game it works really well, and at least the GPU sleeps a lot while playing it. I have yet to implement a framerate limit, though, so it's possible to get some crazy framerates by moving the mouse around in the window very quickly.
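    The skeleton of it looks something like this (sketched here with SDL2 purely for illustration; any blocking event loop does the same job):

    /* Sketch: render only in response to events instead of a busy loop.
     * SDL_WaitEvent blocks, so the process (and GPU) idle in between. */
    #include <SDL2/SDL.h>

    void run(SDL_Window *win, void (*redraw)(SDL_Window *)) {
        SDL_Event e;
        while (SDL_WaitEvent(&e)) {      /* sleeps until an event arrives */
            if (e.type == SDL_QUIT)
                break;
            redraw(win);                 /* repaint only when poked */
        }
    }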

    Quote Originally Posted by curaga View Post
    I'm still waiting for -Olinus. BTW, an -O3 that also takes cache size into account would be nice.
    I think it's called -march=native. Or -mtune, at least.

  9. #9
    Join Date
    Aug 2012
    Location
    Pennsylvania, United States
    Posts
    1,939

    Default

    Quote Originally Posted by GreatEmerald View Post
    Interesting thing about that - I have made a game (a pretty simple 2D card game based on Might and Magic Arcomage) which uses OpenGL, but I only update the window when the mouse is moving (or when something else is happening, of course, like animations playing). For a simple 2D game it works really well, and at least the GPU sleeps a lot while playing it. I have yet to implement a framerate limit, though, so it's possible to get some crazy framerates by moving the mouse around in the window very quickly.
    That works well for 2D games, but I play a lot of older games that are still 3D based (Knights of The Old Republic 1 and 2 come to mind), and they peg the GPU to max just because it's a continuous load, even though the GPU could tone down its clocks and still run just fine without stuttering. But there's no way to TELL the GPU that.
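    What I'd want is a knob like this sketch - it assumes the open drivers' DPM sysfs file, which only newer kernels and some setups expose, and the card path varies per system:

    /* Sketch: pin an open-driver GPU to its low power level via sysfs
     * (needs root; the exact path and file depend on driver and kernel). */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/sys/class/drm/card0/device/power_dpm_force_performance_level", "w");
        if (!f) {
            perror("dpm sysfs");
            return 1;
        }
        fprintf(f, "low\n");   /* "auto" restores dynamic clocking */
        fclose(f);
        return 0;
    }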

  10. #10
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,333

    Default

    Quote Originally Posted by GreatEmerald View Post
    I think it's called -march=native. Or -mtune, at least.
    See http://lkml.org/lkml/2013/1/26/161
