
Thread: Compilation Times, Binary Sizes For GCC 4.2 To GCC 4.8

  1. #11
    Join Date: Aug 2012 · Location: Pennsylvania, United States · Posts: 1,756

    Quote Originally Posted by duby229 View Post
    I've got a 955BE and compile time is like nothing. At most even on huge packages like the kernel or gtk or firefox it only takes a few minutes. And most other packages take less than a minute or even just a few seconds. I can't imagine that compile time is still much of a problem.

    My emerge -e world is over 700 packages and it only takes about an hour and a half.
    Yeah, Duby, I've got an undervolted i5 Sandy Bridge in my ultrabook, and until the 3.9 kernel hits [stable] in Arch I have to compile all my own kernel builds, and they take hours on this thing. So lucky you for having a great CPU, but not everyone's so lucky.

  2. #12
    Join Date: Nov 2007 · Posts: 1,353

    That doesn't make any sense, though. The i5 Sandy Bridge is faster clock-for-clock than my chip, so even at a lower clock it should still be awesome. Really, hours? I can't believe hours... Maybe half an hour at like 1 GHz with only one core enabled. There's just no way it takes hours to compile a kernel on a modern chip like yours.

  3. #13
    Join Date: Oct 2008 · Posts: 2,913

    Quote Originally Posted by duby229 View Post
    I've got a 955BE and compile time is like nothing. At most even on huge packages like the kernel or gtk or firefox it only takes a few minutes. And most other packages take less than a minute or even just a few seconds. I can't imagine that compile time is still much of a problem.

    My emerge -e world is over 700 packages and it only takes about an hour and a half.
    You're missing the point. If you spend 5 minutes recompiling a project every time you make a single change, you can easily spend over 50% of your day recompiling, at least when you're debugging issues and constantly making small changes and then testing the results. Working on larger changes at a time reduces the problem.

    It's the entire reason the -O0 (no optimization) level even exists: compile time matters more to a developer than the resulting binary's speed when you're debugging. I'd agree it's a virtual non-issue for release-mode code, though.

  4. #14


    Quote Originally Posted by GreatEmerald View Post
    And Gentoo users.
    Let's not forget Gentoo developers.

  5. #15
    Join Date: Aug 2012 · Location: Pennsylvania, United States · Posts: 1,756

    Quote Originally Posted by duby229 View Post
    That doesn't make any sense, though. The i5 Sandy Bridge is faster clock-for-clock than my chip, so even at a lower clock it should still be awesome. Really, hours? I can't believe hours... Maybe half an hour at like 1 GHz with only one core enabled. There's just no way it takes hours to compile a kernel on a modern chip like yours.
    make -j4 on an i5 dual-core at 1.6 GHz can take hours, yes. It sucks -.-

  6. #16
    Join Date: May 2008 · Location: Kongsberg, Norway · Posts: 50

    Guys, don't talk about compile time not being important unless you're a developer and what you're working on is a multi-GB project (yes, binary executables, libraries and such, not data/source code). I've spent an entire day just recompiling a tiny portion of a project for internal release and usage: about 40 MB (in release) across 8 different versions. Now imagine compiling something about 100 times larger than that, luckily only in two versions. Obviously, a large portion of this is rarely rebuilt at all, just copied from central storage, and you rebuild as small a portion of the software as possible, but there's still a reason nightly builds and such exist: they save developers a huge amount of time.

    ... so yeah, compile time matters a lot.

    Oh, and I have a quadcore i7@3.4GHz. Stuff can be compiled fast, but still not at all fast enough.
    Last edited by AHSauge; 03-18-2013 at 12:20 PM.

  7. #17
    Join Date: Mar 2012 · Posts: 5

    Quote Originally Posted by Ericg View Post
    You are obviously NOT a developer. If I'm coding and I make one change and I re-compile to test it, I'd rather not have to think to myself, "Well... I'm gonna be here for a while." For release builds code speed may be more important, but for test builds? A 10-second kernel build? I would be beyond happy. It'd be like a free hardware upgrade to me, haha
    It's quite obvious that we work differently.
    Do you seriously stare at your screen while the compiler is doing work?
    Or can't you write makefiles, so you have to rebuild everything every time you touch a file?
    Compiling is background work for me. I NEVER wait for compilation to finish. I do actual WORK while waiting.
    Every build system I have built or used will not rebuild stuff that does not need rebuilding.
    Even extremely large code projects will usually take a minute or so to generate new binary images if I want to try something out.
    Clean compilations I usually do overnight, lunch, breaks etc.

    GCC's compilation speed is not a problem.
    I'd still give an arm and a leg for a compiler whose output is 10% faster on average, even if the compiler itself is 10 times slower.
    Don't believe me? Ask anyone who is building or using anything computationally intensive.

  8. #18
    Join Date: Oct 2010 · Posts: 287

    Quote Originally Posted by Ericg View Post
    make -j4 on an i5 dual-core at 1.6 GHz can take hours, yes. It sucks -.-
    You're doing it wrong: -j4 is overkill for a dual-core (even with hyper-threading). -j3 would probably yield better results, and if you also have 1 GB of RAM or less, -j2 might be even faster.
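    For what it's worth, a common rule of thumb (an assumption, not anything make or GCC mandates) is to derive the job count from the CPU count rather than hard-coding it:

```shell
# Suggest a make -j value from the number of online CPUs.
# cores+1 keeps one job ready while another waits on disk I/O;
# on RAM-starved machines a smaller count avoids swapping, as noted above.
cores=$(nproc)
jobs=$((cores + 1))
echo "suggested: make -j${jobs}"
```

    On the dual-core with hyper-threading discussed here, nproc reports 4, so this heuristic would still land above the -j3 suggested in the post; treat it as a starting point and tune for your RAM.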
