
Thread: 2 cores vs. 4 cores in Linux

  1. #11


    Quote Originally Posted by frantaylor View Post
    Did you see the status of that bug report: RESOLVED INSUFFICIENT_DATA

    Did you read the comments in the bug report?

    "this bug has long past the point where it is useful.
    There are far too many people posting with different issues.
    There is too much noise to filter through to find a single bug.
    There aren't any interested kernel developers following the bug."

    It is not even a bug report, it is just a random flame fest.
    Yeah, but that's only one of the reports. And it seems you didn't even read it. Jens Axboe is still working on it; there's also his patch. Your reaction is very funny.

    This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
    It seems that's not the case:

    Fair queuing would allow many processes demanding large levels of disk IO to each get fair access to the device, preventing any one process from denying the others.
    Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (you don't even understand kernel versioning...). You and your friend already proved as much in another thread.
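    (For what it's worth, you can see which elevator is active per block device in sysfs and switch it at runtime; sda below is just an example device:)

        cat /sys/block/sda/queue/scheduler          # e.g. noop anticipatory deadline [cfq] -- brackets mark the active one
        echo cfq > /sys/block/sda/queue/scheduler   # switch to CFQ (as root)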

    Have fun trolling over there:

    http://www.phoronix.com/forums/showt...t=18073&page=7
    Last edited by kraftman; 08-05-2009 at 11:51 AM.

  2. #12


    My desktop applications freeze when there's heavy I/O. Of course others might pretend everything is OK simply because they don't understand the issue and think these freezes are normal/acceptable/unavoidable or they don't get them at all.

    I do. There's a problem with Linux I/O and graphics. I don't know who's at fault; the problem is there. I've always had it, with every PC I've ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.

  3. #13


    Quote Originally Posted by RealNC View Post
    My desktop applications freeze when there's heavy I/O. Of course others might pretend everything is OK simply because they don't understand the issue and think these freezes are normal/acceptable/unavoidable or they don't get them at all.

    I do. There's a problem with Linux I/O and graphics. I don't know who's at fault; the problem is there. I've always had it, with every PC I've ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.
    Yes, there's definitely a problem with I/O on some configurations. It's been there since 2.6.18, as mentioned in the bug report (but it's due to a bug, not to design, like some trolls want to prove; a long-standing one, admittedly, but not everyone is affected). Graphics is another case.

    An easy way to check whether you're affected is to copy a file that's bigger than your RAM. The system becomes unresponsive for some amount of time.
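    Something like this, for example (the size and paths are just an example; bump the count until the file is bigger than your RAM):

        # assumes ~2 GB of RAM; adjust count so the file exceeds yours
        dd if=/dev/zero of=/tmp/bigfile bs=1M count=4096   # write a 4 GB test file
        cp /tmp/bigfile ~/bigfile.copy &                   # start the big copy in the background
        time ls /usr   # a trivial command; if it stalls for seconds, you're affected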
    Last edited by kraftman; 08-05-2009 at 02:53 PM.

  4. #14


    Quote Originally Posted by kraftman View Post
    Yeah, but that's only one of the reports. And it seems you didn't even read it. Jens Axboe is still working on it; there's also his patch. Your reaction is very funny.

    It seems that's not the case:

    Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (you don't even understand kernel versioning...). You and your friend already proved as much in another thread.

    Have fun trolling over there:

    http://www.phoronix.com/forums/showt...t=18073&page=7
    I work with bugs every day. When the developer marks a bug as closed, that means "I'm not working on this any more."

  5. #15


    Quote Originally Posted by frantaylor View Post
    I work with bugs every day. When the developer marks a bug as closed, that means "I'm not working on this any more."
    So check the date it was closed and when Jens uploaded the patch. There are also other reports like this one. If they close all the reports, new ones will appear, because the bug is still there... Believe it or not, I'll probably switch to FreeBSD or Solaris because of this (if it really starts to piss me off). However, I don't copy big files that often, and I have Windows installed, so it's a hard decision.
    Last edited by kraftman; 08-05-2009 at 04:55 PM.

  6. #16


    "CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.

  7. #17


    @kraftman

    I was experiencing an issue like the one in that bug report on 2.6.30, but it seems to have improved in 2.6.31 RC5. Have you tried that?

  8. #18


    Quote Originally Posted by krazy View Post
    @kraftman

    I was experiencing an issue like the one in that bug report on 2.6.30, but it seems to have improved in 2.6.31 RC5. Have you tried that?
    No, I'm using only distro-provided kernels right now (2.6.30.4), but I'll try that one, and 2.6.30 with Jens' patch.

    @RealNC

    "CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.
    Exactly :>
    Last edited by kraftman; 08-06-2009 at 03:53 AM.

  9. #19


    EDITED:

    I compiled 2.6.31-rc5, but X doesn't start. However, I did this in a vt:

    I copied a big file from an NTFS partition to my home directory using ntfs-3g, ran "top -d 0.2" as root in another vt (to notice any slowdowns), and then started copying a file from home to the NTFS partition, so both files were being copied simultaneously. There was not a single visible latency! (I can do the same with previous kernels, but after some time the system becomes unresponsive.)
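    Roughly, the test looked like this (the paths are just an example; /mnt/ntfs stands for wherever the ntfs-3g partition is mounted):

        # vt 1 -- start both copies at once
        cp /mnt/ntfs/bigfile ~/ &        # NTFS -> home
        cp ~/bigfile2 /mnt/ntfs/ &       # home -> NTFS, at the same time
        # vt 2, as root:
        top -d 0.2                       # the 0.2 s refresh makes any stall easy to spot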

    It seems rc5 behaves much better, or the bug is even fixed. However, I need to try this in some DE, because it can be hard to catch latencies in a vt.
    Last edited by kraftman; 08-06-2009 at 10:45 AM.

  10. #20

    Regression testing?

    Quote Originally Posted by RealNC View Post
    "CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.
    Apparently "The Bazaar" does not do regression testing, either.

    How do bugs like this make it into "RC" kernels? Doesn't "RC" mean "we have tested this and we think it is good"?

    This is one reason why Linux has crummy market share. There are so many regressions. Normal non-hacker type people do not want to deal with regressions. They want to turn their computers on and get to work.

    I wonder what can be done to deal with the regressions. Linux has no central testing lab and no formal process.

    With a formal process, you are not even half done when you fix the bug. Next you have to write the regression test for the bug and then test the regression test. This usually takes more effort and more resources than fixing the bug. And then you have to run all the regression tests all the time. This requires an automated framework to run the regression tests and report the results. This is all enormous work but it needs to be done if you want to ship a quality product every time.
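    The automation itself does not have to be fancy. Even something like this (the layout here is entirely hypothetical) would do, as long as every bug fix adds a script:

        # hypothetical layout: each fixed bug gets a script under tests/regressions/
        for t in tests/regressions/*.sh; do
            if sh "$t" >/dev/null 2>&1; then
                echo "PASS $t"
            else
                echo "FAIL $t"
            fi
        done | tee regression-report.txt   # run it on every commit and publish the report

    The hard part is not the runner; it is writing and maintaining the tests.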

    When you look at bugzilla.kernel.org, there is not even a bug status for "needs testing". When something is "fixed", it gets marked as RESOLVED and that is that.

    Red Hat and the other distributors have to do this testing, and their kernels carry hundreds of patches against the stock kernel to fix the problems that are not caught by the "Bazaar" process. When you look at these patches, you see that most of them are fixes for regressions: things that used to work and then stopped working for some reason, where the regression was not caught. The rest are mostly patches for new drivers that never worked right in the first place because they were not tested well. Some of the distribution patches stick around for years because they are not accepted upstream for one reason or another, and they have to be maintained as the code changes, which requires even more effort.

    I don't know how it can be fixed. Nobody "owns" Linux, so nobody wants to take responsibility for doing all the regression testing that should be done. The distributions "own" their kernels, but if they all do their own regression testing, there is enormous duplicated effort.

    I worry that Linux is going to turn into even more of a chaotic mess as it gets bigger and gains more features. It is not the slim and trim kernel it was back in the '90s.
