
Thread: More Work On Red Hat's Wayland Project

  1. #11
    Join Date
    Dec 2007
    Location
    /dev/hell
    Posts
    297

    Default

    What can we learn from Wayland?

    What could X gain from it?

  2. #12
    Join Date
    Jan 2009
    Posts
    4

    Default

    Why not put the X server into the kernel? Linus should consider it...

    As a service in user space, xorg can NEVER be as efficient as WinXP.

  3. #13
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Not sure I agree. Once you put modesetting, memory management and command submission into the kernel you don't gain much from moving anything else down.

  4. #14
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by bridgman View Post
    Not sure I agree. Once you put modesetting, memory management and command submission into the kernel you don't gain much from moving anything else down.
    I guess in theory it would remove the additional context switches from dispatching window events and stuff, but then to really get any advantage you'd need to move the whole WM and Compositor into the kernel.

    And, honestly, there's no guarantee that would actually make any noticeable difference besides having a more crash-happy and insecure kernel. The only time I feel any latency in Linux desktop stuff is when X is doing a bunch of rendering/blitting work because my video chipset still lacks EXA/GL acceleration.

    Really, the only reason modesetting, memory management, and command submission belong in the kernel is because the kernel _needs_ them to be able to reclaim control of the display. The kernel has zero use for window management, window event dispatch, or so on, so there is absolutely no need for those functions in the kernel.

  5. #15
    Join Date
    Oct 2007
    Posts
    29

    Default

    Quote Originally Posted by elanthis View Post
    I guess in theory it would remove the additional context switches from dispatching window events and stuff, but then to really get any advantage you'd need to move the whole WM and Compositor into the kernel.
    Then you end up moving everything into the kernel and ending up with a single address space instead of virtual memory.
    Quote Originally Posted by elanthis View Post
    Really, the only reason modesetting, memory management, and command submission belong in the kernel is because the kernel _needs_ them to be able to reclaim control of the display. The kernel has zero use for window management, window event dispatch, or so on, so there is absolutely no need for those functions in the kernel.
    The alternative is to go to a micro-kernel based operating system and a lot of these issues disappear (others come up though).

    Personally I'd love to see a C++ OS project doing a microkernel and using a Linux driver compatibility layer for a lot of hardware support at the start. It'd be doable and interesting, but wouldn't be useful for a long time.

  6. #16
    Join Date
    Dec 2007
    Location
    /dev/hell
    Posts
    297

    Default

    Quote Originally Posted by Ze.. View Post
    Personally I'd love to see a C++ OS project doing a microkernel and using a Linux driver compatibility layer for a lot of hardware support at the start. It'd be doable and interesting, but wouldn't be useful for a long time.
    And performance will suck
    source: http://www.kernel.org/pub/linux/docs/lkml/#s15-3

  7. #17
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by Vighy View Post
    And performance will suck
    source: http://www.kernel.org/pub/linux/docs/lkml/#s15-3
    Taking advice from quotes from 5+ years ago about problems encountered 17+ years ago by people who were never knowledgeable C++ programmers is pretty stupid. C++ has no problems with kernels or performance on any sane compiler when being used by any sane developer. Anyone thinking otherwise has their head up their arse, and that fully includes Linus and company.

  8. #18
    Join Date
    Jan 2009
    Posts
    1,603

    Default

    I have a question regarding Eagle (if anyone knows).

    From what I can understand by reading around the net, Eagle is an EGL implementation for Mesa.


    Will Gallium (when OpenGL ES is implemented, as stated here) make it obsolete? (Or have I understood the whole thing wrong?)

  9. #19
    Join Date
    Dec 2007
    Location
    /dev/hell
    Posts
    297

    Default

    Quote Originally Posted by elanthis View Post
    Taking advice from quotes from 5+ years ago about problems encountered 17+ years ago by people who were never knowledgeable C++ programmers is pretty stupid. C++ has no problems with kernels or performance on any sane compiler when being used by any sane developer. Anyone thinking otherwise has their head up their arse, and that fully includes Linus and company.
    The note was not only about C++, but also about microkernels (as you can read in the link).

    And remember that even today C++:
    a) produces more bloated code. (with every compiler, otherwise you are simply coding in C)
    b) has issues (remember when in RadeonHD they needed to convert back inline functions to macros since the compiler was not doing the right job?)
    c) leads developers to bad attitudes, seducing them with wonderful tools and then giving them worse performance (read what Keith Whitwell says: http://sourceforge.net/mailarchive/f...ame=mesa3d-dev )

    And what about microkernels? That's a design pattern with intrinsic performance issues, and that's not going to change in the future, since it's a design pattern and not an implementation.

    The kernel is a critical piece of software; performance loss is not acceptable there.

    If you want a microkernel written in C++, write it, and you will find that performance sucks.

  10. #20
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    a) produces more bloated code. (with every compiler, otherwise you are simply coding in C)
    No, it doesn't. Yes, you can code various things identically in C++ as you would in C, and it produces identical output. That's a very intentional feature of C++, after all.

    Using any non-C feature does NOT magically produce more bloated code for no reason. Using a class or a method does not produce any more output than a struct and a function. Some features of C++ do produce additional output. Exceptions and RTTI are the two features that do that. Both of those can be turned off with compiler flags, which is useful if you have decided that the functionality provided by those features is not worth the "bloat."

    Don't even start to complain about those flags being non-standard, either, because the Linux kernel already makes use of a crap load of GCC extensions and specialized linker scripts, so it's not like it's being written in ANSI-standard C anyway.

    b) has issues (remember when in RadeonHD they needed to convert back inline functions to macros since the compiler was not doing the right job?)
    What does that have to do with C++? C compilers are not following the C++ spec when compiling C-with-non-standard-extensions, and therefore C++ should not be used?

    Macros, which are a standard part of C++, and which are used where appropriate by experienced C++ programmers, prove that C++ is worse than C?

    Any number of GCC versions are blacklisted by the kernel because of bugs in the nice-simple-easy-perfect C compiler or code generator (like the inline bug with RadeonHD). Why does C get a free pass?

    c) leads developers to bad attitudes, seducing them with wonderful tools and then giving them worse performance
    Bad attitudes...?
    Seducing them with wonderful tools? How dare they offer wonderful tools!
    Already covered the performance one enough.

    C++ is a more complicated tool than C, and hence requires more effort to learn to use effectively and efficiently. C++ can make coding MUCH easier, but it requires a greater investment of time and learning than C in exchange. Unfortunately, most programmers go through school, learn all the mathematical fundamentals of computer science, learn all the syntax and features of C++, learn all those fancy design patterns... and never actually learn how C++ (or C, or assembler, or Java, or anything really) actually works. And they end up writing a ton of crappy code in it as a result. And then the elitist grey beards start hating C++ because those poor kids paid $160,000 for four years of bad education.

    What those people fail to realize is that it's not really C++'s fault, any more than it's C's fault that there is an entire army of clueless novices writing horrendous C code in the hobbyist game community. Drives me nuts, but it's not C's fault that people who don't know what they're doing try to use a tool they don't understand. Nor is it JavaScript's fault that hordes of graphics designers keep pumping out a tremendous amount of truly horrific scripts in their barely-comprehensible HTML markup. Programming is a very complicated task that requires a great deal of education and a fair bit of talent. Unfortunately, a great many people keep getting diplomas claiming that they're Real Programmers(tm) despite lacking the necessary education or talent, and those people are pretty much exclusively taught C++ or Java, so we end up with a whole mess of people who shouldn't be writing code writing it in C++.

    The fallacy that people such as yourself or people like Linus keep believing in is the idea that those very same people would be great programmers if only they used C instead of C++. That's silly. I'm ready to believe that we'd have fewer bad programmers if most universities used C (and a good dose of LISP) in their core curriculum, but only because C is harder to use and requires more low-level knowledge before being able to accomplish anything.

    The thing is, once you do know those things, it starts getting really tedious having to write so much boilerplate code over and over, because every little task in C requires a ton of work to get done, whereas C++ gives you equivalent control where you need it and lets you abstract away all those details when you don't need that control; and it does so without massive overhead (no overhead compared to C in most cases), making it useful for high-performance, systems-level tasks, and it is used very extensively in places where performance and efficiency are critical (e.g., games).

    Side story: I recall some years back a poor guy who hated C++ with a passion. Total die-hard C user. One of his memorable quotes: "When I first read about inheritance I laughed my balls off: those fucking idiots wrote up a whole fucking language feature instead of just using DUH cut-n-paste!" Never mind of course that inheritance reduces the likelihood of a bug fix not getting into all those cut-n-pasted copies. Never mind that having one method instead of ten nearly identical functions results in a smaller executable, which means more of the program fits in the code cache, which means fewer page faults, which can result in significantly better performance. Never mind that it organizes functionality and keeps identical behavior in one place instead of ten and thus makes it easier for new programmers to come up to speed on a large existing project. But no, C++ is a total train wreck, a bloated pile of useless features, and it even forces coders at gunpoint to use at least 40 different design patterns in every module. Ayup.

    And what about microkernels?
    I'm not really qualified to comment on them, so I didn't.
