
Thread: Gallium3D's Softpipe Driver Now Runs Faster

  1. #11
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,578


    Quote Originally Posted by highlandsun View Post
    Yes, that's the biggest problem. Everyone thinks that optimized java/python is good. Blecch....
    It's not just that... Even if you had modern coders write assembly, you'd still probably end up with code at least as bloated, if not more. Most of the current generation of coders simply never learned to squeeze everything and a bit more out of their computers.

  2. #12
    Join Date
    Jul 2009
    Posts
    47


    Quote Originally Posted by nanonyme View Post
    Hehe, you're probably right. With good enough low-level coders it could use a lot less system resources than it does. (not that nearly anyone does that low-level coding anyway anymore)
    Well, first of all, developers are harder to come by than memory. Nearly everyone nowadays has 1GB or more, meaning nobody will mess with the X server for months just to shave 4 or 5MB off of it.

    Second, there will always be the low-level versus high-level dispute as we move to higher and higher levels of abstraction. I can remember an uncle of mine railing against C++ and object-oriented programming in general because of the performance loss - he had learned to think about memory with the requirements of his old Apple in mind.

    I'm one of the last few students in Germany who will still be learning C (starting in two weeks), basically because I also signed up for embedded development, but aside from that it's Java and C++, and in some cases even C# (which I'm also going to have to learn - unholy :\ ). I'm not even sure there's an option to learn more assembler than the usual "hey, it speeds up C".

    Today I see C++ developers bitching about Python and Java (although Java certainly gained performance with the 6 series; I've even seen 3D done right with it). Tomorrow I'll see the Java people bitching about the performance loss of OS-independent virtual-machine scripting languages in the style of Python, or something like that.

    Look at what KDE is doing today, opening everything on the desktop up to scripting, signaling, and remote control. Sure, it will eat performance like it's nobody's business, and the Atari kids will have a smile, but in the end, when the real use of things like a remote-controlled desktop environment (which is not just VPN/SSH!), the social desktop, or context-sensitive applications hits, you'll see that none of this could have been done in C without driving everybody nuts.

    Sure, within the X server or the kernel it makes no sense - or isn't even possible - to switch to a high-level language, but even if everybody knew C, there's still a world of hurt in the specifics of driver development.

  3. #13
    Join Date
    Jul 2009
    Posts
    47


    Quote Originally Posted by nanonyme View Post
    It's not just that... Even if you had modern coders write assembly, you'd still probably end up with code at least as bloated, if not more. Most of the current generation of coders simply never learned to squeeze everything and a bit more out of their computers.
    While I agree with the statement in theory, think about it again. Why do you think so many people are using KDE/GNOME with Compiz, a piece of software written by people who have obviously never even heard the words "stable", "optimized" or "doesn't crash and burn the shit out of your system"? Simple: because it had features not offered by other solutions. Whether those features are any good is not a developer's problem. It only shows there's a market for it.

    So let's do this one again from the KDE angle, because that's what I use: You could a) optimize the shit out of 3.5, a series whose code was already called inflexible and unmaintainable by tons of developers and third parties. You could only barely introduce new features due to lack of manpower and code flexibility. In the end you'd have a rock-solid system with a feature set equal to Windows XP - in 2009.

    or b) start all over, modernize your code, open up scripting to modern languages (aka the ones people actually use), and introduce tons of features, while having releases that eat RAM and performance like crazy.

    In the case of KDE b) was really the only option.

  4. #14
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,578


    Quote Originally Posted by fabiank22 View Post
    Today I see C++ developers bitching about Python and Java (although Java certainly gained performance with the 6 series; I've even seen 3D done right with it). Tomorrow I'll see the Java people bitching about the performance loss of OS-independent virtual-machine scripting languages in the style of Python, or something like that.
    Eh, I've seen all of that for at least the past ten years... I don't think anything has changed. People have always complained about the slowness of interpreted languages like Java and Python. I've also seen just the same complaining about C++ being a ridiculous memory hog, and nothing in those claims has changed either.
    You're right when you say that developers are hard to come by, if by that you mean software project designers. Coders are cheap and you find them under every rock and bridge.

  5. #15
    Join Date
    Jul 2009
    Posts
    47


    Quote Originally Posted by nanonyme View Post
    You're right when you say that developers are hard to come by, if by that you mean software project designers. Coders are cheap and you find them under every rock and bridge.
    You're right. Let's change my statement to "developers who know what they're doing", or qualified developers (and by that I mean qualified enough to correctly optimize a project of the magnitude of the X server). Sure, you can get "web designers" or PHP people cheap, even below the average wage of a kitchen help (at least in Germany, where you make more money serving salad than badly managing servers or designing web pages).

    But even when people still had to go through assembler and compilers had about the intelligence of "Ugh, FORTRAN not compile, not know problem, ugh", things nevertheless weren't all good. We just shifted from speed to features, all the while gaining a lot of speed on the hardware side. Just look at Linux back then: sure, it was hand-optimized and fast and all that, but that didn't make people magically use it. No, features did. What we need isn't a softpipe driver so optimized it could run Doom III (although I'm certain that would be possible in some way); we need features, features, features, and a little speed to keep up. Because (good) manpower isn't going to magically appear.

  6. #16
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,434


    This has been one of the big debates in software engineering for at least 40 years. When programming languages were arcane and close to the metal, there was a self-limiting effect: only highly technical and detail-oriented people could write code in the first place.

    A lot of effort has been made to simplify programming tools and lower the entry threshold for programming, allowing many more people to become effective programmers. This works as long as the highly technical folks are willing to focus their efforts on design, review and mentoring the new people rather than writing code themselves - that way "leaf" code may become inefficient but the overall design and "core" code of the major software projects can stay clean.

    There has been some progress in this area (Linus doesn't code much these days) but we are probably only 30% of the way there.

    In the meantime fast hardware and cheap memory *do* cover a multitude of sins
    Last edited by bridgman; 09-24-2009 at 11:33 AM.

  7. #17
    Join Date
    Jun 2006
    Posts
    3,046


    Quote Originally Posted by whizse View Post
    You have never tried software rendering in Mesa, have you?
    I have, with Heretic II, to verify what the scene was supposed to look like when I had rendering errors with RagePRO, Rage128, and G200/G400 cards. "Painfully ssssloooow" doesn't even begin to describe it.

    Having said this, it's more due to trying to implement precisely the OpenGL rendering pipeline in software - something really designed with some semblance of a hardware assist in mind. Having a softpipe that optimizes per rendering pass and strips out most of the unused/redundant work before it's attempted per frame would actually improve the speed quite a bit - much of the loss in speed comes from attempting to do the OpenGL 1.4 pipeline in its entirety, and from doing the shader side without any optimizations at all.
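    The "strip unused work per pass" idea can be sketched roughly. This is a minimal, hypothetical illustration (the state struct and stage names are invented for this example, not actual Mesa/Gallium code): validate state once per state change and build a list containing only the enabled stages, so the per-fragment hot loop never branches on disabled features.

    ```c
    #include <stdio.h>

    /* Each fixed-function stage is a function applied to an RGBA fragment. */
    typedef void (*stage_fn)(float frag[4]);

    static void apply_texture(float frag[4]) { frag[0] *= 0.5f; }
    static void apply_fog(float frag[4])     { frag[1] *= 0.9f; }
    static void apply_blend(float frag[4])   { frag[2] *= 0.8f; }

    /* Hypothetical subset of GL enable/disable state. */
    struct gl_state { int texture_on, fog_on, blend_on; };

    /* Run once per state change, NOT per fragment: keep only enabled stages. */
    static int build_pipeline(const struct gl_state *s, stage_fn out[3])
    {
        int n = 0;
        if (s->texture_on) out[n++] = apply_texture;
        if (s->fog_on)     out[n++] = apply_fog;
        if (s->blend_on)   out[n++] = apply_blend;
        return n;
    }

    int main(void)
    {
        struct gl_state state = { 1, 0, 0 };   /* only texturing enabled */
        stage_fn pipeline[3];
        int n = build_pipeline(&state, pipeline);

        float frag[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
        for (int i = 0; i < n; i++)            /* hot loop: 1 stage, not 3 */
            pipeline[i](frag);

        printf("%d %.2f\n", n, frag[0]);
        return 0;
    }
    ```

    The per-fragment loop then does no work for fog or blending at all, instead of testing (or worse, executing) every pipeline stage for every fragment of every frame.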

    Joking aside, having a finely tuned software rendering option in Gallium would be very nice. Microsoft has shown that it doesn't have to be that slow: http://techreport.com/discussions.x/15968
    Indeed. And it's useful for doing software vertex shading for IGPs, and possibly useful as a base for starting Larrabee driver work. This observation doesn't diminish the need for Radeon support, but you need to make sure your assumptions are correct before applying them to a set of shader cores - it can be as complicated and time-consuming as the process they're going through right now with the softpipe.

  8. #18
    Join Date
    Jun 2006
    Posts
    3,046


    Quote Originally Posted by bridgman View Post
    In the meantime fast hardware and cheap memory *do* cover a multitude of sins
    It does, indeed...

  9. #19
    Join Date
    Jun 2006
    Posts
    3,046


    Quote Originally Posted by fabiank22 View Post
    Sure, within the X server or the kernel it makes no sense - or isn't even possible - to switch to a high-level language, but even if everybody knew C, there's still a world of hurt in the specifics of driver development.
    No kidding. It's not easy even with something as "simple" as a network card. Worse, we're talking about modern GPUs, which are sophisticated stream-processing engines as much as anything else - a supercomputer within your computer, as it were. They're not easy to drive poorly, let alone efficiently.

  10. #20
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,578


    Quote Originally Posted by Svartalf View Post
    No kidding. It's not easy even with something as "simple" as a network card. Worse, we're talking about modern GPUs, which are sophisticated stream-processing engines as much as anything else - a supercomputer within your computer, as it were. They're not easy to drive poorly, let alone efficiently.
    Yeah, you're right. The hardware we're running on top of is quite a lot more complicated than it was in the good old days, when coders squeezed everything out of it. It's probably unrealistic to expect hardware usage as efficient as before, but hey, we're allowed to be nostalgic, aren't we?
