
Thread: Intel's Brewing New Linux Graphics Driver Features

  1. #11

    Quote Originally Posted by stqn View Post
    Hopefully good quality videos will show up on https://www.youtube.com/user/fosdemtalks/videos
    FOSDEM's recording crew didn't show up in the X.Org dev room at all. I was the only one recording any videos.

    These videos were all just recorded with a simple Flip Mino HD camera. Tapping? There was no tapping, unless it was the sound of a MacBook Pro keyboard or a beer bottle hitting the desk...

  2. #12

    Quote Originally Posted by elanthis View Post
    I wish ApiTrace were as awesome as the Linux folks say it is. The current incarnation may be great for debugging the stack, but it's not all that useful for debugging complex applications. It's still a lot less useful in many ways than the (no longer updated, buggy, and poorly designed) gDEBugger, and of course nowhere even remotely close to PIX, PerfHUD, etc.
    [..]
    It's still a pretty young project, so hopefully it will improve with time. I did notice, however, that icculus wrote basic profiling support for it and used that to track down a performance problem in a game, so it's clearly useful in some cases.

    Further profiling support and the planned ability to trim traces down to just a couple of frames also sound like really useful tools.

  3. #13

    Quote Originally Posted by Serafean View Post
    Hi,

    I'm getting a bit worried about the Intel camp. Based on the news I've been reading for the past year, I feel like they're isolating themselves.

    How are they isolating themselves? The main thing I see is their refusal to use Gallium3D. I admit to not knowing how complete a rewrite it would require, but getting a bunch of APIs implemented for "free" can't be something they can simply ignore, can it?
    Second: TTM came along, and suddenly Intel appeared with GEM. I don't know the technical differences, but knowing that both radeon and nouveau use a GEM-ified TTM, I get the feeling something isn't quite right.
    Third: UXA/SNA. I guess an acceleration architecture tailored to specific hardware has its advantages, but it really feels like Intel is playing in a corner of the sandbox where none of the others go.
    This leads me to fragmentation: SNA/Glamor/UXA. It sounds like a major rewrite is taking place all the time, leaving the user unsure of what is best...

    All the same, thanks to the Intel devs for their OpenGL 3.0 push, this is a great example of work that benefits everyone!

    Serafean
    True. Like the article says, Glamor suffers because it has to go through the entire OpenGL stack, which slows it down, while SNA suffers because its back-end has to be rewritten for each hardware generation. I guess next they will "invent" a common IR shared between OpenGL and SNA (and, in the future, OpenCL) that will let them write a single back-end per hardware generation. After that they will implement all the high-level APIs on top of this IR. Sound familiar? Maybe because that's exactly what Gallium3D does. But since Intel said "there is no technical reason for Intel to switch to Gallium3D", they will just implement their own version.
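    The layering being described (and which Gallium3D implements) can be sketched abstractly. This is a hypothetical illustration, not real driver code: all names (`IROp`, `gl_frontend`, `Gen7Backend`, etc.) are invented for the example. The point is that API frontends all emit one shared IR, so the per-hardware-generation code lives in a single backend rather than being duplicated per API.

    ```python
    # Hypothetical sketch of a "common IR" driver stack (illustrative only).
    # Frontends for each high-level API emit the same intermediate
    # representation; one backend per hardware generation lowers that IR.
    from dataclasses import dataclass


    @dataclass
    class IROp:
        """One instruction in the shared intermediate representation."""
        name: str
        args: tuple


    def gl_frontend(draw_calls):
        """Translate OpenGL-like draw calls into common IR ops."""
        return [IROp("draw", (c,)) for c in draw_calls]


    def cl_frontend(kernels):
        """Translate OpenCL-like kernel launches into the same IR."""
        return [IROp("dispatch", (k,)) for k in kernels]


    class Gen7Backend:
        """One hardware generation's backend: the only per-generation code."""

        def compile(self, ir):
            # Lower each IR op to a (fake) generation-specific command.
            return [f"gen7:{op.name}{op.args}" for op in ir]


    # Two different APIs funnel into one IR, consumed by one backend.
    ir = gl_frontend(["triangle"]) + cl_frontend(["saxpy"])
    print(Gen7Backend().compile(ir))
    ```

    Adding a new API (say, video decode) would mean writing one new frontend; adding a new hardware generation would mean writing one new backend, with no per-API-times-per-generation explosion.
    
    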
