
Thread: Intel OpenGL Performance: OS X vs. Windows vs. Linux

  1. #11
    Join Date
    May 2011
    Posts
    1,501

    Default

    Confused. Why doesn't Intel have a unified driver that's pretty much the same across all systems? They actually have different teams working on different drivers? WTF?

  2. #12
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,312

    Default

    As another thought, I remember from previous tests that Linux usually performed worse than Mac and Windows at higher resolutions but sometimes came out on top at lower resolutions. Linux also seems to perform worse on widescreen than on 4:3, while Mac tended to show the opposite pattern. I know it's very time-consuming and probably tedious for Michael to test so many screen resolutions, but the results have consistently borne this out.

  3. #13
    Join Date
    Jun 2009
    Posts
    2,929

    Default

    Quote Originally Posted by johnc View Post
    Confused. Why doesn't Intel have a unified driver that's pretty much the same across all systems? They actually have different teams working on different drivers? WTF?
    Because the Linux stack is based on the in-kernel DRM and Mesa, and others aren't?

    Intel is doing the Right Thing here.

  4. #14
    Join Date
    May 2011
    Posts
    1,501

    Default

    Quote Originally Posted by pingufunkybeat View Post
    Because the Linux stack is based on the in-kernel DRM and Mesa, and others aren't?

    Intel is doing the Right Thing here.
    But why would that matter? They can use the Linux infrastructure while keeping the core driver stuff the same. Unless they insist that the Windows version be closed source -- which makes no sense since it's 100% open on the Linux side.

  5. #15
    Join Date
    Jun 2009
    Posts
    2,929

    Default

    I don't know if opening up their Windows driver and completely bypassing Mesa and parts of the kernel and X.org would be a better way to go, especially considering that Intel has been one of the primary drivers behind all recent developments: GLSL compiler, DRI2/GEM/KMS. AMD didn't go that way either.

    Intel does not go for the high-performance, "every-frame-counts" crowd, so it probably makes more sense to share much of the infrastructure maintenance and development with the likes of AMD, Red Hat, VMware and others, instead of maintaining their own port of the Windows driver and updating it all the time to keep up with the kernel and X, like Nvidia is doing with their blob.

  6. #16
    Join Date
    Jan 2009
    Posts
    1,679

    Default

    Quote Originally Posted by pingufunkybeat View Post
    Because the Linux stack is based on the in-kernel DRM and Mesa, and others aren't?

    Intel is doing the Right Thing here.
    They could have probably gone with Gallium3D, which is cross-platform, and had the "same" driver on all 3 OSes (plus benefited from other people's work), but presumably they have their reasons for not doing it.

  7. #17
    Join Date
    Oct 2011
    Location
    Rural Alberta, Canada
    Posts
    1,030

    Default

    Quote Originally Posted by Phoronix
    When moving to the latest OpenArena release (v0.8.8) that now takes advantage of GLSL shaders, bloom, and other more modern OpenGL features, the Windows 7 and Linux performance was close at a low resolution of 1280 x 1024.
    *Casts a weary eye at Michael*

  8. #18
    Join Date
    Jun 2009
    Posts
    1,136

    Default

    Quote Originally Posted by johnc View Post
    But why would that matter? They can use the Linux infrastructure while keeping the core driver stuff the same. Unless they insist that the Windows version be closed source -- which makes no sense since it's 100% open on the Linux side.
    1.) it would probably take more time to make their Windows blob use the Linux stack than to write the OSS driver from scratch
    2.) they would have to deal, as the nvidia/amd blobs do, with a hell of a lot of dependency issues and refactor a huge amount of code
    3.) intel rightfully believes it's better to exploit the native graphics system instead of making Frankensteins to get by on the cheap
    4.) it's excellent for testing new techniques that can be ported from linux to windows/mac
    5.) it's a great way to push the native linux stack to the next level
    6.) they truly believe in the "kill da blobs" mantra
    7.) using the native graphics system allows them to use, test, and support emerging technologies like wayland in real time, which can later be profitable for the mobile division [tizen/x86 mobile combo]

    "They can use the Linux infrastructure while keeping the core driver stuff the same" -- no. A GPU driver is a massive, massive endeavor, and the complexity of doing that is plainly insane [just modifying the Windows code that handles FB memory can affect millions of LOC just to make it compile; let's not even get into the grim scenario of optimizations]. Look at the nvidia/AMD case: after 10+ years of making this transition you still find many, many issues [amd more than nvidia, but the latter is not problem-free either]. Sure, but nvidia/amd GPUs are better/faster/etc., so the driver should be simpler, right? No: basically all the infrastructure is the same, and only the device-specific ASM differs [1-5% of the code, maybe] plus a couple of other very small subsystems. Regardless of the GPU, the infrastructure is equally massive.

    And like I said a zillion times already, native Linux drivers and Gallium promise a very bright future [the Gallium design is quite a killer, actually], but the problem is not Linux, it's the lack of developers. Linux has maybe 10 guys rewriting an entire graphics stack and writing the drivers at the same time [before any dumb questions: no, there are circular dependencies between a stack and the drivers, so you can't finish one before the other]. If we had 100 developers, Linux would probably murder everything else in FPS and we'd be loling at Khronos because we'd have OpenGL 5 working before the draft. Sadly, this work is too complex and requires a hell of a lot of hardware expertise, and that kind of developer just doesn't grow on trees.

  9. #19
    Join Date
    Dec 2008
    Location
    San Bernardino, CA
    Posts
    232

    Default

    The amount of work done by Intel to advance the Linux graphics stack is very commendable! These tests show, however, that there is still more performance work to be done. I really hope the Intel Linux Graphics team is able to catch up to, and even surpass, the Windows team in performance and features.

    Since the HD 3000/4000 IGP is not the fastest GPU to begin with, in this case every frame really does count. Especially if we will be running some AAA titles (e.g. from Valve). Here's hoping we start seeing performance (and feature) payoffs in coming releases thanks to the collaboration with Valve!

    Thanks Michael for these very useful Linux vs Windows comparisons. It lets us know that much more work needs to be done.

  10. #20
    Join Date
    Jun 2012
    Posts
    346

    Default

    Quote Originally Posted by pingufunkybeat View Post
    I don't know if opening up their Windows driver and completely bypassing Mesa and parts of the kernel and X.org would be a better way to go, especially considering that Intel has been one of the primary drivers behind all recent developments: GLSL compiler, DRI2/GEM/KMS. AMD didn't go that way either.
    Look at this the way I do:

    The driver is processing some data passed in by some game. That data should be the same regardless of the host platform. As such, the processing within the driver should be identical across all platforms. The ONLY parts of the driver that should be different across OS's are any calls that use OS API's, which ideally would be replaced in a 1:1 manner.

    ...then you get into things OS A supports that OS B doesn't, and you start to see a lot of kludges in the code base to make things work. OS A supports creating a thread in a suspended state; OS B doesn't. And so on and so forth.

    For example: My driver needs to create a thread in a suspended state. On Windows, simply invoke CreateThread() with the CREATE_SUSPENDED flag. Done.

    On Linux, pthread_create() is the obvious choice...except there's no way to suspend the thread at creation time. So now you need to kludge the code to approximate the same behavior, often at a performance loss. And of course, the kludge is non-standard between different devs, which can (and will) lead to issues when drivers start talking to each other... [Seriously, POSIX needs to add a parameter to pthread_create() to allow for a suspended startup. It causes too many headaches, especially in languages like Ada that separate thread creation from thread start.]
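    The usual workaround looks something like this: a minimal sketch (the wrapper names `susp_thread_create`/`susp_thread_resume` are made up for illustration, not any real API) that emulates CREATE_SUSPENDED by parking the new thread on a condition variable until it is explicitly resumed.

    ```c
    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical wrapper emulating Windows' CREATE_SUSPENDED on POSIX:
       the new thread runs a trampoline that blocks on a condition
       variable until resume is signalled, then calls the real body. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int             resumed;
        void *(*fn)(void *);
        void *arg;
    } susp_thread_t;

    static void *trampoline(void *p) {
        susp_thread_t *t = p;
        pthread_mutex_lock(&t->lock);
        while (!t->resumed)                 /* park until resumed */
            pthread_cond_wait(&t->cond, &t->lock);
        pthread_mutex_unlock(&t->lock);
        return t->fn(t->arg);               /* run the real thread body */
    }

    static int susp_thread_create(pthread_t *tid, susp_thread_t *t,
                                  void *(*fn)(void *), void *arg) {
        t->resumed = 0;
        t->fn = fn;
        t->arg = arg;
        pthread_mutex_init(&t->lock, NULL);
        pthread_cond_init(&t->cond, NULL);
        return pthread_create(tid, NULL, trampoline, t);
    }

    static void susp_thread_resume(susp_thread_t *t) {
        pthread_mutex_lock(&t->lock);
        t->resumed = 1;
        pthread_cond_signal(&t->cond);
        pthread_mutex_unlock(&t->lock);
    }

    static int result = 0;

    static void *worker(void *arg) {
        result = 42;
        (void)arg;
        return NULL;
    }

    int main(void) {
        susp_thread_t st;
        pthread_t tid;
        susp_thread_create(&tid, &st, worker, NULL);
        assert(result == 0);    /* worker cannot have run yet: it is parked */
        susp_thread_resume(&st);
        pthread_join(tid, NULL);
        assert(result == 42);   /* worker ran only after the resume */
        puts("ok");
        return 0;
    }
    ```

    Note the cost the post is complaining about: every thread pays for a mutex, a condition variable, and an extra wakeup at startup, just to get behavior Windows provides with a single flag.
    
    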

    Now, when you run into problems like that a couple hundred times while writing the driver...you get the idea. The driver can be no better than its interface to the OS.
