
Thread: Intel OpenGL Performance: OS X vs. Windows vs. Linux

  1. #21
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,991

    Default

    Quote Originally Posted by 89c51 View Post
    They could probably have gone with Gallium3D, which is cross-platform, and had the "same" driver on all 3 OSes (and benefited from other people's work), but they presumably have their reasons for not doing it.
    DX on Windows may have something to do with it

    Also, the Windows team is much bigger but produces a worse driver. If that team started coding on the Linux driver too, I'm very afraid the Linux driver's quality would take a nosedive to Windows-driver levels.

  2. #22
    Join Date
    Feb 2009
    Posts
    25

    Default

    STOP TESTING ONLY ON THE MAC; TEST ON BOTH A PC AND A MAC IF YOU WANT OS X VS. WINDOWS VS. LINUX.

    and please stop telling me it's the same hardware, because I know it's the same hardware, BUT these are not true benchmarks

    and why don't you run benchmarks between Windows and Linux on a PC with the official NVIDIA or ATI drivers installed (NOT Mesa)?
    Last edited by nir2142; 08-29-2012 at 01:10 PM.

  3. #23
    Join Date
    Oct 2011
    Location
    Rural Alberta, Canada
    Posts
    1,019

    Default

    Quote Originally Posted by nir2142 View Post
    and why don't you run benchmarks between Windows and Linux on a PC with the official NVIDIA or ATI drivers installed (NOT Mesa)?
    Because he already has - not recently, but he has in the past.

  4. #24
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,020

    Default

    Quote Originally Posted by curaga View Post
    DX on Windows may have something to do with it
    Almost 2 years ago...
    http://cgit.freedesktop.org/mesa/mes...09c1b8903d438b

  5. #25
    Join Date
    Jun 2009
    Posts
    1,100

    Default

    Quote Originally Posted by gamerk2 View Post
    Look at this the way I do:

    The driver is processing some data passed in by some game. That data should be the same regardless of the host platform. As such, the processing within the driver should be identical across all platforms. The ONLY parts of the driver that should be different across OSes are any calls that use OS APIs, which ideally would be replaced in a 1:1 manner.

    ...then you get into things OS A supports that OS B doesn't, and you start to see a lot of kludges in the code base to make things work. OS A supports creating a thread in a suspended state; OS B doesn't. And so on and so forth.

    For example: my driver needs to create a thread in a suspended state. On Windows, simply invoke CreateThread() with the CREATE_SUSPENDED flag. Done.

    On Linux, pthread_create() is the obvious choice...except there's no way to suspend the thread at creation. So now you need to kludge the code to approximate the same behavior, often at a performance loss. And of course, it's non-standard behavior between different devs, which can (and will) lead to issues when drivers start talking to each other. [Seriously, POSIX needs to add a parameter to pthread_create() to allow for a suspended start. It causes too many headaches, especially in languages like Ada that separate thread creation from thread start.]

    Now, when you run into problems like that a couple hundred times while writing the driver...you get the idea. The driver can be no better than its interface to the OS.
    1.) No. DirectX is handled quite differently by most drivers, and on the OpenGL side you have several platform-specific windowing extensions (AGL, WGL, GLX, among others); in the WGL case, depending on the driver, it may even be emulated on top of DirectX. So a port takes some analysis of both the source and the destination platform. It shouldn't be that way, but for many reasons it is.

    2.) No. The data a game passes in (API calls, or whatever) is not OS-independent at all (maybe the textures are), nor is it toolchain-independent or graphics-stack-independent. You seem to assume OpenGL is a language or some sort of proto-IR, but in the real world it is a library, an amazingly flexible one, used in conjunction with many languages (mostly C++ and x86/ARM assembly), and the driver has to be very smart about what it can do, can't do, or must emulate on each OS. Even though every OS can manage the hardware, they do it in extremely different ways (1:1 translation only happens in the movies; ID4 comes to mind). Sometimes that helps; sometimes it forces you to rethink a million lines of code.

    Additionally, you assume every GL API call maps directly to a GPU instruction. It doesn't: OpenGL is hardware-agnostic (which makes it more complex), so an implementation can use the CPU, GPU, preprocessors, clusters, etc., and many of those options aren't possible on Windows while being perfectly standard on Linux (though not yet in the open-source drivers). You also wrongly assume that every OS API call with the same purpose is named the same and performs the same, which isn't true either (that's like saying an F-35 and a helicopter should be the same because both fly). And an algorithm that is efficient on Windows can be terribly slow on Linux or Mac compared to a version modified to use that OS's native facilities (there are many, many examples of this), which mostly drives you to rewrite half of your GLSL compiler trying to find a middle ground between OSes.

    And just to name a few more factors that force you to rethink most of that code to meet a performance expectation: the filesystem, CPU scheduler, vectorization, I/O subsystem, latency, memory handling, interrupt handling, and overall OS flexibility (Windows pretty much allows any dirty hack you can think of, where Linux will abort the compilation or sigsegv you straight out). And many more.

    3.) Please explain this suspended-thread thing and why you think it's so important, because you have about six posts going on about it, and after 10 years of developing threaded C++ apps for Linux I have never found a technical reason to suspend threads in efficient code (I always design my apps to be thread-safe, with small atomic units of work), and I don't remember using it in my Windows days either. So I'd like an example or something to get your point here.

  6. #26
    Join Date
    Jun 2009
    Posts
    1,100

    Default

    Quote Originally Posted by nir2142 View Post
    STOP TESTING ONLY ON THE MAC; TEST ON BOTH A PC AND A MAC IF YOU WANT OS X VS. WINDOWS VS. LINUX.

    and please stop telling me it's the same hardware, because I know it's the same hardware, BUT these are not true benchmarks

    and why don't you run benchmarks between Windows and Linux on a PC with the official NVIDIA or ATI drivers installed (NOT Mesa)?
    Because it's a benchmark of the open-source driver for an Intel i5 against the Intel drivers on the other OSes??? Too much weed??

  7. #27
    Join Date
    Apr 2011
    Posts
    305

    Default

    1. The Intel HD 4000 is fast: 16 cores × 4 shaders at 64-bit (or 8 shaders at 32-bit) × FMAC × 1.25 GHz = 170 GFLOPS 64-bit (NVIDIA comparison), or 340 GFLOPS 32-bit (AMD comparison), or 500 MAC-GFLOPS (vs. the AMD 6000 series and below, which lack FMAC).

    2. On Linux it is a little slower. Not because of Linux, but because of the OpenGL version support, which is newer on Windows.

    3. On Windows it's only a little newer: yes, it can do OpenGL 3.1 and 4.0, but not 3.2, 3.3, 4.1, 4.2, or 4.3. And that's not a year away; Intel works faster now.

    4. Prefer free software. That way all of them will submit to us.

  8. #28
    Join Date
    Apr 2011
    Posts
    305

    Default

    Quote Originally Posted by jrch2k8 View Post
    1.) no, in the case of DX is quite different[most drivers] and in the opengl cases you have many variants with specific custom extension [AGL,WGL,GLX among others]... [snip: full post quoted above]

    I agree with most of it.

  9. #29
    Join Date
    Mar 2012
    Posts
    127

    Default

    Quote Originally Posted by artivision View Post
    1. The Intel HD 4000 is fast: 16 cores × 4 shaders at 64-bit (or 8 shaders at 32-bit) × FMAC × 1.25 GHz = 170 GFLOPS 64-bit (NVIDIA comparison), or 340 GFLOPS 32-bit (AMD comparison), or 500 MAC-GFLOPS (vs. the AMD 6000 series and below, which lack FMAC).

    2. On Linux it is a little slower. Not because of Linux, but because of the OpenGL version support, which is newer on Windows.

    3. On Windows it's only a little newer: yes, it can do OpenGL 3.1 and 4.0, but not 3.2, 3.3, 4.1, 4.2, or 4.3. And that's not a year away; Intel works faster now.

    4. Prefer free software. That way all of them will submit to us.
    So higher opengl version = higher performance? I see...

  10. #30
    Join Date
    Jan 2009
    Posts
    462

    Default

    Quote Originally Posted by nir2142 View Post
    STOP TESTING ONLY ON THE MAC; TEST ON BOTH A PC AND A MAC IF YOU WANT OS X VS. WINDOWS VS. LINUX.

    and please stop telling me it's the same hardware, because I know it's the same hardware, BUT these are not true benchmarks

    and why don't you run benchmarks between Windows and Linux on a PC with the official NVIDIA or ATI drivers installed (NOT Mesa)?
    My hope is that Michael is doing it to annoy you, so that you move to another forum. I hear that the anandtech forums are full of like-minded individuals. You could go there. Let me help you. Here's a link. See you around.

