05-04-2010, 10:15 AM
Not sure I agree with this. You're looking at a level of activity that only started a couple of years ago and declaring the initial results "too late", which is fair, but I would argue that the reason for that lateness is "starting too late", not the inability to keep up with new hardware introduction.
Originally Posted by allquixotic
Using ATI hardware as an example: in 2007 there was decent 2D support for the 3xx/4xx GPUs (i.e. the 2002-2004 SKUs), experimental 3D for the same parts, plus the initial RE'ed avivo driver for 5xx. The only fully supported parts were r2xx and earlier, which were ~6 years old at the time. This matches what you are saying.
2-1/2 years later, the graphics stack has been rebuilt around a common memory manager, which is a critical prerequisite for further 3D work, 2D has been largely rewritten to make use of the 3D engine, *and* hardware support has gone from being ~6 years behind to ~6 months behind on average, in the sense that Evergreen has had modesetting support for a while (3 months behind), power management on EG is happening in sync with older parts (6 months behind), but acceleration is just starting to light up now (9 months behind).
Measuring status at the start of a project is fine, and it would be totally fair to say that open source development was 6 years behind in 2007 when the project started, but given the amount of catch-up in the last couple of years it's really hard to argue that your "years behind" statement applies today or will apply in the future.
05-04-2010, 10:26 AM
Yes, there are only a few settings, but they are not all simple yes/no switches. The timer frequency, for instance, has multiple possible values. The defaults you get even in a vanilla kernel are not optimized for either server or desktop use but are a compromise between both worlds. You might want to look at part of the discussion here for an indication of why the defaults are chosen:
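To make that concrete, the timer frequency is one such multi-valued option. A minimal, illustrative kernel .config fragment is shown below; the symbol names come from the mainline kernel's Kconfig, but the specific values chosen here are an assumption for a desktop-leaning build, not a recommendation:

```
# Desktop-leaning choices (illustrative only).
# Server builds often prefer CONFIG_HZ_100 or CONFIG_HZ_250
# and CONFIG_PREEMPT_NONE for better throughput.
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_PREEMPT=y
```

A higher tick rate and full preemption favor interactive latency at some cost in throughput, which is exactly why a one-size-fits-all default has to split the difference.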
Originally Posted by Shining Arcanine
05-04-2010, 10:34 AM
Yeah, I agree with that: benchmarks should test as many applications as possible to extract valid results, including legacy applications. The issue is that the sample size is rather small and heavily biased against OpenGL: no one is actually shipping OpenGL 3.x/4.x software right now (except Unigine, perhaps), so we might be testing modern D3D10+ pipelines against legacy OpenGL pipelines - you can't really draw solid conclusions that way. That's why I would prefer GL3.x vs D3D10 tests, for example.
Originally Posted by deanjo
That said, I wouldn't give too much weight to the "default renderer" argument. More often than not, the default is D3D simply because Windows doesn't ship vendor OpenGL drivers (ICDs) by default (and because Intel's Windows OpenGL stack sucks, but that's another story entirely). Google is creating a WebGL implementation on top of D3D for precisely those reasons (how sad is that?)
05-04-2010, 10:47 AM
BTW, has anybody bothered to figure out the bang-per-buck comparison yet with the Specview tests? You don't get a whole lot of value with the dual Opteron 2384 + FirePro V8800 system, considering you could build yourself the complete i7 + 9800GTX+ system for less than the price of the V8800 alone.
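For anyone who wants to run the numbers themselves, here is a minimal sketch of a performance-per-dollar calculation. The prices and composite scores below are placeholder assumptions for illustration only, not actual Specview results or street prices:

```python
# All prices and scores are hypothetical placeholders,
# not measured Specview numbers.
systems = [
    ("dual Opteron 2384 + FirePro V8800", 3200.0, 40.0),
    ("Core i7 + 9800GTX+", 1200.0, 25.0),
]

def perf_per_dollar(score, price):
    """Return composite benchmark points earned per dollar spent."""
    return score / price

for name, price, score in systems:
    print(f"{name}: {perf_per_dollar(score, price):.4f} points/$")
```

With these placeholder figures, the cheaper system wins on points per dollar even though its raw score is lower, which is the shape of argument being made above.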
05-04-2010, 10:58 AM
Can you tell me how Phoronix gets the hardware to perform the tests? Has the hardware been donated by an organization or company? If so, which one?
05-04-2010, 11:03 AM
Pretty sure it's a mix and match. Some AIBs send their wares in, but I believe most of the product is paid for out of pocket.
Originally Posted by YAFU
05-04-2010, 11:04 AM
My first post! Maybe my last!
Originally Posted by BlackStar
Having just started to have a go at learning OpenGL, I found your comment interesting. You state that the design is 'broken'. Design is the 'how'; the how of a specification. I have recently read the OpenGL specs but find them not much more than an API functionality spec. It is hard to gain an understanding of the overall process intended. Threading is hardly mentioned in the glspec32.core document.
So, do you think that the design is 'broken' or 'old' because the specs are also in need of updating to reflect modern GPUs and CPUs: multi-processors and threaded software techniques?
05-04-2010, 11:14 AM
I think the spec somewhere mentions that it very consciously doesn't specify how it should be implemented. That's entirely up to the hardware manufacturers. The spec can only make sure that it doesn't get in the way of a particular implementation.
Originally Posted by Vanir
05-04-2010, 11:31 AM
I see all this flame about DX, OpenGL, etc, and I see nothing about the real conclusion you're all omitting: if one can reasonably play games with Ubuntu, maybe better with other Linux distros, why then should anyone pay for a Windows license at all?
Maybe because some titles haven't made it into Linux yet? Because some people believe that paying for Windows and those extra FPS is worth it? Who knows?
My point is that the article proves that Linux is competently able to handle games, as long as you **don't** have an Intel IGP. Period. Want more details? More benchmarks? More options enabled/disabled? Fine. But I believe that more benchmarks certainly will not change the conclusion of the article. At least, that's my opinion.
05-04-2010, 12:35 PM
Not sure how you can draw a line like that when there are failures to even run the applications on certain setups, not to mention that the tests were done with closed-source drivers. Other facets of gaming have to be considered as well, such as audio.
Originally Posted by Caveira
Yes, under ideal circumstances and hardware configurations Linux can keep up with Windows in gaming, and those circumstances depend heavily on outside third-party closed-source support. We still, however, do not have a good X vs Y comparison, because not all real-life usage capabilities and functions were explored.