It's also nice that you rant about fglrx because "fglrx most of the time can't release a version that supports the current version of Xorg." That is actually a well-argued point.
And then you rant against the FOSS drivers too, trying to say that they (ATI/AMD) don't do anything good, especially for Linux gaming. It would be nice to understand what kind of gaming you were expecting from an HD3200 integrated graphics chip...
It wasn't an off-topic post defending nVidia. The actual topic at hand is that PhysX SDK support is coming back to Linux, and I defended nVidia in that light. I'm not saying they aren't scumbags; most corporations are. But unfortunately, like most scumbags in the IT world (Adobe, Oracle, Google, etc.), we still have to deal with their crap. Fortunately, because of Linux, we don't have to deal with some other scumbags (like Microsoft). Now, am I saying they are scumbags because they won't open source their products? No. They are scumbags for creating crappy products that somehow become standard fare, so there is a high requirement for people to be able to use things like Flash, Java, and Google with its spam.
I put my money where my mouth is. nVidia works, though it occasionally has weird issues (dual-head works on one driver and is broken on the next; tuning resolution for an HDTV worked in Windows, but they took forever to release it for Linux; etc.), but even while providing only a closed-source driver, they still have better support than AMD does. That is the plain and simple truth.
AMD's GPU offerings also tend to be tuned more for DirectX performance than OpenGL; nVidia was one of the first GPU manufacturers to provide OpenGL acceleration.
I'm not even defending them; I was presenting experience. As of this date, fglrx will not work with Xorg 1.9. nVidia just released a beta that does, not to mention you can easily get the current driver to work with the IgnoreABI option.
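For anyone wanting to try it, the ABI check can be skipped either in the config file or from the command line; a minimal sketch using the standard Xorg options (check your distro's docs before relying on this):

```
# /etc/X11/xorg.conf -- skip the video driver ABI version check
Section "ServerFlags"
    Option "IgnoreABI" "True"
EndSection
```

The same thing can be passed on the server command line, e.g. `startx -- -ignoreABI`.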
By the way, I still like Matrox; it's just their lack of performance and their slow development of Linux drivers that eventually forced me to pick between nVidia and ATI, and all the reviews I read had told me to go with nVidia if I wanted a NEW video card (ATI's older, original Radeons worked with most things but were crash-prone). I would still stand by that until the open source drivers for (now) AMD video cards work without major suckage.
I'm just saying that there are some cases where the CPU is actually better suited for particular physics tasks, even if the majority are much faster on a GPU, and even physx keeps some parts on the CPU rather than offload them (cloth physics does not fall under this category as far as I know).
[quote]I'm just saying that there are some cases where the CPU is actually better suited for particular physics tasks, even if the majority are much faster on a GPU, and even physx keeps some parts on the CPU rather than offload them (cloth physics does not fall under this category as far as I know).[/quote]
The only place this would really happen is on a very limited data set (read: one particle at a time), with a very limited set of laws of physics applied to it, where clock speed becomes king. As soon as that data set becomes more complex, the massive parallelism of GPUs starts pulling away.
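To make the contrast concrete, here is a toy Python sketch (my own illustration, not from any physics engine): NumPy's whole-array operation stands in for the GPU's data-parallel style, while the explicit loop is the one-particle-at-a-time style where only clock speed helps.

```python
import numpy as np

# Toy sketch: the same particle position update done one element at
# a time (where per-element speed, i.e. clock rate, is all that
# matters) vs. over the entire data set at once. NumPy's vectorized
# form stands in for a GPU's massive data parallelism.

def step_scalar(pos, vel, dt):
    out = pos.copy()
    for i in range(len(pos)):      # one particle at a time
        out[i] = pos[i] + vel[i] * dt
    return out

def step_parallel(pos, vel, dt):
    return pos + vel * dt          # whole data set in one operation

pos = np.zeros(100_000)
vel = np.ones(100_000)
assert np.allclose(step_scalar(pos, vel, 0.5), step_parallel(pos, vel, 0.5))
```

Both give identical results; the data-parallel form just computes far more of them per unit time, which is the whole argument.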
Sure, PhysX puts some less demanding tasks on the CPU, but the net result is still that utilizing the GPU as well is a far more efficient means. Let's get one thing straight here: PhysX has never claimed to be a 100% accurate model of applied physics, nor is it ever likely to. It has always been eye candy and marketed as such. At best it could be used for "movie quality" physics. This is not a limitation of the GPU by any means; it is a limitation of being able to produce acceptable effects in realtime for the end user's enjoyment. You won't see PhysX being used for intermolecular fluid-interaction analysis, as it was never intended for that. You can, however, have every bit as accurate a model done on a GPU, with huge gains over a pure CPU setup, again because of its massive parallel capabilities versus the relatively small parallel capabilities of a CPU, if you wish to take the time to code the more accurate equations into your application. Same results as the CPU, just more of them calculated in the same period of time.
This isn't just a limitation of PhysX but also of Bullet, Havok, etc., as they are not meant to be 100% replications of real-life interaction. They are all there for eye candy, nothing more, nothing less. It just happens to be a fact that, as that eye candy gets more complex, like the more accurate real-world physics applications out there, a GPU's massive amount of parallelism will thump a CPU pretty much every time if coded correctly.
PhysX is for gaming, nothing more, nothing less, but a GPU isn't limited to PhysX.
I was trying to separate game physics from high-accuracy physics. Only recently has GPGPU been able to match CPU-level accuracy, and this is down to the floating point units of the hardware. GPU physics is not some magic that makes each and every physics calculation better; it's just highly suited to most of the algorithms (most, not all).
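The precision point can be shown with a small Python sketch (purely illustrative numbers of my choosing): accumulating the same value in single vs. double precision, which is the sort of gap older single-precision-only GPU hardware suffered from.

```python
import numpy as np

# Sequentially accumulate 0.1 a million times in single and double
# precision. The float32 total drifts off by hundreds, while the
# float64 total stays extremely close to the true value of 100000.

n = 1_000_000
s32 = np.float32(0.0)
s64 = np.float64(0.0)
inc32 = np.float32(0.1)
for _ in range(n):
    s32 += inc32
    s64 += 0.1

err32 = abs(float(s32) - 100000.0)
err64 = abs(float(s64) - 100000.0)
print(err32, err64)   # single precision error is orders of magnitude larger
```

Harmless for bouncing debris, but fatal for accuracy-sensitive work, which is why double-precision GPU hardware mattered.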
The interactivity problem is not a bunch of bull, as you seem to be forgetting one little detail: the data must be moved to and from the GPU. This takes time, and it is why GPU physics is mostly limited to eye candy. If this weren't an issue, then I assure you that GPU physics would be everywhere in games right now.
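For a sense of scale, a back-of-envelope Python sketch; every figure in it (particle count, bytes per particle, bus bandwidth) is an assumed round number of mine, not a measurement of any real card:

```python
# Illustrative only: assumed round numbers, not benchmarks.
n_particles = 1_000_000
bytes_per_particle = 32          # e.g. float32 position + velocity + padding
pcie_bandwidth = 8e9             # ~8 GB/s, an assumed PCIe-era ballpark

# Gameplay-visible physics state has to cross the bus twice per
# frame: once to the GPU and once back for the game logic to read.
transfer_s = 2 * n_particles * bytes_per_particle / pcie_bandwidth
frame_budget_s = 1 / 60.0        # one frame at 60 fps

print(f"round trip: {transfer_s * 1e3:.1f} ms "
      f"of a {frame_budget_s * 1e3:.1f} ms frame")
```

With those assumptions, the round trip alone eats roughly half of a 60 fps frame before any computation happens, which is exactly why engines keep GPU physics results on the GPU as eye candy instead of reading them back for gameplay.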
There's also the issue, of course, that a GPU isn't limited to physics; it was never designed for physics in the first place. It has to do (gosh) graphics too. Where to perform which calculations is as much a balancing act in games as anything else.