I'll just add briefly that merging back into Nouveau mainline, if done properly, could well be the best thing for both projects after a certain amount of time spent apart. Hopefully the two communities continue to work together on their common ground, and maintain their compatibility points (e.g. gallium on top of pscnv, mostly unmodified ddx) so that the merge, if it happens, will be seamless.
Of course, any such merge would have to maintain the older paths for pre-NV50 hardware, so it wouldn't be a simple matter of copy-pasting the entire pscnv work into the nouveau branch. But if it turns out that pscnv can do certain things better than nouveau without negatively impacting any particular use of the GPU (2d, 3d, video playback, GPGPU, frying an egg), then that's great. If you do merge with nouveau, though, please run performance comparisons on the 3d side for a while to make sure you aren't regressing it. I don't think many users would appreciate (hypothetically) a merge with nouveau that improves GPGPU performance at the expense of 3d.
I agree here: not everything is shareable, but some of it is. When you already know how to implement something efficiently, you go much faster.
I understand your problem, but if you were a graphic designer, I can assure you that you would love to have GPGPU made simple with HMPP.
Windows and MacOS are using it more and more.
You'll find some stats on what hardware nouveau users actually run. NV40 is only about 25%, and I'd guess that number decreases every day.
But I completely agree with you. We hope pscnv and nouveau will merge some day, but pscnv needs to experiment with things first.
Any project that increases the Nvidia graphics knowledge base and experience is great.
I'd love to see some OpenCL support in open source drivers. I've only done CUDA, but I'd be willing to switch to get away from Nvidia's binary-only drivers.
For those wondering about the OpenGL 4 comment, here is what was said in the email:
For 2011 I'm hoping the community can work together to solve:
- Codec and video
- Portability (BSD*, OpenSolaris - which is not dead, and in a crazy world ReactOS)
- Highly optimized shader compiler - There was a recent proposal on this, but frankly the people who wrote it have nfc about NVIDIA hw and it was a terrible fit :(
So a company is writing software (not just their compiler, but the underlying layers as well), optimized for their needs and products (obviously to sell something, to justify all this development), and generously releasing code and documentation to the open source community at large. How could the community not welcome this with open arms, especially when that information may directly benefit parallel projects or bring fresh eyes to related problems? That seems to be pretty much how open source projects work these days, as I understand it.
It's even difficult to argue that this is a 'specialized niche product' [as measured by the narrowly targeted Phoronix survey], considering that computers, GUIs, multimedia (CDs and soundcards), 3d, cell phones, etc. were all once niche products, and how bad people are at recognizing something they want (until Steve Jobs packages it in a pretty box). Especially given all the excitement in the media around GPGPU and the delivery of actual products (the two latest releases of Photoshop being a very visible example), it's hard to ignore GPGPU's potential application to pretty well everything.
Really... who cares if 2d/3d graphics rendering is a secondary priority for this company? That's like bashing the MySQL team for not making disk storage a higher priority; someone will address that need (and it does seem they are working on it, and have hired community developers who have worked on it... which is great for the community!). Open source allows people to solve their own problems, and the useful solutions are merged back in... and if a solution only meets the needs of their customers, that's OK too (they seem to be sharing the fruits of their efforts for us to use for our own purposes).
I really appreciate the work they are embarking on...
Just wanted to say a huge thank you to the PathScale team for contributing this work despite all the undue criticism.
The beauty of open source is that you can have different goals but still benefit from each other's work. If there are problems with the current graphics infrastructure in Linux, then having a fork that experiments with different ways of doing things can only be called a good thing, especially when that fork is supported by a professional compiler company.
There is no doubt that GPGPU is the future, maybe not within the next year for desktop applications, but five years from now we'll be very happy that the open source community started working now.
On another note, I'm very excited about what a hybrid model using both CPU and GPU resources in a cluster framework such as MapReduce could do for the HPC market; if it's easy enough to work with and the compilers are smart enough, this could make many things possible. Imagine writing your code in a high-level language and having it effortlessly run on something like Amazon's EC2. That would really help the academic community and smaller data-driven businesses.
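To illustrate the appeal of that model (a minimal sketch, not anything from pscnv or PathScale): the map phase of a MapReduce-style job is embarrassingly parallel, so the same high-level code could in principle be dispatched to CPU workers or offloaded to a GPU by a smart compiler/runtime. Here it just uses CPU worker processes; the function names are made up for the example.

```python
# Hypothetical sketch: a MapReduce-style sum of squares.
# A hybrid runtime could offload the map phase to a GPU;
# here we simply fan it out to CPU worker processes.
from multiprocessing import Pool

def square(x):
    return x * x

def mapreduce_sum_of_squares(values):
    with Pool(4) as pool:
        mapped = pool.map(square, values)  # "map" phase, parallel
    return sum(mapped)                     # "reduce" phase

if __name__ == "__main__":
    print(mapreduce_sum_of_squares(range(10)))  # 0^2 + 1^2 + ... + 9^2 = 285
```

The point is that nothing in the high-level code names a device; swapping the backend is the runtime's job, which is exactly what would make such clusters accessible to non-specialists.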
Furthermore, more attention being paid to the compiler pipeline and IR languages can also only be seen as a good thing, since everyone wins here, not just NVIDIA users.
Looking forward to seeing the fruits of your labour, rock on! =)
Just a little question. You are replacing a lot of components in the current graphics stack. Which of those will be open-sourced and pushed upstream?