Page 3 of 4
Results 21 to 30 of 39

Thread: The State, Direction Of The PSCNV Nouveau Fork

  1. #21
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by MPF View Post
    Ack, they deserve it.

    But nVidia cards really are well designed and are currently the most used ones :s

    I felt like you until I received a free laptop from my school with an nVidia card. I had to choose between slow 2D support with the latest kernels, or the feature-full, almighty blob.

    Nouveau has been a great fit so far, and I hope PathScale's effort will help improve the open source support of these cards.
    Look, they already provide accelerated X on Fermi and trust me, Fermi changed _a lot of things_!
    Well, that's the bonus side. I am not, and cannot be, denying people the right to open-source stuff. I just think the effort might be more useful elsewhere. However, nouveau could use an R&D department to do things for them.

  2. #22
    Join Date
    Sep 2008
    Posts
    989

    Default

    I'll just add briefly that merging back into Nouveau mainline, if done properly, could well be the best thing for both projects after a certain amount of time spent apart. Hopefully the two communities continue to work together on their common ground, and maintain their compatibility points (e.g. gallium on top of pscnv, mostly unmodified ddx) so that the merge, if it happens, will be seamless.

    Of course, any such merge would have to maintain older paths for pre-NV50 hardware, so the merge wouldn't just be a simple matter of copy-pasting the entire pscnv work over into the nouveau branch. BUT, if it turns out that pscnv can do certain things better than nouveau without negatively impacting any particular use of the GPU (2d, 3d, video playback, GPGPU, frying an egg), then that's great. But please remember that if you do merge with nouveau, do some performance comparisons on the 3d side for a while to determine whether you are negatively impacting that. I don't think very many users would appreciate (hypothetically) a merge with nouveau that improves GPGPU performance at the expense of 3d.

  3. #23
    Join Date
    Feb 2009
    Location
    France
    Posts
    309

    Default

    Quote Originally Posted by allquixotic View Post
    Oh, the OpenGL 4 thing was a Michael guess? I take it the "ReactOS" mention was also Michael?

    Heh... the article makes it seem like Christopher Bergström wrote "OpenGL 4" in his email:
    I'm pretty sure it is, indeed. I've just asked Christopher about it.



    Quote Originally Posted by allquixotic View Post
    So the article was worded in a misleading manner, and in fact OpenGL 4 was not claimed as being on the todo list. That's fine. See, that is good for PathScale, because it just shows that you guys are being honest about what you think you can realistically accomplish, and what your priorities are. Making outlandish claims that you don't follow up on just makes you seem like a fool. Thankfully, this was Michael's words, not PathScale's, so you guys are off the hook on that

    But without even a plan for OpenGL 4 support, that just further solidifies the stance that this driver is not targeted to mainstream users, and indeed will not be very useful to them compared to either nouveau mainline, or the proprietary nvidia driver. Unless you plan on implementing an older version of the OpenGL spec instead? 2.0? 2.1? (Neither you nor Michael mentioned that, so I won't assume anything).
    Well, PathScale doesn't like Gallium in its current implementation. That doesn't mean they won't work on it. As I said, the main Gallium contributor to nouveau (the nv50+ driver) is hired by PathScale.


    Quote Originally Posted by allquixotic View Post
    Yes, it does. But there are three "minor" problems:
    (1) Complete OpenGL support (at any version) requires a great many things that have nothing to do with shaders, and everything to do with poking registers on the hardware. In many cases, trying to shoe-horn these things into a shader pipeline will just reduce your performance dramatically, since the hardware optimized to perform that specific task is best-suited to doing so, without loading down the centralised general-purpose resource (shader cores). These are graphics-specific features that will not magically get implemented by having a fantastic compiler targeted at NVC0 GPGPU.
    You barely touch registers while doing 3D; you only do so when creating new channels.

    I agree here, not everything is shareable, but some is. When you know how to implement something efficiently, you go much faster.

    Quote Originally Posted by allquixotic View Post
    (2) What's "fast" for GPGPU is not necessarily fast for 3D. For instance, the volatile nature of TTM's BO management is actually an optimization feature. The cost of the buffer management (moving/evicting buffers) is intended to be an amortized cost so that the overall performance of the system for 3d and intensive 2d tasks is improved. The simple fact that GPGPU requires your pointers to not move around is a design restriction imposed on memory managers implementing GPGPU, which, if imposed equally upon the 3d pipeline, will reduce performance there. At least, this is how I understand the rationale behind why TTM is implemented the way it is today. After all, logic says that it is much easier to implement a buffer management system that does not move around buffers or evict them at all (because the natural state of an object in memory is to remain in a fixed location); the fact that they have had to implement a buffer manager that does move/evict BOs demonstrates that there is a design or performance incentive to do so. This performance incentive may only apply to 3D, but in my opinion, that's perfectly fine (seeing how I use 3D extensively, but never use GPPGU).
    IIRC, the blob doesn't move the buffers.

    I understand your problem, but if you were a graphics designer, I can assure you, you would love to have GPGPU made simple with HMPP.
    Windows and MacOS are using it more and more.

    Quote Originally Posted by allquixotic View Post
    (3) There are a great many users of Nvidia cards pre-NV50, and these cards will continue to be used for many years. Some of them are still being sold new in retail stores in some countries. Users of these cards want fast 2D and 3D. But since their card is incapable of supporting GPGPU hardly at all, your work does not apply to them. It is fine if this doesn't bother you at all, but realize that this will be one of the main resisting forces from the community (and distro adoption, similarly) to ensure that your driver remains a niche driver for HPC professionals.
    Please have a look at this email I sent to the ML: http://lists.freedesktop.org/archive...er/006747.html

    You'll find some stats on what hardware nouveau users actually run. NV40 is merely 25%, and I guess that number decreases every day.

    But I completely agree with you. We hope pscnv and nouveau will merge some day, but pscnv needs to experiment with stuff first.

    Quote Originally Posted by allquixotic View Post
    It's neat that PathScale is trying to reach out to the community and claims to understand the appeal of cooperating with the community... and I think you've been successful in that, to the extent that people are aware of your company, aware of what your driver's intention is, and don't hate you for it. After all, you guys aren't really doing anything evil; you're just working on a driver for your own particular needs, and the needs of a few others. To those who find this useful, all power to you! And thanks for writing open source software (at least as far as your driver is concerned; "booooo!" for HMPP )
    Yes, that's it! Also, you find GPGPU a niche market; some find 3D on Linux a niche market. Is that a reason not to expand both?

  4. #24
    Join Date
    Feb 2009
    Location
    France
    Posts
    309

    Default

    Quote Originally Posted by allquixotic View Post
    I'll just add briefly that merging back into Nouveau mainline, if done properly, could well be the best thing for both projects after a certain amount of time spent apart. Hopefully the two communities continue to work together on their common ground, and maintain their compatibility points (e.g. gallium on top of pscnv, mostly unmodified ddx) so that the merge, if it happens, will be seamless.

    [...]
    Hey hey, I fully agree with that, but we are far from the merging point.

    I've been hacking libdrm so that it can use either pscnv (if present) or nouveau. It is not ready for use yet; I'll try to finish it ASAP.

  5. #25
    Join Date
    Aug 2008
    Posts
    231

    Default

    Any project that increases the Nvidia graphics knowledge base and experience is great.

    I'd love to see some OpenCL support in open source drivers. I've only done CUDA, but I would be willing to switch to get away from NVIDIA's binary-only drivers.

  6. #26

    Default

    For those wondering about the OpenGL 4 comment, here is what was said in the email:

    For 2011 I'm hoping the community can work together to solve..

    GL4 support
    codec and video
    Portability (BSD*, OpenSolaris - which is not dead, and in a crazy world ReactOS)
    Highly optimized shader compiler - There was a recent proposal on this, but frankly the people who wrote it have nfc about NVIDIA hw and it was a terrible fit

  7. #27
    Join Date
    Dec 2008
    Posts
    161

    Default

    Quote Originally Posted by allquixotic View Post
    I don't think PathScale has any intention of creating a generally useful driver for the benefit of the community at large...
    Perhaps you could clarify your core concern, with fewer accusations, assumptions, and ill will...

    So a company is writing software (not just their compiler, but the underlying layers as well), optimized for their needs and products (obviously to sell something, to justify all this development), and generously releasing code and documentation to the open source community at large... how would the community not welcome this with open arms, especially when that information may directly benefit parallel projects or bring fresh eyes to related problems? That seems to be pretty much how open source projects work these days, as I understand it.

    It's even difficult to argue it being a 'specialized niche product' [as measured by the narrowly targeted Phoronix survey] considering computers, GUIs, multimedia (CDs and sound cards), 3d, cell phones, etc., were once all niche products, and how bad people are at recognizing something they want (until Steve Jobs packages it in a pretty box); especially considering all the excitement in the media around GPGPUs and the delivery of actual products (the two latest releases of Photoshop being a very visible example), it's hard to ignore GPGPU's potential application to pretty well everything.

    Really... who cares if 2d/3d graphics rendering is a secondary priority for this company... that's like bashing the MySQL team for not making disk storage a higher priority; someone will address that need (but it does seem they are working on it, and have hired community developers who have worked on it... which is great for the community!!!). Open source allows people to solve their own problems, and the useful solutions are merged back in... and if it only meets the needs of their customers, that's OK too (they seem to be sharing the fruits of their efforts for us to use for our purposes).

    I really appreciate the work they are embarking on...

  8. #28
    Join Date
    Feb 2009
    Location
    France
    Posts
    309

    Smile

    Quote Originally Posted by Craig73 View Post
    Perhaps you could clarify your core concern, with fewer accusations, assumptions, and ill will...

    [...]
    Thanks for this message, I really appreciate it.

  9. #29
    Join Date
    Mar 2010
    Posts
    2

    Default

    Just wanted to say a huge thank you to the PathScale team for contributing this work despite all the undue criticism.

    The beauty of open source is that you can have different goals but still benefit from each other's work. If there are problems with the current graphics infrastructure in Linux, then having a fork that experiments with different ways of doing things can only be called a good thing. Especially when that fork is supported by a professional compiler company.

    There is no doubt that GPGPU is the future, maybe not within the next year for desktop applications, but 5 years from now we'll be very happy that the open source community started work now.

    On another note, I'm very excited about what a hybrid model using both CPU/GPU resources in a cluster model such as MapReduce can do for the HPC market; if it's easy enough to work with and the compilers are smart enough, this could make many things possible. Imagine writing your code in a high-level language and having it effortlessly run on something like Amazon's EC2. This would really help the academic community and smaller data-driven businesses.

    Furthermore, having more attention paid to the compiler pipeline and IR languages can also only be seen as a good thing, as everyone, not just NVIDIA users, wins here.

    Looking forward to seeing the fruits of your labour, rock on! =)

  10. #30
    Join Date
    Jan 2009
    Posts
    627

    Default

    Just a little question. You are replacing a lot of components in the current graphics stack. Which of those will be open-sourced and pushed upstream?
