
Thread: NVIDIA Still Working On Linux 3.11+ Support

  1. #11
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,399

    Default

    Quote Originally Posted by sirdilznik View Post
    Most major distros already had their own stopgap patches in place for new/upcoming releases, so this is a non-issue for most people.
    Their patches were stabs in the dark, reporting lies to the driver. This latest official patch at least makes sure that systems below 128 GiB are not affected; the unofficial ones don't guarantee that.
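
    To make "reporting lies to the driver" concrete: assuming the breakage came from the kernel no longer exposing the global num_physpages page counter that the blob's glue layer read, a distro stopgap would look roughly like the sketch below. This is purely illustrative; the NV_NUM_PHYSPAGES name and the choice of totalram_pages as the stand-in are assumptions, not NVIDIA's or any distro's actual code.

        /*
         * Illustrative sketch only -- NOT the actual NVIDIA or distro patch.
         * Assumption: the blob's kernel glue read the global num_physpages,
         * which newer kernels no longer provide, so a stopgap substitutes
         * another page count.
         */
        #include <linux/version.h>
        #include <linux/mm.h>

        #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 11, 0)
        /*
         * "Stab in the dark": totalram_pages counts only the pages the
         * kernel manages (reserved pages and memory holes excluded), so it
         * can undercount physical memory.  Below 128 GiB the difference
         * rarely matters; on bigger machines the driver can misjudge where
         * physical memory ends.
         */
        #define NV_NUM_PHYSPAGES totalram_pages
        #else
        #define NV_NUM_PHYSPAGES num_physpages
        #endif

    The official patch presumably derives the count from the memory map instead of guessing, which is why it at least behaves correctly for machines below 128 GiB.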

  2. #12
    Join Date
    Jun 2009
    Posts
    504

    Default

    Quote Originally Posted by synaptix View Post
    Yeah, I doubt any home computer user has that much RAM. The most I've seen for a personal home computer was 16GB. Even if they were running a home server, they still wouldn't need 100+GB.

    I've only seen datacenters run that amount of RAM since they need it.

    Regardless, supporting the latest kernel is nice. 3.12 is so much better than most previous versions I've used.
    Haswell already supports 32GB of memory, and some Clevo resellers are offering 32GB as a customizable option (at a very steep price, though).

    Soon it will be 64GB memory...then 128GB...then 256GB...and then Nvidia will have a problem again.

  3. #13
    Join Date
    Jul 2008
    Location
    Berlin, Germany
    Posts
    790

    Default

    Quote Originally Posted by Sonadow View Post
    Soon it will be 64GB memory...then 128GB...then 256GB...and then Nvidia will have a problem again.
    I don't think you will be able to buy a system with 256 GB RAM without breaking the bank any time soon. Most consumer mobos currently max out at 32 GB (4*8GB). Even entry-level server mobos that support registered memory, like the ASUS KCMA-D8, would need to be equipped with 32 GB RDIMMs, which are still quite expensive ($500+ a pop).

    By the time 256 GB RAM systems become affordable, I am sure NVidia will have found a way to make the driver work properly with newer kernels.
    Last edited by chithanh; 11-03-2013 at 12:06 PM. Reason: I fail at math

  4. #14
    Join Date
    Sep 2011
    Posts
    139

    Default

    Bleh, Nvidia and AMD just need to open the source code of their binary blobs and get the code into the kernel, and this would not be a problem.


    But no, they think they have some uber national secrets embedded in their code. LOL

  5. #15
    Join Date
    Jun 2011
    Posts
    809

    Default

    Quote Originally Posted by Sonadow View Post
    Haswell already supports 32GB of memory, and some Clevo resellers are offering 32GB as a customizable option (at a very steep price, though).

    Soon it will be 64GB memory...then 128GB...then 256GB...and then Nvidia will have a problem again.
    Sonadow, you are being silly / not thinking about this clearly / constructing a very poor argument. Please use some common sense:

    1st. The current patch is a *temporary workaround* until Nvidia resolves the issue (which I imagine will be solved in a release or two - i.e. a short time from now).

    2nd. By the time 128GB+ RAM becomes "the norm" for even a minority of average users, this issue will be long gone and probably long forgotten as well, seeing as Nvidia is already working on it and I can't imagine this is going to take YEARS to solve ~ which is the time-frame that would be required for your argument / point to make *any sense* at all (which it doesn't).

    3rd. Intel supporting 32GB or 64GB is nothing new, nor is this likely a new concept for Nvidia either. There are 2nd Gen i7s (Extreme Editions) that can support 64GB of RAM with the proper mobo, etc., and obviously Intel Xeons have supported large amounts of RAM (i.e. 128GB+) for years. Not to mention that it isn't uncommon to see Mac Pros that exceed 32GB of RAM in professional environments (like studios). Example / reference: http://macperformanceguide.com/blog/...GB-in-use.html <note the 2009/10 models support well above 32GB of RAM, regardless of H/W configuration, even in Mac OS X Mountain Lion>

    Anyway, I don't see this as a problem at all. Something changed in the kernel that Nvidia was relying on; now they will adapt their driver - no big deal.

  6. #16
    Join Date
    Nov 2012
    Location
    France
    Posts
    459

    Default

    Quote Originally Posted by Rallos Zek View Post
    Bleh, Nvidia and AMD just need to open the source code of their binary blobs and get the code into the kernel, and this would not be a problem.


    But no, they think they have some uber national secrets embedded in their code. LOL
    They cannot do that for legal reasons. There are tons of software patents and IP-protected stuff in the proprietary drivers.

    The only legal way around that would be to work more on the open-source drivers so that they work as well as (or better than) the proprietary ones.

  7. #17
    Join Date
    Sep 2007
    Location
    Connecticut,USA
    Posts
    941

    Default

    Quote Originally Posted by Calinou View Post
    They cannot do that for legal reasons. There are tons of software patents and IP-protected stuff in the proprietary drivers.

    The only legal way around that would be to work more on the open-source drivers so that they work as well as (or better than) the proprietary ones.
    They are taking baby steps in that direction now by releasing some basic docs, which is good, but getting Nouveau on a par with the nvidia blob will still take time. The major piece that's sorely needed right now is reclocking, and the docs for that won't be released for some time (IF they can be released at all).

  8. #18
    Join Date
    May 2011
    Posts
    1,295

    Default

    Quote Originally Posted by DeepDayze View Post
    They are taking baby steps in that direction now by releasing some basic docs, which is good, but getting Nouveau on a par with the nvidia blob will still take time. The major piece that's sorely needed right now is reclocking, and the docs for that won't be released for some time (IF they can be released at all).
    To me it sounded like their long-term plan was to move the icky stuff into firmware microcode, and then they could have open-source drivers of equal quality.
