
Thread: Speeding Up The Linux Kernel With Your GPU

  1. #21
    Join Date
    Mar 2009
    Location
    in front of my box :p
    Posts
    733

    Um, wait. Can anybody explain to me slowly what I'm missing here? Okay, I've got lots of work, so I was too "lazy" to read the links. But doesn't such a thing need drivers to access the hardware? And if this is all done in the kernel... oh, wait. Where are Nvidia's free-as-in-freedom (L)GPL/BSD/MIT drivers? (nouveau doesn't count)

  2. #22
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,532

    Quote Originally Posted by allquixotic View Post
    Maybe Software RAID could somehow be accelerated by the GPU, although you'd need a very large stripe size for it to be worth it. With, say, RAID-5, you might want to be able to calculate parity bits faster. If you factor in GPU setup latency and the GPU can still do that faster than the CPU, that's great -- go for it. But what about the vast majority of people who either don't use RAID, or use hardware RAID that offloads those calculations to dedicated hardware anyway?
    This has already been researched and implemented:
    http://www.google.ca/url?sa=t&source...e-UC3g&cad=rja
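    For anyone curious, here is a minimal sketch of the idea (illustrative only, not code from the linked paper; the kernel name and sizes are made up). RAID-5 parity is just an XOR across the data blocks of a stripe, which maps naturally onto one GPU thread per 32-bit word:
    Code:
    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical sketch: each thread XORs word i of every data block
    // in the stripe to produce word i of the parity block.
    __global__ void raid5_parity(const unsigned *data, unsigned *parity,
                                 int n_blocks, int words_per_block)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= words_per_block) return;
        unsigned p = 0;
        for (int b = 0; b < n_blocks; ++b)
            p ^= data[b * words_per_block + i];   // XOR across all data disks
        parity[i] = p;
    }

    int main()
    {
        const int n_blocks = 4;       // e.g. 4 data disks in the stripe
        const int words = 1 << 20;    // 4 MiB per data block (illustrative)
        unsigned *d_data, *d_parity;
        cudaMalloc(&d_data, n_blocks * words * sizeof(unsigned));
        cudaMalloc(&d_parity, words * sizeof(unsigned));
        cudaMemset(d_data, 0xAB, n_blocks * words * sizeof(unsigned));

        raid5_parity<<<(words + 255) / 256, 256>>>(d_data, d_parity,
                                                   n_blocks, words);
        cudaDeviceSynchronize();

        cudaFree(d_data);
        cudaFree(d_parity);
        return 0;
    }
    As allquixotic says, the PCIe transfer and kernel-launch overhead only pay off when the stripes are large; for small writes the CPU's XOR will win easily.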

  3. #23
    Join Date
    Oct 2009
    Location
    .ca
    Posts
    392

    Silly question:
    Can't GPUs be made to assist the CPU in software rendering (3D, video) via a standard instruction set extension (say 'SSE42' or 'x88')? Wouldn't that allow us to get rid of all the 'graphics driver' mess and have such things HW accelerated independently of the specific hardware?

  4. #24
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,532

    Quote Originally Posted by not.sure View Post
    Silly question:
    Can't GPUs be made to assist the CPU in software rendering (3D, video) via a standard instruction set extension (say 'SSE42' or 'x88')? Wouldn't that allow us to get rid of all the 'graphics driver' mess and have such things HW accelerated independently of the specific hardware?
    Yup, it could in theory, but it would be much slower and less efficient. That is basically what Intel's Larrabee was trying to do.

  5. #25
    Join Date
    Aug 2009
    Posts
    2,264

    Quote Originally Posted by deanjo View Post
    Yup, it could in theory, but it would be much slower and less efficient. That is basically what Intel's Larrabee was trying to do.
    Lol, no... Larrabee was some sort of CPU design that allowed you to do vector calculations in the CPU. These vector registers could aid in 3D rendering, but that's about it, basically.

    I still have that Intel paper somewhere in my Gmail account in case you don't believe me...

  6. #26
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,532

    Quote Originally Posted by V!NCENT View Post
    Lol, no... Larrabee was some sort of CPU design that allowed you to do vector calculations in the CPU. These vector registers could aid in 3D rendering, but that's about it, basically.

    I still have that Intel paper somewhere in my Gmail account in case you don't believe me...
    You're arguing but saying the same thing. SSE and the like brought vector-specific registers to x86, much like AltiVec did for the PPC. In fact, AVX is an effort to further improve on those capabilities.
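    To illustrate what those vector registers buy you (a toy example, nothing more): with SSE, a single instruction operates on four packed floats at once in a 128-bit XMM register, and AVX widens that to 256 bits.
    Code:
    #include <cstdio>
    #include <xmmintrin.h>  // SSE intrinsics

    // Add four pairs of floats with one vector instruction
    // instead of four scalar adds.
    static void add4(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);             // 4 floats -> one XMM register
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));  // one ADDPS does all 4 lanes
    }

    int main()
    {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
        add4(a, b, out);
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  // 11 22 33 44
        return 0;
    }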
    Last edited by deanjo; 05-08-2011 at 05:22 PM.

  7. #27
    Join Date
    Aug 2009
    Posts
    2,264

    Quote Originally Posted by deanjo View Post
    You're arguing but saying the same thing. SSE and the like brought vector-specific registers to x86, much like AltiVec did for the PPC. In fact, AVX is an effort to further improve on those capabilities.
    Wasn't the thing we were arguing about whether the CPU offloads these calculations to the GPU via a standardised instruction set, instead of doing them with the CPU's own instruction sets?

    I thought you were saying that Larrabee offloaded them. I meant to say that it does them itself.

  8. #28
    Join Date
    Oct 2009
    Posts
    1,987

    Just to put this into perspective: this project is all about Nvidia trying to turn the Linux kernel into their own personal proprietary blob. Drop dead, Nvidia!

  9. #29
    Join Date
    Aug 2009
    Posts
    2,264

    Quote Originally Posted by droidhacker View Post
    Just to put this into perspective: this project is all about Nvidia trying to turn the Linux kernel into their own personal proprietary blob. Drop dead, Nvidia!
    No. nVidia needs Linux to manage their hardware. Don't forget that they are working on a CPU that can never match AMD's or Intel's. Linus would never accept it, and nVidia knows this.

    Linux is key to their hardware adoption in this regard, and therefore they can't and won't do that.

  10. #30
    Join Date
    May 2011
    Posts
    20

    Quote Originally Posted by droidhacker View Post
    Just to put this into perspective: this project is all about Nvidia trying to turn the Linux kernel into their own personal proprietary blob. Drop dead, Nvidia!
    Seeing as the Linux kernel is GPL'd, I don't see how this could be possible... no conspiracy theories, please...
