
Thread: Speeding Up The Linux Kernel With Your GPU


  1. #1
    Join Date
    Jan 2007
    Posts
    14,894

    Default Speeding Up The Linux Kernel With Your GPU

    Phoronix: Speeding Up The Linux Kernel With Your GPU

    Sponsored in part by NVIDIA, researchers at the University of Utah are exploring speeding up the Linux kernel with GPU acceleration. Rather than just allowing user-space applications to utilize the immense power offered by modern graphics processors, they are looking to speed up parts of the Linux kernel by running them directly on the GPU...

    http://www.phoronix.com/vr.php?view=OTQxMQ

  2. #2
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,795

    Default

    Too bad there are no plans for a CUDA state tracker. In the last two months, I started with some CUDA programming and I must say that it's much nicer to work with compared to OpenCL. From my point of view, OpenCL is the inferior choice.

  3. #3
    Join Date
    Oct 2009
    Location
    .ca
    Posts
    403

    Default

    About time someone started to look into using GPUs as general co-processors/vector units.

  4. #4
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,128

    Default

    One bug in either Nvidia's drivers or in the CUDA code, and what happens to the kernel?

    Hang? Oops?

  5. #5
    Join Date
    Jul 2008
    Location
    Greece
    Posts
    3,795

    Default

    Quote Originally Posted by curaga View Post
    One bug in either Nvidia's drivers or in the CUDA code, and what happens to the kernel?

    Hang? Oops?
    Same thing that happens with a bug in the kernel itself.

  6. #6
    Join Date
    Dec 2008
    Location
    Poland
    Posts
    119

    Default

    A better choice would have been OpenCL, which can run on both AMD and NVIDIA GPUs and is an open industry standard.
    The better choice is the one you have the means for and can do in an affordable amount of time. Why climb to the top of the tree when you can pick the low-hanging fruit with little effort?

  7. #7
    Join Date
    May 2009
    Posts
    30

    Default

    Presumably, even if the CUDA option ends up being a bit of a bust, the work on parallelising the kernel could still pay off for the non-GPU kernel, given the ever-increasing core counts of modern systems.

  8. #8
    Join Date
    Nov 2008
    Posts
    770

    Default

    To utilize GPU power for filesystem decryption, the better choice would be to move the filesystem to userspace (FUSE) instead of adding GPU code to the kernel.

    In either case, much care needs to be taken to avoid compromising the key. GPU memory isn't protected much, and leftover memory usually gets assigned to the next task without clearing it first. Do either CUDA or OpenCL make any guarantees there?
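    The defensive pattern implied here is to scrub key material yourself rather than trusting any allocator (CPU or GPU) to clear freed memory before reuse. A minimal sketch in Python, assuming the key lives in a mutable `bytearray` (the `scrub` helper is hypothetical, not from any real crypto library):

    ```python
    import ctypes

    def scrub(buf: bytearray) -> None:
        """Overwrite a sensitive buffer in place before releasing it.

        Neither ordinary allocators nor, as far as anyone in this thread
        knows, CUDA's guarantee that freed memory is zeroed before it is
        handed to the next task, so sensitive buffers should be cleared
        explicitly while we still hold them.
        """
        # Get the address of the bytearray's backing storage and zero it.
        addr = ctypes.addressof((ctypes.c_char * len(buf)).from_buffer(buf))
        ctypes.memset(addr, 0, len(buf))

    key = bytearray(b"super-secret-session-key")
    scrub(key)
    assert all(b == 0 for b in key)  # no plaintext key bytes remain
    ```

    The same idea applies on the GPU side: a kernel-resident key copied into device memory would need an explicit clear (e.g. a memset of the device buffer) before that memory is freed.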

  9. #9
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,052

    Default

    Quote Originally Posted by NSLW View Post
    The better choice is the one you have the means for and can do in an affordable amount of time. Why climb to the top of the tree when you can pick the low-hanging fruit with little effort?
    Because it is nvidia-only.

  10. #10
    Join Date
    Jul 2009
    Posts
    72

    Default simd instruction set

    Quote Originally Posted by not.sure View Post
    About time someone started to look into using GPUs as general co-processors/vector units.
    But we already have vector units in our CPUs. Does eCryptfs use SSE, AltiVec, or VIS (the lesser-known SPARC SIMD instruction set) to accelerate encryption?
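    Whether such CPU-side acceleration is even possible on a given machine can be checked from the feature flags the kernel exposes in /proc/cpuinfo. A small sketch, assuming x86 flag names and a hypothetical `simd_flags` helper (fed sample text here so it is self-contained; on a real Linux box you would pass the contents of /proc/cpuinfo):

    ```python
    def simd_flags(cpuinfo_text: str) -> set[str]:
        """Return the SIMD/crypto feature flags present in /proc/cpuinfo-style text."""
        wanted = {"sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "avx", "aes"}
        for line in cpuinfo_text.splitlines():
            # The x86 cpuinfo format lists all feature flags on a "flags" line.
            if line.startswith("flags"):
                return wanted & set(line.split(":", 1)[1].split())
        return set()

    sample = "flags\t\t: fpu vme sse sse2 ssse3 sse4_1 aes"
    print(sorted(simd_flags(sample)))  # prints ['aes', 'sse', 'sse2', 'sse4_1', 'ssse3']
    ```

    The `aes` flag (AES-NI) is the one that matters most for encryption throughput; its absence is a hint that a GPU offload might actually be worth something on that hardware.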
