
Thread: Speeding Up The Linux Kernel With Your GPU

  1. #31
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by matthewbpt View Post
    Seeing as the linux kernel is GPL'ed I don't see how this could be possible ... no conspiracy theories please ...
    Well, it's simple:
    1. nVidia hands out no documentation, so you need the blob;
    2. the open-source GPL parts refer to the blob;
    3. Linux ends up depending on proprietary code.

  2. #32
    Join Date
    May 2011
    Posts
    20

    Default

    Quote Originally Posted by V!NCENT View Post
    Well, it's simple:
    1. nVidia hands out no documentation, so you need the blob;
    2. the open-source GPL parts refer to the blob;
    3. Linux ends up depending on proprietary code.
    Yes, except no part of the kernel needs, or is likely to need, CUDA to run. This just gives you the option of running some kernel code on the GPU; there's no chance that the kernel will ever require a GPU for kernel-level operations.
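    To make that "optional, never required" point concrete, here is a minimal userspace sketch (all names invented, nothing from the actual kernel patches): GPU offload as a fast path behind a capability check, with the plain CPU code as the mandatory fallback, so nothing ever depends on the accelerator being present.

```python
# Hypothetical sketch: optional acceleration with a mandatory CPU fallback.
import zlib

def checksum_cpu(buffers):
    # Baseline path: always available, needs no special hardware.
    return [zlib.crc32(b) for b in buffers]

def checksum_offload(buffers, gpu_available=False):
    # A real implementation would batch the buffers and dispatch them
    # to the accelerator here; without one, we fall back to the CPU.
    if gpu_available:
        raise NotImplementedError("dispatch batch to accelerator here")
    return checksum_cpu(buffers)

bufs = [b"page-one", b"page-two"]
print(checksum_offload(bufs) == checksum_cpu(bufs))  # True: fallback path
```

    The kernel could take the `gpu_available` branch when a GPU and driver are loaded, while every code path still works without them.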

  3. #33
    Join Date
    Nov 2007
    Posts
    1,024

    Default

    Quote Originally Posted by deanjo View Post
    Yup it could in theory but it would be much slower and inefficient.
    No, it wouldn't. It just requires standardization of the GPU instruction sets. That isn't happening right now because there's still a lot of innovation going on, but it may well happen in the future.

    Keep in mind that running x86 code on the CPU is a bottleneck in the same way, as the x86 instruction set sucks. (And don't start arguing crap like how the i7 is so fantastic: the micro-architecture that makes the i7 so fast and its instruction decoder are not the same thing.)

  4. #34
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    x86 is going to die soon. The only OS requiring it is Windows, and after seeing a Windows 8 presentation by Steve Ballmer, Windows is going platform-independent. They showed Windows 8 running live on a couple of SoCs, including nVidia's ARM-based Tegra platform and a Texas Instruments SoC. They also recompiled Office for ARM, and Office now has GPU acceleration, and not only in presentation form. IE now also has GPU acceleration.

    Given that AMD and Intel CPUs are already RISC-like internally, I wouldn't be surprised if x86 moves to some legacy mode. There is also a Windows Itanium port, but not for consumers. Intel can afford a little speed loss if we compare with AMD CPUs.

    (Windows 8 can now run on a device the size of your phone, including Aero.)

  5. #35
    Join Date
    Oct 2007
    Location
    Sweden
    Posts
    174

    Default

    Quote Originally Posted by V!NCENT View Post
    The intent is obvious:
    -Intel good CPU, bad GPU
    -AMD arguably better but slower CPU, great GPU
    -nVidia crappy CPU, good GPU

    With the Fusion stuff coming up, nVidia can only compete with good shaders, whilst keeping the necessary CPU stuff there. So in order for nVidia to get a piece of the cake, they must do this.

    Quite frankly, with all my AMD fanboyism aside, I really like what they're doing. They may:
    -Improve the Linux kernel (and break some of it for a short while) in a way that's accepted
    -Give Linux a serious technological edge
    -Make good use of shader cores, which I think suit a kernel well, because a kernel is all about management, and what better way to do that than furiously multi-core? In fact I like anything that's not time-sliced.
    -Maybe give nVidia a good reason to open source or improve Gallium... Maybe...
    *Yawn*
    How can AMD's CPUs be better but slower? There is truly some arguing to be done there. Bring it on!

  6. #36
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Quote Originally Posted by misiu_mp View Post
    *Yawn*
    How can AMD's CPUs be better but slower? There is truly some arguing to be done there. Bring it on!
    Less power consumption and better memory transfers. AMD has been making better CPUs for a long while. For example, proper multi-core: not placing two dual-cores on one package but a really proper quad-core design on a single die, resulting in much less latency. They pioneered 3DNow! and also x86-64.

    Now back to you.

  7. #37
    Join Date
    Oct 2007
    Location
    Sweden
    Posts
    174

    Default

    Quote Originally Posted by V!NCENT View Post
    Less power consumption and better memory transfers. AMD has been making better CPUs for a long while. For example, proper multi-core: not placing two dual-cores on one package but a really proper quad-core design on a single die, resulting in much less latency. They pioneered 3DNow! and also x86-64.

    Now back to you.
    As a rule of thumb for modern, power-saving CPUs, faster is better.
    For quite a while now, it has not been true that AMD has lower power consumption. This is painfully clear in recent tests against the latest Intel CPUs. AMD CPUs are slow, whether they have better transfers and latency or not. They are forced to compete with their best designs in the medium and low tiers of CPUs, and they do this on the basis of price. The only technical advantage (or rather marketing advantage) they have is in providing more cores, promising better interactivity (even though overall performance is not better than competing products). It doesn't make a difference that they were first with multi-core and 64-bit. That belongs in the history books, not in current business considerations (except maybe some licensing deals they still have for the 64-bit instruction set).
    Sadly, AMD's K10 is *old* and obsolete. We can wait and see what Bulldozer brings, but Intel has technologically superior products today. I am getting tired of waiting for AMD.

    Your turn.

  8. #38
    Join Date
    Oct 2007
    Posts
    912

    Default

    Quote Originally Posted by misiu_mp View Post
    As a rule of thumb for modern, power-saving CPUs, faster is better.
    For quite a while now, it has not been true that AMD has lower power consumption. This is painfully clear in recent tests against the latest Intel CPUs. AMD CPUs are slow, whether they have better transfers and latency or not. They are forced to compete with their best designs in the medium and low tiers of CPUs, and they do this on the basis of price. The only technical advantage (or rather marketing advantage) they have is in providing more cores, promising better interactivity (even though overall performance is not better than competing products). It doesn't make a difference that they were first with multi-core and 64-bit. That belongs in the history books, not in current business considerations (except maybe some licensing deals they still have for the 64-bit instruction set).
    Sadly, AMD's K10 is *old* and obsolete. We can wait and see what Bulldozer brings, but Intel has technologically superior products today. I am getting tired of waiting for AMD.

    Your turn.
    I'll dive in here and state: a faster clock speed does not always mean better. There really isn't a rule of thumb any more; there's what you need and which hardware better suits it.
    AMD does give better performance per watt in some areas but not in others, and the same goes for Intel. The two companies also appear to be diverging in their focus areas: AMD looks to be investing more in parallel tasks, while Intel is still going for more straightforward number crunching, with both power and performance in those areas reflecting that.
    None of which has anything to do with offloading any kernel tasks onto a GPU.

  9. #39
    Join Date
    Oct 2007
    Location
    Sweden
    Posts
    174

    Default

    Quote Originally Posted by mirv View Post
    I'll dive in here and state: faster clock speed does not always mean better. [...]
    AMD does give better performance per watt in some areas, but not in others, [...]
    None of which has anything to do with offloading any kernel tasks onto a GPU.
    I wasn't talking about clock speeds, but about performance. Performance per watt is not AMD's advantage any more, pretty much anywhere (I don't know about Opterons).
    Based on tests on sites like Tom's Hardware and AnandTech, you can see that AMD systems consume more power under load (and often at idle) than the competing Intel CPUs.
    Modern power-saving CPUs almost switch off when not in use, so even the most powerful processors only draw power during computation, and only as much as is needed to finish it. Here comes AMD's greatest disadvantage: their slowness forces them to work longer. Not only does that take more time, it also costs more energy overall.
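    That "race to idle" argument is just energy = power x time. With made-up illustrative numbers (not measurements of any real CPU), a faster chip at a higher power draw can still finish the job having consumed less total energy:

```python
# Illustrative numbers only, not measured figures for any real CPU.
def energy_joules(watts, seconds):
    # Energy consumed = average power draw x time spent computing.
    return watts * seconds

slow_chip = energy_joules(95, 10.0)   # 95 W, takes 10 s -> 950 J
fast_chip = energy_joules(130, 6.0)   # 130 W, takes 6 s -> 780 J
print(slow_chip > fast_chip)  # True: the faster chip wins on energy too
```

    Once both chips drop back to near-zero idle power, the one that finished sooner has spent less energy on the task, even though its peak draw was higher.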
    They do sell more cores for the price than Intel, but that's pretty much a marketing move. They don't really have an advantage in multiprocessing; this technology has spread across the whole computing industry, and even ARM is dual-core now. AMD had an advantage here before, but today I don't think Intel's multi-core solutions are worse than AMD's. Intel does use its performance superiority to dictate a higher price for its multi-core products, though.
    Yes, I know this is not about the kernel and GPUs. It was a hostile thread takeover =), but it doesn't look like it will trigger a flame war, so I guess we will stop this soon.

  10. #40
    Join Date
    Oct 2007
    Posts
    912

    Default

    Well, I generally don't take benchmark scores at face value. There's a long history of bias, typically towards Intel, but I guess some would be towards AMD too. I see quite a bit of "Intel uses less power... until the entire system power is considered, then it's AMD", so it's quite likely that each uses about the same power in the end.
    Now, portables (so... laptops) with encrypted disks and Fusion-based processors: that will be interesting for battery life (just to feed back on-topic for laughs).
