
Thread: 15-Way Open-Source Intel/AMD/NVIDIA GPU Comparison

  1. #21
    Join Date
    Aug 2012
    Posts
    4

    Default

    Quote Originally Posted by curaga View Post
    Intel is very stable if you stick to the versions Intel recommends (e.g., use Ubuntu). If you deviate, Intel starts to be far more unstable than radeon.

    I can only guess that radeon gets testing from a wider base, whereas everyone at the Intel OSTC is forced to upgrade in lockstep :P
    I have been playing GW2 through Wine on Arch Linux with a nearly-latest compiled kernel and driver source for a while now, and I have never had any problems. I love Intel. I might have to upgrade to the new CPU, though, to get more than 20 FPS.

  2. #22
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,130

    Default

    Arch people, "e.g." means "for example". It does not mean Intel only works on Ubuntu.

    It all depends on how different the versions you use are from those Intel has tested. Saying you run Arch is not useful info there; useful info would be "Intel says to use these versions, but my X server is foo instead, and my kernel is bar instead".

    Still, I'm far from the only one with these experiences; search this forum if you need more examples.

  3. #23
    Join Date
    Oct 2009
    Posts
    2,118

    Default

    Quote Originally Posted by Ericg View Post
    If the only thing you care about is GPU performance...yes. But Intel smacks AMD around on CPU performance.
    You appear to be on some seriously bad drugs, so I'll correct things for you:

    You said -- Intel: great CPU performance, better power consumption, perfectly workable and acceptable GPU performance (you won't play games on high, but low and maybe even medium should be okay), and video decode support.
    Corrected -- Intel: very, very overpriced CPUs with negligible to no benefit.

    You said -- AMD: okay-ish CPU performance, worse power consumption (REALLY bad until you start running DPM kernels if your GPU is integrated), good and acceptable GPU performance, and no video decode support YET -- you need kernel 3.10 and Mesa 9.2/10.0.
    Corrected -- AMD: amazing CPUs with 99.99% of Intel's performance for 25% of the price. No-brainer. Best GPU performance of all. Video decode acceleration supported by the open-source drivers -- aka "out of the box".

    You said: "Personally I'm sticking to Intel integrated graphics unless I really need a discrete card, and at that point I'll get an AMD discrete, not an AMD integrated."
    Corrected: your loss....

  4. #24
    Join Date
    Dec 2007
    Posts
    2,375

    Default

    Quote Originally Posted by Ibidem View Post
    What on earth is up with the Radeon HD 6450?
    It's all about the memory bandwidth. The 6450 probably has DDR3 memory rather than GDDR5.
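    Back-of-the-envelope, peak bandwidth is just bus width times effective transfer rate. A rough sketch (the MT/s figures below are typical illustrative values, not datasheet numbers for this particular card):

        # Theoretical peak bandwidth = bus width (in bytes) * effective transfer rate.
        # The transfer rates are illustrative assumptions, not verified 6450 specs.
        def peak_bandwidth_gbs(bus_bits, transfers_per_sec):
            return bus_bits / 8 * transfers_per_sec / 1e9

        print(peak_bandwidth_gbs(64, 1600e6))  # 64-bit DDR3 @ ~1600 MT/s -> ~12.8 GB/s
        print(peak_bandwidth_gbs(64, 3600e6))  # 64-bit GDDR5 @ ~3600 MT/s -> ~28.8 GB/s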

  5. #25
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,130

    Default

    Yeah, but so does the Intel integrated GPU, which uses system memory, yet it gets many times higher FPS.

  6. #26
    Join Date
    Dec 2007
    Posts
    2,375

    Default

    Quote Originally Posted by curaga View Post
    Yeah, but so does the Intel integrated GPU, which uses system memory, yet it gets many times higher FPS.
    Single channel vs. dual channel.

  7. #27
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,130

    Default

    The 6450 very likely has more than one DDR chip on it. How many pennies does it save to wire them single-channel instead of dual-channel?

    If it were one memory chip, the savings would be clear, but with multiple chips I don't see who thought it was a good idea to pinch pennies there.

  8. #28
    Join Date
    Jan 2010
    Posts
    364

    Default

    Quote Originally Posted by curaga View Post
    The 6450 very likely has more than one DDR chip on it. How many pennies does it save to wire them single-channel instead of dual-channel?

    If it were one memory chip, the savings would be clear, but with multiple chips I don't see who thought it was a good idea to pinch pennies there.
    You also need additional logic in the memory controller, and don't forget the interconnects between die and package, which take a lot of space. The PCB becomes more complex, too. It's a fact that this GPU has only a 64-bit memory interface; it almost sounds like you're doubting that?
    Last edited by brent; 07-02-2013 at 04:53 PM.

  9. #29
    Join Date
    Nov 2007
    Posts
    1,353

    Default

    Quote Originally Posted by brent View Post
    You also need additional logic in the memory controller, and don't forget the interconnects between die and package, which take a lot of space. The PCB becomes more complex, too. It's a fact that this GPU has only a 64-bit memory interface; it almost sounds like you're doubting that?
    Also, I'm pretty sure each individual chip has only a 16-bit interface, so it would take at least four chips to get a 64-bit bus.
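    To put rough numbers on the width question (a sketch: the chip counts follow the posts above, and the 1600 MT/s clock is an assumption, not a measured spec):

        # Bus width = number of chips * per-chip interface width;
        # peak bandwidth scales linearly with bus width.
        def peak_bandwidth_gbs(bus_bits, transfers_per_sec):
            return bus_bits / 8 * transfers_per_sec / 1e9

        # 6450-style card: 4 chips x 16-bit = 64-bit DDR3 bus (assumed 1600 MT/s)
        print(peak_bandwidth_gbs(4 * 16, 1600e6))   # ~12.8 GB/s

        # Intel IGP on dual-channel system DDR3: 2 x 64-bit channels
        # (assumed 1600 MT/s, and shared with the CPU)
        print(peak_bandwidth_gbs(2 * 64, 1600e6))   # ~25.6 GB/s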

  10. #30
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,464

    Default

    The other thing to remember is that integrated GPUs are getting more powerful every year -- you can't automatically assume that older dGPUs are more powerful than integrated GPUs any more. AFAIK the Intel HD 4600 has the same number of shader ALUs as the HD 6450 dGPU and a higher engine clock.

    I *think* the ROPs on the HD 4600 are 4 pixels wide (and there are 2 ROPs), so 2x the 6450 there. Combine that with wider/faster memory as well (128-bit vs 64-bit), and it seems to me that the HD 4600 *should* be faster than the HD 6450.

    The GPU in Trinity/Richland (>2x the ALU count, wider ROPs, 2 channel memory) is a better comparison.
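    Putting rough numbers on the ROP comparison above (a sketch -- the pixels-per-clock figures follow my guesses, and the engine clocks are assumed round numbers, not verified specs):

        # Peak pixel fill rate = pixels per clock * engine clock.
        # Both clock values below are assumptions for illustration.
        def fill_rate_gpix(px_per_clock, clock_hz):
            return px_per_clock * clock_hz / 1e9

        print(fill_rate_gpix(8, 1.2e9))    # HD 4600: 2 ROPs x 4 px wide, assumed ~1.2 GHz -> ~9.6 Gpix/s
        print(fill_rate_gpix(4, 0.625e9))  # HD 6450: 4 px/clock, assumed ~625 MHz -> ~2.5 Gpix/s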
    Last edited by bridgman; 07-02-2013 at 10:36 PM.
