
Thread: Intel HD 4000 Ivy Bridge Graphics On Linux

  1. #91
    Join Date
    Nov 2007
    Posts
    209

    Default

    Quote Originally Posted by allquixotic View Post
...[thorough explanation]...
    Yep, exactly!

    Quote Originally Posted by allquixotic View Post
    Like I said, setting a goal for your team to achieve this (what I stated above in my bulleted list) for HD8000 is what would restore my faith in AMD's open source graphics initiative. Failing that, I will probably not purchase another AMD product going forward, nor will I evangelize for AMD to the many people who come to me asking for advice on computer upgrades. Just throwing that out there, take it for what you will.
Count me in. Until now, because of AMD's effort on open source graphics drivers, whenever people asked me what graphics card to buy, I recommended an AMD card. But if, say, one year from now their open source quality is only a little bit better, I'll jump to the other ship. I don't have all the time in the world (can't wait forever) just for AMD.

Look, we are CHEERING for you, AMD, but you disappoint us more often than not.


    Quote Originally Posted by fuzz View Post
Considering I use Gentoo: Of course. And I'm not talking about just graphics here. An APU is not just a graphics card.
Ah, a power user then. But say, someday you might need to install Linux quickly, and Gentoo can't cut it. And the distro you fall back on is not as up-to-date as Gentoo, resulting in an outdated driver for your GPU, with all its limitations. Then what do you do? And how long will you wait until AMD perfects their driver?

Looking back, we didn't have many choices for GPUs with open source drivers in Linux land. But now Intel is picking up the pace, and we have the HD 4000: good enough for light gaming/3D/etc. Plus their driver is very good compared to the rest. Like @allquixotic said, in the open source arena it's comparable to the #1 AMD card from two years ago. When Haswell is out, well, let's see what happens then.

  2. #92
    Join Date
    Apr 2011
    Posts
    336

    Default

Some of you are totally wrong about the Ivy Bridge GPU. The open source driver gives performance equal to a 500 GFLOPS Radeon with Catalyst, but that is not all. You can overclock the Ivy GPU to almost twice its clock, from 650 MHz to 1250 MHz; that is not possible with AMD and Nvidia mobile products, because they drain 50 W where the Ivy GPU drains 5-10 W, so you may have an open source teraflop. Also, Intel gives the same GPU to Celerons and Pentiums too, and furthermore provides CPU cores for GPU acceleration as ARM and Cell do, which is at least +200-300 GFLOPS for every Ivy core @ 3 GHz. The Sandy GPU is 2-2.5 times slower. The actual Sandy and Ivy GPUs are an Imagination PowerVR MP6-MP16 (tile graphics) design based on the ARM MP core; that is the company with OpenRL and many ARM designs. An Ivy slim laptop, usually Celeron or Pentium, probably starts from 350 at least in my country, and a pico unit from 200 or less, so I will not buy another Nvidia or AMD GPU from now on.

  3. #93
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by t.s. View Post
Looking back, we didn't have many choices for GPUs with open source drivers in Linux land. But now Intel is picking up the pace, and we have the HD 4000: good enough for light gaming/3D/etc. Plus their driver is very good compared to the rest. Like @allquixotic said, in the open source arena it's comparable to the #1 AMD card from two years ago. When Haswell is out, well, let's see what happens then.
I just want to make sure you don't misunderstand me: if you look at what the raw hardware is capable of, given a hypothetical "perfectly optimized" driver that delivers ideal performance on all workloads, that Radeon HD5970 from 2 years ago would absolutely obliterate the Ivy Bridge GPU. But that's just hardware. The reason why the Ivy Bridge GPU is competitive with the HD5970 on Linux, with the open source drivers, is that the r600g driver for the HD5970 is only using about 5 to 20% of the graphics card's total potential.

    For one, it only uses one of the two GPUs on the HD5970, a card that has two massive GPUs that are more or less separate (separate enough that they need special code in the drivers to access them both at once, and AFAIK the renderer has to be multi-threaded also).

    So that cuts its utilization down to 50% immediately because it's only using one of the two GPUs.

    Then, of that one GPU that's being used, it uses between 10% and 40% of it, depending on what application you're running. These are estimates based on the comparison between r600g and Catalyst, and we're assuming that Catalyst is near-optimal (Catalyst is probably getting about 85% to 95% of each GPU's potential on most workloads; due to necessary overhead it's never going to use 100.0% on real apps. It'll be lower if your system RAM or CPU or HDD are a bottleneck for the specific application).

It's this low utilization that allows the Ivy Bridge GPU to look so good. I figure the current Intel drivers are using at least about 50% of the Ivy Bridge GPU, even in the worst case. And it only gets better if your cooling solution can handle overclocking, or if you run some particular application that hits only well-optimized paths in the driver.

    I just didn't want you thinking that the Ivy Bridge hardware is so fast that it literally has more power than AMD's hardware. It doesn't, by a long shot, even if you're comparing it against their Fusion APUs. But when you use what you've got, you get much better results than when you wantonly waste it.

    It's been a battle ever since AMD started their open source initiative: how do we open up the bottlenecks in the driver so that, ultimately, the only component that gets bottlenecked is the GPU (except in cases of extremely high FPS where the CPU gets bottlenecked always)? When you've attained that, your driver is "ready" for prime time. AMD's drivers are a long, long, LONG way from that right now.
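Back-of-the-envelope, the estimates above multiply out like this. A minimal sketch in Python; the percentages are the rough estimates from this post, not measurements:

```python
# r600g on an HD5970, per the estimates above: only one of the two
# GPUs is driven, and that GPU runs at an estimated 10-40% of its
# potential depending on the workload.
ONE_GPU_OF_TWO = 0.5                      # second GPU sits idle
PER_GPU_LOW, PER_GPU_HIGH = 0.10, 0.40    # share of the active GPU used

effective_low = ONE_GPU_OF_TWO * PER_GPU_LOW    # 5% of total potential
effective_high = ONE_GPU_OF_TWO * PER_GPU_HIGH  # 20% of total potential

print(f"r600g uses roughly {effective_low:.0%} to {effective_high:.0%} "
      "of the card's total potential")
```

For comparison, Catalyst at an assumed 85-95% of one GPU (ignoring the multi-GPU question) is still several times more of the card than r600g's best case.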

  4. #94
    Join Date
    Oct 2008
    Posts
    3,153

    Default

    Quote Originally Posted by artivision View Post
Also, Intel gives the same GPU to Celerons and Pentiums too
That's not true; they have a couple of different levels of the GPU. The one Michael and everyone else has been testing is the top of the line (at least for now), but they provide cheaper options in some of their other CPUs. You can't get the high-end version in slower hardware like a Celeron or Pentium.

The actual Sandy and Ivy GPUs are an Imagination PowerVR MP6-MP16 (tile graphics) design based on the ARM MP core; that is the company with OpenRL and many ARM designs.
That's also false. The Intel GPU is designed in-house, on Intel's own IP. They don't base it on anything from the ARM camp - maybe what you are thinking of is the current Atom system-on-a-chip that they've got running in smartphones. That indeed licenses a standard ARM-ecosystem GPU, but it's not related to the Ivy or Sandy Bridge GPU.

  5. #95
    Join Date
    Apr 2011
    Posts
    336

    Default

No, you are wrong: all Intel graphics today are based on PowerVR, not only the Atom GPU. Also, Intel wants to drop this company because Imagination has a bad open source policy.

  6. #96
    Join Date
    Apr 2011
    Posts
    336

    Default

As for Celerons and Pentiums, it's the same GPU at a lower frequency.

  7. #97
    Join Date
    Aug 2007
    Posts
    6,634

    Default

It's not fully the same; it lacks the H.264 encoding feature. You need a Core i CPU to get that. But I still want to see a working Linux app using it - it has been exposed via VA-API for ages. There might be some other minor changes as well, but that's the most important one.
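For what it's worth, a quick way to see whether your driver actually exposes the encode entrypoint is to look for VAEntrypointEncSlice in the output of vainfo. A minimal Python sketch; the sample output below is illustrative, not captured from real hardware:

```python
def encode_profiles(vainfo_output):
    """Return the VA-API profiles that advertise the EncSlice entrypoint."""
    found = set()
    for line in vainfo_output.splitlines():
        if ":" not in line:
            continue
        profile, entrypoint = (part.strip() for part in line.split(":", 1))
        if entrypoint == "VAEntrypointEncSlice":
            found.add(profile)
    return found

# Illustrative vainfo-style lines: decode-only MPEG-2, decode+encode H.264.
SAMPLE = """\
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
"""

print(encode_profiles(SAMPLE))
```

In practice you would feed it the real output, e.g. `subprocess.run(["vainfo"], capture_output=True, text=True).stdout`, and an empty result means the driver only does decode.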

  8. #98
    Join Date
    Nov 2007
    Posts
    209

    Default

@allquixotic: Yep, I know. If the two were equally optimized, the AMD HD5970 would be on another level. What I wanted to state is how bad the AMD open source driver is right now.

@artivision: I second smitty3286. The Intel GPU is made in-house. Only a couple of generations of Atoms use a PowerVR GPU. This is common knowledge that you can google; the facts are not hard to find.

  9. #99
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by artivision View Post
No, you are wrong: all Intel graphics today are based on PowerVR, not only the Atom GPU. Also, Intel wants to drop this company because Imagination has a bad open source policy.
Sorry, but this is nonsense. Any desktop x86 or x86_64 processor produced by Intel, except for Atom, is based on an Intel-developed series of IGPs. Even ignoring the facts (which you seem to want to do), it's completely obvious why this is true:

    • The Intel chips that use the Imagination PowerVR, such as the "GMA 500", have never had any open source drivers released for them. They may have an open modesetting kernel module, but they still have a binary blob in userspace that does all of the hardware interaction.
    • No manufacturer to date has successfully released any 3d acceleration or video acceleration open source code for any chips based on Imagination PowerVR. Intel is no exception to this rule. Has Apple released any? No. Has Motorola released any? No. Has Samsung released any? No. Has HTC released any? No. Has Intel released any? No. The list goes on.
    • Intel does release full and complete documentation about every last detail of their own GPUs -- the ones that they develop themselves without using Imagination PowerVR technology. You can see the full details of the GPU by reading the Mesa source code. It isn't obfuscated and it isn't hidden behind opaque firmware or hex values. A programmer can actually understand how this chip operates.


Put the pieces together, dude. Imagination Tech is, as you admitted, an anti-open-source company. So why in the HELL would they allow Intel to release fully documented open source drivers for their hardware, when every other company in existence has been denied the same? This company must have dozens of licensees, but they continue to keep their "intellectual property" as secret as they can. They would never in a thousand years jeopardize that by letting Intel open their drivers.

    So the conclusion is obvious: they're NOT using PowerVR in their own graphics core! You can find direct evidence of this anywhere you look. Failure to believe it just means that you're not a rational person or are incapable of making fact-based judgment (in other words, "you're insane" -- you can't argue with a madman.)

  10. #100
    Join Date
    Apr 2011
    Posts
    336

    Default

16 cores in the same cluster arrangement as PowerVR. Each core has 8*128-bit (quad issue, FMAC) vectors, giving 64 FLOPs or 96 MACs, and it's the same with PowerVR. Semi-unified vectors (PowerVR). Tile graphics with shader rasterisation, no raster units, only TMUs (PowerVR). The situation screams ARM MP core. Even if it's not PowerVR, it's a copy-paste of things that are not patented any more.
