Page 8 of 10
Results 71 to 80 of 93

Thread: Why More Companies Don't Contribute To X.Org

  1. #71
    Join Date: Dec 2009 | Location: Greece | Posts: 351

    Quote Originally Posted by drag View Post
    The next step after putting the GPU on die is just to suck the rest of the motherboard on there.

    With embedded systems it's called "System On a Chip". You have your graphics, processor, memory controller, drive controllers, wireless controllers, ethernet, etc etc. All on one chip.

    That's the future of the PC also.
    No, this won't happen for PCs. You see, the idea will be to use multiple APUs inside a PC for additional performance, the way we use CrossFire now for additional graphics performance.

    If PC processors become "system on a chip", then much functionality will be duplicated and inefficient. There is no point in having 4 drive controllers, for example, if you have 4 APUs.

    System on a chip will be the future for all mobile devices, though. In the end, Intel and AMD will produce only two general lines of products:

    APUs and SoCs, in various editions depending on the needs.

  2. #72
    Join Date: Dec 2009 | Location: Greece | Posts: 351

    Quote Originally Posted by deanjo View Post
    As far as timeframe goes, I can't see anything radically changing with respect to discrete solutions for at least another 10-15 years.
    No. You are wrong.

    We need three shrinks in lithography to reach that point.

    32nm for the first generation of APUs: decent graphics performance at low-to-medium settings.

    22nm for the second: good enough performance for modern gaming, probably able to play most modern games at medium resolutions (1440 or 1680) with all or most effects enabled.

    16nm for the third: this will be the final nail in the coffin. By this time OpenCL will be mature and the GPU part will probably be bigger than the CPU part. Graphics performance will be plenty for mainstream gaming, and higher resolutions will simply need more APUs. CrossFire solutions will be mature enough for almost double scaling.

    11nm for the fourth: at this point, AMD will probably stop selling discrete GPUs, since the GPU part inside an APU will be by far the bigger part anyway.

    We are expected to reach 16nm around 2013-14 and 11nm in 2015-16. This is not set in stone and might be pushed back, but at least Intel is confident it can be done.

    You have to remember that with each shrink, GPU performance will almost double.

    By 2020, it won't be possible to find dedicated GPUs in stock...
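
    A rough sanity check on that "almost double" figure, as a purely illustrative Python sketch (my own simplification: it assumes transistor density scales with the inverse square of the feature size, and that GPU performance tracks the transistor budget at a fixed die area):

    Code:
    # Illustrative arithmetic only: density ~ 1 / (feature size)^2,
    # and GPU performance assumed to track the transistor budget.
    nodes_nm = [45, 32, 22, 16, 11]

    for old, new in zip(nodes_nm, nodes_nm[1:]):
        density_gain = (old / new) ** 2
        print(f"{old}nm -> {new}nm: ~{density_gain:.2f}x transistor budget")

    # 45nm -> 32nm: ~1.98x
    # 32nm -> 22nm: ~2.12x
    # 22nm -> 16nm: ~1.89x
    # 16nm -> 11nm: ~2.12x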

  3. #73
    Join Date: Dec 2009 | Location: Greece | Posts: 351

    And to make the previous messages on-topic:

    In the long term, it won't matter much which graphics server is used, X or Wayland.

    Graphics should be handled directly and entirely by the GPU, since on-die GPUs will be universal and powerful.

    This means we need better open-source drivers and compositing window managers, since they are what matter most.

    Don't be surprised if after 2015-16 you see DRI drivers living wholly inside the Linux kernel. It will make sense by then, if APUs take over the market (the most probable outcome).

    If you believe this is sci-fi, then try to explain why AMD bought ATI, and why Intel is experimenting with Larrabee... Try to understand what the big deal behind Fusion is and why AMD keeps insisting it is the future. Try to understand why the 32nm revisions of Core iX are simply dual-core Nehalems with on-package GPUs, and why Sandy Bridge will have on-die GPUs across its entire lineup...

    In reality, the big three companies (Intel, AMD, NVIDIA) have known that this is the future since 2003-2004 or even earlier. It was known because we cannot raise clock speeds anymore, and general code doesn't scale well past 8 cores.

    NVIDIA, since it doesn't have CPUs, tries to reach that point from the GPU side with CUDA. Intel, since it doesn't have dedicated GPUs, obviously tries to reach it from the CPU side. AMD could have chosen to develop its own GPU, but decided years ago that simply buying a GPU player would be better, and bought ATI.

  4. #74
    Join Date: Dec 2009 | Location: Greece | Posts: 351

    I studied Wayland a little more. Previously I hadn't looked into its details.

    Earlier I said that what we really need is better open-source drivers plus better compositing managers, and that X vs. Wayland will be irrelevant.

    Well, it seems I am right, but with a correction:

    Wayland is a compositing manager itself.

    So it seems that it could provide advantages in latency after all, and it is the way forward, not because it is new but because it skips some unneeded communication between X and a compositor.

    Wow, this makes me want to install Wayland on my Arch ASAP (when the Qt port is ready).

  5. #75
    Join Date: Aug 2009 | Posts: 2,264

    I am excited by your posts, Templar ^^

  6. #76
    Join Date: May 2007 | Location: Third Rock from the Sun | Posts: 6,582

    Quote Originally Posted by TemplarGR View Post
    No. You are wrong.

    We need three shrinks in lithography to reach that point.

    32nm for the first generation of APUs: decent graphics performance at low-to-medium settings.

    22nm for the second: good enough performance for modern gaming, probably able to play most modern games at medium resolutions (1440 or 1680) with all or most effects enabled.

    16nm for the third: this will be the final nail in the coffin. By this time OpenCL will be mature and the GPU part will probably be bigger than the CPU part. Graphics performance will be plenty for mainstream gaming, and higher resolutions will simply need more APUs. CrossFire solutions will be mature enough for almost double scaling.

    11nm for the fourth: at this point, AMD will probably stop selling discrete GPUs, since the GPU part inside an APU will be by far the bigger part anyway.

    We are expected to reach 16nm around 2013-14 and 11nm in 2015-16. This is not set in stone and might be pushed back, but at least Intel is confident it can be done.

    You have to remember that with each shrink, GPU performance will almost double.

    By 2020, it won't be possible to find dedicated GPUs in stock...
    You're assuming that current graphics capabilities will remain the same. This has never been the case. As we have seen in the past, the APIs out there grow along with each generation of graphics. Not to mention that the same lithography shrinks also apply to discrete solutions.

    Until there is actual lifelike realtime rendering, graphics demands will keep climbing and will keep requiring a high-class discrete solution.

  7. #77
    Join Date: Dec 2007 | Posts: 2,321

    Quote Originally Posted by TemplarGR View Post
    Don't be surprised if after 2015-16 you see DRI drivers living wholly inside the Linux kernel. It will make sense by then, if APUs take over the market (the most probable outcome).
    Not likely. The current graphics and compute APIs are too big to live in the kernel and there is not much advantage to putting them there.

  8. #78
    Join Date: Dec 2009 | Location: Greece | Posts: 351

    Quote Originally Posted by agd5f View Post
    Not likely. The current graphics and compute APIs are too big to live in the kernel and there is not much advantage to putting them there.
    Very likely. You are wrong, because you are thinking in terms of now, not in terms of the future.

    In the future, the CPU and GPU will be interconnected; GPUs won't be only for graphics work, they will work closely with the CPU in everything.

    For them to work that closely, you will have to move their drivers into the kernel. There is really no alternative...

    You do understand that in the future all CPUs will have GPGPU cores inside? GPGPU, not GPU only. They will be used for general calculations. Eventually they will be like SSE is now, for example: a part of your processor. The kernel will have to manage them too. You want to tell me that half your processor will be managed in kernel space and half in userspace?

    Of course, the Mesa libraries will remain in userspace. Only the drivers will have to move in. And you will move them, unless you want Linux to be left behind.

    This will happen. I am not making this up. Study the market carefully and you will see it too.

    Of course, this is up to you. I am not a Mesa or kernel developer, though given the time I would love to be one in the future. But stop being conservative here, and look closely at what the hardware players are doing. Heck, I believe you work for AMD, no? Then watch what AMD is saying about the future of APUs...

    They will not say that dedicated GPUs will vanish. In fact, they say the opposite. It is not in their interest to say otherwise right now. But eventually, they will move to APUs only.

  9. #79
    Join Date: Dec 2009 | Location: Greece | Posts: 351

    Quote Originally Posted by deanjo View Post
    You're assuming that current graphics capabilities will remain the same. This has never been the case. As we have seen in the past, the APIs out there grow along with each generation of graphics. Not to mention that the same lithography shrinks also apply to discrete solutions.

    Until there is actual lifelike realtime rendering, graphics demands will keep climbing and will keep requiring a high-class discrete solution.
    No, no, you still don't get it, probably because you are a fan of NVIDIA...

    Let me give a very simplified example.

    Stop thinking in terms of performance for a minute. Think in terms of die area, because that is what really matters: the size of a transistor and the die area. Of course architecture counts too, but architecture is more about how the code drives the transistors.

    Let's say that at 45nm you have 1 CPU and 1 GPU in your PC, and their die areas and transistor counts are equal.

    We move to 32nm, and because code doesn't stress the CPU that much, we move the GPU on die. We have 2/3 CPU and 1/3 GPU. Obviously, the GPU part of the package has 1/3 the performance of a discrete solution of the same lithography and die area.

    Let us move to 22nm. Since the CPU doesn't matter much, we decide to just shrink the CPU part and double the GPU part. Now we have 1/3 CPU and 2/3 GPU. At 22nm, the on-die GPU part will have 2/3 the performance of a dedicated solution of the same lithography and die area.

    Now let us go to 16nm. We shrink the CPU again and give the freed area to the GPU. Now we have 1/6 CPU and 5/6 GPU. The on-die GPU part has 5/6 the performance of a dedicated GPU of the same process and die area.

    And finally, 11nm. Shrink the CPU, grow the GPU. We have 1/12 CPU and 11/12 GPU. Sure, we would prefer 12/12 for the GPU, but since it makes no sense for AMD to keep two lines of products, it cuts the dedicated GPU. You just use more APUs if you want more performance.

    Another thing to consider is that while a mixed solution will have fewer transistors for the GPU than a dedicated solution, it has the advantage of not having to move data across the PCI bus. This heavily offsets the difference.

    Of course, this is a simplified example. The CPU part WILL advance too, and probably will not just shrink at every new node. But, as I said, code does not scale past 8 cores. We cannot increase the clock speed, we cannot just keep adding cores, so we have to do something with the transistors. Since GPUs can do much more parallel work, and give almost perfect scaling, the GPU will get all the attention from now on...
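
    The same toy model in a few lines of Python (purely illustrative; the starting split at 32nm and the "halve the CPU share at every new node" rule are just a restatement of the numbers above):

    Code:
    from fractions import Fraction

    # Toy model of the split described above: start at 32nm with 2/3 of the
    # die for the CPU, then at every new node halve the CPU's share and give
    # the freed area to the GPU.
    nodes_nm = [32, 22, 16, 11]
    cpu_share = Fraction(2, 3)

    for node in nodes_nm:
        gpu_share = 1 - cpu_share
        print(f"{node}nm: CPU {cpu_share}, GPU {gpu_share}")
        cpu_share /= 2

    # 32nm: CPU 2/3, GPU 1/3
    # 22nm: CPU 1/3, GPU 2/3
    # 16nm: CPU 1/6, GPU 5/6
    # 11nm: CPU 1/12, GPU 11/12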

    I hope I enlightened you.

  10. #80
    Join Date: Oct 2007 | Location: Toronto-ish | Posts: 7,385

    You and agd5f are saying pretty much the same thing. When agd5f says "the current graphics and compute APIs are too big to live in the kernel" he is talking about Mesa and similar components, since those are the components which implement the graphics and compute APIs.

    You may be talking about moving the Gallium3D bits into the kernel and keeping the common Mesa code in userspace, but that would be quite a bit *less* efficient than the current implementation. The current code does quite a bit of buffering in userspace to minimize the number of kernel calls required (kernel calls are slower than calls between userspace components); if the "driver" code (presumably the Gallium3D driver) were in kernel space then the number of kernel calls required would go up dramatically and performance would drop.
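
    A toy sketch of that buffering point (Python, not real Mesa or Gallium3D code; the command count and batch size are made-up numbers): a userspace command buffer turns thousands of per-command kernel crossings into a handful of flushes.

    Code:
    # Toy model only, not real Mesa/Gallium3D code: compare kernel crossings
    # when every GPU command is submitted individually versus buffered in
    # userspace and flushed in batches. Numbers are made up for illustration.
    NUM_COMMANDS = 10_000
    BATCH_SIZE = 256  # commands accumulated in userspace before one submission

    # One ioctl-like kernel crossing per command.
    kernel_calls_unbuffered = NUM_COMMANDS

    # Fill a userspace buffer and flush it when full (ceiling division).
    kernel_calls_buffered = -(-NUM_COMMANDS // BATCH_SIZE)

    print(f"unbuffered: {kernel_calls_unbuffered} kernel calls")
    print(f"buffered:   {kernel_calls_buffered} kernel calls")
    # unbuffered: 10000 kernel calls
    # buffered:   40 kernel calls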

    re: "watch what AMD is saying about the future of APUs", agd5f does work for AMD (as you suspected) and was the first open source developer to work on the new AMD APU graphics hardware. Alex has a better understanding than most about the future of APUs... he just can't tell you everything yet
