
Thread: Port fglrx openGL stack to Gallium3D

  1. #31
    Join Date
    Jun 2009
    Posts
    1,121

    Default

    Quote Originally Posted by haplo602 View Post
    Why bother bridgman ? I know you are trying to be nice here, but the guy has no clue what he's talking about .... he's comparing DX11 to OpenGL4 on cards that are capable of neither (hint ... HD4850x2).

    Also, I do not think his Quadfire setup has any benefit over a single HD4850x2, since the CPU will most likely not be able to feed the cards properly. Yet he's expecting the holy grail and more ...
    Well, at first I had it wrong and corrected it later to OpenGL 3.2/3.3, because I tested in DirectX 10.1, which my cards support (I missed the ultra-short edit window, so the post still says DX11, but it is DX10.1). Besides, OpenGL is an incremental API: OpenGL 3.x only lacks certain hardware-specific features like tessellation, which is only present in DX11-capable hardware. Unigine runs just fine with DX9/DX10 and OpenGL 3.x on this hardware, simply without tessellation and a few other features, and OpenGL 4 is in no way incompatible with OpenGL 3.3. So testing on a non-DX11-capable card is not a problem: you still exercise most of the OpenGL implementation, which is common to both API versions, and that gives you an idea of the general performance of the GL implementation in the driver.
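    The incremental-API point can be shown with a small sketch. All names here are hypothetical (this is not a real GL binding), but the version string follows the `major.minor[.release] vendor-info` form that `glGetString(GL_VERSION)` returns, and the feature-to-version mapping is an illustrative subset:

    ```python
    # Illustrative: minimum core GL version that introduces each feature.
    FEATURE_MIN_VERSION = {
        "geometry_shaders": (3, 2),
        "instanced_arrays": (3, 3),
        "tessellation": (4, 0),  # the DX11-class feature discussed above
    }

    def parse_gl_version(version_string):
        """Parse the 'major.minor[.release] [vendor info]' GL_VERSION form."""
        major, minor = version_string.split()[0].split(".")[:2]
        return int(major), int(minor)

    def supported_features(version_string):
        """Features a context of this version can expose in core profile."""
        ver = parse_gl_version(version_string)
        return {name for name, minimum in FEATURE_MIN_VERSION.items()
                if ver >= minimum}

    # A GL 3.3 card covers everything here except tessellation, so a
    # benchmark run minus tessellation still exercises most of the driver.
    print(sorted(supported_features("3.3.0 some vendor string")))
    print(sorted(supported_features("4.0.0")))
    ```

    The tuple comparison is the whole trick: because GL versions are strictly incremental, a 3.3 driver already implements everything a 3.x-level benchmark pass needs.
    
    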

    About the quadfire: I'm aware it doesn't scale well, at least not at low resolutions, but the main purpose of this quadfire is to have something powerful to play with OpenCL computation, so it's not like I'm expecting 500 fps in CoD MW2 or anything like that. When I tested the driver I was just too lazy to open my case and remove the second card. Either way, having the second card shouldn't kill performance, though I agree that's not impossible either. For now my Linux install is too bleeding edge for fglrx, so I have to do a clean install, and for that I'll wait for my new disk, since I'm too lazy to downgrade my distro. If someone else has a dual-boot system, you could run some tests on both OSes and check whether the performance is close or not; I don't rule out the possibility that fglrx just doesn't like X2 cards and this is a specific case.

  2. #32
    Join Date
    Oct 2009
    Posts
    2,086

    Default

    Quote Originally Posted by Svartalf View Post
    The big problem for them would be that it's more of a moving target than the way they're doing things right now. The main reason that the FOSS driver works as well as it does is that it's in lock-step with the Gallium3D API edge because it's part and parcel of that project. For them, it's a fairly extensive re-write for the parts that are breaking like you state- only to get to an edge that does the same thing on them with the same level of regularity right at the moment.
    I never said it was a good idea. I was simply rephrasing what the OP was asking for in a manner that makes a little more sense.

    And FYI: I don't agree with you.
    The KERNEL end of fglrx works the way the OP suggested. It's mainly the xserver end that breaks. Sure, a changing kernel can break fglrx, but fglrx ships with the SOURCE CODE for its kernel interface, so that can be fixed by the community to a certain extent. What is needed is a similar open source INTERFACE for the xserver.

    Current:
    kernel -- open source kernel interface -- fglrx
    xserver -- fglrx

    Wanted:
    kernel -- open source kernel interface -- fglrx
    xserver -- open source xserver interface -- fglrx
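    The "wanted" layout above amounts to a thin, open interface that the closed driver plugs into, so that xserver churn hits the shim rather than the blob. A toy sketch of that idea (every name here is hypothetical, not the real xserver driver ABI):

    ```python
    from abc import ABC, abstractmethod

    class XServerVideoInterface(ABC):
        """Stand-in for an open source xserver<->driver interface.
        The xserver side is rebuilt against this when xserver internals
        change, while the closed driver behind it stays binary-stable."""

        @abstractmethod
        def init_screen(self, width, height): ...

        @abstractmethod
        def present(self, frame_id): ...

    class ClosedDriverShim(XServerVideoInterface):
        """Pretend closed blob: it only ever sees the stable interface,
        never the xserver's internal structures."""

        def init_screen(self, width, height):
            self.mode = (width, height)
            return True

        def present(self, frame_id):
            return f"presented frame {frame_id} at {self.mode[0]}x{self.mode[1]}"

    def xserver_mainloop(driver: XServerVideoInterface):
        # The xserver end calls only through the interface, so an xserver
        # refactor means updating this open shim, not the closed driver.
        assert driver.init_screen(1920, 1080)
        return driver.present(frame_id=1)

    print(xserver_mainloop(ClosedDriverShim()))
    ```

    This mirrors how the open kernel interface already works for fglrx today: community-fixable glue on one side of the line, an unchanged proprietary core on the other.
    
    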

  3. #33
    Join Date
    Oct 2009
    Posts
    2,086

    Default

    And the thing about it is this:
    A LOT of fglrx is NOT NEEDED (strictly speaking). The open source xorg drivers are good for most of what everyone does (in fact, typically BETTER than fglrx), so the second part of the OP's dream involves cutting everything the OP perceives as REDUNDANT, leaving only the 3D acceleration components to plug in via G3D/mesa. THIS is the part of the request that would be really tough to implement.

  4. #34
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,578

    Default

    Quote Originally Posted by bridgman View Post
    In many ways running the fglrx 3D userspace driver over the open source kernel driver would be less work *and* more useful. Even that would be a *lot* of work, however, since the memory management abstractions are quite different.
    From a completely theoretical point of view, it sounds intriguing. From a realistic point of view, didn't some vendor already try the "we do open source DRM and closed userspace 3D libraries" approach, which ended with them not getting their DRM code into the Linux kernel at all?

  5. #35
    Join Date
    Oct 2009
    Posts
    2,086

    Default

    Quote Originally Posted by nanonyme View Post
    From a completely theoretical point of view, it sounds intriguing. From a realistic point of view, didn't some vendor already try the "we do open source DRM and closed userspace 3D libraries" approach, which ended with them not getting their DRM code into the Linux kernel at all?
    I thought that the big problem was that the closed stuff was the ONLY use for the DRM, and that was why it wasn't accepted.

    With AMD, sharing the DRM between both the OPEN as well as the CLOSED userspace stuff would eliminate this issue. Especially since the DRM is *already in kernel*.

    Nobody said you couldn't use it for BOTH. Just that kernel stuff won't be accepted if it will ONLY be used for proprietary closed blobs.
