I'm thinking the next-gen DX11 cards will be better refined and will properly support OpenGL 4.0 by then.
My dad doesn't have a PC... and he doesn't want one.
Originally Posted by Xemanth
My daddy also doesn't give me money ;-)
I earn my own money by advising people.
One of the people I advised lost a lot of money on AMD shares, only because I told him the first 65nm K10 Opteron was a big deal and a good CPU. I told him he should buy shares at €12.60!
OK, after that the AMD shares fell deep, very deep, to 3 dollars, meaning €2.x.
OK, bad example of how to earn money...
But he still holds the shares, and AMD is going back up to 9 dollars...
And he buys a lot of AMD systems too ;-) (but no K10, only K8 and K10.5).
More bandwidth is always welcome and, in my opinion, AGP was an idiocy in the grander scheme of things. PCI-Express is better, if nothing else, for the fact that any add-on board can be accommodated.
Originally Posted by MattH
Look, the question is: in the long term, what does 'graphics card' (or dedicated graphics hardware, if you will) mean exactly? A framebuffer to blit arrays of pixels into? A polygon cruncher?
As much as I love Linux, the truth is that its 'multimedia' stack is broken. It is a bad design, mostly for historical reasons, but also because the problem domain is very complex and most solutions came from people or organizations that needed, at most, a framebuffer to blit pixels into.
Again, without going into much detail, I've seen, for example, a 'graphics card' made out of four PowerPC processors (Freescale e600 dual cores at 1.x something), two FPGAs and a bunch of other ICs, all OFF-THE-SHELF, built in-house, running a distributed RT kernel (I suspect it is a heavily modified Linux, but I didn't dig that deep).
It has a dual 10Gb interface because, as crazy as it sounds, the IP, controllers & auxiliaries for PCIe interfacing cost an arm and a leg, and scale badly for this kind of research and prototyping, especially when there is no giant corporation pouring money into you.
Along with 4Gb of DDR2 RAM, all in a <120W full-load envelope, this thing may not have the raw computing power of a modern ATI or nVidia card, but it makes up for it in other areas. For example, the host communicates with the 'card' via a STREAMS-type protocol; effectively, the RT distributed kernel on the graphics 'card' IS the X server, along with the graphics driver. It implements parts of OpenGL 2.1 (mostly because this is a research prototype; there is no reason it couldn't implement the 4.0 spec, for example). You don't draw pixmaps for buttons; you use a protocol to ask it to draw abstract buttons, and the 'card' takes care of it.
All the 'graphics' (raster, vector, 3D, GUI elements) are IN THE CARD, and you can do some out-of-this-world stuff with this approach. Again, all OFF-THE-SHELF components.
This is what graphics hardware should be. AMD, nVidia & co. could then build specialized processing units (more or less specialized) and use an open RT kernel (distributed, probably) that implements *everything*. Otherwise, everybody will keep having their own more or less different arch with its more or less different ABIs, APIs, protocols and all that crap.
Think of it as 'graphics as a service'... the X server really IS a server on another 'machine'.
I wonder if InfiniBand makes sense? Lower latency, up to 4x the theoretical bandwidth, and not horrifically priced any more lately, at least if using SuperMicro boxes..
Originally Posted by CNCFarraday
(I'm thinking of the two-systems-in-one-1U-case boxes that have IB daughtercards; the QDR 40Gbps cards are pricey, but the DDR 20Gbps ones not so much, and both use smaller ATM-style packets, IIRC)
This also sounds like Xdmx:
But with lots of custom goodness..
Anyone else have major issues when you try to launch a 3D game with this OpenGL 4.0 preview driver over an HDMI connection? On my 5770, 3D games load with the screen covered in artifacts and the game is left unplayable... This isn't an issue over DVI. On a related note, when I tried to test Ubuntu 10.04, the HDMI connection with the default OSS driver wasn't working either; it just left me with a blank screen. Again, the problem was solved by using a DVI connection.
Originally Posted by MattH
Well, it is a 'pet project', if you will: a hardware implementation of the 'production' system, which is virtual (it runs atop multi-socket, multi-core x86). The goals were to make it entirely with off-the-shelf components that could be interlinked, assembled, programmed and tested in-house. They never had any project that required PCIe development kits, and from what I've heard (not my domain) it can be a pain in the ass to integrate complex heterogeneous execution units. I don't know exactly, but there are multiple listeners ('servers') and it would have required some sort of PCIe multiplexer, and it was too complicated for them, etc. Maybe if this is re-designed with other goals in mind, and there are some people involved who have worked with PCIe interconnects, it would be a breeze.
Not quite like Xdmx. From a bird's-eye view, yes, it looks like it, but actually it is an 'X server on a board', so to speak. The entire graphics backend is delegated to another execution context, not just the blitting, polygon crunching, etc. In theory, you can 'crossfire' these cards, as the RT distributed kernels can talk to the ones on other boards; all the software is there. The 'real' system (the virtual one) does this and, in principle, it could be done with the real hardware.
What about OpenGL 3.3 and 4.0 in the official driver? When will that be possible? Or just update the preview driver to support X server 1.7.
05-05-2010, 01:16 PM
Catalyst 10.5 and 10.6 bring tons of OpenGL fixes, and yes, OGL 3.3/4.0 in the stable driver.
Originally Posted by Wielkie G