First of all, I'd like to thank every helpful member of this community for indirectly helping me with some problems I previously had with some Linux distros.
I'm an experienced C++ developer, and I've been a lurker on these forums for quite some time now. As a developer I'm always wanting to try new stuff, and I've always been interested in Linux, though I'm a Windows guy for now.
Recently though, I've become a little more intimate with Linux as I wanted to try to solve job-related problems without spending lots of cash on Windows licences, and I must say I like the way most things (that I've seen so far) work on Linux under the hood. This led me to want to research kernel driver development and so on.
My research has led me to some questions:
1) Does every driver communicate with an application using a file "link"?
2) If so, is it efficient? I mean, imagine I want to output 1080p@60fps uncompressed video through some kind of device that has enough bandwidth for that; is it done by writing the raw data to the file (along with all the other metadata, of course)?
Since I own an AMD gfx card, I'm also interested in AMD's open source initiative, so I also researched Xorg and all its components, but everything (Mesa/DRI/DRM) is a bit confusing for me to understand. Which leads me to the next question:
3) As I understand it, the Xorg components work as follows:
Is this correct? If not, could someone correct this please?
Code:
Application --> OGL API --> Mesa --> Xorg --|
                             |              |--> DRM (kernel driver?)
                             |-----> DRI ---|
4) Where exactly do drivers like radeon/radeonhd/fglrx stand in that chart?
5) What about Gallium 3D, where does it stand?
Thanks in advance
See Section 7.1 of the Linux Kernel Module Programming Guide for more on this topic, including examples.
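To make the device-file idea concrete, here's a minimal userspace sketch of the pattern that guide describes. The device node /dev/mydevice and the ioctl request number are made up for illustration; every real driver defines its own.
Code:
/* Userspace side of talking to a (hypothetical) character device.
 * /dev/mydevice and the ioctl number are placeholders; real drivers
 * create their own nodes and define ioctls with _IO/_IOR/_IOW. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

int main(void)
{
    int fd = open("/dev/mydevice", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Bulk data goes through plain read()/write()... */
    const char msg[] = "hello driver";
    if (write(fd, msg, sizeof(msg)) < 0)
        perror("write");

    /* ...while out-of-band control goes through ioctl().
     * 0x1234 is an invented request number. */
    if (ioctl(fd, 0x1234, 0) < 0)
        perror("ioctl");

    close(fd);
    return 0;
}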
The "direct rendering infrastructure" was developed to allow an application to talk directly to the graphics driver, although that brings some other complications (see below).
Direct is App -> OGL API -> Mesa -> DRM -> hardware, with a couple of dotted lines going to the X server because Mesa needs to stay coordinated with the X server's understanding of window locations and which driver owns the hardware at any given instant. The coordination protocols are collectively called DRI.
Indirect is App -> OGL API -> X server -> Mesa -> DRM -> hardware, with no need for DRI protocol because all drawing is already going through the server.
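By the way, you can check which path a given application ends up on using standard GLX calls. Here's a small sketch (assumes the X11 and GLX development headers; build with gcc -lX11 -lGL):
Code:
/* Ask GLX whether our context renders directly or through the X server. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        fprintf(stderr, "no suitable visual\n");
        return 1;
    }

    /* Passing True requests direct rendering; GLX may still fall back. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes (Mesa -> DRM)"
                                 : "no (through the X server)");

    glXDestroyContext(dpy, ctx);
    XFree(vi);
    XCloseDisplay(dpy);
    return 0;
}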
You might have already read these, but I left a couple of sticky threads around with some info :
If you are doing 2D drawing or video playback then the path is :
App -> 2D/video API -> X server -> DDX -> DRM -> hardware
or, in the simplest case (older GPUs, no 3D) :
App -> 2D/video API -> X server -> DDX -> hardware
radeonhd and radeon (aka -ati) are Device Dependent X (DDX) drivers. They handle modesetting, 2D and video acceleration, and they also set up communication between X server and drm.
There is discussion about writing both video and 2D acceleration code which bypasses the server and operates through DRI in the same way that Mesa does.
The fglrx driver is our proprietary implementation which includes DDX, DRM and 3D driver components. You either use fglrx *or* you use radeon/radeonhd + drm + mesa.
Gallium3D is a redesign of the Mesa hardware driver interface: a new, simple driver API created to expose the hardware functionality of modern shader-based GPUs. The functionality exposed by Gallium3D is also useful for supporting other APIs, including things like DirectX, video acceleration, and possibly 2D acceleration.
In this chart, Xorg mostly refers to the DDX driver, DRI to the dri driver, and DRM to the drm driver:
Code:
Modesetting --> Xorg --------------|
2D Drawing / Video --> Xorg -------|--> DRM
OGL APP --> libGL/Mesa --> DRI ----|
Fglrx has its own DRM, DDX, DRI, and libGL replacements.
Modesetting is now moving into the kernel (Intel has it merged already), so the DDX will only handle video and 2D acceleration (both of which should be able to move into the Gallium driver, though that's not part of any plan).
I've always noticed some kind of lag when clicking and highlighting the menus on X compared to Windows, even though this has become less of an issue for me (maybe due to faster computers). Could this be the reason?
Also, how performant can they be? Is it much slower than using a DMA mechanism? Is it possible to get some numbers on this?
From what I recently read, I was thinking that Mesa somehow communicates with DRM, which I thought was the actual driver. I'm completely lost here.
But if a new kind of hardware feature comes out, will Gallium have to be updated along with Mesa? Or is it all one and the same in the end? If so, I'm guessing Mesa's interfaces will be radically different and incompatible with current applications.
Thank you all for the explanation and links.
And yes bridgman, I had read those stickies some time ago.
No, not really. It's either (a) a heavy window manager (GNOME or KDE, eww) or (b) bad 2D acceleration. If you eliminate issue (a) by using a light WM, and then load the system into RAM so your HD isn't slowing things down, only the 2D performance matters. If that is good, or done entirely in software, you'll have menus opening faster than you can click.
For a demonstration, get a Puppy, DSL, or Tinycore livecd; Puppy and TC always boot to ram, DSL needs the bootcode "dsl toram" entered. It's blazing fast.
Depending on the device, either your RAM or your CPU is the bottleneck. Here are some numbers from dd'ing. This is pretty much the ideal case, only limited by my CPU:
Code:
bash-3.2$ LANG=C dd if=/dev/zero of=/dev/null bs=32M count=32
32+0 records in
32+0 records out
1073741824 bytes (1.1 GB) copied, 0.5946 s, 1.8 GB/s
bash-3.2$ LANG=C dd if=/dev/zero of=/dev/null bs=4K count=3200
3200+0 records in
3200+0 records out
13107200 bytes (13 MB) copied, 0.00156306 s, 8.4 GB/s
Cool, I thought they were much slower; it's nice to see that's not the case.
I've heard nice stuff about DSL and Puppy, I will try them as soon as I can.
We did some tests forcing 3D apps to run through indirect rendering rather than the normal direct rendering and found a 5-10% slowdown.
One of the big arguments for kernel modesetting is that it moves all hardware accesses into a single component (the drm). Right now both ddx and drm directly access hardware registers. Once modesetting has moved into drm, then only the drm driver will directly access the hardware; mesa and ddx will just pass GPU command and data buffers down to drm. Both ddx and mesa will still need device-dependent code (since they translate high level API commands into low level GPU commands) but only drm will actually touch the hardware.
Note that even this is a bit confusing, because both ddx and mesa need to set registers on the GPU (setting up state information before drawing), but this is done by passing "set register X to Y" command buffers down to drm.
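As a toy illustration of that pattern (the struct layout and ioctl number below are invented for the example; the real interfaces live in the kernel's drm headers):
Code:
/* Invented example of the "set register X to Y" idea: userspace never
 * touches the register, it batches writes and hands the buffer to drm. */
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

struct reg_write {
    uint32_t reg;   /* register offset */
    uint32_t val;   /* value to write  */
};

struct cmd_buffer {
    uint32_t count;
    struct reg_write writes[64];
};

/* Hypothetical ioctl; not a real drm request. */
#define FAKE_DRM_SUBMIT _IOW('d', 0x01, struct cmd_buffer)

static void emit(struct cmd_buffer *cb, uint32_t reg, uint32_t val)
{
    cb->writes[cb->count].reg = reg;
    cb->writes[cb->count].val = val;
    cb->count++;
}

int main(void)
{
    struct cmd_buffer cb = { .count = 0 };
    emit(&cb, 0x2000, 0x1);    /* "set register X to Y", batched... */
    emit(&cb, 0x2004, 0xff);

    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd >= 0) {
        ioctl(fd, FAKE_DRM_SUBMIT, &cb);  /* ...drm touches the hardware */
        close(fd);
    }
    return 0;
}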
2D acceleration is radeon/radeonhd -> drm
3D acceleration is mesa -> drm
Modesetting goes around drm today, but kms moves modesetting into drm.
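For the curious, this is roughly what kms looks like from userspace through libdrm's mode-setting interface. The sketch below just lists connectors and their modes; actually setting a mode would also need a framebuffer and drmModeSetCrtc, which I've left out. It assumes a KMS-enabled kernel and a libdrm built with mode-setting support (build with gcc -ldrm plus the libdrm include path):
Code:
/* Enumerate KMS connectors and modes through libdrm. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    drmModeRes *res = drmModeGetResources(fd);
    if (!res) {
        fprintf(stderr, "no KMS resources (kernel modesetting off?)\n");
        close(fd);
        return 1;
    }

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        if (conn->connection == DRM_MODE_CONNECTED) {
            for (int m = 0; m < conn->count_modes; m++)
                printf("connector %u: %s @ %u Hz\n",
                       conn->connector_id,
                       conn->modes[m].name,
                       conn->modes[m].vrefresh);
        }
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}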
Under Gallium3D, rather than changing mesa/drivers you would change the Gallium3D drivers. The gallium3d hw drivers are at src/gallium/drivers (rather than src/mesa/drivers/dri).
Since the Gallium3D "driver API" is very different from the current Mesa HW driver, the code above the HW driver layer needs to be restructured as well, into what Gallium3D calls "state trackers". In the mesa source tree "gallium" appears alongside the entire classic "mesa" tree. The difference is that rather than supporting only the GL API, the new structure can support a variety of different acceleration APIs, each with their own state tracker.
The difference, btw, is that the old Mesa hw driver API was based around GL functions (which made sense, old GPUs were designed around GL as well) while the new hw driver API (Gallium3D) is based around the common shader functions exposed by modern GPUs.
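As a rough sketch of that difference in shape (these structs are invented for illustration, not the actual Mesa or Gallium3D headers):
Code:
/* Invented illustration of the API-shape difference. */

/* Classic-Mesa-style driver hooks mirror GL entry points... */
struct classic_gl_driver {
    void (*TexEnv)(int target, int pname, const float *params);
    void (*LightModelfv)(int pname, const float *params);
    void (*DrawPixels)(int width, int height, const void *pixels);
};

/* ...while Gallium3D-style hooks mirror the stages of a
 * shader-based GPU: bind state objects, bind shaders, draw. */
struct gallium_style_driver {
    void (*bind_blend_state)(void *state);
    void (*bind_vertex_shader)(void *shader);
    void (*bind_fragment_shader)(void *shader);
    void (*draw_arrays)(unsigned mode, unsigned start, unsigned count);
};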
If you want a quick tour through the source :
- start at the top of the mesa project :
- click on "src" - you'll notice that there is one folder called "mesa" (the "classic mesa" tree) and one folder called "gallium" (the same tree restructured to work around gallium3d)
- click on "gallium" - you'll see three folders called "state trackers" (the hw and system independent stuff), "winsys" mostly OS dependent stuff with a bit of hw dependent code) and "drivers" (GPU dependent stuff). The "r300" folder covers r3xx through r5xx.
- go back one page and click on "mesa", then click on "drivers" -- you'll see a bunch of different environments like d3d, glide, and dri. All the hw acceleration we talk about here is under the "dri" tree
- click on "dri" - there are your device-specific trees; one for each supported family of GPUs. The "r300" tree covers r3xx through r5xx plus rx690; we're in the process of adding an "r600" tree to support r6xx and r7xx GPUs.
Is this becoming any clearer?
Ahh yes! I understand it now!
Thank you very much for your very informative posts!
I still think there should be only one component with device-dependent code though.
Why not have something like Gallium3D as the last component before the kernel driver for both 2D and 3D acceleration? I believe that would be better. But I could be wrong as I'm not a driver developer...
There are a few distinct functions involved here:
- 2d acceleration
- video acceleration
- 3d acceleration
- command and buffer handling
Each of these functions hits a completely different set of hardware in the GPU, and historically the functions have been partitioned into different drivers. They could be combined into a big super-mega-honkin' driver but there wouldn't be much to gain from that.
There will be some simplification over time :
- right now drm handles command and buffer management, modesetting will move there as well so all higher levels just have to pass command buffers
- on chips without dedicated 2D hardware (latest ATI and NVidia GPUs, not sure about Intel) it should be possible to layer 2D acceleration over Gallium3D, although you wouldn't necessarily be able to take advantage of chip-specific hardware intended to make 2D-over-3D easier or more efficient.
- for the subset of video acceleration handled by ddx today (Xv, XvMC) video acceleration can probably be layered over Gallium3D as well
What Gallium3D can't handle is video acceleration using dedicated VLD-level hardware or 2D acceleration using dedicated 2D blitter hardware. Shaders are pretty versatile but dedicated hardware still has advantages in some areas. It is also probably not practical to support older GPUs through Gallium3D since they don't have the general-purpose shader hardware.
What is happening already is an ongoing cleanup of the driver stack - modesetting moving into drm and swallowing up all of the other kernel graphics drivers, and more things being layered over Gallium3D as working drivers for Gallium3D start to appear. The cleanup could not have happened sooner because the pre-requisites -- Gallium3D and a broadly accepted memory manager -- have only just started to show up in production code recently.
If you only look at newer GPUs without 2D hardware, and ignore things like UVD, then it probably will be feasible to run all the acceleration through Gallium3D-over-drm -- and modesetting will already be in the drm. There has been talk of extending Gallium3D to handle non-3D hardware blocks (VLD video hardware, 2D blitters etc..) so that at least the winsys code can be re-used even if the pipe drivers are not, but then it wouldn't really be Gallium3D any more
The bigger design issue for the X/DRI community is whether we are going to move to a model where the compositor becomes a standard part of the stack. We are rapidly approaching the point where most of the remaining complaints about graphics on Linux will be a consequence of the "mix and match" graphics stack making everything more difficult, and that will be where we lose relative to current MacOS and Windows graphics.