If by any chance r300 docs are any good to you:
On page 25 there you can find the AA config register. I'd be delighted if r200 or r300 got working SSAA and line smoothing. It seems ridiculous that hardware almost ten years old is still so secret that producing complete docs without an NDA and big money is next to impossible, or takes a great deal of work.
Features explained in the links above would be so nice in open source media players that it almost makes me angry to remember the days when it was possible to play SVCD on a P2 300MHz with a mach64 or so. That was only possible because proprietary player makers (WinDVD et al.) got access to the docs.
Is AMD waiting for the Open Graphics Project to publish their first ASIC and then just kill the competition like a bug?
With regards to hardware-accelerated video playback in the open source drivers, the issue is this: if AMD documented how to send compressed video frames to the GPU, it would become possible for someone to write a Windows driver that intercepts those compressed frames after any DRM applied to them has been stripped. That would put AMD in violation of its license with Microsoft, the one that allows AMD to see all the secret bits a display driver needs before Windows and the media players running on top of it will hand over the compressed-but-no-longer-DRM-protected video frames.
Originally Posted by vesa
AMD would then be opening themselves up to a lawsuit from MS (and content companies relying on MS and the GPU vendors to protect the decrypted video stream)
Just a word of note:
If you render on an ordered 2x2 grid, you essentially get something only slightly better than 2x AA, since you only have 2 "levels" of testing per pixel per dimension.
By rotating the grid (RGSS, classically by about 27 degrees, i.e. arctan(1/2)) you get 4 "levels" of testing per pixel per dimension. Same amount of work, but vastly better image quality.
I also remember my Radeon 8500 could sample sub-pixels, but how it did it... I don't know
It's not so secret, it's just a lot of work, and there are only a few of us on the open source team. Not only does the information have to be compiled and written, it then has to go through IP and legal review. If we spent time now creating docs for r2xx chips, it would take away from getting info out on newer chips and other features that aren't documented at all yet. That's our priority right now. We do hope to eventually get documentation out on older chips, but in the meantime, the r1xx/r2xx programming information is already available in the mesa 3D drivers and radeon ddx source code.
Originally Posted by vesa
I've checked the r200 register databases and I did not see any AA regs, so presumably we used the 3D engine. The mesa r200 3D driver has pretty much everything documented at this point.
As to r300, IIRC, smooth lines require shaders. There's no dedicated hw for them like on older asics.
I found a description of how FSAA was implemented on the original Radeon (R100):
Rather than render to a double-sized backbuffer and downscale it, the R100 renders to four separate backbuffers (each offset by half a pixel in a different direction) and then blends them. This is more efficient because you only need 4 times the memory rather than 5: in the blending step, you blend three of the backbuffers onto the fourth. It also makes optimal use of the hardware resources, because the R100 has three texture units.
There's no magic hardware register to do this, it's entirely done at the driver level.
I imagine that FSAA on R200 in the proprietary drivers, on either Linux or Windows, works the same way; you just have more choices of hardware-efficient AA levels, thanks to the more versatile pixel pipeline.
It seems the discussion is converging to: it's all done at the driver level, am I right? I've tested it briefly and yes, the results are nice but the speed is not. I bet there is some secret sauce here too, in the hardware and the (old) proprietary drivers. A few more links follow...
(accanti.c; remember to pull jitter.h too)
The same in Java:
Oh, and by the way, I'm not buying the explanation about sniffing un-DRMed video before it reaches the GPU: any reverse engineer worth his/her salt would get to the buffer anyway. More plausible would be hardware decrypters for AACS in current GPUs and fear of a key slipping out, and even that is moot already. It also seems very strange to hold the cashier of the (enter your brand here) supermarket responsible for murder because she/he sold someone a knife for cutting cucumber, which was later found stuck in a third person's back... Somewhat analogous to AMD providing docs for Linux driver development and someone using them for doing the Really Bad Thing One Must Not (according to the MPAA).
Nor am I buying any new graphics hardware until these things get sorted out. I'm very well served by 2-to-5-year-old stuff from the scrap yard: old Radeons for real-time kernels and NVidia for 3D and video. [Keep IT recycling]
Forgotten stuff: http://graphics.stanford.edu/courses...ffer-sig90.pdf
Last edited by vesa; 04-03-2009 at 05:15 AM.
I found an old whitepaper describing R200's FSAA implementation, which was given the marketing name "SmoothVision":
It looks like the "secret sauce" lies in the way the sample buffers are arranged in RAM, i.e. the stuff about 16-sample blocks, which the whitepaper keeps intentionally vague for obvious reasons. IIRC R200's Z-buffer compression (HyperZ) is also based on 16-pixel blocks, so there's probably some kind of connection.
Thank you all for your advice, you're great. I'm certainly enlightened. As soon as I have something working with the 3D engine I will update the thread.