That's pretty much what you do whenever you try to get something onto your screen.
Most of the time you can split the rendering of your screen into many tiny jobs (sounds ideal for OpenCL, doesn't it?).
OpenCL is probably only viable for ray tracing; other stuff should probably still be handled by OpenGL.
Of course you can do ray tracing in GL/D3D, what do you think people were using all those years before OpenCL was released?
These fake shader lights and shader effects are just bad in quality if you compare them to real ray-traced lighting.
Show me your D3D code, man.
They did ray tracing in software on the CPU in the past.
The scenes in a ray tracer are still rendered into a single frame which is displayed once fully assembled. There is a very definite FPS involved in the process.
The FPS will be dependent on a similar combination of factors that concerns triangle-mesh rendering, including the destination framebuffer size, number of objects, and complexity of the lighting equations used.
The "RPS" you mention is more or less the same idea as the "triangles per second" or "fragment fill rate" that you have on contemporary 3D rasterization hardware. All it indicates is how complex of a scene the hardware can manage while maintaining a usable FPS.
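The relationship between "RPS" and FPS described above can be sketched with back-of-the-envelope arithmetic. The function and the numbers here are hypothetical, just to make the tradeoff concrete:

```python
def max_fps(rays_per_second, width, height, rays_per_pixel):
    """Rough upper bound on frame rate for a ray tracer: the hardware's
    ray throughput divided by the rays needed to cover one frame."""
    rays_per_frame = width * height * rays_per_pixel
    return rays_per_second / rays_per_frame

# Hypothetical hardware tracing 100 million rays/s, at a 1024x1024
# framebuffer with 4 rays per pixel:
# 100e6 / (1024 * 1024 * 4) is roughly 24 fps
print(max_fps(100e6, 1024, 1024, 4))
```

So "rays per second" plays the same role as fill rate: it caps how complex a scene (resolution times samples per pixel) you can trace while keeping a usable FPS.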
Carmack mentioned that he hoped we'd have MIXED MODE renderers within 3-5 years. These are not actual ray tracing engines, but rather traditional triangle rasterizers that use some extremely simplified and inaccurate ray tracing techniques to compute shadows and lighting on the GPU during rendering rather than on the CPU before rendering. That's it, nothing more.
Also, just to be clear, even if you're right about ray-tracing magically becoming feasible, DirectX is in no way being threatened by OpenCL, because DirectX has DirectCompute -- same damn thing, just a different API and syntax. More games make use of DirectCompute than OpenCL by a huge margin right now, today. (Not for rendering; for physics and such.)
There are very good reasons for using frames instead of displaying each drawing operation on the fly. Seeing each pixel update as it is traced (or rasterized) would be extremely annoying to the user - try it! Modify glxgears to turn off double-buffering (it's a 2 line change). You won't have "frames" anymore but the result won't be pretty.
Our system also takes advantage of GPUs' strengths at rasterization and shading to offer a mode where rasterization replaces eye ray scene intersection, and primary hits and local shading are produced with standard Direct3D code. For 1024x1024 renderings of our scenes with shadows and Phong shading, we achieve 12-18 frames per second.
On any kind of ray-tracing hardware you can always have the max FPS the monitor can handle.
The only difference between low-RPS hardware and high-RPS hardware is the black-or-white "ant" noise over the screen.
More RPS means less ant noise.
And your talk about FPS with ray tracing is just complete nonsense!
Watch some example videos on YouTube: on slow hardware and on fast hardware, the only difference is the ant noise.
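The "ant noise" claim does match how Monte Carlo ray tracers behave: each pixel is an average of noisy random ray contributions, and the noise shrinks roughly as 1/sqrt(samples), so more rays per second at a fixed frame rate buys a cleaner image rather than more frames. A minimal sketch of that statistical effect (the "scene" here is hypothetical, just a pixel whose true brightness is 0.5):

```python
import random
import statistics

def estimate_pixel(samples, rng):
    """One Monte Carlo estimate of a pixel's brightness: the average of
    `samples` noisy per-ray contributions drawn uniformly from [0, 1],
    whose true mean is 0.5."""
    return statistics.mean(rng.random() for _ in range(samples))

def noise(samples, trials=2000):
    """The visible "ant noise": standard deviation of the pixel
    estimate across many independent renderings."""
    rng = random.Random(42)
    return statistics.stdev(estimate_pixel(samples, rng) for _ in range(trials))

# More rays per pixel -> less noise, shrinking roughly as 1/sqrt(samples):
# quadrupling the sample count about halves the noise.
print(noise(4), noise(16), noise(64))
```

This is why both posters can be partly right: the renderer still assembles discrete frames (so FPS exists), but on progressive/Monte Carlo hardware the quality knob that RPS controls is noise per frame, not frame rate.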