That's fine, since it's not my intention to troll, but it will be a lot of text ;)
Originally Posted by mirv
First of all, nVidia is trolling here, because high levels of geometry are not possible with triangles, simply because triangles will be triangles: no truly rounded shapes. It is true that triangle rendering is faster than ray tracing up to the point where it reaches critical mass. For example: CryEngine, Crytek's engine that powers Crysis. The reason this engine is so beloved is not its graphical effects, but that it pushes the limits of triangle rendering: you can have an entire forest rendered with shadows. It is almost reaching the point where ray tracing actually becomes faster at doing this. Why is that? Because once you have the compute power to trace a ray from every pixel on your screen, geometry detail doesn't slow down the rendering process; it doesn't matter if you are in a square room or in a forest: the computations remain the same.
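The scaling argument above can be sketched with a toy cost model (all constants and units here are made up purely for illustration): rasterization touches every triangle at least once, so its work grows linearly with triangle count, while a ray tracer with a hierarchy does roughly log-depth work per pixel, almost independent of scene size.

```python
import math

def raster_cost(num_triangles, per_triangle_cost=1.0):
    """Rasterization: work grows linearly with triangle count."""
    return num_triangles * per_triangle_cost

def raytrace_cost(num_pixels, num_triangles, per_step_cost=1.0):
    """Ray tracing with a hierarchy: per-pixel work grows ~logarithmically
    with triangle count (one BVH traversal per pixel, depth ~ log2(T))."""
    return num_pixels * math.log2(max(num_triangles, 2)) * per_step_cost

# At low triangle counts rasterization wins; past some "critical mass"
# of geometry the per-pixel log cost of ray tracing wins instead.
pixels = 800 * 600
for tris in (10_000, 1_000_000, 100_000_000):
    print(tris, raster_cost(tris), round(raytrace_cost(pixels, tris)))
```

With this (crude) model the crossover lands somewhere between one million and one hundred million triangles, which is exactly the "critical mass" intuition: the asymptotics favor ray tracing, the constants favor rasterization.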
PC Perspective: Ray tracing obviously has some advantages when it comes to high levels of geometry in a scene, but what are you doing to offset that advantage in traditional raster renderers?
David Kirk, NVIDIA: I'm not sure which specific advantages you are referring to, but I can cover some common misconceptions that are promulgated by the CPU ray tracing community. Some folks make the argument that rasterization is inherently slower because you must process and attempt to draw every triangle (even invisible ones)—thus, at best the execution time scales linearly with the number of triangles. Ray tracing advocates boast that a ray tracer with some sort of hierarchical acceleration data structure can run faster, because not every triangle must be drawn and that ray tracing will always be faster for complex scenes with lots of triangles, but this is provably false.
There are several fallacies in this line of thinking, but I will cover only two. First, the argument that the hierarchy allows the ray tracer to not visit all of the triangles ignores the fact that all triangles must be visited to build the hierarchy in the first place. Second, most rendering engines in games and professional applications that use rasterization also use hierarchy and culling to avoid visiting and drawing invisible triangles. Backface culling has long been used to avoid drawing triangles that are facing away from the viewer (the backsides of objects, hidden behind the front sides), and hierarchical culling can be used to avoid drawing entire chunks of the scene. Thus there is no inherent advantage in ray tracing vs. rasterization with respect to hierarchy and culling.
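The backface culling Kirk describes boils down to one cross product and one dot product per triangle: if the triangle's normal points away from the camera, skip it. A minimal sketch (the vector helpers and names are my own, not from the interview):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_backfacing(v0, v1, v2, camera_pos):
    """True if a counter-clockwise-wound triangle faces away from the camera."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    view = sub(v0, camera_pos)        # ray from the camera toward the triangle
    return dot(normal, view) >= 0.0   # pointing away (or edge-on): cull it

# A triangle in the z=0 plane, counter-clockwise when seen from +z:
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(is_backfacing(*tri, camera_pos=(0, 0, 5)))   # front-facing -> False
print(is_backfacing(*tri, camera_pos=(0, 0, -5)))  # back-facing  -> True
```

In a closed mesh this alone skips roughly half the triangles, which is why rasterizers have used it for decades; hierarchical culling then throws out whole chunks of the scene on top of that.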
This is absolute nonsense! Each ray can carry information such as its brightness level, so you can push it through HDR and all that post-processing stuff. The second claim is that you can't do anti-aliasing. That is also totally not the case, because you could cast rays from the center of a square of four pixels and do some kind of texture-filtering post-processing. This way you can even enhance the rendered image, like what happens in reality-to-TV conversion; by applying different, non-square matrices you can almost achieve an analog PAL/NTSC type of picture with about 600x800 rays. And it doesn't cost the overhead that anti-aliasing does in rasterization, where it can be a serious bottleneck.
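The supersampling idea above can be sketched like this: instead of one ray through each pixel center, shoot several rays at sub-pixel offsets and average the results. The `trace` function here is a stand-in for a real ray caster (it just renders a hard diagonal edge so the averaging effect is visible), and all names are illustrative:

```python
def trace(x, y):
    """Placeholder scene: white above the diagonal y = x, black below."""
    return 1.0 if y > x else 0.0

def render_pixel(px, py, samples_per_axis=2):
    """Average a stratified grid of sub-pixel rays (samples_per_axis**2 rays)."""
    n = samples_per_axis
    total = 0.0
    for i in range(n):
        for j in range(n):
            # evenly spaced offsets inside the pixel footprint
            total += trace(px + (i + 0.5) / n, py + (j + 0.5) / n)
    return total / (n * n)

# A pixel straddling the edge gets a gray value instead of a hard 0/1 step:
print(render_pixel(10, 10, samples_per_axis=4))  # prints 0.375
```

So "rays either hit or they don't" is true per ray, but per pixel you still get smooth coverage values, exactly as with multisampling in a rasterizer.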
PC Perspective: Antialiasing is somewhat problematic for ray tracing, since the "rays" being cast either hit something, or they don’t. Hence post-processing effects might be problematic. Are there other limitations that ray tracing has that you are aware of?
Which is why I am stepping up to the plate to make an implementation.
PC Perspective: While the benefits of ray tracing do look compelling, why is it that NVIDIA and AMD/ATI have concentrated on the traditional rasterization architectures rather than going ray tracing?
David Kirk, NVIDIA: Reality intrudes into the most fantastic ideas and plans. Virtually all games and professional applications make use of the modern APIs for graphics: OpenGL(tm) and DirectX(tm). These APIs use rasterization, not ray tracing. So, the present environment is almost entirely rasterization-based. We would be foolish not to build hardware that runs current applications well.
Again, complete BS, since it depends on the way you code it. You can gain efficiency by casting rays on matrixed textures, and then only casting further rays in the parts of the texture that have already been ray cast. nVidia is either trying to cover its ass, or I am just plain genius :)
PC Perspective: Is there an advantage in typical pixel shader effects with ray tracing or rasterization? Or do many of these effects work identically regardless?
David Kirk, NVIDIA: Whether rendering with rasterization or ray tracing, every visible surface needs to be shaded and lit or shadowed. Pixel shaders run very effectively on rasterization hardware and the coherence, or similarity, of nearby pixels is exploited by the processor architecture and special graphics hardware, such as texture caches. Ray tracers don't exploit that coherence in the same way. This is partly because a "shader" in a ray tracer often shoots more rays, for shadows, reflections, or other effects. There are other opportunities to exploit coherence in ray tracing, such as shooting bundles or packets of rays. These techniques introduce complexity into the ray tracing software, though.
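The "bundles or packets of rays" Kirk mentions can be sketched as amortizing one bounding-box test over a group of coherent rays: the packet only descends into a BVH node if at least one of its rays can hit that node's box. This is a simplified illustration (the slab test and all names are my own, not NVIDIA's code):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a single ray intersect the axis-aligned box?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        if direction[axis] == 0.0:
            # Ray is parallel to this slab: must start inside it.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - origin[axis]) / direction[axis]
        t2 = (box_max[axis] - origin[axis]) / direction[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def packet_hits_aabb(packet, box_min, box_max):
    """Visit a BVH node only if at least one ray in the packet hits its box;
    the node fetch is thus amortized over the whole packet."""
    return any(ray_hits_aabb(o, d, box_min, box_max) for o, d in packet)

# Four coherent rays from one pixel neighborhood, all pointing down +z:
packet = [((x, y, 0.0), (0.0, 0.0, 1.0)) for x in (0.0, 0.1) for y in (0.0, 0.1)]
print(packet_hits_aabb(packet, (-1, -1, 5), (1, 1, 6)))  # True
print(packet_hits_aabb(packet, (5, 5, 5), (6, 6, 6)))    # False
```

The complexity Kirk alludes to shows up as soon as rays in a packet disagree (some hit, some miss, some bounce off in different directions after shading), which is when the coherence advantage starts to fall apart.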
I see this as an intermediate step on the path to ray tracing. The reason is that very soon CPUs will have so much multithreading that it won't matter, and with deadlines looming, professional programmers just take the easy, dirty, lazy route, and hybrid rendering is simply too complex and takes up more time. It also depends on what implementations appear in the future, and in the end implementations decide how stuff gets done, not nVidia. Sorry guys...
PC Perspective: Do you see a convergence between ray tracing and rasterization? Or do the disadvantages of both render types make it unpalatable?
David Kirk, NVIDIA: I don't exactly see a convergence, but I do believe that hybrid rendering is the future.
Rasterization hardware is only relevant until CPUs get powerful enough. Of course today rasterization is king, but only because it's the only way; the future will surely change that, simply because GPUs are an additional cost, and no-one will buy a GPU once there are powerful CPUs that can handle everything and have on-die graphics for desktop compositing and video playback. Green computing also comes to mind.
PC Perspective: In terms of die size, which is more efficient in how they work?
David Kirk, NVIDIA: I don't think that ray tracing vs. rasterization has anything to do with die size. Rasterization hardware is very small and very high-performance, so it is an efficient use of silicon die size. Rasterization and ray tracing both require a lot of other processing, for geometry processing, shading, hierarchy traversal, and intersection calculations. GPU processor cores, whether accessed through graphics APIs or a C/C++ programming interface such as CUDA, are a very efficient use of silicon for processing.
Couldn't have said it better...
PC Perspective: Because GPUs are becoming more general processing devices, do you think that next generation (or gen +2) would be able to handle some ray tracing routines? Would there be a need for them to handle those routines?
David Kirk, NVIDIA: There are GPU ray tracing programs now. Several have been published in research conferences such as Siggraph. Currently, those programs are roughly as fast as any CPU-based ray tracing program. I suspect that as people learn more about programming in CUDA and become more proficient at GPU computing, these programs will become significantly faster.