
Thread: The VDrift Racing Game Continues Speeding Up

  1. #11

    Oops, I meant to say isn't really. Stupid lack of post editing.

  2. #12
    Join Date
    Sep 2009
    Posts
    119

    I'll see if I can get any luck with my joystick and this game... I'll tell you guys how it works out later.

  3. #13
    Join Date
    Dec 2009
    Location
    Italy
    Posts
    176

    Quote Originally Posted by marek View Post
    I think it's pretty clear from the list of implemented techniques that this game needs float textures for most of its graphics awesomeness. This is a huge problem in the open source driver stack since it's patented.
    I'm really worried this will become more and more common as games get more sophisticated and Mesa progresses. Will it become common enough to render the open-source graphics stack useless? I hope not.

  4. #14
    Join Date
    Aug 2009
    Posts
    2,264

    Quote Originally Posted by kbios View Post
    I'm really worried this will become more and more common as games get more sophisticated and Mesa progresses. Will it become common enough to render the open-source graphics stack useless? I hope not.
    Limitations usually lead to much better solutions, because those have to be found by thinking outside the box.

    I mean, really... who needs floating point? Remove that comma! Change the meaning of the color values and see how that leads to much more computation speed.

    Really, a child could have figured that out...

  5. #15
    Join Date
    Jan 2009
    Posts
    621

    30fps means 33ms latency, that sucks. Good for a movie, bad for a game.

    Quote Originally Posted by kbios View Post
    I'm really worried this will become more and more common as games get more sophisticated and Mesa progresses. Will it become common enough to render the open-source graphics stack useless? I hope not.
    It's already too late. Float textures have been available in graphics APIs since 2003 or so. The only difference is that they became more common with 2004/2005 GPUs as those had more bandwidth and could do blending and multisampling when rendering to float textures, so they were more useful back then. Today float textures are a must, but they're often not mandatory, e.g. disabled at low graphics settings or to support Intel hardware.

    VINCENT> Feel free to continue writing random stuff.

  6. #16
    Join Date
    Aug 2009
    Posts
    2,264

    Quote Originally Posted by marek View Post
    30fps means 33ms latency, that sucks. Good for a movie, bad for a game.
    Good enough with motion blur. But like I said, for anything that requires reflexes and rapid movement, 60fps is the norm.

    VINCENT> Feel free to continue writing random stuff.
    Hey... 1,3m equals 1300mm, but hey... do whatever you like...

  7. #17
    Join Date
    Aug 2009
    Posts
    2,264

    OK, so in order not to sound like I'm shouting random crap, I've dug into this.

    Basically I already gave you a workaround for floating point textures, but floating point textures themselves are not patented. Funny, eh? What you mean is the algorithm for shadow mapping.

    We are talking about US Patent 7450123.

    This patent describes a few things:
    1. the use of layers;
    2. the algorithm for calculating how much light shines on a pixel using the z-buffer;
    3. the placement in the rendering pipeline.

    Now the workaround:

    1. Use of layers.
    There are two layers:
    -the texture layer (oh yeah fscking rly?)
    -the depth layer

    Why not mix those layers into one layer (preprocessing)?
    For example after each pixel color value comes the depth value.
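
    That interleaving idea can be sketched in a few lines of Python. This is a toy illustration of the layout described above (one buffer where each color value is followed by its depth value); the names and the exact byte layout are made up here, not taken from any real driver:

```python
# Toy sketch of the "one interleaved layer" idea: instead of a separate
# color texture and a separate depth texture, store each pixel as
# R, G, B, A followed by its depth value in a single flat buffer.
import struct

def pack_pixels(pixels):
    """pixels: list of (r, g, b, a, depth) tuples; returns one flat buffer."""
    buf = bytearray()
    for r, g, b, a, depth in pixels:
        # 4 unsigned color bytes + one 32-bit float depth = 8 bytes per pixel
        buf += struct.pack("<4Bf", r, g, b, a, depth)
    return bytes(buf)

def unpack_pixel(buf, i):
    """Read pixel i back out of the interleaved buffer."""
    return struct.unpack_from("<4Bf", buf, i * 8)

buf = pack_pixels([(255, 0, 0, 255, 0.25), (0, 255, 0, 255, 0.75)])
print(unpack_pixel(buf, 1))  # (0, 255, 0, 255, 0.75)
```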

    2. the algorithm
    The algorithm puts the depth layer between the texture layer and the light source. The Z-buffer and Z'-buffer values for each pixel in the texture layer are calculated to determine how much light shines on each pixel color value in the texture layer. After that the color values are 'corrected'.

    So why not calculate the angle at which the light source shines on a given 'deep' pixel, and the larger the angle to the light source, the darker the pixel becomes?

    This eliminates the problem of doing anything with the z-buffer, because the further away the light source is, the shallower the angle will be. Of course, another step can then be taken afterwards to calculate the brightness of the entire rendered depth texture as if it were a normal texture, so the further away the light is, the less bright the rendered depth texture will be. This avoids the algorithm entirely while maintaining correctness.
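
    The angle-plus-distance idea above can be sketched roughly like this. It's a toy model with a made-up falloff and made-up names, shown only to make the two-step structure concrete; it is deliberately not the patented depth-comparison technique:

```python
# Toy sketch of the angle-based shading idea: darken a pixel by the
# angle between its surface normal and the direction to the light
# (step 1), then attenuate the result by light distance (step 2).
import math

def shade(normal, to_light, light_distance):
    """Return a brightness factor in [0, 1] for one pixel.

    normal and to_light are unit 3-vectors as (x, y, z) tuples."""
    # cosine of the angle between surface normal and light direction
    dot = sum(n * l for n, l in zip(normal, to_light))
    angle_term = max(0.0, dot)                    # steeper angle -> darker
    distance_term = 1.0 / (1.0 + light_distance)  # farther light -> dimmer
    return angle_term * distance_term

# light straight overhead, 1 unit away: full angle term, halved by distance
print(shade((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 1.0))  # 0.5
```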

    3. placement in the rendering pipeline.
    The third thing described in this patent is the placement of this algorithm in the pipeline. Now that we have chopped the algorithm up into multiple passes, you can literally place it almost anywhere you like, even avoiding the sub-pipeline of the algorithm itself as described by the patent.



    C'mon how hard was that?

  8. #18
    Join Date
    Jan 2009
    Posts
    621

    I don't mean any algorithm for shadow mapping. The patent for float colorbuffers is actually US Patent #6,650,327, it is owned by SGI, and their statement in the ARB_texture_float specification is pretty clear (they sued ATI in the past because of it).

    As a graphics developer (that's what I was paid for), I use float textures all the time for various effects, and I would not like to design a rendering engine without them. I guess VDrift developers would agree with me here. When I request floats, I don't want scaled ints, because my algorithms would not work with them, clear?
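
    A small illustration of why substituting scaled ints silently breaks a float-based effect. The Reinhard tone-mapping curve below is a standard operator; the 8-bit storage helper and the sample value are made up for illustration:

```python
# An HDR pipeline stores radiance values above 1.0 (here a bright
# highlight of 8.0). A normalized 8-bit integer texture clamps
# everything into [0, 1], so the highlight is destroyed before the
# tone-mapping step ever sees it.

def to_unorm8(x):
    """Store a value in a normalized 8-bit channel: clamp to [0,1], quantize."""
    return round(max(0.0, min(1.0, x)) * 255) / 255

def reinhard(x):
    """Simple Reinhard tone mapping: maps [0, inf) into [0, 1)."""
    return x / (1.0 + x)

highlight = 8.0                        # HDR radiance, well above 1.0
print(reinhard(highlight))             # ~0.889: bright spot survives
print(reinhard(to_unorm8(highlight)))  # 0.5: clamped to 1.0, highlight gone
```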

    As a driver developer, I would not mind having float textures and colorbuffers in Mesa and the driver I maintain, but most Linux distributions will not enable it by default since it's potentially infringing.

  9. #19
    Join Date
    Aug 2009
    Posts
    2,264

    6,650,327 covers ramming floating-point operations through graphics cards.

    327, however, does not cover non-drivers, like game engines, unless they perform float geometric calculations and access the framebuffer themselves while doing it.

    Why not have the driver convert floats to scaled ints? It requires some work, but this could be seen as optimisation work, since floating point operations are much slower than integer ones.
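
    A rough sketch of what such a conversion would look like. The 16-bit "8.8" split below is an arbitrary choice made for illustration, and choosing it is exactly where this idea runs into trouble:

```python
# Emulate a float channel with a 16-bit scaled ("fixed point") signed
# integer: 8 integer bits and 8 fractional bits.
SCALE = 256

def float_to_fixed(x):
    """Convert a float to 16-bit signed fixed point, clamping on overflow."""
    return max(-32768, min(32767, round(x * SCALE)))

def fixed_to_float(n):
    """Convert the stored integer back to a float."""
    return n / SCALE

print(fixed_to_float(float_to_fixed(0.5)))     # 0.5 -- representable exactly
print(fixed_to_float(float_to_fixed(0.001)))   # 0.0 -- too small, vanishes
print(fixed_to_float(float_to_fixed(1000.0)))  # 127.99609375 -- clamped
```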

  10. #20
    Join Date
    Jan 2009
    Posts
    621

    Because you would lose precision. float has 24 bits of precision, but the range is huge. 16-bit int has a sucky range, that's 5 fucking digits, where would you put the fixed decimal point? There would be terrible losses on both sides of the point, and that's the best int my r500 can do.
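
    The numbers behind that argument, worked out in stdlib Python (the 8.8 fixed-point split is just one arbitrary placement of the point; any other placement ruins the other end of the range instead):

```python
# A 32-bit float carries a 24-bit significand, so it keeps roughly 7
# significant decimal digits at ANY magnitude. A 16-bit fixed-point
# value has only 65536 steps total at a fixed absolute resolution.
import struct

def as_float32(x):
    """Round a Python double through IEEE 754 single precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# float32: precision holds at both small and large magnitudes
print(as_float32(100000.25) == 100000.25)  # True: fits in 24 bits exactly
print(as_float32(0.0001))                  # ~0.0001, tiny relative error

# 16-bit 8.8 fixed point: absolute steps of 1/256, so small values vanish
step = 1 / 256
print(round(0.0001 / step) * step)  # 0.0 -- the value is simply gone
```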

    Graphics algorithms are often tuned for the underlying data type to get the most from the least. If you change it, you will break it, and next day you get tons of bug reports.
