
Thread: A Low-Latency Kernel For Linux Gaming

  1. #31
    Join Date
    Mar 2012
    Posts
    122

    Default

    Quote Originally Posted by kraftman View Post
    The sad thing is you're still mixing things up. The stock Linux kernel is much more responsive (exactly in terms of latency) than Windows kernels, so you don't have to use low-latency one to make it better.
There's nothing sad about me responding to someone who suggested that the rt kernel has a max latency of around 30 ms, and stating that's too slow for an interactive game. I never claimed those, nor any other figures, were true, so I don't particularly see your point that I'm mixing _anything_ up. Furthermore, I didn't claim that either Linux or Windows was more responsive, only that the numbers quoted were more competitive. I'd hazard a guess that the amount of work required to produce an interactive game may vary between platforms, but I don't particularly know anything about the platforms under the hood, so I wouldn't and didn't suggest either was faster. Perhaps you could read what I've said, and understand the context of the discussion, before you jump the gun in future. Furthermore, if you are going to disagree with me (or rather, in this case, with what you've imagined I've said) then you ought to actually back that up, otherwise you come across as a frothing-at-the-mouth fanboy, which'll quickly get you ignored.

@minuseins - Whilst I'd be all too happy to game at 10 ms latency with regard to pretty much all my equipment, when faced with opponents suggesting the human eye can't see over 24 fps, I think actually demonstrating that the differences between 100 and 1000 fps are measurable would be beneficial. Today we generally sit with game engines that have 33-50 ms tick rates and monitors with a delay of around the same figure, and getting anything faster is significantly harder than it was 10 years ago. The industry is going in the wrong direction because of the "good enough" myths, and as someone who'd like to get things faster, I don't think there's any issue with setting the ideal speeds fairly high, as long as it gets us moving in the right direction.

With regards to latency, modern games generally generate very little network traffic; however, you are right that there is quite a difference between 33 ms tick rates and 1 ms tick rates - you'd be talking about over thirty times as much traffic on a 32-man server. I'm not sure off the top of my head whether that'd be completely unfeasible, but it would exclude many from online gaming; as a competitive LAN player, though, the option would be nice. On top of that, input lag is additional to network latency (and server tick rates), as you're responding to the data you've been given. As you previously said, you're already behind, so limiting any further input lag is still beneficial and shouldn't be ignored. Alas, the competitive gamer doesn't get a whole lot of say in the matter, so at best I can cross my fingers and hope they don't artificially limit those OLEDs (if they ever release computer monitors) to stupidly low refresh rates (i.e. 60 Hz, which freaking sucks).
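    For a rough sense of scale, here is a back-of-envelope sketch; the per-update packet size and the assumption that the server sends every client the state of all players are illustrative guesses, not figures from any real game.

```cpp
// Back-of-envelope sketch of per-client bandwidth at different server tick
// rates. The player count and bytes per state update are assumed values.
#include <cstdio>

int main() {
    const int players = 32;              // assumed server size
    const int bytes_per_state = 64;      // assumed size of one entity update
    const int tick_rates[] = {30, 1000}; // ~33 ms ticks vs 1 ms ticks

    for (int hz : tick_rates) {
        // Each tick the server sends every client the state of all players.
        double bytes_per_sec = static_cast<double>(hz) * players * bytes_per_state;
        std::printf("%4d Hz tick: ~%.1f KiB/s per client\n", hz, bytes_per_sec / 1024.0);
    }
    return 0;
}
```

    At these assumed numbers that is roughly 60 KiB/s per client at 30 Hz versus about 2 MiB/s at 1000 Hz, which is why a 1 ms tick would mostly stay a LAN luxury.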

  2. #32

    Default

Modern games have stabilized around two numbers: 30 FPS (mostly console games) and 60 FPS (PC games). 30 FPS is the lowest acceptable FPS, and it's chosen out of necessity, because it turns out that some people prefer (with their dollars) richer graphics (possible with 33 ms per frame) over a smoother experience.

As for network games, network latency (ping to the server) is a rather orthogonal aspect to visual latency, because of two factors:

1) ALL network games [try hard to] predict the server response in advance (by essentially running the same code - on possibly stale/incomplete data - on the clients) and then correct/compensate/lerp after getting authoritative server data. Games would be unplayable if there were a delay of 20-50 ms between you pressing forward and actually moving forward.

2) The game tick (i.e. when game entities are updated, when they "think" - which in networked client-server games only happens on servers) already happens at a different (often lower) rate than drawing; see the sketch after this list. For user experience, it's much more important to update animations (which are often local to clients), non-game-affecting physics (particles, smoke, etc.), and other "visual" things in a network game, which happens client-side.
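
    A minimal sketch of point 2, assuming a fixed 30 Hz simulation tick decoupled from rendering; the function names are placeholders, and on a client the simulation step is also where the prediction from point 1 would run.

```cpp
// Fixed-timestep simulation decoupled from rendering (sketch).
#include <chrono>

using Clock = std::chrono::steady_clock;

// Placeholder simulation tick: on a client this is also where predicted
// movement would be applied and later corrected against server data.
void update_simulation(double dt) { (void)dt; }

// Placeholder draw call; 'alpha' blends between the last two ticks so
// animation stays smooth even though the game tick is slower.
void render(double alpha) { (void)alpha; }

int main() {
    const double tick_dt = 1.0 / 30.0;   // ~33 ms simulation tick
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 100000; ++frame) {   // bounded loop for the sketch
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run as many fixed ticks as wall time demands...
        while (accumulator >= tick_dt) {
            update_simulation(tick_dt);
            accumulator -= tick_dt;
        }

        // ...but render on every pass, at whatever rate the display allows.
        render(accumulator / tick_dt);
    }
    return 0;
}
```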


P.S. When talking about games, it's misleading to use FPS. It's better to compare frame time in milliseconds, and for latency tests, it's better to compare the standard deviation of this value instead of an "average FPS".
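
    As a sketch of that suggestion, something like the following reports mean frame time and its standard deviation (jitter) instead of an average FPS; the sample values are made up.

```cpp
// Summarize frame times by mean and standard deviation rather than avg FPS.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Made-up frame times in milliseconds, including one 33 ms hitch.
    std::vector<double> frame_ms = {16.6, 16.8, 16.5, 33.1, 16.7, 16.9, 16.4};

    double mean = 0.0;
    for (double t : frame_ms) mean += t;
    mean /= frame_ms.size();

    double var = 0.0;
    for (double t : frame_ms) var += (t - mean) * (t - mean);
    var /= frame_ms.size();

    // An "average FPS" figure would hide the 33 ms spike; the std dev exposes it.
    std::printf("mean frame time: %.2f ms, std dev: %.2f ms\n", mean, std::sqrt(var));
    return 0;
}
```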

  3. #33
    Join Date
    Jun 2011
    Posts
    78

    Default

    Quote Originally Posted by JanC View Post
    I want your hardware that jitters between 100-1000 frames/s...
What if it's just a custom glxgears demo? 1600 fps is fairly easy to reach on that one; then just add a rotatable camera and a script which produces the weird sine curve of reaction time. It would be really annoying to move around smoothly.

  4. #34
    Join Date
    Mar 2011
    Posts
    106

    Unhappy

    Quote Originally Posted by kraftman View Post
    The sad thing is you're still mixing things up. The stock Linux kernel is much more responsive (exactly in terms of latency) than Windows kernels, so you don't have to use low-latency one to make it better.
By the way, talking about "wiwi responsiveness" is being DUMB...

Let me explain: have you tried to do a search in the registry of Win7 x64?
Have you tried to save it?
A few days ago I used NT Regopt http://www.larshederer.homepage.t-online.de/erunt/

At the end it shows a box with the results, and I saw that the registry of my newly installed wiwi7 is above 2 GB...

M$ does worse with each wiwi.

I can't wait to play with Linux and Steam ;']

  5. #35
    Join Date
    Aug 2011
    Posts
    65

    Default

Just my two cents, as a guy with his own thoughts; I didn't study in any of these fields.

Generally, since some people assume "1 ms reaction time must mean 1000 fps"... this is generally _not_ true. In a "normal/simple" game engine you have a step/redraw loop that tries to use the system to the max, so there is a coupling between the two to some extent. But it would also be possible to draw a frame 1 ms after the input is recorded (this is the lag) and then wait 9 ms before fetching input again (which equals 100 fps being drawn), during which you can e.g. run other stuff that may not need to be this instant (animation updates, for instance), so the loop would look something like this:
    input --1ms--> action/redraw completed --9ms--> physics/animation/shadow pre-calculations, ...
So you need hardware that can push out a frame in less than 1 ms (let's say 0.2 ms for updating content, 0.8 ms for drawing), but you then don't have to drive it in a tight loop but rather in a burst mode (and with some logic I am sure you can increase the available render time for at least part of the scene/output, spreading the load further without sacrificing latency).
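
    A rough sketch of that burst-mode idea, assuming a 10 ms (100 fps) frame budget and placeholder functions for the latency-critical and deferrable work:

```cpp
// Burst-mode loop: react to input quickly, then spend the rest of the
// frame budget on deferrable work before sleeping until the next slot.
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Placeholder for the latency-critical part: sample input and push a frame
// out as fast as the hardware allows (~1 ms budget in the example above).
void poll_input_and_react() {}

// Placeholder for the less critical part: animation updates, physics and
// shadow pre-calculation, and so on.
void do_background_work() {}

int main() {
    const auto frame_budget = std::chrono::milliseconds(10);   // 100 fps pacing

    for (int frame = 0; frame < 1000; ++frame) {
        auto frame_start = Clock::now();

        poll_input_and_react();   // input-to-frame path kept as short as possible
        do_background_work();     // fill the remaining budget with deferrable work

        // Sleep away whatever is left of the 10 ms slot before the next burst.
        std::this_thread::sleep_until(frame_start + frame_budget);
    }
    return 0;
}
```

    The point is that the input-to-frame path stays short even though the loop only spins 100 times per second.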

  6. #36
    Join Date
    Dec 2009
    Posts
    29

    Default Phoronix could learn a thing or two

    Check out the techreport.com GPU testing. They do meaningful benchmarks showing minimum framerates and render latency.

    That's where it really counts for gaming.

I suspect this kernel would show better minimum fps and less jitter, as others have mentioned.

I'd also be curious to see this compared to the -ck low-latency kernel patch set.

  7. #37
    Join Date
    Jan 2009
    Posts
    462

    Default

What a strange thread. A collection of unverifiable statements of fact and logical arguments that challenge the intellect. Low latency is a good thing. High throughput is a good thing. The lower you can make the latency without affecting throughput, the better. In certain cases where throughput is static or completely arbitrary, additional latency-related gains can be made. In cases where latency is arbitrary, throughput-related gains can be made.

There's a theoretical limit to both latency and throughput. Unless some new magical scheduler algorithm or multifaceted mystical hardware comes about, we're not going to get an order-of-magnitude improvement out of today's commodity hardware. It's my opinion that we should stop trying to squeeze that extra 10% out of an already well-optimized kernel (with low-latency options available), and start focusing on new tech and/or reducing the things in the kernel that cause the latency to begin with. Schedulers are fun and all, but we're not going to improve much if we keep re-implementing the same thing over and over again. Fair queuing (or whatever CFS is doing) was probably the last significant leap we'll see for a while.

In the meantime, if you want your game to run smoothly when background tasks are running, Linux provides a "nice" way of facilitating this.
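
    As a sketch of what that "nice" hint amounts to, a background job could demote itself using the same mechanism the nice command uses; the value 19 here is simply the conventional lowest priority.

```cpp
// Demote this process to the lowest scheduling priority, equivalent in
// spirit to launching it with `nice -n 19 ...`, so a running game keeps
// getting the CPU whenever it wants it.
#include <sys/resource.h>
#include <cstdio>

int main() {
    if (setpriority(PRIO_PROCESS, 0 /* this process */, 19) != 0) {
        std::perror("setpriority");
        return 1;
    }
    std::puts("running with nice 19");
    // ... heavy background work would go here ...
    return 0;
}
```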


  8. #38

    Default Latency :)

That the benchmarks are mostly the same is of course extremely positive. Then there is no excuse not to run a low-latency kernel. And a low-latency kernel improves the smoothness of the frames (less timing jitter).

And frame rates above 73 Hz really aren't useful anyway.


Good topic, not so good article. I completely agree on the jitter issue. Low latency is what gave vintage systems such a good reputation. When I reduced latency on Windows recently, for instance, the lower I got it, the more chopping was removed when the character was moving. Then it felt like an Atari 512. Then an Amiga, then single-tasking Mac OS. And that is where you want to be: to feel like the game is the only task running. ULTRASMOOTH. That just sets an expectation of quality, which may be, for instance, why systems like the Amiga were renowned for quality software. That is just the feel you got interacting with the machine.

I also earlier made a config on Linux that gave me reliable 0.3 ms audio stream latency. The games looked amazingly good; it gives you the feel of hardcoded asm on custom arcade machines.

While I researched drivers on Windows (Windows Update drivers are out of date), many people did not recognize this issue. Even though these drivers go through several layers, most people just update the graphics driver and are happy. They don't realize you have to have current drivers in the signal chain before it. Some even say, supposedly wisely, "the best tweak is: install it and leave it alone". LOL, there is going to be so much latency on such a Windows install, because their kernel is so poor (and the gaming choppy).

Obviously many people on Linux don't understand this either. Rarely do you see low-latency kernels for games, many argue against them, and the author of the article didn't notice the improvement in frame jitter.

    Great to see all those responses from those who do understand this though. That should be the mainstream thinking on this.

    I also wrote LKML on this. https://lkml.org/lkml/2012/8/30/325

Maybe those who just follow the standard configs in their distros will now get low-latency configs, so that desktop users can truly see how great games can run here (better than on any other system). Because it really is odd when people talk about Linux gaming, run choppy standard Ubuntu kernels, and even benchmark on that. That is not interesting.

And of course high FPS in a benchmark means nothing. Nobody uses extremely high screen refresh rates. In fact, if you think of an optimal screen refresh according to "minimal psychovisual noise" principles, 72.734 Hz is the best. Just how much is a flickering dot, visible on screen for one frame at 1000 Hz, going to mean to you? It will be noise. Other research has also found 72 Hz to be optimal in a work environment. 72.734 Hz is a very still and quiet, non-noisy refresh rate.

So then we talk about 72/73 FPS with low jitter, on a low-latency kernel, as the optimal target.

And even though Wayland is more optimized than the X server, I did not notice the 30 ms lag claimed in this thread when trying this with Doom 3 on Linux. It was extremely smooth. It will of course be great fun to test it on Wayland as well.

While nice can improve things slightly (and only slightly), the major benefits come from configuring the kernel for low latency.
After that config, the best process-level improvement would of course be to have the gaming process and its dependencies (its whole signal flow) run with higher priority (this is not true on Windows, which needs processes at idle so that CPU 2 doesn't choke CPU 1 at max load),
to the point of it being almost the only task running.
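
    A hedged sketch of that last idea: giving the game process a real-time scheduling class, roughly what `chrt -f 10 ./game` does. It needs root or CAP_SYS_NICE, and a runaway SCHED_FIFO task can starve a CPU, so treat it as an illustration rather than a recommendation.

```cpp
// Put the calling process (e.g. the game) into the SCHED_FIFO real-time
// class so it runs ahead of normal SCHED_OTHER tasks.
#include <sched.h>
#include <cstdio>

int main() {
    sched_param param{};
    param.sched_priority = 10;   // modest real-time priority

    if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &param) != 0) {
        std::perror("sched_setscheduler");
        return 1;
    }
    std::puts("game process now runs SCHED_FIFO ahead of normal tasks");
    // ... launch/continue the game's main loop here ...
    return 0;
}
```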

    Peace Be With You.
    Last edited by Paradox Uncreated; 08-31-2012 at 05:44 AM.
