Exploring Input Lag Inside and Out

by Derek Wilson on 7/16/2009 12:45 PM EST

  • psilencer - Tuesday, August 18, 2009 - link

    First time poster, so be gentle!

    For each of the cases you analyze, you take the bandwidth and treat the lag as the inverse of the bandwidth. This is incorrect; lag and bandwidth are not related that way. Consider a road with a constant speed limit. Lag would be related to the length of the road (the time it takes for a signal starting at A to reach its destination B). Bandwidth is related to the number of lanes (how many signals you can send from A to B within some time). Although there is some relationship between the two, it is not the inverse.

    With this in mind, everything analyzed by this article is incorrect.

    Consider a mouse that reports 500 times per second. Taking the inverse gives 2ms, which is the average time between completed reports. However, you don't consider that multiple reports may be pipelined inside the mouse. Say your mouse has a camera, some simple processing logic to decipher the data from the camera, and then the USB interface. For simplicity, assume each of these units processes one and only one report at a time (so bandwidth and latency have the inverse relationship within a stage). Each section then works at 500 reports/second and has a latency of 2ms, but the total latency of the mouse would be 6ms, since each report has to pass through all three sections.
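
    As a rough illustration of that pipelining point, here is a small sketch (the three equal 2ms stages are made-up numbers, not a model of any real mouse):

        # Throughput vs. pipelined latency: three back-to-back stages,
        # each able to finish one report every 2 ms (hypothetical numbers).
        stage_times_ms = [2.0, 2.0, 2.0]           # camera, processing logic, USB interface

        throughput = 1000.0 / max(stage_times_ms)  # limited by the slowest stage -> 500 reports/s
        latency_ms = sum(stage_times_ms)           # one report must cross every stage -> 6 ms

        print(f"throughput: {throughput:.0f} reports/s")
        print(f"latency of a single report: {latency_ms:.0f} ms")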


    This also applies to the CPU and GPU.

    Sorry, if I'm completely wrong, just ignore this =P

  • siberx - Thursday, July 30, 2009 - link

    Fantastic article - I smile each time AnandTech posts one of these groundbreaking articles that just cut straight through the BS and get to the truth behind issues that have been muddled in hearsay and rumours for years.

    I am personally particularly sensitive to input lag, and with my current LCD even in a fast game like TF2 or UT I find the lag intolerable if vsync is enabled - I have to run with it disabled in just about any game demanding fast response.

    My question, however, is about the effect that multi-GPU solutions have on input lag. I have never seen anything describing exactly how ATI's and nVidia's multi-GPU solutions affect lag, or how the different multi-GPU rendering modes (AFR, SFR, etc.) affect it. I would assume that using a multi-GPU solution would, in most cases, incur at least an extra frame of delay to mix or move frames between cards, but an actual analysis of this would be very useful. It may, in fact, be worthwhile to disable multi-GPU when running an older twitch game to improve latency...

    Additionally, testing with a couple other LCDs to see how they compare latency-wise would be interesting - I get the feeling your Dell panel is a fair step faster than your standard-issue modern panel doing overdriving to reduce switching times...
  • race2 - Saturday, August 1, 2009 - link

    When you say that all triple buffering other than NVIDIA's OpenGL driver implementation is simply a one-frame flip queue, do you mean that D3DOverrider's forced triple buffering is a one-frame flip queue as well?
  • race2 - Saturday, August 1, 2009 - link

    Sorry, first time posting here. Previous comment was not meant to be a reply.
  • arcsign - Sunday, July 26, 2009 - link

    It's nice to know that the whole input lag issue is finally getting some attention. I've been trying to find ways to improve it, without buying new hardware, for a little while now, and came across some options that might be of interest for future articles. (I don't have access to much in terms of equipment to measure these things, so my testing hasn't been so much empirical as it has "well, that seems a bit better... maybe.")

    -- The two that stick out in my mind as far as software options go (at least for WinXP) are the boot.ini switches "/INTAFFINITY" and "/TIMERES=xxxxx". The former assigns all interrupts to the highest-numbered core, and the latter changes the resolution of the Windows kernel timer.
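
    For anyone who hasn't edited boot.ini before, the switches simply get appended to the OS entry. A sketch of where they go (the ARC path here is only an example, and "xxxxx" is a placeholder for whatever timer resolution value you settle on):

        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /INTAFFINITY /TIMERES=xxxxx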

    -- It would also be interesting to see what effect overclocking has on various latencies. I've noticed that Windows doesn't always agree with the BIOS/CPU-Z about the processor's speed, and in cases where a game uses Windows performance counters to calculate time deltas for networking/input/etc., any counter that depends on an accurate CPU speed could present a problem. (This isn't directly related to input lag, but it is related to the interaction between the game and the player...)

    -- AHCI multimedia timers versus TSCs (more of an issue in XP than in more recent OSes, as I believe Vista and 7 both require the use of the AHCI timers) may also have a significant effect on gameplay.

    Anyways, nice article, and keep up the good work.
  • William Gaatjes - Saturday, July 25, 2009 - link

    Hello, you might find something interesting on the website of Avago.

    Avago Technologies manufactures optical mouse chips.
    Another manufacturer is SGS-Thomson (now STMicroelectronics).

    Here is a link to avago chips.

    http://www.avagotech.com/pages/en/navigation_inter...

    You might find some information you seek there.

    I noticed you were writing about 3 key numbers, but you mention 4 on the page "Reflexes and Input Generation".
  • William Gaatjes - Saturday, July 25, 2009 - link

    And a very nice article, I forgot to add.

  • camylarde - Tuesday, July 21, 2009 - link

    Now all that remains is to incorporate a multiplayer FPS game and dissect how network communication affects it, and how that knowledge could be used to pick out wallhackers and aimbotters from the regular pack just by watching a demo of them and doing basic math on their reported network lag.
  • DerekWilson - Monday, July 20, 2009 - link

    This is something we would love to do, and while it is on the table, we may not have the time in the near term to get something like that up.

    But trust me, we've been thinking of many cool ways to use high speed footage :-)
  • JimboMahoney - Monday, July 20, 2009 - link

    I also found Fallout 3 extremely laggy until I edited the Fallout.ini file from this

    iPresentInterval=1

    to this:

    iPresentInterval=0

    (Thanks to TweakGuides.com for this tip).

    It seems that Fallout 3 has VSync enabled at all times, even if you disable it in the menu, unless you make this change. The game was pretty unpleasant to play before I did this (I never use VSync).
  • DerekWilson - Monday, July 20, 2009 - link

    This is how we disable vsync.

    We got the same results in lag with present interval set to either 1 or 0 ... it really didn't make a measurable difference in our testing.
  • DerekWilson - Monday, July 20, 2009 - link

    To clarify a little, this is why I think Gamebryo (or Bethesda) must do some sort of internal timing that strictly enforces framerate, CPU time, or something else based on a factor other than present interval.
  • NetSoerfer - Monday, July 20, 2009 - link

    On page 5, the fifth paragraph begins with "If our frametime is just longer than 16.67ms...". The next paragraph is meant to describe the opposite but begins with "When framerate is lower than refresh rate...".

    Longer frametime equals lower framerate. The second paragraph should read "When framerate is higher than refresh rate..." or "When frametime is shorter than the refresh interval...".
  • DerekWilson - Monday, July 20, 2009 - link

    No, the next paragraph is not meant to describe the opposite case ...

    The first paragraph you cite describes the effects of double-buffered vsync on framerates both lower than refresh (first half of the paragraph) and higher than refresh (second half of the paragraph).

    The second paragraph you cite describes the effects of a 1 frame flip-queue with vsync or triple buffering on framerates that are lower than refresh.

    Sorry if that wasn't clear.
  • Per Hansson - Sunday, July 19, 2009 - link

    Hi, I tried your recommendation of "overclocking" the mouse (erm, we are really just changing the polling rate of the USB port, not the mouse, right?)

    Anyway, I've got a MS IntelliMouse Explorer v3.0
    When I run "Direct Input Mouse Rate" it shows my lag as 8ms at 125Hz...

    So I used the hidusbf driver and changed the frequency to 1000Hz; this resulted in 1.4ms and 700Hz with my mouse...

    Now, to begin with I had the mouse speed set to max in the IntelliPoint mouse setup, and also "enhance pointer precision" enabled...

    And at 125Hz / 8ms lag that gave me a good speed, a bit slower than I had in Win2K but still acceptable (current OS is XP x64)
    But now with my "overclocked" mouse the movement is way too slow; I need a bigger mousepad to move the pointer all the way across my monitor
    Is this intended, or just due to the MS drivers or whatever?

    I was planning on getting the Microsoft Habu gaming mouse developed by Razer, because the current iteration of the Explorer 3.0 is a POS with crap micro-buttons that keep failing. I think I've been through 3 of these in the last 2 years; I even replaced the switches with ones bought at Elfa, but they also failed after a couple of months.
    Anyway, will all mice have this speed issue at high mouse rates? (above 125Hz)
  • MarktheC - Monday, July 27, 2009 - link

    Re: "But now with my "overclocked" mouse the movement is way too slow; I need a bigger mousepad to move the pointer all the way across my monitor. Is this intended, or just due to the MS drivers or whatever?"

    Yes, this is "how it works" (but it can be fixed).

    What's happening is this: at 125 Hz and a given on-the-pad mouse speed, each mouse report might be returning (say) 16 counts/report.
    The XP/Vista/7 "Enhance pointer precision" code uses the "16" value to look up an acceleration curve (SmoothMouseXCurve/SmoothMouseYCurve) and apply a scaling factor to the mouse input (approx x1.4 when the mouse count is 16). The pointer moves ~1.4 * 16 = ~22 pixels.

    If the report rate is changed to 1000 Hz, each mouse report returns 2, 2, 2, 2, 2, 2, 2, 2 instead (the same gross movement of 16, but spread over 8 times as many reports). Now the "Enhance pointer precision" code uses "2" to look up the acceleration curve and returns a scaling factor of ~0.6. The pointer moves ~0.6 * 2 * 8 = ~9 pixels, and you perceive the mouse as slow.
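
    A quick sketch of that arithmetic (the 1.4 and 0.6 factors are the illustrative values above, not Windows' actual ballistics curve):

        # Illustrative only: scaling factors taken from the example above.
        def scale_factor(counts_per_report):
            return 1.4 if counts_per_report >= 16 else 0.6

        def pixels_moved(counts_per_report, reports):
            return scale_factor(counts_per_report) * counts_per_report * reports

        print(pixels_moved(16, 1))  # 125 Hz: one report of 16 counts    -> ~22 pixels
        print(pixels_moved(2, 8))   # 1000 Hz: eight reports of 2 counts -> ~9.6 pixels (feels slow)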

    This is (somewhat) described here:
    http://www.codinghorror.com/blog/archives/000977.h...
    http://www.microsoft.com/whdc/archive/pointer-bal....

    BUT Microsoft made a silly design mistake!:
    http://donewmouseaccel.blogspot.com/2009/06/out-of...

    A solution is to tweak the Registry: the SmoothMouseXCurve and SmoothMouseYCurve values under HKEY_CURRENT_USER\Control Panel\Mouse.
    Treat each group of 4 bytes as a 32-bit integer and divide it by 8 (for 1000 Hz). AFAIK, doing this for both SmoothMouseXCurve and SmoothMouseYCurve should return the acceleration back to normal.
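
    A sketch of that byte-twiddling in Python, following the recipe above (the exact curve layout is an assumption here, so export/back up the key before writing anything back):

        import struct

        def scale_smoothmouse_curve(blob: bytes, divisor: int = 8) -> bytes:
            # Treat each group of 4 bytes as a 32-bit little-endian integer
            # and divide it by `divisor` (8 for a 1000 Hz report rate).
            count = len(blob) // 4
            values = struct.unpack(f"<{count}I", blob)
            return struct.pack(f"<{count}I", *(v // divisor for v in values))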

    A BETTER solution may be to stick with "Enhance pointer precision" and 125 Hz for normal Windows work, and use 1000 Hz only for gaming, AND TURN OFF "Enhance pointer precision" when gaming (if required by the game: most modern games use DirectX to read the mouse, which ignores the "Enhance pointer precision" checkbox anyway).

    Re: "I was planning on getting the Microsoft Habu ... will all mice have this speed issue at high mouse rates? (above 125Hz)"

    I don't know: I expect the Habu driver will do the right thing and not need any fix as above, but I don't know...
  • DerekWilson - Monday, July 20, 2009 - link

    Actually ... the reports/second rate should have zero impact on the speed of the pointer. I do say should -- something odd could be happening, like the mouse dropping counts in order to assemble reports that fast (i.e. your mouse could be too overclocked and might be doing things wrong). But I am not a hardcore mouse overclocker myself, so I'd do a little research on it.

    I would recommend, if your mouse can't actually hit 1000Hz, to drop it down to 500 reports/second instead of 1000 ... it should be more consistent that way, and maybe it will fix your pointer speed issue.

    The CPI (reported as DPI) will have an impact on pointer speed. But so will things like setting mouse speed to maximum and using "enhance pointer precision" ... though these latter two don't really have desirable results.

    I strongly recommend leaving mouse speed at the middle notch ... setting it higher actually skips pixels (though "enhance pointer precision" makes your mouse able to move one pixel at a time if you move it really slowly). And I also recommend not using "enhance pointer precision" ...

    These MS pointer ballistics can cause problems in older games, but if the developer did the "right" thing and used either DirectInput or raw input devices then the pointer speed settings shouldn't affect games (only the sensitivity slider in the game should affect pointer speed if it's done right). In most cases going forward you should be able to use the OS to manipulate your pointer speed without negatively impacting your game ... but there is a chance that these settings could negatively impact your gaming experience if the developer used a less desirable way to access the mouse data.
  • Per Hansson - Monday, July 20, 2009 - link

    Thanks, the behaviour is the same at 250hz and 500hz
    Those rates just slow down the mouse more...

    There would be no way at all that I could set the mouse speed slider to the middle and get used to that, and the same goes for not having enhance pointer precision on.

    Guess sometimes you just can't win, eh? ;)
    In fact I was quite annoyed by the change in ballistics going from Win2K, which supported acceleration that I used and really liked, to WinXP, which only has this "enhance pointer precision" option.
  • Xcrypt - Thursday, November 20, 2014 - link

    You shouldn't enable enhance pointer precision, nor should you have your mouse speed set to maximum. Both will adversely affect your ability to aim, especially the acceleration will.
  • valnar - Sunday, July 19, 2009 - link

    "It is possible to overclock your mouse."

    Now I've seen everything. :)
  • DerekWilson - Sunday, July 19, 2009 - link

    It was bound to happen wasn't it?

    This has been around for a few years now, but (for obvious reasons) never made it into the mainstream gaming community. And, really, now that high performance mice are much more available it isn't as much of an issue.
  • Kaihekoa - Saturday, July 18, 2009 - link

    From the conclusion this point wasn't clear to me.
  • DerekWilson - Sunday, July 19, 2009 - link

    at present triple buffering in DirectX == a 1 frame flip queue in all cases ...

    so ... it is best to disable triple buffering in DirectX if you are over refresh rate in performance (60FPS generally) ...

    and it is better to enable triple buffering in DirectX if you are under 60 FPS.
  • Squall Leonhart - Wednesday, March 30, 2011 - link

    This is not always the case, actually; there are some DirectX engines, the Age of Empires 3 engine for example, that hitch when moving around the map unless triple buffering is forced on for the game.
  • billythefisherman - Saturday, July 18, 2009 - link

    First of all I'd like to say well done on the article: you're probably the first person outside of game industry developers to have looked at this rather complex topic, and certainly the first to take into account the whole hardware pipeline as well.

    Sadly, though, there are some gaping holes in your analysis, mainly around the CPU stage. Your CPU isn't going to run any faster than your GPU (and the same is true in reverse) as one is dependent on the other (the GPU is dependent on the CPU). As such, the CPU may finish all of its tasks faster than the GPU, but it will have to wait for the GPU to finish rendering the last frame before it can start on the next frame of logic.

    No game team in the world developing for a console is going to triple buffer their GPU command list.

    I intentionally added 'developing for a console' as this is also an important factor: I'd say around 75% (being very conservative) of mainstream PC games are now based on cross-platform engines. As such, developers will more than likely gear their engines to the consoles, as these make up the largest market segment by far.

    The consoles all have very limited memory capacities in comparison to their computational power, so developers will more than likely try to save memory over computation; thus a double-buffered command list is the norm. Some advanced console-specific engines actually drop down to a single command buffer and use CPU-GPU synchronisation techniques, because their CPUs are faster than their GPUs. This kind of thing isn't going to happen on the PC, because there the GPU is invariably faster than the CPU.

    When porting a game to the PC, a developer is very unlikely to spend the money re-engineering the core pipeline because of the massive problems that can cause. This can be seen in most 'DirectX 10' games, which simply add a few more post-processing effects to soak up the extra power - you may call it lazy coding, I don't; it's just commercial reality, as these are businesses at the end of the day.

    So both your diagrams on the last page are wrong with regard to the CPU stage: it will take roughly the same amount of time as the GPU stage in the vast majority of frames because of frame locality, i.e. one frame differs little from the next as the player tends not to jump around in space, so neighbouring frames take similar amounts of time to render.

    Onto my next complaint :
    "If our frametime is just longer than 16.67ms with vsync enabled, we will add a full additional frame of latency (with no work being done on the GPU) before we are able to swap the finished buffer to the front for scanout. The wasted work can cause our next frame not to come in before the next vsync, giving us up to two frames of latency (one because we wait to swap and one because of the delay in starting the next frame)."

    What are you talking about, man!?! You don't drop down to 20fps (i.e. two more frames of latency) because you take 17ms to render your frame - you drop down to 30fps! With vsync enabled your graphics processor will be stalled until the next frame, but that's all, and you could possibly kick off your CPU to calculate the next frame to take advantage of that time. Not that that's going to make the slightest jot of difference if you're GPU bound, because you have to wait for the GPU to finish with the command buffer it's rendering (as you don't know where in the command buffer the GPU is).

    As I've said, on the consoles there are tricks you can do to synchronise the GPU with the CPU, but you don't have that low-level control of the GPU on the PC, as NVIDIA/ATI don't want the internals of their drivers exposed to one another.

    And as I've said, not that you'd want to do such a thing on the PC, as the CPU is usually going to be slower than the GPU and would cause the GPU to stall constantly - hence the reason to double buffer the command buffer in the first place.

    I've also tried to explain, in my posts to your triple buffering article, why there's a lot of cobblers in the next few paragraphs.
  • DerekWilson - Sunday, July 19, 2009 - link

    Fruit pies? ... anyway...

    Thanks for your feedback. On the first issue, console development is of growing importance, as much as I would like for it not to be. At some point, though, I expect there will be an inflection point where it will just not be possible to build certain types of games for consoles that can be built on PCs ... and we'll have this before the next generation of consoles. Maybe it's a pipedream, but I'm hoping the development focus will shift back to the PC rather than continue to pull away (I don't think piracy is a real factor in profitability, though I do believe publishers use the issue to take advantage of developers and consumers).

    And I get that with the GPU as the bottleneck you have that much time to use the CPU as well ... but you /could/ decouple the CPU and GPU and gain performance or reduce lag. Currently, it may make sense that if we are GPU limited the CPU stage will effectively equal the GPU stage in latency -- and likewise that if we are CPU limited, the GPU stage effectively equals the CPU stage (because of stalling) in input latency.

    Certainly it is a more complex topic than I illustrated, and if I didn't make that clear then I do apologize. I just wanted to get across the general idea rather than a "this is how it always is" kind of thing ... clearly Fallout 3 has even more input lag than any of my worst-case scenarios account for, even with 2 frames of image processing on the monitor ... I have no idea what they are doing ...

    ...

    As for the second issue -- you can get up to two frames of INPUT LAG with vsync enabled and 17ms GPU time.

    you will get up to these two frames (60Hz frames) of input lag at 30FPS ...

    I'm not talking about the frame rate dropping to 2 frames then 1 frame (20 FPS) ... I'm talking about the fact that, at best, your input is gathered 17ms before your frame completes on the GPU (one frame of input lag), and (because it missed vsync) it will take another frame for that to hit the screen (for a total of two).
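
    A minimal timeline sketch of that case (assuming a steady 17ms of GPU time per frame and a 60Hz / 16.67ms refresh, and ignoring every other stage in the chain):

        import math

        REFRESH_MS = 1000.0 / 60.0   # ~16.67 ms between vsyncs
        GPU_TIME_MS = 17.0           # frame takes just longer than one refresh interval

        input_sampled_ms = 0.0                          # input read as the frame starts
        frame_done_ms = input_sampled_ms + GPU_TIME_MS  # 17.0 ms: misses the vsync at 16.67 ms

        # With vsync on, the finished frame waits for the next vsync before scanout begins.
        next_vsync_ms = math.ceil(frame_done_ms / REFRESH_MS) * REFRESH_MS  # 33.33 ms

        print(f"input is {next_vsync_ms - input_sampled_ms:.1f} ms old when scanout starts")
        # -> ~33.3 ms, i.e. roughly two 60Hz frames of input lag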
  • billythefisherman - Monday, July 20, 2009 - link

    I have to reiterate: well done on tackling this rather complex issue, I applaud you! (I just wish you hadn't whipped up your punters so much about the benefits of triple buffering!)
  • Gastra - Saturday, July 18, 2009 - link

    For information (quite a lot, if you follow the links) on what an optical mouse sees:
    http://hackedgadgets.com/2008/10/15/optical-mouse-...
  • DerekWilson - Sunday, July 19, 2009 - link

    That's pretty cool stuff ... And it lines up pretty well with our guess at mouse sensor resolution for the G9x.

    It'd still be a lot nicer if we could get the specs straight from the manufacturer though ...
  • PrinceGaz - Friday, July 17, 2009 - link

    "For input lag reduction in the general case, we recommend disabling vsync. For NVIDIA card owners running OpenGL games, forcing triple buffering in the driver will provide a better visual experience with no tearing and will always start rendering the same frame that would start rendering with vsync disabled."

    I'm going to ask this again, I'm afraid :) Are you sure, Derek? Does nVidia's triple-buffer OpenGL driver implementation do that, or is it just the same as what most people take triple-buffer rendering to be, that is, having one additional back buffer to render to so as to provide a steady supply of frames when the framerate dips below the refresh rate? Have you got confirmation, either from screenshots or something else (like nVidia saying that is how it works), that OpenGL triple-buffering is any different from Direct3D rendering, or how AMD handles it?

    Because if you don't, then all you are saying is that triple-buffering is a second back-buffer which is filled to prevent lags when the framerate falls below the refresh rate. Do you know for sure that nVidia OpenGL drivers render constantly when in triple-buffer mode or are you only assuming they do so?
  • PrinceGaz - Friday, July 17, 2009 - link

    Just to add

    "For input lag reduction in the general case, we recommend disabling vsync"

    It is rather ironic that you used that phrase, when in your previous article you were strongly stating the case for vsync always being used (preferably with triple-buffering).

    Unless you are certain that nVidia's OpenGL implementation of triple-buffering works how you think it does (and not how most people think it does), posting articles may be unwise.
  • DerekWilson - Sunday, July 19, 2009 - link

    In the "general case" we mean /when triple buffering is not available/ (real triple buffering is not available in the majority of games): in order to reduce input lag, vsync should be disabled.

    Here's a better break down of what we recommend:

    Above 60FPS:

    triple buffering > no vsync > vsync > flip queue (any at all)

    Always EXACTLY 60 FPS (very unlikely to be perfectly consistent naturally):

    triple buffering == vsync == flip queue (1 frame) > no vsync

    Below 60FPS (non-twitch shooter):

    triple buffering == flip queue (1 frame) > no vsync > vsync

    Below 60FPS (twitch shooter or where lag is a big issue):

    no vsync >= triple buffering == flip queue (1 frame) > vsync

    ... to us, the fact that at less than 60 FPS you start rendering the exact same frame with triple buffering, no vsync, or a 1 frame flip queue is enough -- you will get less than 1 frame of difference in lag, and the difference you get with no vsync only shows up on part of the frame anyway.

    ...

    And ...

    NVIDIA has absolutely CONFIRMED that they do OpenGL triple buffering in the way that I described it in the previous article. This confirmation is directly from their OGL driver team.

    I was just able to sit down with AMD two days ago and, while they don't do things in exactly the way I describe them, their OpenGL triple buffering does do something quite interesting at > 60FPS ... which I'll talk about in an upcoming article. They also said they are looking into what it would take to do what I was talking about -- but that they haven't yet because their workstation customers tend to care more about what happens at < 60fps (which is a case where a 1 frame flip queue == triple buffering).

    The situation with DirectX is a little more restrictive with both NVIDIA and AMD... I will also address this issue in an upcoming article.
  • BoFox - Friday, July 17, 2009 - link

    Anandtech does yet another ground-breaking article, thanks to Derek Wilson and the use of a high-speed camera this time!

    It is rather interesting how you mention human reaction time! Detecting and processing the image signal in your brain, and reacting to it (usually faster if through reflex), are all part of the entire reaction time of 200-300 ms, correct? Well, we should test it on professional gamers like Jonathan 'Fatal1ty' Wendel, to see how much he has tuned his reaction time and whether it's so reflexive that it's as low as 150ms or even lower. Why not try interviewing him and using your high-speed camera to test that on him, and on an average gamer?!? That would be a SUPER-awesome article to read!!! :D

    About rendering ahead: some games are designed to render ahead by as much as 3 frames, and changing that to 0 can incur a large performance hit -- sometimes as much as 50%. I experienced this with some games a while back (I cannot remember which ones exactly), and that was on a single video card (not when I was using SLI). The performance penalty would usually bring the frame rate down below the refresh rate, making it very undesirable. There are some other articles cautioning against it, but as long as it does not make a difference in the game you are playing -- or at least does not drop the frame rate below the refresh rate -- then great!

  • DerekWilson - Sunday, July 19, 2009 - link

    If your framerate is above refresh rate (except for the multiGPU case) you should definitely not use anything more than 0 frames of render ahead. Triple buffering as I described in my previous article (and as NVIDIA does in OpenGL) will be a good option, but double buffered w/ vsync or no vsync will definitely be better than any sort of render ahead.

    If your performance is below refresh rate, you'll want to either use no vsync, exactly 1 frame render ahead, or triple buffering. double buffered vsync (0 flip queue/pre-rendered frames with vsync) will give you a significant drop in performance and increase in input lag when performance is below refresh. This is as you say -- by as much as 50%. But only when performance is already below refresh.
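
    As a very rough model of why a deep render-ahead queue hurts when the GPU is the bottleneck (a sketch, not how any particular driver actually schedules frames):

        def rough_input_lag_ms(frame_time_ms, queued_frames):
            # When the GPU is the bottleneck, input sampled for a new frame waits
            # behind the frames already queued plus the frame being rendered.
            return (queued_frames + 1) * frame_time_ms

        for queue in (0, 1, 3):
            print(queue, "pre-rendered frames ->", rough_input_lag_ms(25.0, queue), "ms at 40 FPS")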
  • nvmarino - Friday, July 17, 2009 - link

    Hey Derek, thanks for the great article. Nice job addressing many of the BS theories and comments about lag going on in the threads of your triple buffering article...

    And now I've got even more fuel for the fire of my lust for someone to release a 1920x1080 display with support for a 120Hz refresh rate at the INPUT and good black levels. If only I could set my PC to spit out 120Hz to my display, my 24fps Blu-ray and 60fps TV content would play without judder, AND my games would have zero tearing (well, as long as they don't run above 120fps) and minimal lag. Not only that, but I'd also have an ideal setup for shutter-based stereoscopic 3D, and it would open the door for some of the new stuff like smoothing video via frame interpolation of 24p content to 120p currently going on in the PC videophile community. Can you say HTPC nirvana! Where's my 120Hz!!!!
  • DerekWilson - Friday, July 17, 2009 - link

    yeah, i want a true 120hz 1920x1080 display too :-) mmmm that would be tasty.
  • Belinda fox - Friday, July 17, 2009 - link

    If someone's reaction time is up to 220ms and your results are around the 100ms mark, surely this says something about the validity of the results and conclusion.
    Sorry, I never read the whole article, just the first and last page, due to reaction times being mentioned as part of lag.
  • BoFox - Friday, July 17, 2009 - link

    Just read the article first.

    If you had better common sense, you would figure out that the human response time of >200ms comes AFTER seeing what is on the monitor, which takes 100ms to display in the first place.

    Let's say that I appear just around the corner right in front of you in an FPS game like Unreal Tournament 3. First, it takes about 50ms of ping time for your computer to receive that information. Second, it takes your monitor 100ms to display the information. Third, it takes you 200ms to react to what you're seeing.
  • lyeoh - Sunday, July 19, 2009 - link

    And AFTER you react to what you are seeing, you press a key/button or move your mouse, and the relevant portion of the input lag takes effect (input device, game, network).

    With a 100+ms input lag handicap, even on a LAN you could be "dead" before you even see anything or can do anything about it.

    So it would be good to have some figures on wireless keyboards and mice vs. wired (USB and PS/2).

    I personally think wireless devices will add a lot more lag due to the modulation/demodulation and other "tricks" they have to do.
  • DerekWilson - Friday, July 17, 2009 - link

    That is definitely an issue ...

    I didn't get into network play as it gets really sticky ... the local client does do limited prediction based on the last known info about other players -- if this prediction data is accurate enough at small intervals, and if ping isn't bad, then it isn't a huge issue. At the same time, servers may resolve hits (as apparent to the client) as hits even if they are misses (in reality, as per what the player did), or they may not (the difference results in either increased or reduced effectiveness of techniques like bunny hopping). But it is way more complex and depends on how the client predicts things between server updates, how the server resolves differences between clients, and all sorts of funky stuff.

    that's why i wanted to stick with mouse-to-display issues here ...

    but you are right that there are other factors (including human reaction time) that add up to make it so input lag can be an important slice of a whole chain of events and can affect the gaming experience.
  • snarfbot - Friday, July 17, 2009 - link

    i have a crt and a plasma tv, both essentially lag free.

    my friends have tn monitors, also no perceivable lag.

    the worst offender is my friend's samsung tv, it's noticeably laggy.

    but nowhere near as bad as wireless mice. they've gotta be in the 30+ms region.
  • nevbie - Friday, July 17, 2009 - link

    My mouse wire is about the same length as my monitor cable. It's too bad the latter doesn't have the same relative bandwidth.

    1. But 16ms? Is it still around 16ms if the resolution is halved, which would cut the amount of data per frame to 1/4, or is the 16ms some kind of worst case?

    2. Does the "input lag" (I'd prefer calling it response time) change if you use a VGA (analog) connection instead of DVI (digital)? That would be a funny workaround.
  • DerekWilson - Friday, July 17, 2009 - link

    1) Scanout is always around 16ms regardless of resolution, and blanking is always around 0.5ms regardless of resolution, as long as the refresh rate is set to 60Hz; it's the VESA timing standards that determine this (rough arithmetic in the sketch below). Higher refresh rates mean faster transmission times, and cable bandwidth limits the maximum resolution+refresh. LCD makers are really the ones to blame for higher refresh rates not being available at lower resolutions ... I believe the traditional limit to this was LCD response time.

    2) I don't believe so.
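
    Rough arithmetic for point 1 (a sketch assuming roughly 0.5ms of blanking per 60Hz refresh; real VESA timings vary by mode):

        REFRESH_HZ = 60
        frame_ms = 1000.0 / REFRESH_HZ       # ~16.67 ms per refresh, fixed by the 60Hz rate
        blanking_ms = 0.5                    # approximate blanking time from the comment above
        scanout_ms = frame_ms - blanking_ms  # ~16.2 ms spent scanning out visible lines

        for lines in (864, 1080, 1600):      # total scanout time doesn't change with resolution;
            per_line_us = scanout_ms * 1000.0 / lines  # only the time spent per line does
            print(f"{lines} lines: {per_line_us:.1f} us per line, {scanout_ms:.1f} ms total scanout")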
  • bob4432 - Friday, July 17, 2009 - link

    so how slow is my ps/2 trackman wheel connected through a 4-port kvm? never thought about this area of gaming... interesting.
  • DerekWilson - Friday, July 17, 2009 - link

    I'm not sure if the kvm adds significant delay or not ... that'd be an interesting question.

    I do believe that ps/2 is 100Hz (10ms) though ... so ... ya know, slower :-)
  • anantec - Thursday, July 16, 2009 - link

    i would love to see some analysis of wireless input devices, and keyboards.
  • anantec - Thursday, July 16, 2009 - link

    for example, the wireless controller for the PlayStation 3,
    and recently Razer made a wireless gaming mouse:
    http://www.razerzone.com/gaming-mice/razer-mamba/

    as the article mentioned, USB devices are limited to 1000Hz; what about Bluetooth devices?
  • Stas - Thursday, July 16, 2009 - link

    Really appreciate the article. Even though I didn't learn much from it, it is VERY useful to point people to when the topic comes up. "Huh, input lag? What's that? I'm on broadband, dude, and I have a QUAD-CORE!!!" <--- With people like that I now have more than 2 options (murder them, or spend 30 min explaining input lag): I can send them to AnandTech. You guys own.
  • marraco - Thursday, July 16, 2009 - link

    Thanks for such useful article.

    The mouse tweak was one I had never seen before (and I search for tweaks regularly).

    One thing that I would add is that some games (such as FEAR 2) include controls for mouse smoothing, and/or some way to increase mouse responsiveness. They really affect my perception of input lag.
  • jkostans - Thursday, July 16, 2009 - link

    I use a G5 mouse and a CRT running at 100Hz. As long as my computer pushes 100+fps I don't notice any input lag at all.
    This is what I've found to contribute to input lag, in order of significance:

    framerate, refresh rate, mouse rate. Oh, and I've never used an LCD; I can't stand the ghosting.
  • RubberJohnny - Friday, July 17, 2009 - link

    OT - I used to be a diehard CRT-only user, then I realised there is NO ghosting on modern LCD monitors... you may have seen smearing on LCD TVs, but that's caused by the scaler resizing SD material to fit the panel's native res.
    On monitors there is no scaling = no ghosting.
    Got a 24-inch Samsung 6 months ago and wish I'd done so earlier; a crisper, larger image and widescreen are the main reasons I'll NEVER use a CRT again.
  • DerekWilson - Friday, July 17, 2009 - link

    I agree that with modern LCD panels ghosting is not as large an issue and color (depending on backlight) and contrast (depending on panel) are much better these days as well.

    Refresh rate is the only real outstanding issue these days (imo). And the only 120Hz display I saw was a bit over saturated / over bright and not high enough contrast.
  • jkostans - Saturday, July 18, 2009 - link

    I have yet to see an LCD without ghosting, it may be minor but it's still annoying. And even the 120Hz LCDs supposedly still have measurable input lag regardless of the non-existent ghosting. LCDs are still a downgrade as far as I'm concerned.
  • DerekWilson - Sunday, July 19, 2009 - link

    Well you can actually see how much (or little) ghosting there would be in our high speed footage... even the advertised latencies on the 3007WFP are pretty bad compared to most panels these days (especially TN panels). Despite that there were only a few cases where we could see ghosting take a whole frame (and it never seemed to take more than a whole frame).

    We should test more panels with high speed cameras and see what happens...
  • Freeseus - Thursday, July 16, 2009 - link

    I feel like there is a HUGE section of delay missing from this article. Perhaps it was chosen specifically not to be included because rigorous testing/comparison would have to be performed in order to show any sort of suggested "average" numbers. Either way, I feel it should have been addressed... even if it were just at the end of the article.

    I'm referring to the added delay of wireless mice.

    Aside from the added delay of transmitting all the mouse action to the receiver, the biggest issue is the inconsistency of mouse performance and precision. I'm sure there's a direct correlation between this and the battery -- specifically, the amount of battery used up for the mouse to broadcast continuously at very short intervals to ensure precise movement. But obviously this includes any case where the battery needs to be recharged as well. And on top of that, the mouse occasionally seems to be non-responsive during use. Completely unacceptable in a work or 'twitch gaming' environment.

    Anyhow, it would have been nice to see this addressed, because many people make the argument that wireless mice are better. And when it comes to FPS gaming or even work, I can't think of a reason not to have a wired mouse. Do I really need that accuracy all the time? Yes.
  • DerekWilson - Thursday, July 16, 2009 - link

    I agree that the current state of wireless mice is generally pretty poor ... though I've never used a wireless mouse targeted at gaming (gaming mice were the first optical mice I was able to use without mousing too quickly for the sensor even during desktop use).

    Testing wireless mice is definitely something worth looking into.
  • Vidmar - Friday, July 17, 2009 - link

    What about PS/2 mice? Are they better or worse than USB?
  • DerekWilson - Friday, July 17, 2009 - link

    PS/2 mice are slower ... iirc they come in at about 100Hz (10ms).
  • Vidmar - Monday, July 20, 2009 - link

    Really?? My old PS/2 MS Wheel Optical Mouse v1.1 is currently running at 200Hz, i.e. 200 reports/second. I've never felt like it doesn't keep up in any game.
  • lopri - Thursday, July 16, 2009 - link

    I'm loving this article and chewing through every page right now. Just wanted to say thank you for the kind of in-depth analysis that is rarely found elsewhere.
  • aguilpa1 - Thursday, July 16, 2009 - link

    Lots of variables that we never consider when trying to do fast gaming. I would be curious how much lag there is in a racing sim like GRID or Colin McRae: DiRT. Those are graphically intense games and demand the fastest everything to keep you from going in the ditch. I have noticed I compensate at times by estimating when to start turning before the turn arrives.
  • crimson117 - Thursday, July 16, 2009 - link

    Isn't part of that just intentional skidding / drift in racing games, to mimic the "lag" of rubber catching asphalt at high speeds?
  • hechacker1 - Thursday, July 16, 2009 - link

    You say TF2 is GPU limited, but with my 4850 I find the first core is pegged at 100%. The same applies to my older 3850.

    With a Core i7 920 @ 166x20 = 3320MHz (+166 for Turbo mode) and Hyper-Threading on, I see TF2 using 6 cores. The first is pegged at 100%, the second and third vary from 50-100% depending on the action (32-player server), and the other three hover around 25%.

    1920x1080. Benq 2400G (bought for its low input-lag)

    All highest settings, 4xMSAA, Aniso 8x, Disable vsync, FOV 90

    My framerate hovers around 100FPS for most Valve maps.

    I use this autoexec to get more threading and higher quality textures:

    r_lod 0
    mat_picmip -10
    cl_new_impact_effects 1
    mp_usehwmmodels 1
    mp_usehwmvcds 1
    cl_burninggibs 1
    mat_specular 1
    mat_parallaxmap 1
    r_threaded_particles 1
    r_threaded_renderables 1
    cl_ragdoll_collide "1"
    jpeg_quality 100
    r_threaded_client_shadow_manager 1

    Most people say TF2 is a CPU-limited game. Perhaps that only applies to ATI?

    Even without the autoexec.cfg, I see the game use 100% on the first core.

    Very good article though. I hope this shuts up the false info that 60fps is too fast for humans to notice.

  • DerekWilson - Thursday, July 16, 2009 - link

    even if a core is pegged at 100% that doesn't mean the game's performance is CPU limited.

    at 2560x1600 we were hovering around 110fps but at 1152x864 we were constantly well over 200 fps. As lowering resolution doesn't change the load on the CPU, this clearly indicates that we were GPU bound -- at least at 25x16.

    For our 1152x864@120Hz test, we might have been CPU bound, but I don't have the data to know for sure here (I didn't test any near resolutions).
  • hechacker1 - Thursday, July 16, 2009 - link

    Oh yeah.

    Flip queue to 0.
    ATI A.I. at Low or "standard" (I've read "high" mode can use more CPU?)

    Latest driver. Windows 7 x64 7201.
  • Qiasfah - Thursday, July 16, 2009 - link

    In the article you stated that TF2 was GPU limited (and it was in the situations you were testing); however, you should find that in battle situations with other active characters present it becomes heavily CPU limited. It would be interesting to see if there is a difference in input lag due to this in the midst of battle rather than sitting idle.

    I run an i7 920, and even with multicore rendering enabled (an option which will very commonly double a person's TF2 framerate) I get the same dips in FPS regardless of graphical settings. It would be interesting to see how overclocking affects the performance of this game.
  • DerekWilson - Thursday, July 16, 2009 - link

    More than likely, in TF2, you'll be bottlenecked at the network when it comes to performance ...

    But the way Valve does things is with local prediction (running code on the client) and then checking predictions on the server. This should mean that our test shows what you can expect to actually /see/ whether or not what actually /happens/ is the same (if you are very laggy on the network or if there are lots of players or whatever).
  • codestrong - Thursday, July 16, 2009 - link

    "Beyond that, GPU is the next most important faster (factor?), and a mouse that can do at least 500 reports per second is a good idea." Nice work by the way. I've been interested in this since Carmack mentioned input lag during his work on quake live.
  • DerekWilson - Thursday, July 16, 2009 - link

    yeah, i meant factor. thanks.
  • SiliconDoc - Tuesday, July 21, 2009 - link

    Yes, nice article and nice work on getting the job done without a super expensive camera, on an interesting subject for gamers.
  • Zolcos - Thursday, July 16, 2009 - link

    The article is logically inconsistent. On page 1 it states "input lag is defined as the delay between the when a user does something with an input device and when that action is reflected on the monitor" and on page two it has "Input lag starts from before we even react".
  • DerekWilson - Thursday, July 16, 2009 - link

    i'll fix that...

    "The impact of input lag is compounded by what goes on before we even react."
  • yacoub - Thursday, July 16, 2009 - link

    The input lag everyone's most concerned with is the amount the display adds, because while all the rest is consistent, displays add a variable amount depending on which one you get. The ones that add more than ~20 ms add a NOTICEABLE amount (for most people) which takes input lag to the point that it becomes frustrating.
  • DerekWilson - Thursday, July 16, 2009 - link

    Part of the point was to explain that there is a lot at the end of the chain that can significantly impact performance and it's all about the display.

    If we do consider a 100ms threshold as valid, then based on our numbers from TF2 it is clear that we would end up in the >100ms input lag range with a monitor that adds more than 20ms of lag.

    And if we can't expect a twitch shooter to come in under the mark, how is everything else going to do? Not well I would imagine.

    I did think about looking at a wide array of monitors, but I feel like that might be better suited to a more focused review of monitor performance rather than an exploration of input lag in general.
  • yacoub - Thursday, July 16, 2009 - link

    Sure, but for whatever reason, all of the lag prior to the display's lag is essentially transparent because it doesn't add up to enough to be perceptible. This would equate to your threshold.

    When using a display with little or no noticeable display lag, any FPS game will feel very responsive and without discernible latency (assuming your GPU hardware is up to the task of rendering the frames quickly enough and you're not using one of the early optical mice from a decade ago that had terrible tracking refresh rates, etc etc).

    Yet simply switching to a display with higher latency is enough to make input latency noticeable and frustrating for FPS gamers. So the key issue is finding a TN or IPS display, since those panel technologies have the least input lag. Of course most panels out there are VA-based because they are cheaper to produce than IPS, and while TN may be snappy in response, TN panels have a number of other downsides.

    What matters most is getting panel makers focused on IPS-based displays (or new panel technologies that significantly reduce the input lag most non-TN displays presently suffer). And hey, the more they produce and sell, the lower the production cost per unit, so the better the pricing can be and the more opportunities there are for improved technology to be added to the IPS design.
  • ocyl - Friday, July 17, 2009 - link

    @ yacoub
    Did you read the article at all?
  • yacoub - Friday, July 17, 2009 - link

    Yes. I must not be explaining myself well, so forget it.
  • DDuckMan - Saturday, December 18, 2010 - link

    While this article was great, I'm still not sure if I am better off disabling SLI to eliminate the synchronization lag, or keeping the higher framerates with SLI enabled in twitch games. It seems to me that with 120Hz monitors, vsync (which I need for 3D) and SLI lag would not be as important as keeping the framerate above the monitor's refresh rate. I don't have the equipment to test properly, so I am looking forward to the next article.

    http://hardforum.com/showthread.php?t=1569281
  • burner1980 - Thursday, March 10, 2011 - link

    Quote: "Input lag with multiGPU systems is something we will want to explore at a later time."

    I'm still waiting patiently and looking forward to a follow-up investigation. The topic of input lag is VERY important to gamers who play FPS games; I notice it in racing games, too.
    I suggest using true 120Hz monitors in the follow-up article. They of course won't reduce input lag, but they help reduce screen tearing, and thus allow you to optimize your settings for low input lag while keeping screen tearing at an acceptable level.
    I'm also curious whether a 3-screen setup a la Eyefinity or Vision Surround using two GPUs will have an impact.
  • dmnwlv - Thursday, April 28, 2011 - link

    Impressive report.

    Regarding mouse polling rate (I may have missed this in the article):

    1) I believe the actual mouse input is already processed by the CPU and the end result (of that action) already registered before you get to see it on screen. The game does not wait for the GPU/monitor to finish before determining the outcome. Hence the influence of mouse response is even more substantial if we take out the whole chunk of lag that was included in the total lag calculation here - Derek Wilson, please correct me if I am wrong.

    Coupled with the human ability (also reported here) to predict and react in advance from the existing game situation, this seems to explain why it is hard to imagine a few milliseconds of difference in mouse lag having an impact on the overall gaming experience. The brain and reaction are (trying their best at) interpolating and working in tandem with the CPU more than with the monitor.

    2) And in another scenario, where the user intends a series of continuous, extended actions (e.g. drawing a long curved line), does the report rate of the mouse play a part in capturing the most accurate curve the person intended? - Maybe Derek can help on this as well.

    Thanks for the great report.
  • mathew7 - Thursday, September 8, 2011 - link

    What I've realized from (i)racing in the past year is that a consistent lag can still offer a playable experience when everything is under control (like not losing tire grip in racing). I think this is why consoles seem to get away with laggy TVs. What I mean is that the brain will train itself to anticipate any movement. For instance, there was a comment about turning in earlier. But if the lag is inconsistent (let's say +/- 1 frame, which would mean 32ms between the lowest and highest lag), the brain cannot adapt to it, as this is unnatural. FPS stutter is such an inconsistency, as is loading something during play (even when the game controls freeze for much less than 1s).

    But even consistent lag will not give the best performance (I'm talking about lap times, not FPS), because anticipation is just one part; reflexes are another, and your reflexes are what input lag affects most. During my racing, I found that I could actually recover from most slides with my LCD TV as opposed to my (much newer) LCD monitor. Some tests with a low-res high-speed camcorder (192x108 at 240fps), cloned displays, and fast steering wheel movement showed 2-4 frames of lag between the TV and the monitor. That is 32-64ms. The TV itself had around 1-2 frames of lag compared to the wheel, but this included the complete chain of wheel sampling -> USB -> game CPU -> GPU -> TV.
  • junior262133 - Monday, March 4, 2013 - link

    Oh sweet baby Jesus, thank you for this article. It saddens me knowing that Counter-Strike: Global Offensive has horrendous input lag in comparison to Counter-Strike: Source. This issue has been plaguing me since the beta. I would mostly rank first in CS:S, but going over to Global Offensive was a living nightmare. What baffles me is that almost no one seems to notice, considering most of us come from a background of CS:S or 1.6 on a CRT.

    On top of that, NVIDIA has completely removed the option to set pre-rendered frames to zero.

    I have tried submitting tickets to get CS:GO patched, but they seem not to be getting any attention. Maybe I'm talking to the wrong people. If you could, please visit these 2 pages in which I try to address the issues at hand. It wouldn't hurt if you could explain in one comment that the reason there is input lag in CS:GO is the game code and not some random PC glitch of mine. Most people seem to be oblivious to the problem.

    http://64bitvps.com/csgo/ticket/mouse-laginput-mis...

    http://64bitvps.com/csgo/ticket/let-us-have-the-op...

    THANK YOU SO F*****G MUCH!!!!!
  • jigglywiggly - Saturday, October 5, 2013 - link

    Does AFR in SLI add one frame of input lag?
  • ibaldo - Sunday, October 6, 2013 - link

    I am looking for information about possible differences in lag between monitor connections.
    I mean, comparing VGA vs. DVI.
    Thanks for this excellent article!
  • willCho - Tuesday, January 15, 2019 - link

    I would like to watch the video, but I can't find it anywhere, including on YouTube. Can someone post a link?
