I'm surprised the "humans don't notice 100 ms" argument is even made. That's trivially debunkable with a simple blind A/B test at the command line using `sleep 0.1` with and without `sleep` aliased to `true`. To my eyes, the delay is obvious at 100 ms, noticeable at 50 ms, barely perceptible at 20 ms, and unnoticeable at 10 ms.
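For anyone who wants to reproduce the blind test without aliasing `sleep`, here's a rough sketch of the same idea in Python; the delay value, trial count, and prompt wording are arbitrary choices of mine, not anything from the original test:

```python
#!/usr/bin/env python3
"""Blind A/B test: one of the two echoes gets an artificial delay."""
import random
import sys
import time

DELAY_S = 0.1   # artificial latency under test; try 0.05, 0.02, 0.01
TRIALS = 20

correct = 0
for trial in range(TRIALS):
    delayed = random.choice(["A", "B"])   # which echo gets the extra delay
    for label in ("A", "B"):
        input(f"Press Enter for echo {label}: ")
        if label == delayed:
            time.sleep(DELAY_S)
        sys.stdout.write("x\n")
        sys.stdout.flush()
    guess = input("Which one felt delayed (A/B)? ").strip().upper()
    correct += (guess == delayed)

print(f"{correct}/{TRIALS} correct at {int(DELAY_S * 1000)} ms")
```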
Not to mention that 100 ms is musically a 16th note at 150 bpm. Being off by a 16th note even at that speed is – especially for percussive instruments – obvious.
On the other hand, if you told me to strike a key less than 100 ms after some visual stimuli, I'm sure I couldn't do it – that's what "reaction time" is.
> strike a key less than 100 ms after some visual stimuli, I'm sure I couldn't do it
The video game "Super Hexagon"[1] is a surprisingly interesting demonstration of this limitation, and how the mind tries to workaround the problem. The higher difficulties ("hexagonest"[2]) seem impossible at first. In the easier levels you can react, but now the time it takes to see the scree, parse the simple graphics, recognize that you need to turn left to avo##CRASH##...##GAME OVER##, You quickly learn that if you think about the game, you are guaranteed to fail due to insufficient reaction time. Winning requires a mental state possible related to meditation: don't think, let the muscle memory do it's job (maybe from a faster path limited to the spinal-chord and brain-stem?).
At this speed the player is reacting (and hitting with correct timing) about ten notes (arrows) per second. This is possible by memorizing and reacting to group patterns instead of individual notes, but more importantly, reacting unconsciously.
A player can play at this level without conscious attention, even while holding a conversation, focusing on different parts of the screen, or spacing out. It feels like the fingers play by themselves.
This is so automatic that the player may have little feedback on how well they are doing. They may think they are making blunders and about to lose, while "the fingers" have correctly hit every note so far.
Definitely feels like a faster neural path was built.
A big difference between Hexagon and most other rhythm games like Stepmania, Beatmania IIDX, Guitar Hero, etc., is that in those games the pattern is the same every time. Just like musicians memorize complex music that they later perform[1] with precise timing, the same is possible for these step/note patterns. I've personally seen someone perform one of the insane speed/note-density DDR songs blindfolded. Humans have surprisingly good internal timing and mechanical ability, if you can memorize and practice it beforehand.
Super Hexagon doesn't allow any of that. The game is randomly generated each time, making memorization impossible. It's 100% reaction against the incoming wall pattern, with no pauses. Except you don't actually have time to "react".
I recommend this[2] essay that does a much better (and more poetic) job of explaining how uniquely Super Hexagon interacts with human perception.
Rhythm games are not about memorization, unless you call pattern recognition and muscle memory memorization.
First of all, players don't need to memorize the charts. Take a good player, give him an entirely new chart and he will perform much better than any beginner ever will, even with memorization. Shuffle mode, where the notes are mixed and therefore can't be memorized, is just a minor handicap. Much, much easier than memorizing a whole song.
There are two things at work here.
The reaction time is actually not that short. For high level play, players typically set notes to appear about 500ms before action. At that rate, anyone can hit a single note, the difficulty is that there are lots of them.
The difference between experienced players and beginners is that experienced players recognize blocks instead of single notes, just like you read entire words rather than individual letters.
Super Hexagon is akin to "sight reading" or performing music "a prima vista", while DDR as played by most players is a type of rehearsed musical performance (or, well, dance, I suppose).
Being able to play a prima vista is a rather specialized skill even among musicians; I've only known a handful of people who can do it really well. I don't think it has a ton of utility (at least I never found it to have a lot) and so it's not something that many people actively develop or train, though.
> I've personally seen someone perform one of the insane speed/note-density DDR songs blindfolded
Rock Band 4 has a "Brutal" mode where notes are almost completely hidden save for a split-second flash at the top. Although that does give a hint of information that can be processed, in the end it mostly doesn't matter: at that point you can effectively play blindfolded, which as a process is no different from learning to play the actual song.
Music video games can easily thwart pre-memorization with randomized charts (often considered essential to mastering the game). Players typically learn the timing, but possibly not the entire patterns. It is still remarkable that top-ranked players can do much, much better on a new song at first glance, probably because the music is much more predictable than the pattern itself, so the music can guide pattern recognition.
While the order of the walls in Super Hexagon is random, each level has a predetermined set of walls. Once you identify which wall pattern is coming you know exactly the maneuver needed to traverse it.
This makes Super Hexagon more a game of quick pattern recognition than reaction time.
When I play it I am generally focused on the edges of the screen to quickly identify the next pattern, only using my peripheral vision to maneuver around the walls in the center.
You are missing his point though. It's not just memorizing the whole song. If it was, when you play a _new_ song, you'd be as rusty as when you first started to learn to play.
But new versions come out, with new sets of songs, and people play songs of the same approximate difficulty as well. If it was simply memorization, they wouldn't be able to play same or near difficulty songs. They'd have to "ramp up" as they memorized the new song.
Obviously, people's brains are reacting to similar patterns (each song has a "style" and many songs are of the same "style" with things like jumps, diagonals, triplets and so on), as well as other subconscious neural reactions.
When you're playing 9-out-of-10 difficulty songs, you're not watching individual arrows at all. It'd be impossible. You unfocus your eyes and play through your peripheral vision. Your body reacts. You don't need to think at all.
(So nobody is arguing that the Hexagon game isn't "more random". They're arguing that it's more than just randomness at play.)
It's really no different than carrying on a conversation while playing a casual game of pong with a friend. Everyone just calls it "muscle memory." The pattern is never the same with a ball bouncing off a table. It's not rocket science.
I used to play Stepmania a fair amount, and one of my controllers had a noticeable amount of latency. Despite this, I was able to be about as effective as with the lesser-delay controllers.
I think part of the reason is my experience as an organist: some instruments have a noticeable delay from the time a key is pressed to when the sound reaches my ears, whether due to delays built into the control mechanism itself or simply the travel time of the sound (the console is sometimes located on the opposite side of the hall from the pipes). I think this has caused my brain to be wired to accommodate these sorts of situations.
Also, there are weapons in some games (both melee and long range) that have considerable "wind up". You click, and they don't actually attack for a while.
Chivalry, for example.
Part of the fun is learning to control the wind-up correctly so that you are in the exact place (you being a moving, rotating person AND them being a moving, rotating person) somewhere in the future, as opposed to "close in until within range/in-scope, then immediately trigger the left mouse button." You have to continue guiding the attack as the attack is winding up.
Another interesting one is long range sniping in simulator and quasi-simulator FPS games like, say, PUBG. There's bullet drop, and people can move toward, away, and sideways. You've got to place the shot ahead of the target, for a given target's speed (walking vs running vs in a vehicle), and lead the appropriate amount. The amount also changes with distance (obviously), different guns have different bullet velocities (less velocity = more lead), and different scopes (2x/4x/8x/15x) require drastically different mouse movements to get the appropriate crosshair movement (even if the "lead" corrected for magnification is the same).
God, I love our brains. I would love to read a textbook on EXACTLY that kind of learning. Subconscious, neural learning.
Oh, and that's one great thing about PUBG. The combination of low-level muscle memory for maximum speed target acquisition and firing, with the constant high-level mental strain of planning 2, 3, and 20 steps ahead. Solving for "this situation right now" (shoot this guy on my screen AND how do I outflank this guy walking nearby but out of sight who is also trying to kill me), solving for "what happens next" (if I fire, will it attract another, full-health group of people that will kill me?), and solving for "how do I keep getting closer to the blue zone to reach the end game without dying."
All of those circuits are absolutely required to "win" in a game with 100 players and only one winner. For the fun of it, I really should write down my other mental processes that go on. Getting further in the game (closer to 100) is all about learning to train your brain into juggling many problems at the same time as well as specifically targeting areas where you are lacking. Like if you're not good at close combat, you can get near the end by sneaking, but you will eventually die. And short-range, medium-range, and long-range combat all have different strategies and tradeoffs. Combat with groups is different than combat with one enemy, or a pair.
I can comfortably send morse code at ~25wpm, approx. 125 characters/minute. ('Comfortably' meaning others can comfortably figure out what I am trying to send)
That equals ~375 hand movements/minute, or 6.25/second.
(Granted, I cheat - I use a single lever paddle and a microcontroller to get the timing exactly right.)
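For reference, the timing such a keyer enforces is conventionally derived from the PARIS standard, where one dit lasts 1.2/WPM seconds; here's a small sketch of that arithmetic (the function name and dict layout are my own, not any particular keyer's firmware):

```python
def keyer_timing(wpm: float) -> dict:
    """Standard Morse element durations (in milliseconds), derived from the
    PARIS convention: one dit lasts 1.2 / WPM seconds."""
    dit_ms = 1200.0 / wpm
    return {
        "dit": dit_ms,
        "dah": 3 * dit_ms,        # a dah is three dits long
        "element_gap": dit_ms,    # gap between elements within a character
        "char_gap": 3 * dit_ms,   # gap between characters
        "word_gap": 7 * dit_ms,   # gap between words
    }

print(keyer_timing(25))   # dit is 48 ms at 25 wpm
```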
I've heard people going twice as fast, though that's very rare.
However, people more skilled than I can use a straight key at up to ~35wpm or so - that's ~9 taps a second, while also getting the timing just right without the aid of a microcontroller. That, in my book, is very impressive.
I still remember vividly the learning process when playing a game like Stepmania (maybe it actually was Stepmania, I don't remember the name).
The first session was played very consciously and slowly. After each session, I noticed that certain patterns just came out perfectly, without difficulty. Sometimes I was even surprised by myself.
However, what I wanted to say is, it was enjoyable just letting the brain do its job of processing patterns and reacting to them. It was relaxing.
As a guitar player, a lot of playing is muscle memory, combined with an overarching understanding of what you are trying to convey. In fact, the challenge is often to break free from the muscle memory so as not to fall into the same patterns (like minor pentatonic scale) over and over.
Part of practice is learning how to use muscle memory to handle the fast stuff – faster than you can process at full speed – freeing up your mind to think about the broader strokes. It's interesting to think that activities such as playing music, drawing, painting or practising sports are really a combination or interaction of fast muscle memory, slower rational/strategic thinking, and guiding emotion/intuition.
It's hard to explain, but with these sort of games your brain "feels" if something is off (a little bit of latency, etc). I get a similar feeling playing instruments. It's quick to adapt to these changes, but the feeling of something being "different" still lingers for a bit.
Used to play (still could if I got back to it) at that difficulty with my keyboard. Got to the point where I could have conversations while playing. I also play guitar so I stopped playing Stepmania to avoid extra strain.
When I used to play Stepmania and DDR a bunch, that was entirely what ended up happening. You started processing chunks of the arrows and mapping them to various foot/finger movements. At a certain point, though, people just straight up memorize the whole song.
>At this speed the player is reacting (and hitting with correct timing) about ten notes (arrows) per second. This is possible by memorizing and reacting to group patterns instead of individual notes, but more importantly, reacting unconsciously.
You've just described how people play music in general :P The patterns trigger memory and other internal processes (whether it's recalling verbatim or generating dynamically from some combination of scales). This escapes the limitation of input-output response time because it's all internal, with some arbitrary output delay which we account for by "leading" whatever the physical action is (for an exaggerated example, think about percussion instruments).
If you are at all musical it's fun to do a bit of introspection here... with anything slightly complicated where you do not have time to think about the position of each note (a pretty unnatural way to perform) you will realise that your fingers (or toes in some strange cases), react to a sort of stream of signals internally... it's probably not even a stream but more of a parallel matching against the pattern from which you pick a stream of notes internally because this is right brain stuff.
Disclaimer: I'm not a neuroscientist or musicologist, I just like thinking about the brain and am slightly musical.
To be fair, Super Hexagon relies heavily on repeated scripts of barriers, and only throws truly random geometry at you for the briefest moments where it's switching from one script to the next. I honestly found that the most frustrating part of the game - I'd see a new script and die.... But there's no way to force replaying that script. The game is random, so you have no way to learn to dominate one challenging bit, you just hope you'll figure it out next time it comes up, or hope it doesn't come up. I beat the "Harder" mode without ever figuring out how to beat some of the rarer scripts.
I don't think it's a matter of achieving a faster reaction time by bypassing higher brain functions. It's about looking several steps ahead and planning out multiple steps before you need to do them, and then anticipating when each action needs to happen.
One game that really demonstrated this to me was Crypt of the Necrodancer. It's a roguelike where each turn is the beat of a song, and the tempo of the song for each floor is slightly faster than the previous floor, starting at 120 bpm (1/2 second per turn) on floor 1 and topping out around 180 bpm (1/3 second per turn). For the slow songs, you have enough time between beats to fully plan your next action, but at some point, as the tempo gradually increases from floor to floor, you hit a point where this is no longer possible, and your brain has to "switch modes" to planning several beats ahead and thinking in patterns instead of individual moves.
This sounds kind of like how stenographers’ keyboards work. You plan and type words with key combos. Much faster than typing since you react to whole words and think in phrases (so I was told).
Hm, not exactly. Steno keyboards work by chording. The parent poster is talking about several precisely choreographed sequential actions, more like typing on a standard keyboard/typewriter but with precise rhythm.
On a similar note, a fast baseball takes less than 450 ms to go from the pitcher's hand to the plate. The "turnaround time" for the bat, from when it starts moving until it hits the ball, is 150 ms. That leaves about 300 ms in which the batter must first spend some time analyzing the path and spin of the ball, then decide whether and how to hit the ball. The only way to make it work is subconsciously.
I finished that game a few years ago (was most definitely worth the 60 cents I paid for it).
From what I recall, to finish the final level I was relying for the most part on 'muscle memory' in relation to the familiar patterns. I would visually interpret what kind of pattern it was, my position relative to the pattern and then there seemed to be a conscious disconnect to the actions I performed to get through the pattern.
I think it's interesting because the actions I'm referring to weren't merely a sequence of button presses performed as fast as you could. There was a critical factor of very small differences in durations between actions and how long you execute actions for that seem (and I assume are) completely beyond my conscious abilities.
I am with you with the relative movements. Most of the faster songs are fun because they have triplets or alternating patterns that are easy to adapt to once you're used to the pattern.
As for the game you played, it must have been the DDR series. You can't "finish" Stepmania since it's an open source rhythm game where the community has created thousands and thousands of songs to play.
I got that game with an indie bundle a while back and played for about a minute before I gave up. Now I can blame my input devices rather than my truly terrible reaction time!
I was suspicious that there might be some artifacts of terminal updating that are amplifying the effect in that test, so I whipped up a little graphical test:
I can still get it right pretty much 100% of the time, but the difference does feel a lot more subtle than what I was seeing in iTerm.
(If you change the delay, you need to hit randomize to make it take.)
Edit: Also we should keep in mind that we're not testing the latency we set in either test. We're testing that delay plus the delay from I/O. So 100ms might be indistinguishable. But by the time we pile everything on, we're getting closer to 200.
The result is the same as far as UI design is concerned; don't assume that you can get away with 100ms; most of that leeway has already been wasted by the OS and hardware.
Adding step="0.01" to the input allows stepping down in 10ms increments.
Personally I can distinguish every time at 0.06.
I find it difficult to believe the rest of the hardware loop has 40ms latency, it would be difficult for smooth rendering to occur if it did.
I suspect that part of this is 'training' yourself, but also consider what the researchers may have meant. They may very well have meant that above 100ms a delay is jarring and noticed, while below 100ms it is not perceived as a delay, rather than claiming people are truly unable to notice it in an A/B test.
I suspect this test would be more difficult if it sometimes did A/A and B/B.
>I find it difficult to believe the rest of the hardware loop has 40ms latency, it would be difficult for smooth rendering to occur if it did.
Why would I/O latency have any effect on the smoothness of rendering? Are you sure you aren't thinking of throughput?
Remember: TFA just told us that many keyboards on their own add more than 40ms of latency. I definitely wouldn't have guessed that, so I'm very reluctant to entertain any certainty about the rest of the system having low latency.
neat, 100ms is blatantly obvious, I can't imagine anyone not discerning that one.
I can get down to 24ms but no less... the weird thing is that 24ms is still completely obvious to me (clearly shorter but obvious in comparison to no delay), with a single test I can see which variable has the delay every time, but 1ms less and I can't... which makes me suspect it's being quantised due to any of the various things in between sleep and the output, display, driver, X, terminal emulator, CPU etc.
With that it's actually possible that my 24ms is larger than 24ms and is also being quantised to a larger duration (but not larger than 100 for sure).
It would be interesting to be able to test with some dedicated hardware.
You're probably right, it's an old TN panel in a 10yr old laptop... I'm gonna have to steal someone's shiny modern IPS in a minute :P (I know they are generally slower response but it's 10 years newer so you never know)
[edit]
Nope, the IPS is still super slow. How common are >60Hz computer displays these days?
I can get it down to 1ms, so there's probably something wrong with my setup. (I just look for how the cursor moves; on the delayed one, I can discern the presence of the cursor on the next line for a split second.) Maybe monitor refresh rate, or the minimal resolution sleep can handle?
I'm using Alacritty, which is supposed to be really fast and everything!
Both of those appear to indicate that forking is expensive when you need to copy unusually large amounts of memory (which makes sense). In this context, I would expect reasonable/small memory use and so fast forking.
You can plainly see that 100ms is an eternity. Keyboards aren't quite the same thing, but high latency is noticeable there too. I had to change a keyboard because of the near-eternity between when I pressed a key and when it registered on the screen. The difference between the old and the new is quite stark.
This rule comes from UX design user studies. However, the actual rule is that people perceive an event happening within 100ms as "instantaneous". Or, in other words, two events happening within 100ms of each other won't feel like distinct events. This doesn't mean that users won't notice the delay and it doesn't even mean users won't be frustrated by it, it's just a matter of human perception of distinct events in time.
Unfortunately, programmers/UX designers have a tendency to generalize this rule and use it to excuse slow user interfaces.
I rarely use transitions longer than 100ms in UX, and I doubt I'm abnormal in not seeing the result as instantaneous (compared to no transition). On the other hand this certainly is not normal UX design; I am very irritated at how common excessively long transitions are on the web... to the point that I actually have to wait for things to finish animating before I can interact with them. It feels so 90's, like I'm supposed to be impressed with the transition, rather than the transition simply conveying the introduction of something and then getting the hell out of the way (like it should).
Another point is that, even if a bit of latency isn't noticeable (say, 20 ms), when added to other sources of latency it can make things noticeably worse.
EDIT: Which the original author said better than me:
> Another problem with this line of reasoning is that the full pipeline from keypress to screen update is quite long and if you say that it’s always fine to add 10ms here and 10ms there, you end up with a much larger amount of bloat through the entire pipeline, which is how we got where we are today, where you can buy a system with the CPU that gives you the fastest single-threaded performance money can buy and get 6x the latency of a machine from the 70s.
While you're factually correct about the musical stuff, note that the auditory system is not at all the same as the visual system, the latter being slower in processing information (i.e. full time for light entering retina to information being ready to do something with in the brain). Also resulting in reaction times for auditory stimuli being faster than for visual stimuli. So you can't just use one system to debunk a fairly general statement made about something visual related. Even though it's still false, but for other reasons.
"To my eyes, the delay is obvious at 100 ms, noticeable at 50 ms, barely perceptible at 20 ms, and unnoticeable at 10 ms."
That's how it goes with my ears when recording as well. Most "Live feedback" mechanisms for guitar programs (heck even computer-made guitar hardware) have about 50ms of latency, which is quite disconcerting when you're doing something heavily timing-based. 10ms is almost imperceptible to me (sounds more like the tiniest slightest reverb delay) and 5ms might as well be realtime as far as I'm concerned.
For example, in Windows 10 I could go to the built-in mixer, turn on "Listen to this device", and play with my guitar plugged right into the computer, and you're looking at about 100 ms or so of latency. Completely unacceptable when you're listening to a metronome to keep time. Alternatively, I can go to the Realtek HD Audio Manager in the taskbar (because I installed the actual Realtek driver package instead of relying upon the Windows drivers) and un-mute the line in, and even with an amp and mixer board in-line before the computer, the feedback is essentially instantaneous, which allows pre-baked instant reactions (e.g. muscle memory) to work and record in time with the metronome (or backing track).
This is why I'll never buy a DAC that doesn't have a headphone monitor output on the front panel! The performance of low-latency audio has gotten worse over time, not better, at least on the Mac. (It was better under OS 9 than anytime since, I think.)
Another fun test you can do, if you have an analog mixer, is to run some input signal into the mixer, then to the PC, and then from the PC back out, and monitor both the original input and the signal from the PC at the same time. OS-supplied drivers typically make it sound like you're in the Grand Canyon.
But even with very expensive low-latency gear, you can still tell if you put one signal into the left headphone channel and another into the right. If there's actually zero latency, you'll perceive the sounds as coming from directly in front of (some people perceive as directly behind) you. It'll sound just like mono. Any latency, given the same volume level, will be perceptible as a difference in "location" of the sound.
My unscientific experiments suggest that you can perceive down to about 1ms latency quite clearly this way across most of the audible range. The minimum is frequency dependent (as you'd expect) with lower frequencies being less sensitive.
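If anyone wants to try that themselves, here's a minimal sketch that writes such a test file: a mono sine tone copied to both channels, with the right channel delayed by a configurable amount. The tone frequency, level, duration, and file name are arbitrary choices:

```python
import math
import struct
import wave

RATE = 48000          # sample rate in Hz
FREQ = 440.0          # test tone frequency in Hz
DELAY_MS = 1.0        # right-channel delay under test
DURATION_S = 3.0

delay_samples = int(RATE * DELAY_MS / 1000.0)
n = int(RATE * DURATION_S)
tone = [0.3 * math.sin(2 * math.pi * FREQ * i / RATE) for i in range(n)]

with wave.open("interaural_delay_test.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(RATE)
    frames = bytearray()
    for i in range(n):
        left = tone[i]
        right = tone[i - delay_samples] if i >= delay_samples else 0.0
        frames += struct.pack("<hh", int(left * 32767), int(right * 32767))
    f.writeframes(bytes(frames))
```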
Presumably the human brain has evolved to be extremely good at arrival time comparisons (maybe even doing some sort of phase-difference stuff) as a way of triangulating the origin point of sounds. There are doubtless other specialized ways in which the brain is equally sensitive, so the 100ms rule of thumb seems pretty naive to me. 100ms might hold true for the "main loop" of the human brain at the level of conscious thought, perhaps.
As I understand it, when they talk about 100 ms being the fastest possible thing we can notice, they mean whether we can notice that a reply from a computer was instantaneous or delayed. If we get to compare instantaneous to 10 ms of lag in realtime sound, we can very easily hear the difference. But if we type a question to a computer and get a text answer within 100 ms, we will perceive it as being instantaneous.
> Not to mention that 100 ms is musically a 16th note at 150 bpm. Being off by a 16th note even at that speed is – especially for percussive instruments – obvious.
The auditory system is especially good at processing serial data, and it has a better timing resolution than the visual one.
People with visual-auditory synesthesia are much better at tracking patterns in flashes of light, for example (because they hear the flashes).
The other thing to consider is how is latency perceived when actually TYPING in real life. My guess is that the difference between the shortest latency and the longest latency hardly matters for the purpose of a human putting words on a screen. There's so many other cognitive activities going on that I can't imagine someone would notice unless they're specifically looking for it and nothing else. Really, does anyone complain about this stuff? even world-class touch typists?
Of course there is some threshold of latency when it starts to become noticeable/disconcerting for some fraction of people and probably quite a lot more when half of all typists start to notice. Anyone can definitely feel this on terminals connected to very remote machines.
This is an interesting topic and I am glad that Mr Luu is looking into it. The measurements described are absolutely a valuable first step towards understanding what latency really means for human factors in typing (first, what is the f-ing latency). I expect the folks at Logitech and other places have significant proprietary research on this stuff, but maybe not?
I can see how high latency is acceptable if you're only putting in new characters, but as soon as I have to fix a typo or anything more complex, I get really mad really fast if the latency is anything beyond (ballpark estimate) 250 ms.
>On the other hand, if you told me to strike a key less than 100 ms after some visual stimuli
If it were an audio cue, I'd point you to "Love Live School Idol Festival", a rhythm game whose "perfect" judgement window is some 32ms. Decent players will get perfect most of the time.
The more obvious example is to watch a video encoded at 10 frames per second followed by a 30 fps video. Does it look different? If yes, something's wrong with the statement.
There are plenty of examples on YouTube if anyone's interested.
That's not a fair comparison, because we can distinguish very high frame rates (eg. 1000fps vs 3000fps) by watching how the perceived image changes in reaction to eye movement. Consider an LED strobing at 100Hz. This is above the flicker fusion threshold, so with motionless eyes it will appear steady, but if you flick your eyes from side to side you will see multiple images (phantom array effect). This is limited to some extent by saccadic masking, but never perfectly, and saccadic masking strength varies from person to person.
Similarly, limited frame rate makes it impossible to keep the image at exactly the same place on the retina during smooth pursuit eye tracking, causing perceived blur that wouldn't be seen with a real life moving object.
Seeing a difference with 1000fps video doesn't mean we have 1ms temporal resolution perception, it just means some test signals allow conversion of temporal information to spatial information by eye movement. It doesn't generalize to arbitrary visual events. To investigate keyboard latency sensitivity properly there's no substitute for ABX testing of actual keyboards.
Even the difference between 30 and 60 is pretty clear to most. I find watching movies at 60fps reeeally weird, it feels too real... I never actively looked for the video framerate before, I just notice it, it's that obvious.
I, on the other hand, having grown up mostly playing games at 60 FPS (the PS2 ran the games I played at 60 FPS, back when console games prioritized gameplay over looking pretty in screenshots), and having moved on to 144 FPS on PC, find the 24 FPS used in cinemas agonizingly choppy. Especially when there's a big sweeping panoramic shot. It sometimes takes me out of the experience. I really liked the 48 FPS when watching the Hobbit movies.
I've grown up with FPS games too, so it's not like I'm not used to 60FPS... I'm just saying that from a film-watching perspective, it starts feeling game-like. There have been a number of articles dedicated to this perception; the general preference (whether you are consciously aware of it or not) is for <60FPS in film. The reasoning is that it gives this sort of dreamy state of perception, much like an animation: it relies on your subconscious to fill in the gaps, and makes a film feel more like a story than a documentary. On the other hand I agree that in some scenes this doesn't work well; fast-moving wide-angle shots of landscapes look awful. It will be interesting to see if film makers care enough about this detail in the future to play with it, perhaps as a newly adjustable dimension of film making (variable frame rate). So long as you don't consciously perceive the low frame rate (i.e. it's matched to the movement of the scene) I think 24fps is OK, and preferable when it somehow activates that visually creative part of your perception. This would even be compatible with existing tech if you simply chose a high frame rate which is a multiple of the various frame rates you want to use for different scenes.
Absolutely. In a past life I was a hardcore Quake 1 gamer... which, ahead of its time, had a way to adjust 'ping' for fairness. Anything above 15 or 20 was noticeable... at 100 it was awful.
As a pianist, latency of 20ms makes an electronic keyboard instrument unplayable for me. I wasted lots of time messing around with different drivers for soft synths trying to get below that number. 100 ms is a lot. In audio, an 8ms delay has an effect you can clearly hear, and can be used to "widen" tracks or simulate stereo from mono sound sources, since 8ms corresponds roughly to the amount of time it takes a sound wave to travel from one side of your head to the other.
If you're trying this, note that libvte based terminals are capped at roughly 40fps, which adds serious jitter to the delay. xterm doesn't have this problem.
>Not to mention that 100 ms is musically a 16th note at 150 bpm. Being off by a 16th note even at that speed is – especially for percussive instruments – obvious.
There's a difference between a constant, consistent lag (a fixed delay) and variation in lag (variance(lag) > 0). If one note is a 16th note off, that's lag variance.
IMO, a better measure would be from activation of the switch, as opposed to the beginning of key travel. I don't start waiting for the character to appear on my screen from the moment I begin to press down. My anticipation begins when I feel the tactile feedback of the switch activating (or the switch bottoming out on switches that don't offer tactile feedback). On a keyboard with good tactile feedback, I might not move the switch through its full travel -- just a little bit above and below the activation and reset points.
I immediately switched to skeptic mode when I saw no actuation points or normalization for travel distance. All this is showing is that to move further it takes longer. Duh. Why even bother pointing out the polling rate? That's like setting up runners at random places on the racetrack to measure who's faster. The guy/gal that starts closest to the finish line isn't necessarily faster.
I'm using Cherry MX Speed switches, which have a 1.2 mm actuation point and 45 g actuation force. There are no published numbers for the Apple keyboards and the closest I could find was ~0.75 mm distance and ~60 g force. Cherry MX Reds, the most common mechanical switches, have a 2 mm distance and 45 g force. (I mention the actuation force because there is probably a slightly longer delay for higher actuation forces due to deforming the fingertip and motor unit recruitment, both of which are probably minimal.)
On top of it, the author seems to be making a statement about speed being the deciding factor for gaming keyboards. If that was the case, they'd all be using ultra-low actuation switches, which they aren't. In most cases, a 'gaming keyboard' is just a mechanical keyboard with aggressive styling, higher quality components (less plastic, braided cable), macro keys, multi-key rollover, USB or audio passthrough, and backlighting.
Then he uses 1 membrane gaming keyboard and 1 no name import that I can't even find the manufacturer for as his only gaming keyboards. There are no sample sizes reported, he just does his best to hit two keys at once, and uses a camera for determining when the key press starts.
Every step of the process has huge flaws and he doesn't even use a reasonable set of devices. Sorry to sound harsh, but I trust zero of the conclusions/interpretations of the data.
> I immediately switched to skeptic mode when I saw no actuation points or normalization for travel distance. All this is showing is that to move further it takes longer.
I thought his explanation (measuring delay from the start of the key's motion) was a good one. Unless someone is starting with a key partially depressed/near the actuation point (which you certainly could do), the key travel time is going to be a part of the perceived delay. Seems fair to me, given that delay and perception are the point of the post.
Now, if you think that gamers are starting with partially-depressed keys and therefore will actually experience less of a keystroke delay, that would be a valid counterargument.
> Every step of the process has huge flaws and he doesn't even use a reasonable set of devices.
I don't think this is being portrayed as an especially accurate study; the author includes plenty of caveats.
But that's not how people game. When I play FPS on my MX Blue keyboard I don't make my fingers sit above the keys, but into the keys, depressing them almost to the activation point. I'd guess people on linear switches don't do this, but they also probably slam their keys faster.
The "gaming" keyboards he used were also pretty shoddy. Now, I'm not necessarily saying gaming keyboards offer better latency, because that's not why I got mine, but his methods seem weird.
Your criticism is harsh: you point out one (debatable) issue, key travel, with a certain amount of rationale, and suddenly conclude that ‘every step’ has flaws. Point out those flaws!
Furthermore, the OP is the first one doing real measurements of keyboard latency, so perhaps you should trust the article to hint at a previously unnoticed problem.
Anyways, your comment is totally missing the point and makes generalizations based on one debatable measurement error.
'g' is gram-force. It's the weight required to "click" the switch past its actuation point. One gram-force is equivalent to 0.980665 centinewtons, so for most measurements of keyboard switches, g and cN are used interchangeably.
That's fine though. It's about the experience end-to-end. I love the apple keyboards. I still type faster on them than other keyboards, including high-end mechanical keyboards. The short keys and actuation feel responsive, yet I've never felt like I've accidentally triggered a key.
That's fine, but then the study should be called Key Size Comparisons rather than Keyboard Latency, since it appears that the length of time the key takes to travel dominates the "latency" measurement.
Key travel time is itself a form of latency, distinct from throughput: it doesn't count the time you take to move your fingers between keys, you can start pressing one key while the previous key is on the way down (as long as it's not the same key), etc. But even aside from that, if key travel time is the biggest contributor to a keyboard's overall input latency, then that's what it is. There's no real way to avoid it (except maybe pressing the keys harder?), so it doesn't make sense to exclude it just because it's mechanical rather than electronic.
Truth be told, I also prefer scissor switch keyboards, but there aren't any on the market that support the features I'd like:
- configurable keys & layers on a hardware level
- 60% layout
- NKRO
Apple's magic keyboard would be great if it supported configuration/NKRO and didn't cost so outrageously much.
Agreed, and I don't understand the hate many people (seem to) have for them - the most ergonomic keyboards are those that require the least amount of repetitive motion. Not that there isn't an adjustment period, but some seem to pull out their Jump To Conclusion mats a little early.
"the most ergonomic keyboards are those that require the least amount of repetitive motion"
Not sure I can agree with that. I was using one of the Apple keyboards for quite a long time while running liveops on an online game (which meant rapid response whenever incidents happened, jumping into shells and constantly typing like my life depended on it), and developed fairly severe wrist problems within a couple months. To the point where I almost had to leave the job on disability, per my doctor's orders.
On the recommendation of a friend, before taking that step I tried switching to a Kinesis Advantage for the better wrist alignment and less "hard" keystroke bottom as compared to the chiclets on the Apple keyboard. It was a bit of a learning curve (~2 weeks to get up to full typing speed when using it every day, a couple more to push past it), but at the end of it my wrists got better almost instantly, and I never had problems again. My typing speed went up as well.
I'm not sure how much of that improvement is due to the better alignment of keys (there's almost no hand movement when typing, and your wrists are always neutral) and how much is due to the bigger key action and softer bottom, to be fair.
Edit: it's worth mentioning that I'm also a boxer and a piano player, so my wrists take a lot of abuse on a regular basis. YMMV and if you're not having problems, I definitely found the Apple keyboards to be very easy and fast to use.
About 18 years ago I started typing on a Microsoft Natural Keyboard Elite. That helped a lot with my wrists. Since then I have upgraded a few times but always the Natural range of keyboards from Microsoft. They are a lot cheaper than the Kinesis Advantage and easier to learn I guess. I would love to try a Kinesis though but don't dare for that price :-)
The Advantage is certainly an investment, but I found it worthwhile. They last for eons, they're built like a tank. If you have significant wrist problems you might be able to talk your employer into paying for one, I know a couple people who have done that.
The Natural keyboards are good, but IMO the Kinesis is a worthwhile upgrade for the mechanical switches, programmability and durability.
Ditto, after a bad strain injury on my wrist I bought a Kinesis. I don't think I could really use a computer without it. For me though it's mostly wrist angle; while I love the tactile mechanical switches I don't think the impact is a huge factor unless you are prone to pressing very hard on a scissor switch or a linear switch.
I seem to recall the old IBM keyboard was designed the way it was to provide tactile and auditory feedback that the input had been registered before the key bottomed out. This was to reduce long-term strain injuries. After all, the primary use of a computer back then was writing, be it code or reports.
There's essentially no range of motion in a MBP keyboard anymore: there is no difference between not pressing the key, actuating the key, and bottoming out the key. Some people (such as yourself) seem to prefer it, but to me it feels much rougher than comparable keyboards, like typing on a hard plastic surface. There is no give or play in the keystroke. Compared to a Lenovo T460p's keyboard (the other machine I own), the Lenovo is a lot "softer" to type on. (And quieter, too.) Mostly, I dislike the feel.
Now, my wrists also hurt. Now, when I'm not travelling, I was using an external Apple keyboard, not the MBP's primarily, so I'm hesitant to blame the MBP's keyboard directly. The external Apple keyboard has a greater keystroke distance from start to bottoming out, but still has like zero distance between actuation and bottoming out. (For some keystrokes, particularly ^+Tab and ^+Shift+Tab, I find it hard to keep control actuated, despite it being fully depressed; it just seems to require a lot of pressure to keep things in electrical contact). The pain in my wrists didn't start until I started using Apple keyboards, and has mostly stopped since I've replaced them. (I now primarily use an ErgoDox EZ, primarily for the split/ability to independently position my hands. I had tried, and had similar success w/ a Kinesis Freestyle, but I didn't own it.)
(I used a MBP keyboard for ~4yrs, with an ergo keyboard of some kind for when I'm not travelling for ~half of that. I've used Thinkpad's of varying models for ~10 years. Now, I could just be getting old, but replacing the Apple keyboard is what made the difference thus far for me. It doesn't make a lot of sense to me, admittedly: my wrists are, I feel, in the same bad position on the Lenovo as they are on the MBP/Apple keyboard.)
Because they're talking about USB keyboards they're probably talking about the chiclet Apple "Pro" USB keyboard, not the new MBP's built-in keyboard. Which, I agree -- I've only typed on the actual laptop itself a handful of times, but man, those key movements are SHORT and I'm not sure I really like it. Time will tell, though -- I originally felt the same way about the chiclet keyboards I love now.
Not all repetitive motions are equal. It's far more complex than that. The relative angles your joints are held at during the motion, rotations, etc, are far more important than just some simple metric around number of repetitions or distance travelled. Saying less motion is better is like saying lifting less weight is better. You'll sooner fuck up your back lifting 40 pounds incorrectly than 80 pounds correctly.
you don't need to bottom out regular mechanical keyboards, which means you're not constantly slamming your finger against a hard surface. You do need to bottom out scissor switches.
Same -- I like doing speed typing on typeracer, and after going through mech keyboards and different switch types for a few years, I'm consistently fastest on a non-mechanical ThinkPad USB chiclet keyboard.
I've recently got rid of several extremely expensive mechanical keyboards to go back to a new ThinkPad chiclet. I am somewhat embarrassed to say it's been fantastic.
> Contrast this to the Apple keyboard measured, where the key travel is so short that it can’t be captured with a 240 fps camera, indicating that the key travel time is < 4ms.
Oops, yeah I just got to that part. My fault for not reading past the conclusion before posting.
Still, I guess in a way, timing it from the start of a key press is the most real measure of actual latency when you want to press a key. It's just got nothing to do with the processor etc.
Sounds like a good argument for the superiority of shallow keys. I've always liked Apple keyboards - actually used their wired USB keyboard for a long time specifically because it felt like typing on a Powerbook.
Superior is a strong word; if your preference is to bottom out flat keys then that's all well and good. I personally type like I am playing an instrument, pressing just enough to activate and then letting the key push my finger back up. I find no enjoyment typing on an Apple keyboard, which I am doing now, and it also causes me joint pain from all the constant bottoming out.
This is anecdotal so YMMV. I have a MacBook Pro 2017 and the shallow key travel makes it feel more accurate than the one on my previous MacBook Pro 2015 with longer key travel. Also, I feel like I am typing faster.
Presumably he measured it the way he did because it's easier to measure, rather than because he thinks it's the more meaningful way.
(Though maybe his way is more useful if he's interested in whether a keyboard gives an advantage in gaming, rather than whether it's pleasant to type with.)
The article specifically states the reasons why he measured the whole travel; no guesswork needed.
> This is because, as a human, you don’t activate the switch, you press the key. A measurement that starts from switch activation time misses this large component of latency.
If you game on a mechanical keyboard, you've pretty much worked out exactly where the switch points are and will be working the keys in a fashion different to normal typing. So I'm not quite buying the way the measurements were made vs the purpose of the tests.
Do you really press every possible key down almost to the switch point, and then when you want to actuate the key, press it a tiny bit further?
I would guess (and maybe I'm wrong about this) that you rest your fingers on the keycaps, and press them in as far as needed to actuate, but no further. The key still needs to travel.
I'm using Romer Gs now (G910 at home for a while, and recently the G413 at work), making the switch from MX Browns as I found them too heavy (programming RSI).
Not sure how much I press them down (will try to analyse at next opportunity), but whenever I play a FPS (which is admittedly nowhere as often as I'd like these days) I certainly "preload" the fingers on the WASD keys in such a manner that the press is far quicker than when I touch type.
Precision and responsiveness is far better than using a chiclet type keyboard such as the MS Sculpt, which compares a bit to the Apple Macbook keyboards from memory (although the new Macbook Pro has even shorter travel again). The Sculpt (or MBP) has much shorter overall travel compared to the Romers, but the feel of when you are off/on really doesn't compare; the keys feel dead in comparison for gaming. YMMV.
That's effectively providing a definition of the latency he's discussing, not an explanation of why that's the sort of latency he finds interesting.
But I see he does go on to say that he cares about game performance rather than typing experience:
«
If, for example, you’re playing a game and start dodging when you see something happen, you have to pay the cost of the key movement, which is different for different keyboards.
»
For me, I don't care about the time after switch activation, rather about time after the tactile feedback (the "click"). Ideally the character would appear on the screen at the same time as the click; not after, and not before. If a keyboard can 'cheat' and activate the switch before that and hide some latency, that's fine by me.
Seasoned gamers preload keys they are anticipating to use. On my keyboard I have less than a millimeter of travel from the preloaded point I use (which is right in front of the tactile bump and is quickly trained) to actuation.
In tactile switches the bump and the making of the contact are mechanically connected.
Using the moment of finger/key contact quite obviously selects for travel, among other things.
>In tactile switches the bump and the making of the contact are mechanically connected.
Nope! This is rarely (if ever?) the case. In alps switches, for example, there are two totally separate leafs, one of which handles the tactile feeling and the other of which is responsible for the actual actuation. If you browse through Haata's Plotly[1] you can see that many switches actuate well after the tactile bump. Though they are often pretty closely related in terms of their depth in the keypress, they are wholly unrelated from one another mechanically.
False. You can bend the leaf of a cherry MX switch into all sorts of wild shapes to move the tactile event up and down the press, but the actuation will stay in largely the same place. If you browse the force curves from the link I posted above, you can see that some switches (cherry MX Brown, for example) actuate well after the tactile event.
The tactile event on a Cherry MX Brown is ~1mm into the travel distance, and the actual actuation is ~2mm in. Kaihua Box Orange switches (still an MX-style switch) are an even better example of that. Kaihua Speed Bronze has the actuation point inside the tactile bump instead of after it. I can't find any examples of switches that actuate _before_ the tactile bump (mostly because why would anyone design that?), but tactility and actuation are not inherently tied together in Cherry MX switches, either.
They are both handled by a two-part leaf, which you can sort of see in some of the pictures on Deskthority[1]. The slider has two legs whose profile determines the tactility (or lack thereof in the case of linear switches); they slide linearly up and down the top leaf, flexing it until it makes contact with the bottom leaf. That contact causes the actuation. All of the tactility is determined by the shape of the slider legs.
So you are saying that what makes the tactility (the slider moving on the spring) and that what makes the contact (the slider moving the spring until it touches some other metal) are the same, which is exactly what I said ("mechanically connected").
How the making or breaking of the contact is related in terms of travel to the key press force doesn't have much to do with that.
The point I made was simply that on other kinds of keyboards the two are not related. On a rubber mat keyboard you can keep the dome depressed yet not actuate, for example. The collapse of the dome is also harder to control than the resistance against the spring. That makes preloading harder.
If you want a clear point, then why not use the moment it bottoms out. So when key travel ends, instead of when key travel starts. In most cases, it'll be much closer to the actual activation. If you want to be even more thorough, publish both times (which you can get from a single video).
A weird result with the Das 3 keyboard (25ms) considering it uses a cherry MX switch, same as the Kinesis (55ms).
The OLKB and Ergodox don't state the switches used, but it's almost certainly a cherry MX style switch with non-modified actuation points compared to the original.
> Note that, unlike the other measurement I was able to find online, this measurement was from the start of the keypress instead of the switch activation. This is because, as a human, you don’t activate the switch, you press the key.
He mentions the fact that some keys activate mid-travel, and that the very short travel is part of what makes the Apple keyboard so fast.
Actual activation might be hard to get, but when the key bottoms out is probably the best way to go. Or even better, why not post both when key travel starts and when it ends. If people are really curious, they can model a linear scale and use the key response plot to guess exactly where the activation happens.
> when the key bottoms out is probably the best way to go
I disagree. One of the greatest benefits of mechanical keyboards is that they actuate before bottoming out. With some practice you can learn to type without bottoming out, which can greatly increase your typing speed, and decrease the amount of stress on your fingers and wrists.
Sure, but the start and end values are the easiest ones to measure. As I alluded to, once you have the two extremes, you can use the activation plot [1] to get the exact activation time. For the case of Cherry MX Blue, it would be at 62.5% between the start and end.
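In other words, assuming the key moves at roughly constant speed through its travel (a simplification), the estimate is just a linear interpolation; the 62.5% figure below is the one quoted above, and other switches would substitute their own actuation-depth fraction:

```python
def estimated_actuation_time(t_start: float, t_bottom: float,
                             actuation_fraction: float) -> float:
    """Linear interpolation between start of key travel and bottom-out.

    Assumes constant key velocity, which is only an approximation."""
    return t_start + actuation_fraction * (t_bottom - t_start)

# Example: key starts moving at t=0 ms, bottoms out at t=16 ms,
# switch actuates ~62.5% of the way through its travel.
print(estimated_actuation_time(0.0, 16.0, 0.625))  # -> 10.0 ms
```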
True gaming keyboards have mechanical switches which activate on first key press, not on full key down. Bit disappointed no mechanical keyboard was tested as they are perceived to be the ultimate keyboards when speed matters.
Mechanical key switches activate differently depending on the switch. Most with feedback activate at the feedback, i.e. when you hear/feel the click. So it isn't first press or full key down, but somewhere in the middle.
The most common key switch for gaming oriented mechanical keyboards is probably the Cherry MX Red. Its actuation point is around 50% of the way through the key travel.
Buckling spring switches, membrane domes, topre (membrane dome + spring), etc. all have actuation points somewhere in the middle of the key travel, not at the start or end.
Compare with musical keyboards where not only key number but also the velocity matters, so each key has two electrical contacts, and the whole thing is usually scanned around 10kHz for proper velocity measurement. Although key contacts are arranged in a diode matrix, the latency is usually below 2 ms, even with good old MIDI.
So neither the keyboard matrix nor the debouncing justify a latency of 10 ms or above.
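For the curious, here's a rough sketch of how that dual-contact scheme is commonly used: the interval between the first and second contact closing maps to a MIDI velocity. The bounds and the linear mapping are illustrative assumptions on my part, not any particular instrument's firmware:

```python
def velocity_from_contact_times(t_first_ms: float, t_second_ms: float,
                                fastest_ms: float = 1.0,
                                slowest_ms: float = 40.0) -> int:
    """Map the time between the two key contacts closing to MIDI velocity 1-127.

    A shorter interval means the key was struck harder. The fastest/slowest
    bounds and the linear mapping are illustrative; real instruments use
    tuned, usually non-linear, velocity curves."""
    dt = max(fastest_ms, min(slowest_ms, t_second_ms - t_first_ms))
    fraction = (slowest_ms - dt) / (slowest_ms - fastest_ms)   # 0 = soft, 1 = hard
    return max(1, min(127, round(1 + fraction * 126)))

# With a ~10 kHz scan (0.1 ms resolution), the timestamps are precise enough
# for a smooth velocity curve; e.g. contacts closing 3.0 ms apart:
print(velocity_from_contact_times(0.0, 3.0))
```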
It is nice to see that latency is an issue in gaming now, similar to realtime audio, where most operating systems are still not very usable - with the exception of OS X.
Now that would be fun! I think it would be possible to achieve a reasonable typing speed given the right "keyboard layout" (aka the QWERTY equivalent for musical keyboards).
I've done a simple test with my MacBook Air, using the app "Is It Snappy", as mentioned in the comments. I used the keyboard on my MacBook, which is a chiclet-style mechanism. The tests were all done in triplicate except for TextEdit (only twice).
TextEdit: ~81 ms from key depression to when LCD pixels begin to shift. For Sublime Text: ~86 ms. Both of these text editors are pretty well matched.
However, if you boot into Recovery Mode in macOS, you gain access to a GUI without a V-synced/tripled framebuffer. Terminal in Recovery Mode: ~57 ms
Finally, the kicker. macOS users have always had access to a terminal interface upon boot called single-user mode:
Single User Mode: 37.5ms (all three measurements were the same)
If you want to experience what the latency on old computers felt like while typing, simply reboot your mac, and while it's starting up, hold 'Command + S'!
It actually does seem a little faster, which is sad. I would like to see how much normal userspace can be optimised, and where the delay occurs:
1. between the keyboard and CPU
2. in the input subsystem
3. in the app / system libraries
4. in the display subsystem
5. between the CPU and the screen
This is incredible. Single-user mode feels like the screen updates BEFORE I pressed the keys, like it is time travelling. I guess I've gotten used to the high latency on newer computers.
Or if you're on Linux (Debian/Ubuntu), switch to one of the default virtual terminals with Ctrl-Alt-F1 (and Ctrl-Alt-F7 to switch back to your X server console on my machine) to experience the delight of low latency keyboard-to-screen typing.
I have something interesting about USB input latency to share.
In 2014 I built a new computer with a i5-4590 CPU, ASUS Z97-A motherboard and 8GB of Kingston DDR3-1866 memory.
One thing I instantly noticed was that my mouse (Logitech G400) would feel "delayed" compared to my older computer from 2010 (i3-550, Intel DH55HC motherboard). I could have the same OS (tested Win7 and Ubuntu), GPU and monitor between both computers and switch, and the difference was that obvious.
Another problem I had was DPC latency under Windows with the driver "usbport.sys". USB audio would drop out in correlation with DPC latency spikes, and I believe it was related to the mouse latency. Under Ubuntu I had the same problem, and dmesg logged a message saying "retire_capture_urb: x callbacks suppressed", with the same symptoms.
I got so frustrated I stopped using the new desktop and went back to my old desktop and a new laptop for about 2 years, after wasting my time running Prime95 and MemTest86 for over 24 hours and swapping the motherboard with another model that let me disable HPET (Supermicro C7Z87-O).
In the end, last year I decided to work on it again and swapped the memory for some Corsair 2x4GB DDR3-1333. Just like that, the DPC lag spikes were gone, USB audio stopped dropping out, and my mouse stopped lagging.
It's an iOS video app for measuring latency. The page has this about Apple keyboards: "This one’s bizarre: the onboard keyboards on both the 2015 Macbook Pro and Macbook Air have worse latency than an external keyboard. Plugging in an external wireless keyboard improves response latency by 40 ms."
The MacBook Pro 2015 doesn't use a USB connection for the internal keyboard when running macOS. Instead it uses SPI. Unlike the newer MacBook Pros, it still has the keyboard wired to USB, but doesn't use it, for power-saving reasons.
The issue not mentioned in the article is the number of lines on the keyboard matrix used to detect keypresses. Cheaper non-gaming keyboards can only detect up to 4 simultaneous keystrokes; gaming keyboards can detect up to 6. That may not sound like a significant difference, but if you're moving diagonally (e.g. W and D), running (Shift), holding an item (or use) (E) and jumping (space), that is 5 keys which need to be processed. After moving from my older gaming keyboard to a generic Logitech keyboard, I was no longer able to run diagonally in FIFA games while doing trick moves. So the non-gaming keyboard made me stop playing that game.
Computers are fast. 100 ms is eons for things that just run on the CPU: at 2 GHz, that's 200 million clock cycles. That's hundreds of millions of little steps that can happen in the blink of an eye.
The issue is that there are a bunch of steps that are polled (USB), or that require many context switches and interactions with the scheduler. All of this adds significant latency, regardless of the throughput of the CPU.
There are two "scan" rates: the rate at whitch keyboard matrix is scanned by whatever electronics is inside keyboard (that is probably independent of the outside interface) and the rate at which the interface is able to process input events, which for PS/2 means faster that the keyboard can produce them (as keyboard is essentially bus master on the AT/PS2 keyboard "bus") and for USB means as fast as the *HCI pools keyboard for interrupt events (old Apple's ADB works the same way and is in fact to some extent inspiration for USB <3.0 and Firewire "one big bus" high level model).
Edit: then there is a third approach: simple keyboard/input devices with serial interfaces, where a host-provided clock is used to clock both the interface and the keyboard scan logic. In essence, the whole keyboard then looks like one big shift register. Keyboards that work this way include the original Macintosh, most Wyse and DEC terminals, and the MIT/LMI/Symbolics Lisp machines (and probably pre-sun4 Suns; sun4 and later have an RS-232-derived interface). It's also how controllers for Nintendo consoles before the Wii work (IIRC Nintendo calls that the "EXI bus") and how PlayStation 1/2 controllers work. IMHO this is to some extent where the idea behind SPI comes from.
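As a rough illustration of the first of those scan loops (the controller-side matrix scan with per-key debouncing), here's a minimal Python sketch; `drive_row`, `read_columns` and `emit` stand in for hypothetical hardware hooks, and the matrix size and debounce window are only illustrative:

```python
ROWS, COLS = 8, 16          # illustrative matrix size
DEBOUNCE_SCANS = 5          # reading must differ for 5 consecutive scans to count

state = [[0] * COLS for _ in range(ROWS)]     # debounced key state
counter = [[0] * COLS for _ in range(ROWS)]   # scans the raw reading has differed

def scan_once(drive_row, read_columns, emit):
    """One pass over the matrix: drive each row, read all columns,
    debounce, and emit make/break events for keys whose state changed."""
    for r in range(ROWS):
        drive_row(r)                   # select one row of the matrix
        cols = read_columns()          # bitmask of closed switches on that row
        for c in range(COLS):
            raw = (cols >> c) & 1
            if raw != state[r][c]:
                counter[r][c] += 1
                if counter[r][c] >= DEBOUNCE_SCANS:
                    state[r][c] = raw
                    counter[r][c] = 0
                    emit(r, c, pressed=bool(raw))
            else:
                counter[r][c] = 0
```

The faster this loop runs (and the shorter the debounce window), the less latency the keyboard itself adds before anything even reaches the PS/2 or USB interface.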
Most importantly, it removes the penalty of a 1000 Hz polling input, which can have an adverse effect on the CPU, whilst providing a better (yet insignificant) result.
I've gotten an FTDI down to 2 ms RTT, so if done right (using isochronous transfers) you can get USB down to 1 or at most 2 ms. Looking at the latencies quoted here and in the article, it's certainly not the problem.
Notice that just getting a thread scheduled every 2 ms is already impossible on Windows, certainly on a machine running a game. You'll get a bunch of outliers every second. So even if you got your keyboard down to 5 ms, great, but you are not running an operating system that can reliably do something within that timespan!
>So even if you got your keyboard down to 5 ms, great, but you are not running an operating system that can reliably do something within that timespan!
Maybe I'm not understanding, but that doesn't seem correct.
Most people (gamers) are running mice at 500/1000Hz polling rates and you can easily verify the movement made in each 1-2ms update. (And it is most definitely a noticeable difference going from a standard 125Hz rate to even 500Hz.)
Don't confuse throughput and latency. The mouse may be measuring and sending data every 1-2 ms, but that doesn't say anything about the latency before the data is handled.
This is also a pretty important point: most monitors refresh at 60 Hz anyway, even if your game seems to be measuring much higher framerates, so there's a worst-case floor of ~17 ms on visuals lagging behind input just because you're waiting for the next screen refresh.
The mouse measuring data every 1-2ms will increase the quality of the motion tracking, but it won't necessarily help you with latency unless the data gets to the game fast and the game handles the data quickly.
True, though these days 144 Hz monitors are everywhere (personally I used a CRT until LCDs were capable of 120 Hz; not sure I could ever go back to 60). It makes a big difference, I think, when you're down to a 7 ms window. Using a mouse at 125 Hz on a modern screen feels really janky.
I'm unaware of any pro gamer (or streamer) that's using PS/2, and that's about as serious a gamer as you get. Most are using a late model mechanical keyboard connected via USB, often with lots of RGB just because.
I think the only PS/2 holdouts you'll find are the old Korean Starcraft players that are still using a Qsenn DT-35.
PS/2 supports lots of RGB. I'm using one, because USB costs CPU whereas PS/2 does not, and PS/2 gives NKRO and lower latency, however insignificant they are.
It's entirely a preference based thing, the most relevant factor would be whether or not you need to hot swap it.
Pro gamers are paid to use whatever their sponsors want them to use.
And if their new fancy keyboard is good enough not to negatively affect their game, I don't see why they would turn down the money.
That's tangential though. Whether or not you have skipped frames, G-Sync/FreeSync tend to reduce input lag, because the panel's refresh events will be timed such that they're closer, on average, to the times when new frames become available.
Most competitive players will have their own configs for competitive play etc. I have tournament/LAN configs, online play configs, and my video settings mostly stay the same. All tuned for lowest latency and highest visibility.
Pro gamers in highly competitive, reaction-based games (such as most first-person shooters) usually disable it, yeah.
And I'm not an expert in this either, but I'm pretty sure double buffering is the norm, whether you use V-Sync or not.
One buffer in which your graphics card works on frames (I'll call this "graphics card buffer") and then a buffer into which finished frames are moved, so that your screen can read them out in peace (I'll call this "pre-screen buffer").
There's also "triple buffering", which introduces another one of those graphics card buffers, so that when a finished frame is being transferred from that first graphics card buffer to the pre-screen buffer, then your graphics card doesn't have to wait for that transfer to finish and can instead start working in the second graphics card buffer right away. (And then it transfers frames from those graphics card buffers in alternating fashion.)
So, the delay that V-Sync introduces is not that. What V-Sync does, is that instead of transferring finished frames from the graphics card buffer(s) into the pre-screen buffer as soon as the frame is finished, it waits with the transfer until your screen has finished with reading out the previous frame.
If you don't wait (have V-Sync disabled), your screen will read out some part of the previous frame and then read out the rest from the new frame. On the screen, you'll see this break between previous and new frame as screen tearing.
So, assuming your screen can display 120 frames per second and your graphics card happens to finish 120 frames in a second, then V-Sync will not introduce a delay. It'll wait once before transferring the first frame, but then they'll be in sync and no further delay should occur.
However, if your graphics card is able to calculate 240 frames per second (and your screen still does 120 frames per second), then with V-Sync, the graphics card buffer will be transferred into the pre-screen buffer only 120 times per second, making the graphics card slow down to 120 frames per second as well.
Without V-Sync, the graphics card buffer will be transferred 240 times per second, regardless of how often your screen can read it out from the pre-screen buffer. This means a frame will get loaded into the pre-screen buffer while your screen is reading it out. As a result, you'll get screen tearing: one half of your screen will be from the new frame, with a 1/240 s = 4.167 ms delay, while the other half is still from the previous frame, with a 2/240 s = 1/120 s = 8.33 ms delay.
So, a few numbers:
120 FPS with 120 Hz screen = 8.33 ms delay.
240 FPS with 240 Hz screen = 4.167 ms delay.
240 FPS with 120 Hz screen = ½8.33 ms + ½4.167 ms = 6.25 ms delay on average.
480 FPS with 120 Hz screen = ¼(1/480 s) + ¼(1/360 s) + ¼(1/240 s) + ¼(1/120 s) ≈ 4.34 ms delay on average.
960 FPS with 120 Hz screen = 2.83 ms delay on average.
960 FPS with 240 Hz screen = 2.17 ms delay on average.
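Those averages are easy to reproduce; here's a small script that just re-derives the arithmetic above under the same model (V-Sync off, FPS an integer multiple of the refresh rate, the screen split into bands whose contents have been waiting 1/FPS, 1/(FPS−Hz), ..., 1/Hz seconds). It's only a restatement of the numbers above, not a claim about any particular GPU:

```python
def avg_tearing_delay_ms(fps, hz):
    """Average age of the on-screen pixels, per the model above: with
    FPS = n * Hz and V-Sync off, the screen splits into n bands whose
    delays are 1/fps, 1/(fps - hz), ..., 1/hz seconds."""
    n = fps // hz
    delays = [1.0 / (fps - k * hz) for k in range(n)]
    return 1000.0 * sum(delays) / n

for fps, hz in [(120, 120), (240, 240), (240, 120), (480, 120), (960, 120), (960, 240)]:
    print(f"{fps} FPS on a {hz} Hz screen: {avg_tearing_delay_ms(fps, hz):.2f} ms")
```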
Most mechanical PS/2 keyboards are a diode cascade.
Pressing a key triggers a wave of electricity across the keyboard, which is converted into the PS/2 waveform and pumped out at the same rate as the incoming clock waveform.
Old PS/2 keyboards don't generally have internal digital microcontrollers; they're effectively analog, and the logic they do contain is blindingly simple. This is why some struggle with N-key rollover.
---
OFC this is if you are using an -old- PS/2 keyboard. I'm still using an 80s IBM Model M, and the above is roughly how it works.
This is absolutely untrue for any PS/2 keyboard, as well as for most pre-PS/2 PC keyboards (including AT and XT keyboards). All keyboards have always had a microcontroller in them which is responsible for scanning the key matrix and sending output over the serial link. (In PS/2, it's also responsible for reading input to change the state of the keyboard LEDs.) There is nothing analog about them whatsoever.
Here is a set of pictures of an IBM Model M, for instance. The microcontroller is clearly visible in the ceramic DIP40 package.
PS/2 and AT are also bidirectional protocols for controlling the LEDs, key repeat, and various other commands, which at a minimum requires some sort of digital state machine to manage the interface.
- I cannot find anything about "diode cascades" having anything to do with keyboards. The closest is the usual diode matrix to deal with simultaneous key presses.
- The PS/2 interface involves sending scan codes, parity bits, and start and stop bits. How exactly does a "wave of electricity" get converted into a PS/2 waveform without some digital electronics?
- All the old schematics for mechanical keyboards I could find were the usual matrix scan. Even a textbook from the 1970s.
Anyway, reference please. I'd find it very interesting to see how such a thing would work.
I find my typing accuracy is affected by keyboard latency.
An interesting way to test this would be to use a sensor to note the point where a key was definitely "headed down" (say > 10% of its total travel) and then wait for the message to show up.
In older keyboards the main CPU did some of the debouncing, so it knew "right away" that a key was pressed. That cuts down on latency as well.
Another test case might be to put a USB -> PS/2 adapter on the keyboard. This lets the onboard CPU know to send the keycode as soon as it has decoded and debounced it.
I'd really like to see more research done on this subject.
I clock about 15 keystrokes per second while typing full speed, and I find serious issues with accuracy using most contemporary keyboards, even on gaming and higher-end OEM equipment.
Edit: Some keyboards are also practically unusable due to dropped inputs above a certain rate.
Isn't a big part of the reason "gaming" keyboards cost so much because they can register more than three keys (sometimes four) being hit at the same time - something many cheaper keyboards cannot do due to the way they are wired?
Yes, n-key rollover sometimes comes up. In practice, I've only had it become an issue in the wild either when more than one player is sharing the keyboard or, in some rare games, primarily simulators, where you might need to hold multiple keys.
It's a frequent issue in FPS games. Simultaneously pressing four keys isn't particularly unusual in an FPS, but it confounds an awful lot of non-gaming keyboards.
I've had keyboards that were marketed as professional keyboards have very obvious problems with this. Specifically, I had a Logitech MX5500, and if you were holding down the 'D' key and then pressed the 'S' key, the keyboard would never send the 'S' key (though I might remember those keys backwards). This is a problem because many a video game has you use the 'WASD' keys as direction keys to control movement, so trying to move backwards and to the right would fail.
If you hold C for crouching, you can't move on some diagonals and switch weapons at the same time. Many games/gamers use shift or ctrl now for crouch to work around that.
You might be thinking of ghosting as opposed to n-key rollover. Of course keyboards that address one tend to address the other as well, but still, small distinction.
That said.. how are you holding C to crouch? I assume for WASD movement. I just tried that and it's contorting my hand in weird and uncomfortable ways.
I shifted my hand to the right so that my middle finger is on D, my ring finger moved forward and back (WS), and pinky for A.
That evolved from gaming on a laptop where I'd just rock my index finger back to hit C or shift my thumb forward and rock back on the space bar to jump. I got disturbingly good at Q3Arena on a Powerbook G3, yes, using the track pad. It inspired some bad ergonomic habits. haha
The experience of 100 ms latency is definitely noticeable. Ask anyone who's ever ssh'd to a server across a continent or ocean. Empirically, I notice I have a lot more typos in such a situation.
This great article and the other recent one from him started me down the path of thinking about an end-to-end optimized modern computer and operating environment.
If you _could_ (not saying it's easy or even necessarily practical at all) drop replace all of the slow stuff like high-latency keyboards, syscalls, CPU-GPU communication (i.e. effective HSA) and build some proof-of-concept operating environment (that was explicitly incompatible with existing stuff).. I think you could get a pretty fast computer. It would be fun to try.
Anyway is there a Cherry MX keyboard that has low travel and low latency overall?
Not sure about low latency (especially since this article seems to have failed to test that at all...), but for low-profile cherry-style switches you could look into the new low-profile Kailh switches, which are sort of based on the old Cherry ML switches. The only manufactured board you can buy with these right now is the Havit HV-KB390L[1], as far as I know.
If you are really tied to the cherry mx style you could check out the kailh and cherry "speed" switches, which have slightly shorter travel distance and a higher actuation point.
I'm not sure if there's a common switch with less travel, but you could look into using spacers and/or landing pads to accomplish what you're looking for.
It would be cool to look at Linux vs the BSDs on this.
BSDs (dragonfly and openbsd) do feel like they react faster to keyboard input, but I've always wondered whether it's actually faster or just imagined.
This article prompted the following questions for me:
1. How well did the historical systems do, and why? Presumably non-negligible key travel is not a modern invention.
2. How much of the latency is key travel? (i.e. what korethr said)
3. How exactly were the keys pressed/how fast were they pressed? All the article says about the experimental setup is "The start-of-input was measured by pressing two keys at once – one key on the keyboard and a button that was also connected to the logic analyzer."
1. I suspect it comes down to only a single program having free rein of the system at a time, and everything living in either RAM or ROM (no page faults etc.).
So.. I noticed the huge latency difference between tty1 (Alt+Ctrl+F1) and GNOME with X11. After some tests using the "Is It Snappy?" app and my Das Keyboard, I decided to check whether Wayland had similar latency problems. After switching to Wayland I cut 40 ms off my keyboard latency.. hmm. I don't know why that is, but I assume there's some sort of framebuffer latency. The only thing I miss now is Plank :[
I'm looking for a low-latency, NKRO, chiclet-style keyboard. Would freak out if it also had individual LED backlights. Build it, and I promise to successfully market it as a musical instrument.
P.s.: I currently use external Apple keyboards to perform & make music (with things like http://qwerkey.xyz).
I've been wondering about this for longer than I care to admit. I always felt the perceived responsiveness of my old SPARC 20 was much better than PCs I used 15 years later, and I thought it had to do with keyboard latency and interrupt processing.
> It’s possible to get down to the 50ms range in well optimized games with a fancy gaming setup, and there’s one particularly impressive consumer device that can easily get below that.
Oh. I guess I interpreted the passage I quoted as referring to a complete keypress-to-display latency, which wasn't tabulated in the article (vs. keypress-to-USB). And the Apple keyboard isn't that much of an outlier with the next-fastest keyboard coming in at 20ms. So I'm a bit let down but I'll accept it.
I'm using a Das Keyboard 4 on an iMac and did some informal tests, just by personal observation, of what kind of latency different programs had for cursor movement. iTerm seems to be significantly slower to move the cursor than Chrome or TextEdit.
Before seeing this article, I'd already thought that iTerm seemed slow to show text. It'd be nice to see a writeup on how long it takes for text to render after keyboard input in a few different programs, to get developers to use the same methods to keep this as quick as possible.
Completely missing the fact that enthusiast keyboards still prefer PS/2 for NKRO (n-key rollover) and hardware interrupt support, which results in sub-10 ms delays.
What's it with font-size recommendations getting larger and larger every year? When I started writing CSS for my websites, the standard font-size was 12px (sometimes 10 or 11).
You have to look at fingertip travel time, not keycap travel time, for a more complete picture.
Those cracker-thin chiclets that comprise the Apple Magic Keyboard will only make you faster if you rest your fingertips on the caps.
That may not work so well in all games; it depends on the controls. If you rest a finger on a certain key, but a situation requires you to use that finger to react with a different (though nearby) key, you're at a disadvantage compared to having the finger raised so that it can strike either key with equal speed.
I don't have a way to measure raw USB events, but it takes around 90 ms between me pressing a key on the MacBook's (2015 model) keyboard and a character showing up in TextEdit.app.
By coincidence I just replaced my last Apple Keyboard with an MS 4000 today. I find it curious that they are both at the top of the list, even though I've never considered hardware latency an issue.
I've been experimenting with alternative protocols to HID. I'm not satisfied with fixed-interval input polling out of sync with vsync; it takes way too many samples to reach the desired latency (and especially jitter!) numbers.
As for keyboards, there's no real excuse not to just have a Bigass™ shift register (or a couple) and address every keyswitch with a trace and an interrupt crossbar. There's a limit to how cheap a good-enough keyboard can ultimately be, so a difference of a few cents in BOM and assembly, and slightly more intricate membrane layouts is not worth crying over.
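For what it's worth, here's a minimal MicroPython-flavoured sketch of the shift-register idea (ignoring the interrupt crossbar): every keyswitch gets its own input on a 74HC165-style parallel-in/serial-out chain, so one latch-and-shift reads the whole keyboard with no matrix and no ghosting. The pin numbers, key count, and wiring are assumptions, not a description of any real board:

```python
from machine import Pin  # MicroPython GPIO API

N_KEYS = 128              # assumed number of switches on the register chain
latch = Pin(2, Pin.OUT)   # parallel-load / shift-select line of the registers
clk   = Pin(3, Pin.OUT)   # shift clock
data  = Pin(4, Pin.IN)    # serial data out of the last register in the chain

def read_all_keys():
    """Latch the current switch states, then clock out one bit per key."""
    latch.value(0)        # low: load all parallel inputs into the registers
    latch.value(1)        # high: switch back to shift mode
    bits = []
    for _ in range(N_KEYS):
        bits.append(data.value())
        clk.value(1)      # clock the next bit onto the serial output
        clk.value(0)
    return bits
```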
The methodology here is broken for non-linear long-travel keyswitches, for which the "point at which the key starts moving" is the wrong point to measure. For example, I'm typing this on buckling-spring keyswitches. The break for these switches is partway down the stroke; this is intentional and does not contribute to "latency", since the intent of depressing the key above the break is not to send the keypress at that moment, but to be ready to cross the break once the previous key has been pressed. This is very similar to how good handgun triggers typically function: a light, firm, and smooth uptake before a break, and hopefully with a reset very near the break. It communicates to the user how close they are to actuating the switch.
In the case of linear long-travel switches (i.e. Cherry MX Red style switches), it's harder to decide where to consider the key "pressed", but in the case of non-linear long-travel switches (buckling spring, Cherry MX Blue) it's clear that the break is when the keypress should be registered, not the beginning of the uptake. In the case of non-linear switches, they don't need to bottom out to register keypresses, so they can have exceptionally low latency. With sufficient spring stiffness, and a long enough heel, buckling spring keyswitches should take less time to actuate than scissor-dome switches.
Added:
Regarding the testing methodology, it seems like you could do better with a combination of a motor, an armature, and a small strain gauge (or a measurement of motor load). The motor will apply consistent force at consistent peak acceleration between tests if you input consistent voltage. For non-linear switches, you can measure the break time as a junction in the applied force across the strain gauge or in the motor load on one of the analog inputs on your logic analyzer; for linear switches, you can measure the start of actuation the same way. Motor load would work just fine, since you're not interested in the exact force exerted, and thus don't need to calibrate for the efficiency of the motor.
"It all started because I had this feeling that some old computers feel much more responsive than modern machines."
s/modern/recent/
"For example, an iMac G4 running macOS 9 or an Apple 2 both feel quicker than my 4.2 GHz Kaby Lake system."
Kaby Lake system running OS X? I have one of the last G4s that came with OS 9. Based on the intuition that it would make audio applications "feel" slower, I did not "upgrade" when OS X came out, despite the marketing at the time. I guess I am not the only one.
"It turns out the machines that feel quick are actually quick, much quicker than my modern computer - computers from the 70s and 80s commonly have keypress-to-screen-update latencies in the 30ms to 50ms range out of the box, whereas modern computers are usually in the 100ms to 200ms range."
"... where we are today, where can buy a system with the CPU that gives you the fastest single-threaded performance money can buy and get 6x the latency of a machine from the 70s."
Methinks when someone spends that kind of money on a system, they will never accept findings like these. They are not likely to respond "inquisitively" to someone who describes achieving better speeds with a "less powerful" system that costs a fraction of what they paid. More likely, they will try to discredit them.
I suspect the culprit is not the hardware, but the software.
Back in the day there was very little between the keyboard and the screen, and most of it was either in ROM or in RAM.
Never mind that unless one was dealing with big iron or similar, multitasking was a big nope.
These days most OSs have 10s to 100s of processes going right after a first boot on a clean install. And all those can trip something at "random" intervals.
Just the other day I replied to a Reddit thread, and told a gamer who experienced periodic frame drops in CS:GO to check the other system processes. He protested that he had nothing else running, but elsewhere in the thread a Task Manager screenshot revealed that Windows 10 Smartscreen Filter was chewing through files and bottlenecking the HDD(a problem I've also experienced).
Basically, desktop Windows has crossed the threshold of "needs SSD" to perform adequately. And in theory that shouldn't impact the performance of an older game, but it does, because what happens next is that the scheduler misallocates process time and throws off everything.
More like desktops in general, and smartphones as well even after the decade-long reboot, are running out of actual new stuff to cram in (in a sense the desktop, bar some security issues, was done with Windows 95).
Thus desktop developers keep adding questionable "quality of life" stuff like file system indexers to justify their continued version releases (never mind avoiding doing actual code maintenance).
Strike that. Obviously computers were far more expensive in decades past and prices have come down dramatically. What I meant is a first user with an older computer who achieves better performance, e.g. lower latency, than a second user who has borne the cost of acquiring the latest hardware and software (and who discarded her older computers).