
Hooray, with these changes, the tested setup finally manages to have a smaller median input latency (Console ~12 msec) than an Apple //e from 1983 (30 msec). It only took 41 years: https://www.extremetech.com/computing/261148-modern-computer... https://danluu.com/input-lag/

But wait! Not so fast (smile). This benchmark uses the compositor "raw Mutter 46.0", not GNOME Shell. Raw Mutter is "a very bare-bones environment that is only really meant for testing."

In addition, this measurement is not end-to-end because it does not include keyboard latency. In this test, the "board sends a key press over USB (for example, Space)". Latencies just within a keyboard can go up to 60msec by themselves: https://danluu.com/keyboard-latency/

What are the true end-to-end numbers for the default configuration, which is the only configuration that really matters? I wish the article had measured that. I suspect the numbers would be significantly worse.

I do congratulate the work of the GNOME team and the benchmarker here. Great job! But there are important unanswered questions.

Of course, the Apple //e used hardware acceleration and didn't deal with Unicode, so there are many differences. Also note that the Apple //e design was based on the older Apple ][, designed in the 1970s.

Still, it would be nice to return to the human responsiveness of machines 41+ years old.



I'd argue that it's actually a good thing that the author ignored keyboard latency. We all have different keyboards plugged into different USB interfaces plugged into different computers running different versions of different operating systems. Throw hubs and KVMs into the mix, too.

If the latency of those components varies wildly over the course of the test, it would introduce noise that reduces our ability to analyze the exact topic of the article - VTE's latency improvements.

Even if the latency of those components were perfectly consistent over the course of the author's test, then it wouldn't affect the results of the test in absolute terms, and the conclusion wouldn't change.

This exact situation is why differences in latency should never be expressed as percentages. There are several constants that can be normalized for a given sample set, but can't be normalized across an entire population of computer users. The author does a good job of avoiding that pitfall.

The Mutter thing is interesting. The author holds that constant, and GNOME sits on top of Mutter, so I think it's reasonable to assume we'd see the same absolute improvement in latency. GNOME may also introduce its own undesirable variance, just like keyboard latency. I'd be very curious to see if those guesses hold up.


> We all have different keyboards plugged into different USB interfaces plugged into different computers running different versions of different operating systems.

I've actually been curious about getting a wireless keyboard recently, but wondered about how big of a latency impact there would be. Normally I use IDEs that definitely add a bit of sluggishness into the mix by themselves, but something compounded on top of that would probably make it way more annoying.

A quick search led me to this site: https://www.rtings.com/keyboard

It does appear that they have a whole methodology for testing keyboards: https://www.rtings.com/keyboard/tests/latency

For what it's worth, it seems that well made keyboards don't have too much latency to them, even when wireless (though the ones that use Bluetooth are noticeably worse): https://www.rtings.com/keyboard/1-3-1/graph/23182/single-key...

Just found that interesting, felt like sharing. Might actually go for some wireless keyboard as my next one, if I find a good form factor. Unless that particular keyboard does something really well and most others just have way worse wireless hardware in them.


If you get a modern wireless keyboard with 2.4 GHz wireless and use that instead of Bluetooth, the latency can be great - like you saw.

Bluetooth is still crap though; avoid it if you care about latency. Or reliable connections.

Also be aware that not all keyboards poll at 1 kHz, even when they are capable of it. My old wired keyboard polled at 66 Hz.

My current wireless keyboard has a single-key latency of around 1.7 ms. Better than many wired ones!
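For intuition: a keypress lands at a uniformly random moment inside the polling interval, so on average it waits half the interval before the host even sees it. A rough back-of-the-envelope sketch:

```python
# Expected extra latency added by polling: the press arrives at a
# uniformly random point inside the interval, so the mean wait is
# half the interval.
def avg_polling_latency_ms(poll_hz: float) -> float:
    interval_ms = 1000.0 / poll_hz
    return interval_ms / 2.0

print(f"1000 Hz: {avg_polling_latency_ms(1000):.2f} ms extra on average")
print(f"  66 Hz: {avg_polling_latency_ms(66):.2f} ms extra on average")
```

So a 66 Hz keyboard adds roughly 7.6 ms on average (up to ~15 ms worst case) before anything else in the stack even starts.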


Hubs have no noticeable impact on latency.

https://www.youtube.com/watch?v=nbu3ySrRNVc explains it and has some statistics.


To those who downvoted this comment, can you please explain why?


Then use the Apple 2e and forgo all the niceties of modern operating systems. Honestly, this take is a whole lot of words to shit on all the open source devs working hard to provide things to us for free, and I’m not having it.


+100. It's my least favorite talking point because I'm old enough to have seen it linked 100 times, I find it very unlikely it was faster when measured by the same methods, and the article itself notes the funny math around CRTs.


Or even better: don't use Gnome. /s


The methodology of the linked keyboard latency article including the physical travel time of the key has always irked me a little


Yet as a lover of Cherry Reds and Blues, in my opinion that time should most definitely be included. I am not a gamer, but I do notice the difference when I'm on a Red keyboard and when I'm on a Blue keyboard.


My initial gut reaction to this was - yeah, of course. But after reading https://danluu.com/keyboard-latency/ - I'm not so sure. Why exactly should physical travel time not matter? If a keyboard has a particularly late switch, that _does_ affect the effective latency, does it not?

I can sort of see the argument for post actuation latency in some specific cases, but as a general rule, I'm struggling to come up with a reason to exclude delays due to physical design.


It's a personal choice of input mechanism that you can add to the measured number. Also, the activation point is extremely repeatable. You become fully aware of that activation point, so it shouldn't contribute to the perceived latency, since that activation point is where you see yourself as hitting the button. This is the reason I don't use mechanical keyboards; I can't activate the key in a reasonable time.


>This is the reason I don't use mechanical keyboards; I can't activate the key in a reasonable time.

From what I understand, non-mechanical keyboards need the key to bottom out to actuate, whereas mechanical switches have a separate actuation point and do not need to be fully pressed down. In other words mechanical switches activate earlier and more easily. What you said seems to imply something else entirely.


If you're comparing a mechanical key switch with 4mm travel to a low-profile rubber dome with 2mm or less of travel, the rubber dome will probably feel like it actuates sooner—especially if the mechanical switch is one of the varieties that doesn't provide a distinct bump at the actuation point.


No, I’m speaking only of travel required to activate the key. There’s still travel to the activation point for mechanical keyboards. I’ve yet to find a mechanical switch with an activation distance as small as, say, a MacBook (1 mm). Low-travel mechanical switches, like the Kailh Choc (as others have mentioned), are 1.3 mm. Something like a Cherry Red is 2 mm.


There are mechanical switches with near 1 mm travel, comparable to laptop keyboards. E.g. Kailh Choc switches have 1.3 mm travel.

(I would love to see scissor-action keys available to build custom keyboards, but I haven't seen any.)


We don’t all press keys in exactly the same way. How would you control for the human element?


Variance isn't a reason to simply ignore part of the problem.


A lot of modern keyboards allow you to swap out switches, which means switch latency is not inherently linked to a keyboard.

It also completely ignores ergonomics. A capacitive-touch keyboard would have near-zero switch latency, but be slower to use in practice due to the lack of tactile feedback. And if we're going down this rabbit hole, shouldn't we also include finger travel time? Maybe a smartphone touch screen is actually the "best" keyboard!


Latency isn't everything; but that doesn't mean it's irrelevant either. I'm OK with a metric that accurately represents latency with the caveat that feel or other factors may be more important. If key and/or switch design impacts latency in practice; shouldn't we measure that?

I guess that is an open question - perhaps virtually all the variance in latency due to physical design is tied up with fundamental tradeoffs between feel, feedback, sound, and preference. If so - then sure: measuring the pre-activation latency is pointless. On the other hand, if there are design choices that meaningfully affect latency without meaningfully impacting other priorities, or even where gains in latency are perhaps more important than (hypothetically) small losses elsewhere - then measuring that would be helpful.

I get the impression that we're still in the phase where this isn't actually a trivially solved problem; i.e. where at least having the data, and only _then_ perhaps choosing how much we care (and how to interpret whatever patterns arise), is worth it.

Ideally of course we'd have both post-activation-only and physical-activation-included metrics, and we could compare.


I'm fine with wanting to measure travel time of keyboards but that really shouldn't be hidden in the latency measurement. Each measure (travel time and latency) is part of the overall experience (as well as many other things) but they are two separate things and wanting to optimize one for delay isn't necessarily the same thing as wanting to optimize both for delay.

I.e. I can want a particular feel to a keyboard which prioritizes comfort over optimizing travel distance, independent of wanting the keyboard to have a low latency when it comes to sending the triggered signal. I can also type differently than the tester, and that should change the travel times in comparisons, not the latencies.


Because starting measuring input latency from before the input is flat out wrong. It would be just as sensible to start the measurement from when your finger starts moving or from when the nerve impulse that will start your finger moving leaves your brainstem or from when you first decide to press the key. These are all potentially relevant things, but they aren't part of the keypress to screen input latency.


Yeah that is definitely something that shouldn’t be included


Dealing with Unicode is not the challenge that people seem to believe it is. There are edge cases where things can get weird, but they are few and those problems are easily solved.

What really got my goat about this article is that prior to the latest tested version of Gnome, the repaint rate was a fixed 40Hz! Whose decision was that?


Unicode is more challenging when you're talking about hardware acceleration. On an Apple //e, displaying a new character required writing 1 byte to a video region. The hardware used that byte to index into a ROM to determine what to display. Today's computers are faster, but they must also transmit more bytes to change the character display.

That said, I can imagine more clever uses of displays might produce significantly faster displays.
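As a sketch of what the parent describes (a toy model with a made-up one-glyph character ROM, purely illustrative):

```python
# Toy model of Apple II-style text mode: one byte of video memory selects
# a glyph; the display hardware looks up each scanline in character ROM.
CHAR_ROM = {
    ord('A'): [0x18, 0x24, 0x42, 0x7E, 0x42, 0x42, 0x42, 0x00],  # invented 8x8 'A'
}

def glyph_rows(video_byte: int) -> list[int]:
    # Bytes without a ROM entry render blank in this toy model.
    return CHAR_ROM.get(video_byte, [0x00] * 8)

# "Rendering" a character is just the one-byte lookup plus per-scanline reads.
for row in glyph_rows(ord('A')):
    print(''.join('#' if row & (1 << (7 - bit)) else '.' for bit in range(8)))
```

The CPU's entire cost was the single store to video memory; everything else happened in fixed-function hardware on every scanline.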


Modern video acceleration wouldn’t copy 1 byte into video memory even if we stuck to ASCII. They have to blit those characters onto a surface in the required typeface.

The extra few bytes for Unicode characters outside of the usual ASCII range is effectively a rounding error compared with the bitmap data you’re copying around.


> What really got my goat about this article is that prior to the latest tested version of Gnome, the repaint rate was a fixed 40Hz! Whose decision was that?

From a previous VTE weekly status message, leading up to the removal:

"I still expect more work to be done around frame scheduling so that we can remove the ~40fps cap that predates reliable access to vblank information." (from https://thisweek.gnome.org/posts/2023/10/twig-118/)

So as with a lot of technical debt, it's grounded in common sense, and long since past time it should have been removed. It just took someone looking to realise it was there and work to remove it.


That’s a valid reason to choose a repaint rate, sure, but 40Hz? It’s halfway between 30Hz and 60Hz (yes, it is) but it seems like a poor choice, to me. 60Hz would have been much more reasonable.

Also, why do userland applications need to know the monitor refresh rate at all? Repaint when the OS asks you to. Linux is a huge pile of crap pretending to be a comprehensive OS.


Yes, you're right, you're absolutely the smartest person in the industry, and absolutely everyone else doing all of this work has no clue what they are doing at all, and should absolutely bow down to your superior intellect.

You not understanding why something was done, as with your original comments about 40Hz, doesn't actually mean that it is wrong, or stupid. It means that you should probably spend time learning why before making proclamations.


I found the claim in the keyboard latency article suspicious. If keyboards regularly had 60ms key-press-to-USB latency, rhythm games would be literally unplayable. Yet I never had this kind of problem with any of the keyboards I have owned.


Real physical acoustic pianos have a latency of about 30ms (the hammer has to be released and strike the string).

Musicians learn to lead the beat to account for the activation delay of their instrument - drummers start the downstroke before they want the drum to sound; guitarists fret the note before they strike a string… I don’t think keyboard latency would make rhythm games unplayable provided it’s consistent and the feedback is tangible enough for you to feel the delay in the interaction.


My wife has JUST started learning drums in the past week or so. She doesn't even have her own sticks or a kit yet, but we have access to a shared one from time to time. It's been interesting watching her learn to time the stick / pedal hits so they sound at the same time.

I'm a semi-professional keyboard player, and in the past I played with some setups that had a fair bit of latency - and you definitely learn to just play things ahead of when you expect to hear them, especially on a big stage (where there's significant audio lag just from the speed of sound). And some keyboard patches have a very long attack, so you might need to play an extra half beat early to hit the right beat along with the rest of the band.

If you watch an orchestra conductor, you may notice the arm movements don't match up with the sounds of the orchestra - the conductor literally leads the orchestra, directing parts before you hear them in the audience.


Absolutely, humans are incredibly good at dealing with consistent and predictable latency in a fairly broad range. Dynamic latency, on the other hand ... not so good.

I recall a guitarist friend who figured out their playing was going to hell trying to play to a track when their partner used the microwave. They were using an early (and probably cheap) wireless/cordless system and must have had interference.


that's awesome! what kind of music does she like to play?


My wife is a very good singer, and took singing lessons for years while singing in chorale and other group activities. She used to sing harmony in our church worship band where I played keys weekly.

She's been learning Irish tin whistle for a few years, and is a big fan of the Dropkick Murphys and other Celtic punk bands, along with 90s alternative bands like Foo Fighters, Red Hot Chili Peppers, and Weezer. I've been learning guitar / bass / ukulele / mandolin, and it would be great fun if she can play drums and sing while I play something else....


Pipe organs can have hundreds of milliseconds of latency, and it's different with every organ. Organists learn to compensate for it.


30ms seems high! Though I might believe that.

On higher-end pianos (mostly grands), there is "double escapement" action, which allows much faster note repetition than without. I suspect the latency would be lower on such pianos.

> Musicians learn to lead the beat to account for the activation delay of their instrument

Yes, this is absolutely a thing! I play upright bass, and placement of bass tones with respect to drums can get very nuanced. Slightly ahead or on top of the beat? Slightly behind? Changing partway through the tune?

It's interesting to note also how small discrepancies in latency can interfere: a couple tens of milliseconds of additional latency from the usual — perhaps by standing 10-15 feet farther away than accustomed, or from using peripherals that introduce delay — can change the pocket of a performance.


For rhythm games, you want to minimize jitter, latency doesn't matter much. Most games usually have some kind of compensation, so really the only thing that high latency does is delay in visual feedback, which usually doesn't matter that much as players are not focusing on the notes they just played. And even without compensation, it is possible to adjust as long as the latency is constant (it is not a good thing though).

It matters more in fighting games, where reaction time is crucial because you don't know in advance what your opponent is doing. Fighting game players are usually picky about their controller electronics for that reason, the net code of online games also gets a lot of attention.

In a rhythm game, you usually have 100s of ms to prepare your moves, even when the execution is much faster. It is a form of pipelining.


The 60ms keyboard is wireless.

Some of them will batch multiple key presses over a slower connection interval, maybe 60ms, then the radio to USB converter blasts them over together.

So you can still type fast, but you absolutely cannot play rhythm games.


Rhythm is about frequency, not latency. As long as latency is stable, anyone adapts to it.


Yes and no. Latency across a stage is one reason why orchestras have conductors. An orchestra split across a stage can have enough latency between one side and another to cause chaos sans conductor. It takes noticeable time for sound to cross the stage.
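The numbers are easy to check (assuming roughly 343 m/s for the speed of sound in air):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate, at room temperature

def acoustic_delay_ms(distance_m: float) -> float:
    # Time for sound to travel the given distance, in milliseconds.
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# A 15 m wide stage: sound takes ~44 ms to cross it, far more than one
# 60 Hz display frame (16.7 ms), so the visual cue of a conductor helps.
print(f"{acoustic_delay_ms(15):.0f} ms")
```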


I don't buy the 60ms latency either, but it's very easy to compensate for consistent latency when playing games, and most rhythm games choreograph what you should do in "just a moment" which is probably at least 10x more than 60ms


Sure, but not in FPS games; with an additional 60ms you might as well be down an entire standard deviation.


Only on movement inputs, which don’t (?) make as big a difference as aiming speed for most people I think. (I aim with my feet in first person shooters but I think that is a habit, maybe bad habit, picked up from playing on consoles for many years).

Lots of people might not be good enough to care about missing 1-2 frames.

One could also pre-load the wasd keys.


You could compare to one of the weird ones like the wooting, which has slightly higher actuation than usual switches?


Almost 4 frames (if you are talking a 16ms frame) right?


Could it be that the user simply learns to press a key slightly earlier to compensate for the latency? There is key travel time you have to account for anyway.


Rhythm games almost always have a calibration setting, where they ask you to press a key on a regular beat. They can also check the visual latency by doing a second test with visuals only. This allows them to estimate the audio and video latency of your system and counter it when measuring your precision in the actual game.
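A minimal sketch of how such a calibration could work (hypothetical, not lifted from any particular game): play a metronome, record the player's taps, and take the mean signed offset as the system latency to compensate for during scoring.

```python
def estimate_latency_ms(beat_times_ms, tap_times_ms):
    """Mean signed offset between each tap and its beat (positive = tap is late)."""
    offsets = [tap - beat for beat, tap in zip(beat_times_ms, tap_times_ms)]
    return sum(offsets) / len(offsets)

beats = [0, 500, 1000, 1500, 2000]   # calibration metronome beats (ms)
taps  = [62, 558, 1065, 1559, 2061]  # example measured key presses (ms)
offset = estimate_latency_ms(beats, taps)  # 61.0 ms for this example data
# In-game, judge each hit against (expected_time + offset) instead of
# the raw expected time, cancelling out the system's fixed latency.
```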


Oooh, that explains why, when watching some super high BPM runs, I always got the impression that they were like a frame off the marked area - but it was just shifted ahead of the actual music and they were actually in sync


Rhythm games will often have compensation for the delays of the input, audio, and visuals. Some do this without telling the users, others do it explicitly, e.g. Crypt of the Necrodancer.


The latency of the physical key going down is counted in that post, so it includes mechanical "latency" that will differ depending on how hard you press the keys and if you fully release the key.


Rhythm games are probably the easiest to play with high latency, as long as it's consistent.


> a smaller input median latency (Console ~12 msec) than an Apple //e from 1983 (30 msec).

> Still, it would be nice to return to the human resposiveness of machines 41+ years old.

A while ago someone posted a webpage where you could set an arbitrary latency for an input field, and while I don't know how accurate it was, I'm pretty sure I remember having to set it to 7 or 8ms for it to feel like xterm.


BTW, remember, most people still have 60hz monitors. Min latency can only be 16.6ms, and "fast as possible" is just going to vary inbetween 16.6 and 33.3ms.

The real improvement wouldn't be reducing latency as much as allowing VRR signaling from windowed apps, it'd make the latency far more consistent.


>BTW, remember, most people still have 60hz monitors. Min latency can only be 16.6ms, and "fast as possible" is just going to vary inbetween 16.6 and 33.3ms.

No, the minimum will be 0ms, since if the signal arrives just before the monitor refreshes then it doesn't need to wait. This is why people disable VSync/FPS caps in video games - because rendering at higher than 60 FPS means that the latest frame is more up-to-date when the 60 Hz monitor refreshes.

The maximum monitor-induced latency would be 16.6ms. Which puts the average at 8.3ms. Again, not counting CPU/GPU latency.

33.3ms would be waiting about two frames, which makes no sense unless there's a rendering delay.
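Modeling the wait for scan-out as uniformly random over one frame reproduces those numbers (a toy model that ignores render time and any compositor buffering):

```python
import random

FRAME_MS = 1000 / 60  # ~16.7 ms per refresh at 60 Hz

# If a frame finishes rendering at a uniformly random moment within the
# refresh interval, its wait for the next scan-out is uniform on
# [0, FRAME_MS) and averages half a frame.
waits = [random.uniform(0, FRAME_MS) for _ in range(100_000)]
mean_wait = sum(waits) / len(waits)
print(f"mean scan-out wait: {mean_wait:.1f} ms")  # ~8.3 ms
```

Add one extra frame of compositor buffering and the whole distribution shifts up by 16.7 ms, which is where the 16.6-33.3 ms figure comes from.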


The article is about testing actual user experienced latency, and Mutter still buffers 1 frame. Actual observed latency is going to be between 16.6ms and 33.3ms before the user can see anything hit their eyeballs.


Best-case median latency @60hz is 8.3ms (i.e. if zero time were consumed by the input and render, the wait would vary equidistributed between 0 and 16.6ms).


Linux, for better or for worse, doesn't have universal support for VSYNC.

Without it, you can race the scanout and minimum latency can get well under 1ms.

This was required for me to get a video stream down below 1 frame of latency.


Use xfce. It is much more responsive than gnome.


The only problem is, if you use xfce for any length of time it is horrible going back to any other DM.

Don't even attempt to use a modern windows install, you end up feeling that it is actually broken!


Xfce always felt like Linux to me. Like sure, there are other interfaces, but this is the Linux one.

I want an ARM laptop with expandable memory, user-replaceable battery, second SSD bay, and a well-supported GNU/Linux OS that has xfce as the UI - from the factory. That's the dream machine.


I put Gentoo+Xfce on Lenovo x86. Not sure what to do about touch display and rotate (fancy laptop).

I've not tried an ARM laptop but this setup also works on RPi.


IMO, better to install yourself. Too much potential for the manufacturer to add adware annoyances in a pre-install.

Although, mine is an x86-centric take. There are occasionally issues around getting the right kernel for ARM, right? So maybe it would be helpful there.


Xfce was my favorite until I found i3/sway. Even more responsive, and less mouse required since everything is first and foremost a keyboard shortcut, from workspace switching to resizing and splitting windows.


Xfce is very configurable, and it's fairly trivial to set up tiling-WM-style keyboard shortcuts for window positioning, and get the best of both worlds.


Less mouse is required, but less mouse is permitted. It's just "less mouse". Sometimes I just want to use the mouse.

(if you only want keyboard then i3/Sway is great though, obviously)


True, I find resizing and tiling specifically a 3 window screen in a 1/2, 1/4, 1/4 setup to be impossible to figure out - often I just move windows around repeatedly until I give up. If I could drag directly it would definitely make it easier. But that's relatively rare for me.


I have Super+Numpad assigned as window positioning hotkeys in XFCE. First window would be Super+4 to position it on the left half of the screen. Next two would be Super+9 and Super+3 respectively to position each in the upper-right and lower-right corners. Super+5 maximizes.


Sway actually has really good mouse support! Try dragging windows around while holding the $mod key :)


WOW! I had no idea. That's awesome, thanks for informing me :)


XFCE's goal is also to implement a stable, efficient, and versatile DE that adheres to standard desktop conventions, rather than importing all of the limitations and encumbrances of mobile UIs onto the desktop in the mistaken belief that desktop computing is "dead".


The XFCE Terminal uses VTE. This makes no sense.


Well, the Apple II had a 280x192 display (53 kpixel), and my current resolution (which is LoDPI!) is 2560x1600 (4 Mpixel). When you phrase it as "in 2024 we can render 76x the pixels with the same speed" it actually sounds rather impressive :)
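The arithmetic behind that ratio:

```python
apple_ii_pixels = 280 * 192    # 53,760 pixels
modern_pixels = 2560 * 1600    # 4,096,000 pixels
print(f"{modern_pixels / apple_ii_pixels:.0f}x the pixels")  # 76x
```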


> so there are many differences

Yeah, one of those is that modern stacks often deliberately introduce additional latency in order to tackle some problems that Apple IIe did not really have to care about. Bringing power consumption down, less jitter, better hardware utilization and not having visual tearing tend to be much more beneficial to the modern user than minimizing input-to-display latency, so display pipelines that minimize latency are usually only used in a tiny minority of special use-cases, such as specific genres of video games.

It's like complaining about audio buffer latency when compared to driving a 1-bit beeper.


Such a disappointing but typical HN comment.

The changes described in the OP are unequivocally good. But the most promoted comment is obviously something snarky complaining about a tangential issue and not the actual thing the article’s about.


Why so negative?

This is obviously an improvement, who cares if it is not perfect by your standards?


It's an interesting observation, and it's constructive, providing actual data and knowledge. I downvoted you for negativity because you're providing nothing interesting.


I found parent to be constructive as to tone rather than content and upvoted them for that reason. The constructive part of grandparent's point can be made without needlessly crapping on people's hard work.


Input latency or drawing to the screen? What's the latency of the Apple machine running GNOME?



