The companion article at https://pavelfatin.com/typing-with-pleasure/ has some example results, showing Atom, the only Electron-based one in the list, as having several times more latency than the others. I wonder how VSCode compares.
It's unfortunate that software seems to feel slower the newer it is, including developer tools such as text editors/IDEs. I suspect this is because most people - including younger developers - have never seen how fast things can be, but are instead constantly subjected to delays in using applications to the extent that they think it is an absolutely normal speed[1], and then propagate that notion to the software they themselves create.
[1] For example, everything that uses UWP in Windows. Someone who had only ever used the Settings app in Windows 10, or (even worse) the Calculator, might not realise how absurdly slow they are in comparison to the previous versions, which would open nearly instantaneously.
A few months ago I found my old Game Boy and played through part of "Kirby: Nightmare in Dreamland". I know it's not the most well-known game, so the relevant detail is that it relies a lot on timing and quick movements.
I was completely blown away by the responsiveness of the controls. The control feedback was instant, frames were rock-solid, and there were no loading screens at all.
When I put down the game, I was just tremendously sad. I'd forgotten what instant feedback felt like.
I think it says something about human interfaces. When I run old Linux 2.4 distros, even though everything is crude (GUIs redraw without double buffering, no compositor, no font anti-aliasing), it still feels 'better'. And it's not only nostalgia.
It's really amazing playing with old technology. In Super Mario Bros. on the NES, when you press the jump button Mario jumps the very next frame. And with a CRT you get instant response time.
For a PC, you'll most likely have USB latency, double-buffering, latency between the graphics card and the screen, and latency due to picture processing in the screen. All of that comes on top of the 1 ms the pixels need to switch color after the instruction to do so has been received.
In addition to these excellent points, the "1 ms" pixel response time is measured grey-to-grey, i.e. favourable to the display and not real-world. The bigger problem with advertised pixel response times is that there is no industry-standard measurement procedure (which grey to which grey? is HDMI processing latency counted?), so pixel response times are vague, mostly useless marketing.
You don't have to believe these claims you know. And you need to add up all the latencies of all the components in the chain, from your keyboard and mouse to DirectX or whatever it is...
It's that old consoles (up until the Sega Saturn, Nintendo 64, Sony PlayStation era) didn't draw to frame buffers. They didn't have enough RAM to have a real frame buffer.
You configured the graphics chip with what you wanted it to draw, and as the scanline scanned across the screen, it output the color that was supposed to be output depending on the background/sprite/palette settings. You could change where a sprite is halfway across a scanline, and it might immediately, in the middle of drawing that scanline, output a different color. There were no delays anywhere in the system, only the propagation delay of electricity in wires and transistors. The various HW registers that controlled what the sprites were and where they were were connected via a small pile of non-clocked digital logic gates to a DAC feeding the analog output pins of the graphics chip, which was connected to the electron guns directly if you used component video, or via a simple, no-latency analog circuit if you had composite video.
These days, you program the graphics card to draw what you want it to draw. Once you've programmed everything for it to draw, it draws that into the backbuffer. Once the current frame has been sent to the display, the backbuffer is swapped with the frontbuffer. When the next frame begins to be sent to the display, your changes finally go out across the wire. Depending on the display (see especially motion-adaptive TV screens) there might be more delay. Then whatever changes you made are finally displayed on the screen.
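To put rough numbers on the difference, here's a tiny back-of-the-envelope model in C (the 60 Hz refresh and the 10 ms of display processing are illustrative assumptions, not measurements of any particular hardware):

    #include <stdio.h>

    /* Crude latency model: double-buffered vsync pipeline vs racing the beam.
       All numbers are illustrative assumptions, not measurements. */
    int main(void) {
        const double frame_ms = 1000.0 / 60.0;  /* 60 Hz refresh interval */
        const double display_proc_ms = 10.0;    /* assumed monitor processing */

        /* Worst case for the modern path: the change just missed the frame
           being rendered, so it waits one frame to land in the backbuffer,
           one more frame for the swapped buffer to be scanned out, plus
           whatever the display adds. */
        double modern_worst = 2.0 * frame_ms + display_proc_ms;

        /* Racing the beam: the change affects the very next scanline the
           chip draws, so the wait is at most one refresh, and a CRT adds
           essentially nothing on top. */
        double beam_worst = frame_ms;

        printf("double-buffered worst case: ~%.1f ms\n", modern_worst);
        printf("racing-the-beam worst case: ~%.1f ms\n", beam_worst);
        return 0;
    }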
It's kinda weird. Old video game consoles (up until the SNES and Genesis) had extremely low latency. And that's been gone for 25 years. Not only is it gone, but it will likely be gone forever - we don't even make the technology anymore to show the new generation what it was like. On the one hand, the new technology is "better"; there's no way to do today's advanced graphics without a deep drawing pipeline that outputs to a frame buffer. But it's also somehow worse. We can make it less bad with technology like 144 Hz and FreeSync, but the old era is gone.
I started on the Apple II and it's been downhill since then. If I remember correctly, the Apple II has one of the lowest latencies between touching a key and seeing the result on the screen.
While this is true of LCDs, OLEDs have basically no draw latency, so conceivably they could directly update as the frame buffer was populating like a CRT if the gpu driver and OLED driver allowed it. This would still have tearing though for a modern game, so it is likely that it would need to buffer and draw entire frames at a time, which could still give you only 1 frame of latency. Most of the older games were not updating the positional information of sprites between frames (just using scanlines for raster effects) so there was already 1 frame of latency on input updates.
The N64 had a unified memory architecture, so the frame buffer was just a region of memory within the unified system RAM that was drawn to. The Z buffer was the same way (assuming the programmer chose to enable the Z buffer). It was still a frame buffer though.
The scan speed is exactly the same for all screens when driven by the same timings (resolution, blanking, refresh rate), so there is no difference there. Most digital screens have some processing lag, for gaming screens usually 1-4 ms. This doesn't exist in a CRT. LC panels have a pixel response time, somewhere around 1-40 ms depending on the color change, drive mode and panel type ("1 ms" is always a grey-to-grey transition and generally with overdrive). CRT phosphor lights up practically instantly when hit by the electron beam.
So for a given set of timings, you'd always expect the CRT to be faster, because it represents the absolute minimum time-of-flight delay, assuming it is driven by a direct RAMDAC and not by a conversion box buffering an entire frame or significant portions.
Also, you can "move" electron beams at extremely high speeds. Even in a magnetic deflection CRT like the GDM-FW900 the electron beam can move at more than 80 km/s[1]. In some electrostatic deflection systems the speed of light is exceeded: they can draw a dot moving across the face of the CRT that moves at a higher speed than c. This is possible because "the beam" is a fictional object.
[1] 2304 dots across 482 mm with a pixel clock of around 384 MHz means these 2304 dots are covered in about 6 µs; 482 mm/6 µs = 80.3 km/s.
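Spelling the footnote's arithmetic out:

    t = \frac{2304\ \text{dots}}{384\ \text{MHz}} \approx 6\ \mu\text{s},
    \qquad
    v = \frac{482\ \text{mm}}{6\ \mu\text{s}} \approx 80.3\ \text{km/s}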
The NES-to-CRT connection is analog and the screen is drawn line by line; if you connect a digital monitor, the signal has to be buffered and you get some delay/lag of at least one frame.
You misunderstand the video. An LCD controller must buffer at least one line, because the LCD matrix is updated line by line, all columns in parallel -- it's a matrix, much like a RAM matrix. However, the video just shows how the LCD is scanned out; it doesn't say anything about the buffering of the controller. A lot of non-gaming LCDs, and virtually all older non-gaming LCDs, buffer an entire frame, even if they are driven in their native mode.
I also can't be the only one annoyed by the input lag in the windows login screen and start menu search. Somehow both of them seem to think it's fine to just ignore keypresses for half a second before doing anything.
Annoyance aside, I seriously worry about the accessibility of the Windows lock screen. Through a weird display setup I sometimes find myself trying to log in blind and I just can't do it. There seems to be no way to reliably focus the password field, and for whatever insane reason it's not just always in focus.
This insanity is apparently intentional, but can be fixed for the lock screen at least: Win+R->gpedit.msc->"Computer Configuration"->"Administrative Templates"->"Control Panel"->"Personalization"->"Do not display the lock screen"->Enabled
This poorly named option doesn't disable locking the screen, it just fixes it to not eat your first few keystrokes when you start typing your password.
This is one of the reasons why Windows drives me bonkers— all the interesting settings seem to be locked in disused basement lavatories. See also changing capslock to control.
Is it even possible to do it through system settings alone, even if obscure ones? I use AutoHotkey to do Caps Lock -> Ctrl remap, and it's one of the first programs I install on every new system.
My windows experience is as infrequent and as brief as I have been able to make it, but AFAIK editing registry keys + reboot is the only way to do it without third-party software.
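For reference, the registry route is the "Scancode Map" value. Below is a minimal sketch of writing it from C - the byte layout is the commonly documented one that makes Caps Lock (scancode 0x3A) produce Left Ctrl (0x1D); the wrapper program itself is just for illustration, and it needs admin rights, linking against Advapi32, and a reboot afterwards:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Scancode Map layout: 8 zero bytes (version + flags), an entry count
           (2 = one mapping + null terminator), one mapping DWORD, terminating
           zeros. The entry 1d,00,3a,00 means "Caps Lock (0x3A) produces
           Left Ctrl (0x1D)". */
        BYTE map[] = {
            0x00,0x00,0x00,0x00,  0x00,0x00,0x00,0x00,
            0x02,0x00,0x00,0x00,
            0x1D,0x00,0x3A,0x00,
            0x00,0x00,0x00,0x00
        };
        HKEY key;
        LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Control\\Keyboard Layout",
            0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) { fprintf(stderr, "open failed: %ld\n", rc); return 1; }
        rc = RegSetValueExW(key, L"Scancode Map", 0, REG_BINARY, map, sizeof map);
        RegCloseKey(key);
        if (rc != ERROR_SUCCESS) { fprintf(stderr, "write failed: %ld\n", rc); return 1; }
        puts("Scancode Map written; reboot for it to take effect.");
        return 0;
    }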
Thanks for the AHK tip. The programs I usually find look like hastily-constructed malware on a dodgy sourceforge link & always make me feel real :-| about installing them on a fresh box.
The login screen one is particularly ridiculous, because my Ubuntu machine takes about as long to wake up as my Windows, but it sensibly buffers the keypresses until it can insert them so I never mistype.
It's just glaringly obvious that nobody has bothered for even a second to think about how to solve the problem (or indeed, just copied the solution that already exists elsewhere).
The start menu on the other hand I gave up on several years ago. I don't know what's in it anymore, and I don't care, because nothing could be worth the >5s wait for it to open.
I use Open-Shell (formerly Classic Shell). Before that I used Launchy (which is another crazy-fast alternative) tied to my CapsLock key.
I've also had tabs on my File Explorer windows since Win 7 (thanks to QTTabBar). I'm commonly asked about that after I give remote desktop presentations.
> Annoyance aside I seriously worry about the accessibility of the windows lock screen. Through a weird display setup I sometimes find myself trying to log in blind and I just can't do it.
If you try running a screen reader, such as Narrator (built into Windows; edit: turn it on and off with Ctrl+Win+Enter, even on the logon screen), you should find that even if the focus somehow gets away from the Password field, you can get back there with the help of speech output. Logging in blind doesn't have to mean logging in with no means of orientation.
(Disclosure: I'm a developer at Microsoft on the Windows accessibility team, working on Narrator among other things.)
Fair enough, but on a screen whose sole purpose is to let me type in my password, I really shouldn't require Narrator to find the one and only input field. It's nice that the problem has a solution, but it shouldn't have been a problem to begin with.
The trouble is that this screen isn't just for entering your password. On my logon screen, I have not only the PIN field (because I enabled the TPM-based Windows Hello feature), but also these other links or buttons: "I forgot my PIN", "Sign-in options", network, Ease of Access, and power. So use of a full GUI framework, that supports accessibility tools such as a screen reader, is justified.
The user doesn't care about any of that. I should be able to wake up the box by typing the PIN. The reality is that I have to press the first character, then wait, then press it again, and then wait and see if a dot appears in the field.
The reason it works like this is that the user is forced to use the software. It should be the other way around. If people are forced to use something, developers should do what they can not to make the experience incredibly annoying.
Sure, there can be situations where you need to cut corners but this is the log in screen.
After logging in, I mistype an application name in the search box and wait for the dialog to pop up that says "Best matches" with an empty list under it. It can't even find "tunderbird".
If I type a single letter it shows "Search the web for 'a'".
But it does show Atom! Woah! Technology at work!
I get to choose: I can press the down arrow and select "Apps" then press Enter twice, OR I can press the down arrow twice and press Enter! Be careful! Don't press the arrow too fast or it jumps back to searching the web for "a".
You're making an uncharitable assumption about real people who work full-time on this stuff. I can assure you that they do put a lot of thought into their work. No, I never worked on the Start menu or search myself, but before we started working from home, I was in the same building with the folks that do.
It always seemed to me like they intentionally made it this way specifically to hinder people from logging in or unlocking blindly; I always assumed it was to avoid people accidentally typing their login password into an IM app if the machine wasn't actually locked.
The Windows 10 login screen definitely uses the UWP XAML framework. And no, I'm not revealing any inside knowledge; I knew this fact before I joined Microsoft (because I used to develop a third-party screen reader for Windows).
Agreed. The input lag often causes me to mistype my password as I double-type the first three characters, thinking that they didn't register the first time.
I actually don't think that developers who use things like VSCode are simply naive. Personally, I am a young person and I frequently do use vim in a low-latency terminal emulator (xterm, mlterm) without a compositor. Yes, I enjoy the fact that it is very fast. However, I still use VSCode when I'm making more complicated edits primarily because of the plugin ecosystem. Plugins "just work", are easy to configure, and all work together nicely. I get rich syntax highlighting and intellisense. I might be doing something wrong, but this was not the experience I have had with vim.
I like the UI simplicity of VSCode in contrast to native editors such as IntelliJ and Visual Studio, which I do use when I really need the features. I am interested in efforts like Onivim 2, which seek to combine these advantages.
You might be interested in coc.nvim, which lets vim use the LSP servers that power VSC's intellisense. It's a bit of setup, but I've found that with coc.nvim and a good syntax highlighting plugin, I can get a really great syntax highlighting and intellisense setup going.
I've found that I get perfectly passable syntax highlighting with vim, and haven't needed anything else on my current install. If I did use an editor other than vim, I would probably use VSCodium because of the open build, and I do really like the visual style.
Agreed. I've been editing since the late 70s. At the moment I use VSCode the most. The plugins are the thing. They are doing things I've seen no other editor do. Of course I haven't used every editor.
The latest was I typed
...nameOfArrayInOtherFile
and it auto inserted
import nameOfArrayInOtherFile from './tests/name-of-array-in-other-file.js';
a few lines above, somehow recognizing the pattern.
Another thing I've seen VSCode do is give me library-specific warnings. Maybe that's common nowadays in other editors but I hadn't seen it before. I'd seen language warnings but not library warnings.
Newer tools are only alternatives to the faster tools, which remain usable at any time.
For instance, I choose to use VSCode and consciously weigh it down with plugins and extra linters, because the trade-off is fine for me.
But I know vim is only a click away, and if I wanted sheer speed I’d do it there. And I actually do use it on a day to day basis, it’s just not my primary editor.
The worst is when I see someone using vim or Emacs over SSH. If you think Electron apps are bad, imagine adding a network round trip between every keystroke!
Suggestion for anyone who's annoyed by SSH latency - try out mosh[0]. It watches to see if your keystrokes are echoed and if they are, it'll start echoing your keystrokes locally without waiting for network roundtrip.
That, plus the ability to rejoin a session even from a different IP, makes working over SSH doable even from airplane Wi-Fi.
Terminal Emacs over ssh is, well, just like anything else in a terminal over ssh. Can't say I notice the latency unless the datacenter is on the other side of the country.
idk, vim across ssh (rn from my place in the west coast to a vps in iirc the east coast) feels a lot snappier and less frustrating than my typical interaction with slack.
> Because sampling rate is fast enough to misinterpret contact bounce as keystrokes, keyboard control processor perform so-called debouncing of the signals by aggregating them across time to produce reliable output. Such a filtering introduces additional delay, which varies depending on microcontroller firmware. As manufacturers generally don’t disclose their firmware internals, let’s consider typical debouncing algorithms and assume that filtering adds ~7 ms delay, so that maximum total “debounce time” is about 12 ms, and average total debounce time is ~8.5 ms.
Debouncing in software is one of those things that 99 % of developers get wrong, and is something even hardware manufacturers get wrong all the time.
A lot of hardware debounces in a dumb and naive way: on the first state change, it waits 5-10 ms and samples the switch again to figure out whether it was pressed or not. So you get an inherent "debounce" delay, which is entirely unnecessary.
Debouncing keys correctly works like this: if the switch generates an edge, send the key down/up event immediately and ignore further switch transitions for 5-10 ms. There is no point in waiting for the switch to surely have finished bouncing before reporting the event, because if it is bouncing it MUST have been pressed/released, and you know which one it is because you know the prior state of the switch.
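In rough C, the two approaches look something like this (the timing constant and the firmware helpers are assumptions, not taken from any particular keyboard):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed firmware hooks: */
    extern bool     read_switch(void);     /* raw, bouncy switch level   */
    extern uint32_t millis(void);          /* milliseconds since boot    */
    extern void     report(bool pressed);  /* send key down/up to host   */

    #define LOCKOUT_MS 8

    /* Naive: on a change, wait and re-sample -> adds LOCKOUT_MS of latency. */
    void debounce_naive(void) {
        static bool stable = false;
        if (read_switch() != stable) {
            uint32_t t0 = millis();
            while (millis() - t0 < LOCKOUT_MS) { /* wait out the bounce */ }
            bool now = read_switch();
            if (now != stable) { stable = now; report(stable); }
        }
    }

    /* Better: report the edge immediately, then ignore the switch while it
       bounces -- the first edge already tells you the new state. */
    void debounce_immediate(void) {
        static bool stable = false;
        static uint32_t locked_until = 0;
        if (millis() < locked_until) return;        /* still in lockout */
        bool now = read_switch();
        if (now != stable) {
            stable = now;
            report(stable);                         /* zero added latency */
            locked_until = millis() + LOCKOUT_MS;   /* swallow the bounce */
        }
    }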
---
Compositor delay due to VSync
Obviously compositors are using double-buffered vsync precisely because they intend to limit FPS to $smallValue in order to save power and prevent the 3D hardware from entering a high performance power state. They really should be using triple-buffered vsync, but only start to render if something changed, resulting in much lower latency without constantly running at 1000 FPS. There should be a way for the compositor to be notified of changes in client areas, since stuff like X11 damage protocol and RDP are a thing.
Under normal circumstances Windows' DWM compositor will avoid compositing at the display rate if nothing has changed. It used to be pretty easy to demonstrate this by using a hook application (like fraps) to put a framerate overlay on DWM, though I don't think that works anymore. If you have g-sync enabled for the active windowed application, the DWM composition rate (and the monitor scanout as a whole) will be tied to the windowed application's vsync, so if it's running at 30hz so is your desktop until you tab away. Many G-Sync monitors have a built in framerate overlay you can toggle on to observe this (though the lag on your mouse cursor will make it obvious).
I suspect in practice most software is not very good at giving the compositor the info it needs to avoid wasteful refreshes of the whole desktop, and the compositor probably also isn't putting in as much effort as it could.
That's a good point I totally forgot about. It actually means that most of the infrastructure for this is already around and implemented, but of course fundamentally this is an information problem - the compositor gets told when applications "damaged" their output, but since there is no concept of presenting a frame in these legacy 2D APIs (GDI/+, X11 etc.), the compositor has no idea whether the application will continue "damaging" its output, or whether it is "done for now".
I think the best a compositor might be able to do is something along the lines of keeping track of the latest possible time it has to start rendering such that the frame can still be swapped to the front and sent to the screen. This should result in the lowest possible average latency while rendering at the vsync rate. DWM is clearly not doing this (since the delay is discretized to 16.7/33 ms); the Linux compositor might be doing this, or it might be rendering on every damage event and using triple buffering - either would be plausible given the 8 ms average delay.
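A sketch of that "latest possible start" idea - the vblank/damage helpers here are stand-ins I made up, not any real compositor API:

    #include <stdbool.h>

    /* Hypothetical compositor helpers -- assumed, not a real API. */
    extern double now_ms(void);
    extern double next_vblank_ms(void);      /* timestamp of upcoming vblank */
    extern bool   damage_pending(void);      /* did any client change?       */
    extern void   composite_and_flip(void);  /* render + queue the page flip */
    extern void   sleep_until_ms(double t);

    void compositor_frame(void) {
        const double est_render_ms = 3.0;  /* rolling estimate in a real system */
        const double margin_ms     = 1.0;  /* safety slack                      */

        double deadline = next_vblank_ms() - est_render_ms - margin_ms;

        /* Sleep until the last moment at which we can still make this vblank,
           so damage arriving late is still picked up with minimal latency. */
        if (now_ms() < deadline)
            sleep_until_ms(deadline);

        /* Only burn GPU/CPU time if something actually changed. */
        if (damage_pending())
            composite_and_flip();
    }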
Seeing as how this and similar articles regularly crop up here, I'm curious as to when this started and why nothing is done about it. I still regularly use my 14 MHz Amiga 1200 for recreational programming and I _never_ experience input lag on that machine.
Thinking back, the first time I noticed input lag was on a Mac LCIII running Word in the mid-90s. Then for a long long time I didn't come across any particularly noticeable latency, except on really crappy websites and in really crappy Java apps. Then Microsoft bought Skype and started working their magic on it, and this seemed to open some kind of floodgate of high-latency crap. That's not even a decade ago.
After that, little by little, everything seemed to slow down noticeably. We've now reached a level when this is seemingly normal. Even programmer colleagues who are my age and older are looking at me like I'm curious when I complain about the latency. I'd say something toxic about Electron here, but it's prevalent in native programs as well.
Have things really gotten so much more complex since 2010 that we can no longer put a character on screen in a timely fashion?
There are underlying reasons, much of which can be gathered under the umbrella term of "virtualisation". Modern computers use layers upon layers of software or hardware to present an interface pretending to each higher level that things are simpler than they really are.
These abstractions, however, are leaky, and almost all of them are leaky in the temporal sense. There's a pretend continuity or constant throughput that's not really there.
Everyone knows that operating systems run short time slices of processes on physical cores, so that each program can "pretend" that it runs continuously on bare metal. But of course, not really, so there are gaps in the flow of execution that can occasionally be perceived by end-users.
If that were the only sin of pretense, then that could be worked around, or carefully tuned, but the reality is that it's just one of many layers.
The garbage collector of managed languages (e.g.: JavaScript in Electron) pauses execution within the process too.
Even unmanaged languages have variable overheads when allocating or de-allocating from the shared heap.
The desktop window manager helps each application pretend that it has a rectangular surface from (0,0) to (w,h), when in reality that is transformed and overlaid. That can introduce a lot of variation, particularly because the DWM has its own threads and its own garbage or heap.
The video card in turn is no longer just a block of memory mapped into the address space of the program doing the drawing, but is its own little computer with cores, threads, schedulers, locks, clocks, delays, and so forth.
The display in turn might further delay things because it has complex overdrive or scaling logic, so it needs to buffer frames.
> Everyone knows that operating systems run short time slices of processes on physical cores
The Amiga 1200 I mentioned earlier does pre-emptive multitasking on a 14 MHz CPU with 256 bytes of cache. I can switch between my IDE, my paint program, the OS desktop, a file manager and a simple text editor without any noticeable delay. It's as fast as flipping between desktops in FVWM on my PC (an operation which, incidentally, never seems to suffer from latency).
> The video card in turn is no longer just a block of memory mapped into the address space of the program doing the drawing, but is its own little computer with cores, threads, schedulers, locks, clocks, delays, and so forth.
The graphics architecture of my Amiga consists of several different chips all timed to a PAL signal and, since they're sharing memory with the CPU and other I/O, are also affected by constant interrupts.
> The desktop window manager
There's a DWM on my Amiga as well, called Intuition, providing several abstractions for programs to open screens and windows and render graphics and text in them. Plus, of course, GadTools, the system library for drawing UI widgets.
> The display in turn might further delay things because it has complex overdrive or scaling logic
The cheap, modern flatscreen connected to one of my Amigas upscales and upsamples _and_ does A->D-conversion on the analog RGB signal and yet manages to show my double-buffered displays scrolling in one pixel increments, with 50 Hz vsync without stuttering or tearing.
Yes, the layers of abstraction have increased in number and complexity, but so has the speed of the surrounding architecture. My PC's clock speed is more than 100 times that of the Amiga, it has 4000 times more RAM (in fact the caches in my cheap CPU exceed the amount of RAM on the Amiga), displays are now connected to the GPU via a wide-bandwidth digital interface, and so on.
All of this could perhaps be valid excuses if it was consistent. Yet typing in a Firefox <textarea> feels faster than typing in for example FocusWriter, and typing in an xterm faster still. I can paint smooth freehand curves in Gimp with instant feedback (something the Amiga is not always capable of, depending on how much bandwidth the selected resolution requires). The computer is capable of full screen, full frame, fully vsynced full HD movie playback without stuttering or dropping frames.
The most interesting aspect is of course that a computer that might feel laggy in certain applications is fully capable of emulating an Amiga, complete with the perceived snappiness of the UI, despite all the overhead of emulation _and_ the supposed delays of the surrounding architecture.
I'm all for faster software, but input lag measurements need to be put in perspective. Just because an editor can process an input in 3ms doesn't mean the pixels will change on your screen in 3ms.
If you're using a 60Hz monitor, you're only going to see a new frame every ~17 ms (1 second / 60 Hz). Your graphics pipeline might have some additional buffering, adding 10s of milliseconds of lag. Your monitor likely has some input processing as well, adding anywhere from 10-20 ms before it sends the frame to the physical display. Add a few milliseconds here and there for input processing and even the response time of the physical pixels, and you're looking at something like 50-60 ms minimum for total display latency before you factor in the software.
Using 144Hz FreeSync or G-Sync monitors can shorten that update time, but you're still looking at 30ms end-to-end latency in even the fastest setups, and that's before you account for software processing lag.
The difference between Sublime Text responding in 11.4 ms average and Atom responding in 28.4 ms average is a difference of almost exactly one frame of latency for a typical 60 Hz monitor. Add up all of the other sources of lag (buffering, monitor input lag) and you're looking at something like a 5-frame latency instead of a 4-frame latency. Still less than the blink of an eye (literally).
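Spelled out:

    28.4\ \text{ms} - 11.4\ \text{ms} = 17.0\ \text{ms}
    \approx \frac{1000\ \text{ms}}{60} \approx 16.7\ \text{ms}
    \quad (\text{one 60 Hz frame})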
From another perspective: If you really believe that you're sensitive enough to feel a difference between something like Sublime Text's 11ms processing latency vs. Atom's 28ms latency, then you might want to invest in a proper 144Hz gaming monitor with low input lag, as it would improve your experience by the same margins. A gaming-specific keyboard might also help, as average keyboards can have 10-20ms of input lag before the keypress registers with the OS (Source: https://pavelfatin.com/typing-with-pleasure/ ) Realistically, though, I doubt many people could A/B test the difference between a 1ms and a 30ms latency editor under ideal conditions, let alone while typing out some code.
> If you really believe that you're sensitive enough
It's pretty obvious when using extended shortcuts or a string of keys that are committed to muscle memory, like Ctrl+Shift+P (+ first letter of a cmd) or Ctrl+K combos.
Imagine playing an instrument wired up to headphones and then adding a 30-60 ms delay... the longer the delay the more it creates that "off" feeling
Don't forget too, those are bare metal calculations. Stick those apps inside a VM like many devs do or add some other latency along the I/O chain and it all helps to create a perception of lag.
> Imagine playing an instrument wired up to headphones and then adding a 30ms delay
We need to keep these numbers in context. 30ms delay would appear as an echo if you were playing an instrument and also listening to a 30ms delayed version of the same sound.
However, you're not feeling the latency. You're noticing the presence of two distinct signals. That's why UX input latency matters much more when you're doing something like dragging your finger across the screen: you visually register the distance between your finger and the expected location.
However, when you're typing a memorized series of letters on a keyboard, you're not comparing the keypresses to the letters appearing on the screen. You already know what you're typing and what you expect to see on the screen. Visual stimulus latency is on the order of 250 ms, so you wouldn't even begin processing the typed characters until an order of magnitude more time than these editors' latency has passed.
Most of us are operating with regular 60Hz laptops or monitors. End-to-end latency from physical key press to physical color change on your monitor might be as high as 50-60ms even with zero-latency software.
> However, when you're typing a memorized series of letters on a keyboard, you're not comparing the keypresses to the letters appearing on the screen. You already know what you're typing and what you expect to see on the screen. Visual stimulus latency is on the order of 250 ms, so you wouldn't even begin processing the typed characters until an order of magnitude more time than these editors' latency has passed.
Nothing makes me want to throw my computer through the window faster than typing and not having an instant response, honestly. It doesn't matter that the brain cannot process the letter. Can you imagine handwriting where the shape of each letter you write appears a quarter of a letter late?
Even for typing, just comparing xterm at 60 Hz and 120 Hz on my monitor feels different when typing moderately fast (100 wpm according to https://typing-speed-test.aoeu.eu/?lang=en).
> However, you're not feeling the latency. You're noticing the presence of two distinct signals.
Not necessarily; it could be an electronic instrument.
I once tested an "e-piano -> MIDI -> PC -> headphones" chain. The latency was not really noticeable when I pressed only a single key, but actually playing was impossible. After I replaced a (software) component with something with lower latency, it was bearable. While I cannot precisely estimate the latency reduction, I would guess that it was less than 30 ms.
The ear's whole function is to detect differences in patterns. This is how we identify the direction of sound with only two ears. For our ears, a delay of 30ms is an eternity.
The eyes on the other hand have very little response to this. Some will say they can spot a 15ms vs 30ms visual delay easily, but this is an open debate rather than an obvious fact. A 15ms delay in audio is noticeable to almost anyone.
Different senses, different sensitivities. Comparing them isn't very instructive.
Your 15/30 ms is close to the timings of 30 vs 60 fps video games. While testing for a single frame you'd probably get very mixed results, but most people can distinguish the pattern when the framerate drops from one to the other. That seems closer to the audio example to me, since it's a constant stream of visual information.
I think this is actually more to do with timing than audio.
I've made a jumping mechanism for a game that makes you move faster if you jump right when you hit the ground again. This system does not give any audible cues as to when you should jump.
When something in this system is slightly off - keyboard latency, jump height, gravity, etc. - you can't time the jumps properly, and you're left feeling that you somehow just can't do it but can't really pinpoint what the problem is exactly.
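A toy version of such a mechanic (my own sketch, not the actual game code) makes it easy to see why: any input lag pushes the observed press later and eats directly into the timing window.

    #include <stdbool.h>

    /* Toy model of a "jump again right after landing for a speed boost"
       mechanic -- illustrative only, not anyone's actual game code. */

    #define BOOST_WINDOW_MS 80.0   /* assumed size of the timing window */

    typedef struct {
        bool   on_ground;
        double landed_at_ms;       /* when the player last touched the ground */
        double speed;
    } Player;

    /* press_seen_at_ms is when the *game* observes the key press; any input
       lag makes it later, shrinking the usable window by the same amount. */
    void on_jump_pressed(Player *p, double press_seen_at_ms) {
        if (p->on_ground) {
            if (press_seen_at_ms - p->landed_at_ms <= BOOST_WINDOW_MS)
                p->speed *= 1.15;  /* reward a frame-tight jump */
            p->on_ground = false;  /* jump either way */
        }
    }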
I think the same applies to many games. Especially fighting games.
With that said, I don't think writing code is about timing. I think it's nice when an editor feels very responsive, but I don't think it necessarily makes me more productive.
I don't agree with this at all. If the software doesn't care to make keypresses responsive how does this philosophy translate to anything more complicated than reading and rendering the result of these keystrokes? The answer is pretty obvious if you try any of the slower editors pointed out in the article (or pretty much 90% of software written in the last decade). It doesn't take a person with sensory hypersensitivity to pick up on the differences.
>The difference between Sublime Text responding in 11.4ms average and Atom responding in 28.4ms average is difference of almost exactly 1 frame of latency for a typical 60Hz monitor
That is looking at the average latency for Atom. If you look at the max latency for Atom, it is 60 ms compared to 15 ms for Sublime Text. That is basically 3 frames of latency difference on a 60 Hz monitor, and would definitely be noticeable for the average person.
> That is basically 3 frames of latency difference on a 60 Hz monitor, and would definitely be noticeable for the average person.
Having done some UX work in this area, I can say that people greatly overestimate the effects of latency on text entry when they're just looking at the numbers. End-to-end lag is more obvious in situations like dragging objects, but even then people manage to work around it.
Typing isn't performed on a tight feedback loop in the brain. We don't wait for the letter to appear on screen before starting the process to press the next key.
Typical human reaction times are on the order of 250 ms for a simple visual stimulus. Recognizing letters will take even longer. A difference of a few tens of milliseconds isn't generally going to be noticeable for typing unless you're really going to great lengths to A/B test.
Consider that people can SSH into remote machines all of the time with 100s of milliseconds of latency. The experience may not be optimal, but our typing doesn't fall apart over SSH.
People tend to compare device latencies to the human cognition-action loop and declare them irrelevant, but paraphrasing what I've heard from a gamer: their ability to eyeball latencies in the sub-10 ms range comes from the fact that, through training, their I/O loop runs synchronously with real time, and added latency can disrupt that and force re-adjustments in detectable ways. Just as 1PPS signals can be used to sync time to within picoseconds, the precision of an event is not limited by frequency, only by deviations. Isn't that interesting?
I think the latency becomes most problematic when it's inconsistent, and if you want to fix input errors, or move the cursor and then input something.
I've experimented with video games with various amounts of display lag or dropped frames; nothing blinded or anything (although, setting up a blind test sounds fun), and there's clearly an increase in difficulty the farther you get between input and response. Writing code is clearly not Mario Brothers, but small delays can add up.
> I think the latency becomes most problematic when it's inconsistent
Indeed! I don't have a problem using, say, an SSH console with 1 second of lag. After a bit of initial cursing, it works just fine; I can write my code or edit conf files just fine. It's not as comfortable as writing at home, but it's not really a big deal.
However if the lag is inconsistent, say due to packet drops, it's horrible. Even if the base latency is low.
It's funny you mention this, because I like to use vscode over ssh. I believe the editor updates the screen before the change is ever made on the server, so it feels nice and responsive like you're coding on your home machine.
Not trying to take merit away from VSCode, but emacs and tramp mode also do that. And not just with remote machines over ssh, docker containers work as well. Also docker containers over an ssh connection (or whatever crazy combination you may need).
You get latency-free editing because the file is edited locally and sent to the server on save. Simple implementation but highly effective.
I use 60Hz keyboard autorepeat for navigation, so latency is very noticeable. I need prompt visual feedback to tell me if my estimate of the autorepeat startup delay is too short or too long. The quicker I get it the shorter the navigation sequence I can make while still having a good chance of hitting the correct frame-perfect key-release timing. Note that reaction time is irrelevant; I anticipate everything and only use the visual feedback for timing adjustment ("rushing or dragging" in drumming terminology).
Good G-Sync or FreeSync gaming monitors can get down to low single digit millisecond numbers, but the typical 60Hz non-gaming monitor or laptop display will be more in the 8ms-15ms range.
Also most LCD monitors do not update all pixels at once. So the top-left pixels effectively change about a frame earlier than the ones at the bottom right. Which puts some of the numbers there also into perspective.
Correct me if I'm wrong, but this doesn't seem to measure input stack latency nor how long it takes for the pixels to be actually visible on the display after all of the compositing delays.
All this measures seems to be time from injected keyboard event until pixels change on whatever bitmap/surface in memory. Message passing, in other words.
That sounds like it's measuring almost exactly the portion of the end-to-end latency that an application can actually do anything to reduce; the rest is in hardware and drivers. So it would definitely be a useful measure, but cutting this measured latency in half won't cut the entire end-to-end latency in half.
An application might be able to make choices about APIs that affect output latency. A hardware swap chain (AKA multi-plane overlay) is going to be visible on the screen at least one full vsync faster than something that's composited into a frame buffer. In the best case, this measuring method wouldn't see any difference in such a case. In the worst case, reading pixels from the screen might even misleadingly measure worse due to triggering a corner-case code path in the window compositor.
Using an emulated legacy API (like Windows GDI) might yield a very low latency score in this test, even though it might take much longer to actually be visible on the display.
It's called Electron and it shouldn't have been used for building desktop apps in the first place. I presume it was built as an inside joke when people were joking about web browsers becoming the new OS.
That said, it allows companies to use cheap web developers to build (previously more pricey) desktop apps, so it's used everywhere now. It's good enough to make money, but obviously using the wrong tool for the job will make some aspects of the experience worse.
Since most consumers hate working with the buggy software on their computer anyway, the loss for a company in making it slightly worse by introducing delay is negligible.
Because in a garbage collected language, most of the time you have little to no control over how and when the garbage collection takes place, and how long it will take.
That's the reason you won't see a Java-based pacemaker anytime soon.
Also related: https://danluu.com/input-lag/