I met someone at a conference who was both a stenographer and a developer. Because the dictionaries which convert the keystrokes into words are just JSON, he changed them to common commands/code snippets. Watching him code was mindblowing.
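For anyone curious what that looks like: Plover-style steno dictionaries are plain JSON files mapping a chord (a "stroke") to its output text, so pointing chords at code is just a matter of editing entries. A hypothetical sketch (these strokes are made up for illustration, not from any real steno theory):

```json
{
    "TKEF": "def ",
    "TPAOR": "for item in items:",
    "REPBT": "return "
}
```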
Much of the speed of stenography comes from using macros, and macros should be available in any decent editor without needing the use of a steno keyboard.
For example, both vim and emacs have snippet or template plugins (e.g. yasnippet, snipmate, ultisnips, xptemplate) that can be activated with a short keystroke and then expand into whatever code or text the user desires. Depending on the features of the plugin in question, lots of other advanced behavior (such as selecting items from a menu, activating sub-snippets/sub-templates, etc.) is available too. No need for a steno keyboard for any of this.
There are other benefits to using a smaller chorded keyboard though, namely RSI prevention and general comfort. That's kind of what I was going for with the QWERTY layout for Georgi. Rather than using chording for briefs and phonetic phrasing, it's just used for simple mapping with QMK. Makes for a compact and ergonomic board. Check out the layout I've linked below.
Weirdly, though developed for Georgi, the lightweight springs have found their way back into Gergo and GergoPlex, as users have reported they help with RSI flareups (compared to traditional mechs).
In Vim (and I assume Emacs too) it's also possible to just define keybindings for any sequence of inputs, allowing even mode changes in between and differentiating between file formats.
I used that for a couple of common symbols in LaTeX's math mode, e.g. entering ;lra gets me a \leftrightarrow.
But I wouldn't recommend that approach for multi-line snippets and more complex cases; the config will get unreadable otherwise.
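For reference, the single-symbol case described above can be done with a filetype-scoped insert-mode mapping; something along these lines (a minimal sketch, not the commenter's actual config):

```vim
" Expand ;lra to \leftrightarrow, but only in TeX buffers
autocmd FileType tex inoremap ;lra \leftrightarrow
```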
This is awesome! Learning to write in shorthand was on my bucket list, but this! This is way cooler.
Imagine writing code as fast as you think!
Well, also a note: programming is not typing; it's 90% thinking. But at the end of the thought process, you're just burning to turn your thoughts into working code ASAP! It would be so nice to streamline and optimize the thought-to-code transfer. Until mind readers are invented, steno is a great solution.
Often it's better to have 5 pieces of code that look alike than to have one function that handles 5 different cases. And the former will be easier to modify and delete.
Not when you modify state in the same scope. You now have 5! (120) ways of introducing bugs, if you just wrote it as a function that doesn't modify global state you only have 5 ways of creating bugs.
Well, Mr. BuzzKillington(!), I'd disagree. I'd argue that a fast thought-to-code pipeline would give you wings! Sure, you'll type bugs in faster (valid point), but you'll fix them just as fast! The overall development progress is incomparable.
I mean, you do you, man, but I've definitely spent an hour and a half on one line of code to make it fully vectorised instead of doing things the easy, fast way of iterating over the data, and that code could run orders of magnitude faster than the quickly written version.
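As a hypothetical illustration of the trade-off (the function names and workload here are invented): the loop version is quicker to write, while the vectorised version pushes the work into NumPy and typically runs orders of magnitude faster on large arrays.

```python
import numpy as np

# "Easy, fast to write": iterate over the data element by element.
def scaled_sum_loop(xs):
    total = 0.0
    for x in xs:
        total += x * 2.0 + 1.0
    return total

# Vectorised: one array expression, no Python-level loop.
def scaled_sum_vectorised(xs):
    return float((xs * 2.0 + 1.0).sum())

data = np.arange(1000, dtype=np.float64)
assert scaled_sum_loop(data) == scaled_sum_vectorised(data)  # same answer
```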
I remember once in undergrad we had a particularly gnarly computation problem to tackle. Some people's code ran as long as 40 minutes. Most were around the 10-15 minute mark. A few were in the 4-5 minute range.
Mine ran in 50 seconds. No one else came even close.
Now, I get that that's not always required. For a one off calculation, it makes no sense to spend 3 hours of developer time to save 1 hour. But if you have to run it 10, 100, 1000 times it starts to make a lot of sense.
I can hack together a convoluted piece of shit pretty quick: no design doc, no documentation, no design elements that would give me flexibility, no error checking. There are always compromises, but I just find it's extremely rare that my typing speed is the limiting factor. It always seems like spending a bit more time thinking about design saves me time in the end.
Maybe you're John Von Neumann or something but like I said, I'm just pretty skeptical.
I'm curious how you approach writing such performant code. Did you start with the crappy/easy way and then just refactor until it was in a state that you liked?
Maybe it's easier to describe the specific code/problem?
Hmmm... yeah so it does tend to depend a lot on what's going on. I'll try my best to answer though at a very high level.
For the best speedups, refactoring crappy code isn't going to cut it. It WILL still result in some nice performance boosts, and it's a good way to code in general. But when performance is absolutely essential, you're probably going to have to completely redesign from the ground up.
Mostly it seems like it comes down to focusing on the parts that take the vast majority of the time, and then thinking very, very deeply about the problem you need to solve in that chunk (including asking whether you're solving the right problem!), about what each and every function call or operation does, and trying to be as smart as you possibly can. There are some fairly subtle things here: intrinsic libraries can sometimes do really stupid things behind the scenes that are hard to spot. Be careful to avoid unnecessary extra steps, places where data might get copied in memory needlessly, and the like.
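One concrete example of the "data copied unnecessarily" trap, sketched in plain Python (the same pattern matters even more with large numeric arrays): rebinding a name to the result of `a + b` builds a whole new object, while an in-place operator updates the existing one.

```python
xs = list(range(5))
original = id(xs)

xs += [5]        # in-place extend: mutates the same list object
assert id(xs) == original

xs = xs + [6]    # builds a brand-new list and copies every element
assert id(xs) != original
```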
Most of the time for the real speedups, I was writing my own libraries in Fortran that were custom tailored to the problem I was trying to solve. This is important because you might find some weird data validation being done or something that isn't relevant to your pipeline, but is included for generality.
There's certainly no magic bullet, but the most important thing is thinking hard about your problem and making sure you're solving the right one; the next most important is probably custom-tailoring the libraries, if you have time and the problem isn't already well solved. Sometimes that's a waste of time, of course, because your use case isn't that unusual.
Haha, "you do you man" - hilarious! Read the "note:" in my original comment. My whole point is better human-to-IDE I/O throughput. I can't imagine how that improvement could be bad. The process stays the same; some aspects get significantly better, that's it.
EDIT: also, how many times in your career have you had a problem where you had to make a significant, time-consuming effort over one line of code (other than at hackathons, of course)? With practically all high-level languages in modern collaborative environments, readability is prioritized. "You should be able to read your code as well-written prose" - Uncle Bob's statement is pretty much universal doctrine. The ability to type fast is greatly underestimated here.
>With practically all high-level langs in modern collaborative environments readability is prioritized.
Readability and verbosity are two different things. I can see how someone using Java would think that typing faster is better. But that's a reflection of poor language design choices rather than anything intrinsic to programming.
Oddly enough, I find that I type fewer keystrokes when writing Java than, for example, Python. The boring predictability of the language allows the IDE to guess what you're thinking most of the time.
Typing speed may not be a bottleneck, however reducing delays between the thought and the implementation can still make a difference.
Strictly speaking I don't need to use Vim to write and edit my code, and it doesn't impact my typing speed in the slightest; however, it cuts the delays when modifying code (moving hands to the mouse, to the arrow keys, etc.) much shorter and makes the whole thing feel more direct.
Although from my perspective I have my doubts if switching from something like Vim to a steno keyboard would be worth the time investment. For the usual text and many other tasks I can definitely get behind the idea, though.
Hey Jared, in fact, if you click into the video on the Open Steno page, the stenographer & developer you're talking about is Stan Sakai, who is involved in Open Steno:
Though it's not the same person, there are some talks by the Open Steno / Plover people, e.g. https://www.youtube.com/watch?v=Wpv-Qb-dB6g (around 6:30 or so).
(edit: also one on the bottom of the "about" page)
I guess those were the ones I saw at an Angular conference in London.
It was a totally amazing experience, and for a non native speaker allowed me to catch every joke and nuance (or so I think).
At one point the speaker was talking about Angular's scaffolding system (whose name I can't remember right now), and it got a bit recursive with templates of templates. That was the only time the stenographer lagged slightly; the speaker noticed and pushed it a bit further, until the stenographer caught on and made a witty comment instead.
At BangBangCon, they managed to fully capture the nuance of audio-based humor, to the point that their captions made funny things funnier.
Watch the video from BangBangCon 2019 on steganography for one example; the presenter hid data in audio files, and the stenographer carefully described the data-laden files as sounding "perfectly normal, unsuspicious".
I have begun the path to learning stenography. Steno involves chording, i.e. pressing multiple keys at once. Pressing multiple keys at once greatly improves the information density of each stroke, allowing professionals to type 240 words per minute (realtime), which is just not possible on single-key-at-a-time keyboards. Unfortunately, most commonly available non-gaming keyboards do not natively support multiple keys at once, a capability also known as n-key rollover (NKRO).

I ended up buying a pre-assembled, fully open-source hacker keyboard, the Ergodox EZ⁰, and have custom layout firmware that matches up with the Open Steno Project¹. From here I am using Qwerty Steno² to practice my chording. Here is an example video on YouTube³ of someone using steno to program a simple FizzBuzz on a different keyboard.

In my opinion, if anyone is looking to really take their typing to the next level, chording is the only way; Dvorak/Colemak/single-key-at-a-time layouts will never really get you there.
Yeah, not convinced. Yet another YouTube video where someone first had to build and memorize a custom dictionary preloaded with the specific words they were going to use to write the program they already had in their head...
At least this one actually seems to remember their inputs well enough; I wonder how many times they practiced. I've seen similar videos before, and it's hilarious to watch them trying to recall the chords they programmed.
I don't see this scaling for real projects with thousands and thousands of complicated identifiers, camelCase, crazy abbreviations, acronyms, etc. Each project is going to have its own set of identifiers, and you're going to need a custom dictionary for each of them.
Steno is great for natural languages that evolve very slowly over time and which can be largely learnt once and then used forever. Code is a rather different beast.
It only takes a few seconds to "program" a new chord combo into your custom per-project dictionary. You can do it dynamically while you code. (You can also use chords for editor macro commands, which, broadly speaking, are something you "learn once and use forever".)
> It only takes a few seconds to "program" a new chord combo into your custom per-project dictionary.
I know. It also takes only a few seconds to look up a word in a dictionary.
It takes much longer to memorize thousands of ever-changing chords for different projects. And when you haven't memorized them, typing is going to be very slow and awkward. Kinda like trying to write an essay when you need a dictionary for every other word.
> It takes much longer to memorize thousands of ever-changing chords for different projects.
I'm skeptical of this. A chord sequence is of similar complexity to an identifier (the keys on a stenotype keyboard have mnemonic designations to make this easier), and people memorize commonly-used identifiers just fine, as part of getting familiar with a new project.
There's a fantastic, tight-knit community associated with the Open Steno Project. I'm in the process of learning steno, and https://didoesdigital.com/typey-type/ is a fantastic page with drills that build up steno skills.
Looks like a project to build a stenotype machine (chorded keyboard entry), not to promote stenography which more commonly means various techniques of shorthand writing.
It's the project behind Plover (https://github.com/openstenoproject/plover/wiki/Beginner's-G...), a cross-platform free steno program that works with normal (NKRO) keyboards and steno machines. There are also a number of hobbyist steno machines, made by the community, that run ~$100, instead of >$1000 for a real machine.
There was just a post about using a raspberry pi as a USB device. I wonder if that would make it a good candidate for their concept of a self-contained keyboard (no drivers)
My brand new ThinkPad laptop has a keyboard with a pleasant feel, and a high-end GPU for (among other things) a 3D hand-pose human interface. But the keyboard is 2 KRO. :/
After decades of looking, and with screen-comparable AR coming in next year, I kind of expect to move on from the laptop form factor before ever finding one that doesn't feel like an HID facepalm.
So if you want to optimize for typing another language, you can swap in corresponding dictionaries and type away. There's only the slight problem that there might not yet exist any for your language...
It's not that easy, since the mnemonic key labels (the actual assignment of keys to "letters") might change depending on the overall system. Some stenotyping systems even use subtly different keyboard designs that don't match the stenotype keyboards the Open Steno Project was designed to work with.
Anyone remember texting on a feature phone using the numeric keys? I never learned it myself, but those who were good at it would still beat me typing on a virtual keyboard on my smartphone. I wonder if you could actually program using just 12 keys as well!?
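For fun, the old multi-tap scheme is easy to sketch: each digit key cycles through its letters, and the number of presses picks one. A toy decoder (purely illustrative, not any real phone's firmware):

```python
# Classic multi-tap key layout: each digit cycles through its letters.
KEYS = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}

def multitap_decode(sequence):
    """Decode space-separated runs of repeated digits into letters."""
    out = []
    for run in sequence.split():
        letters = KEYS[run[0]]
        # n presses of a key select its n-th letter (wrapping around).
        out.append(letters[(len(run) - 1) % len(letters)])
    return ''.join(out)

assert multitap_decode('44 33 555 555 666') == 'hello'
```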
Nice to see someone developed a plug-in[0] for the Michela machine[1] (the one used in the Italian Parliament). The one used in Parliament is actually just a MIDI keyboard, which means it can be reproduced with a relatively cheap two-octave off-the-shelf MIDI keyboard.
On YouTube there is a person who shows how to mod[0] a cheap MIDI keyboard for use as a steno device, and who also has videos showing what typing on the Michela-MIDI looks like[1].
My bad: I misread the headline as "Steganography", and as I read the comments I kept wondering how making typing faster would obscure communications. :-)
Neat. The lost art of taking down dictation fast. Wish I had known it in school (and that they had allowed it in school). I hated dictation, but for some people it's a job, I guess.
Not only dictation, you can also use it to type your own work at 150+ words per minute. I've seen fiction writers say it helped a lot, not just for raw typing speed but because it kept them in flow.
I recently started practicing Teeline shorthand. It's a system for pen and paper. I might not even use it in practice, but it's cool to learn for its own sake. It's pretty clever and sort of layered: letters are the first layer, rules for connecting them are the second, then there are short notations for common words and word groups, then shorthands for common suffixes, prefixes, and letter combinations. Each layer lets you make the shorthand even shorter.
I agree although I'm using the older "Teeline fast" book.
I want my writing to be faster while taking notes in meetings, and slightly obfuscated if I drop my journal somewhere, but it still needs to be interpretable by me years or possibly decades down the road, or by my successors after I'm gone.
That means I needed to use Teeline over Gregg or Pitman, because the latter are designed for quick transcription when you already know the subject matter, and if you come back to those shorthand notes after decades they can be extremely difficult to interpret back into English words.
Teeline is grounded in the English letters themselves, so as long as you're consistent with your rules (or consistently inconsistent with your house rules), someone who learns to interpret a few of your words can slowly but surely unearth all of them.
I wouldn't say that it's excellent now. It still has a long way to go. Small example, try enabling captions in Google's meet. Those are awful. Don't get a lot of words right, if you have an accent it's even worse. And distinguishing between different people talking? Forget about it...
You can use a steno mask so your voice doesn't disturb others, and also so that coworkers' voices and other background noise don't disturb the speech-to-text software.
It would look funky, though, and I've read about people having problems with their voice after an 8-hour day of speaking to software.