When you say Terminal app, do you really mean shell commands? i.e. do you want to run something in Terminal.app that will demonstrate the expanded colour range?
Try btop, a resource monitor with true colour support and 37 built-in themes.
> because macOS is adhering to the UNIX specification
Isn’t it rather that Darwin was based on BSD 4.4? I’d imagine GPL 3.0 is a bigger impediment to them ever migrating to GNU tools than any desire to be UNIX certified.
I know. My question is, isn’t the reason the command line tools work the way they do simply that they’re essentially the BSD programs (give or take an Apple patch), with BSD options, not because they needed to work that way for Apple to get the OS certified?
Even if macOS wasn’t UNIX-certified, Apple would still be unwilling/unable to include the GNU software due to the license. I can’t see the Apple of today implementing a full suite of non-GNU software but with GNU-style options either.
So, POSIX compliant or not, there’s probably no world where `grep -P` works out of the box on a Mac.
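If you do need PCRE-style patterns on a Mac anyway, the usual workarounds are GNU grep from Homebrew or the perl that ships with the system. Something like this (the lookahead pattern and styles.css are just for illustration):

# Homebrew installs GNU grep with a g prefix, leaving the system grep alone
brew install grep
ggrep -P '\d+(?=px)' styles.css

# or use the bundled perl, whose regexes are what -P imitates in the first place
perl -ne 'print if /\d+(?=px)/' styles.css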
The % register contains the path of the current buffer; you can insert it into prompt commands with <C-r>%. <C-w> at the command prompt deletes the last word, which in this case will be the filename of the current buffer, leaving the directory path.
So:
:o <C-r>%<C-w>new-filename<ret>
Would open a new buffer at /path/to/the/previous/buffer/new-filename. The file isn’t created on disk until you explicitly write, so :w! to save the first time.
If you literally just wanted to create a new file instead of opening a buffer, you could do that from inside Helix with :run-shell-command (aliases sh or !) instead of another terminal:
:sh touch <C-r>%<C-w>new-filename<ret>
The :o method has the advantage of LSP integration. For example, when I create a new .clj file that way in a Clojure project, the new buffer is pre-populated with the appropriate (ns) form, preselected for easy deletion if I didn’t want it.
The “missing” keys are on additional layers reached via a modifier key, or by overloading keys on tap/hold, or by increasingly esoteric methods the smaller the board gets: chording, tap dance, etc. They’re typically no less accessible than capital letters, while allowing you to keep your fingers on the home row.
For me, the additional keys on my larger keyboards rarely prove useful in practice. I end up mostly using the same subset available on the 60% I’m typing on now – it’s quicker and more comfortable than reaching over to the dedicated key.
On the other hand, there is spatial memory. Overloading keys has some downsides - it adds more opportunities for error - and makes muscle memory complicated.
In a lot of software those extra function keys are well used, go into muscle memory easily and help to save a lot of time.
I actually make fewer errors, and in some cases reach higher speeds, when typing a lot of special symbols: everything is reachable without moving my hands, and I arranged all the special symbols in a way that makes sense to me. Muscle memory works perfectly fine, with the difference that I don't need to make blind, error-prone hand movements across the keyboard to use the arrow keys etc.
That sounds promising. Training muscle memory for a custom arrangement of keys is something I'm not (yet) brave enough to do. Did you arrange everything yourself?
Normal arrow keys on a layer are good, but I usually need the arrow keys in combination with Shift and Ctrl (move to start, end, next instance, declaration, implementation, select something). That's already heavily overloaded, so I'm not sure how well it works with a layer.
The thing I miss most is the function keys. I need them a lot, sometimes overloaded with Shift and Ctrl. Those actions are usually things that should be out of reach of normal typing (terminate something, start a compiler, mess with breakpoints, ...).
Those small keyboards never include function keys, but I think I could just build something for that. Custom button panels with a few keys are quite easy to make.
It’s a common niggle but, as far as I know, nobody is sure of the precise rationale for the placement of every key, only the broad explanations of the layout that Dvorak published and promoted. The layout wasn’t based on letter frequency alone: they also attempted to account for bigram frequency, frequency of repetition within words, the frequency with which words are used, and an objective of rhythmic alternation between the hands.
Consider also that it was developed in the ’20s and ’30s. Nowadays you could throw some moderately hefty compute at almost everything of note written in the English language and come back to an error-free analysis after lunch, but who knows how representative the corpus they painstakingly analysed by hand really was. It might have made perfect sense with their data set.
Ultimately, the English language didn’t evolve to be easy to type, so there will always be compromises somewhere, and the English of today isn’t the English of a century ago anyway. I imagine you’d get quite a different layout if you based it on Gen Z text messages or something.
Personally, I can’t help but note that Dvorak’s first name was August.
I’m curious what led to that conclusion. As far as I remember, making concurrency easier to manage was always presented as one of Clojure’s primary objectives. It’s fundamental to the design; it’s a major motivation for all the core data structures being immutable, for example.
STM, atoms and agents were there from the beginning. I think futures and promises were added in 1.1. core.async is from 2013. Even popular third-party libraries like promesa and manifold are around 10 years old at this point.
I think flow promises to make it easier to orchestrate core.async, specifically, in complex applications, but the essential primitives are far from new and I don’t consider them any harder to use than JavaScript.
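To give a sense of what I mean by the primitives being easy to use, all of this is plain core Clojure and has been since around 1.1 (toy examples, the numbers don't mean anything):

;; shared state in an atom, updated with a pure function
(def hits (atom 0))
(swap! hits inc)
@hits ;; => 1

;; run work on another thread; deref blocks until it's done
(def total (future (reduce + (range 1000000))))
@total ;; => 499999500000

;; a promise is delivered once, from any thread
(def p (promise))
(future (deliver p :done))
@p ;; => :done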
On that page they have Cmd + Esc mapped as the shortcut for the global system menu (see “Teleport”), and it looks like the CEO is a NeoVim user[0].
I guess they anticipate users hitting escape a lot. Making it a large target doesn’t strike me as a worse use of the space than dividing function keys into blocks of 4, and more likely to be intentional than an artefact of generative AI.
I don’t think that necessarily follows. The age of the surviving fragments today isn’t the whole story.
We could presumably infer it still wasn’t “missing” as recently as a thousand years ago from later sources referring to it, even if the specific text (or oral tradition) those authors knew of hasn’t survived.
Like how we know about some of the now-lost Greek plays, originally written in the 5th century BC, because they were still being performed in Imperial Rome and writers of that time described them, even down to the details of how they were staged.