Plotinus – A searchable command palette in every modern GTK+ application (github.com/p-e-w)
101 points by Fnoord on Feb 13, 2020 | 24 comments



I’ve always pictured the opposite: rather than taking a GUI and bolting on “fake” command-line-like features, why not have a base class for GUI “main” windows that’s essentially already a true, fully-featured terminal emulator showing the stdin/stdout of the executing process?

Then, as you would add graphical controls to the main window in the layout builder, the terminal-emulator-interface part of the window would get squished down further until it just looks like a status-bar. (Which is essentially what status bars are: a very underpowered terminal-emulator only capable of showing the last line of stdout.)

But click [or tab into, or Ctrl-` to activate] the thing, and it’d “pop down”, adding more height to the window, if there’s room; or “pop up”, resizing the rest of the window narrower; or just overlay translucently on top, like in games with consoles.

Opening this wouldn’t give you a login shell, exactly—it’d be the stdin of the program. But the toolkit would hook the stdin and turn it into a REPL, based on the CORBA/DCOM/DBUS service-objects available in the program; and the GUI toolkit would then enhance this REPL, by introspecting these service-objects, to enable both textual autocomplete/auto-validate of commands, and the drawing of these sorts of graphical command-palettes. (But you’d be able to “shell out” to process these objects with commands, like you could in scripting languages.)
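A toy sketch of that introspection-driven REPL idea in Python (all names here are invented for illustration; a real toolkit would derive the command table from D-Bus/COM introspection data rather than from a local object):

```python
# Toy sketch: a REPL whose commands come from introspecting a
# "service object", standing in for what a toolkit could derive
# from D-Bus/COM introspection data.
import inspect

class DocumentService:
    """A stand-in for an application's exported service object."""
    def open(self, path: str) -> str:
        return f"opened {path}"
    def close(self) -> str:
        return "closed"

def discover_commands(service):
    """Build a command table from the object's public methods."""
    return {
        name: method
        for name, method in inspect.getmembers(service, inspect.ismethod)
        if not name.startswith("_")
    }

def run_command(commands, line):
    """Parse 'verb arg...' input, validating against the method signature."""
    verb, *args = line.split()
    if verb not in commands:
        return f"unknown command: {verb} (try: {', '.join(sorted(commands))})"
    sig = inspect.signature(commands[verb])
    if len(args) != len(sig.parameters):
        return f"usage: {verb} {' '.join(sig.parameters)}"
    return commands[verb](*args)

commands = discover_commands(DocumentService())
print(run_command(commands, "open /tmp/notes.txt"))  # opened /tmp/notes.txt
print(run_command(commands, "opne /tmp/notes.txt"))  # unknown command: opne ...
```

The same command table that validates textual input is what a toolkit could use to draw a graphical command palette.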

Or, to put that another way: Plan9’s Acme editor is a cool idea, making the whole editor into an interface where you can just write magic runes on the walls and then activate them. But there’s no real understanding or support for those runes, because they’re fundamentally opaque Plan9 executable binaries. What would this look like if we translated it into the object-oriented world that GUIs live in? What would Smalltalk’s Acme be like? And then, what if all applications in the OS were built on top of it, the way that you can build applications as major modes in Emacs or as plugins in Blender?

Or, to put that yet another way: if Automator in macOS can already latch onto a running program’s service API and tell it to do things as a batch script, why can’t I interactively access the same APIs from inside the running program itself, as an automatic benefit of using whatever technologies power Automator? Why can’t I do the equivalent of opening the Smalltalk Inspector on my GUI program?


What you describe would've been great, but the entire GTK ecosystem has been wrecked by the fallout from GNOME 3.0 and the subsequent decline of the community.

Open source projects like GNOME paid a heavy price for not being able to deal with people like Lennart resolutely.

I see it as a great injustice that two to four drama-queen personas can hound 50+ of the most productive developers out of the ecosystem, and not the other way around.

This is how GNOME 3.0 became the private project of a few drama-queen personas who hijacked an open-source project from 100+ people.


> why can’t I interactively access the same APIs from inside the running program itself

Isn’t that what the script editor on OSX does?

On Linux some Tcl programs are like this; I don’t know if it’s a feature of the applications I use or of Tcl itself, since I (unfortunately) almost never use it.


That is a bit like how the Xerox PARC experiments with Mesa/Cedar went, and how Wirth got inspired for Oberon.

As for the Oberon System, there were no executables, only dynamically loadable modules.

They could be used by other modules, or by the OS itself.

Basically, there was a set of conventions for how public procedures/functions should look for the OS to recognise them as callable from the REPL or via mouse actions.

And for mouse actions, they could either act immediately, act upon the currently selected text/viewer, or present a context menu.
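A rough Python caricature of that convention (module and procedure names invented; in the real Oberon System, commands were exported procedures addressed textually as Module.Procedure):

```python
# Sketch of Oberon-style command dispatch: the "OS" keeps a registry of
# loaded modules and invokes exported procedures by the textual name
# Module.Procedure. Names are invented for illustration.

MODULES = {}

def load_module(name, procedures):
    """Register a loadable module: a dict of exported procedures."""
    MODULES[name] = procedures

def invoke(command):
    """Execute a command of the form 'Module.Procedure'."""
    module, _, proc = command.partition(".")
    if module not in MODULES or proc not in MODULES[module]:
        raise NameError(f"no such command: {command}")
    return MODULES[module][proc]()

load_module("Edit", {"Open": lambda: "Edit.Open invoked"})
print(invoke("Edit.Open"))  # Edit.Open invoked
```

Because any text on screen matching the convention could name a command, clicking it was enough to run it, with no distinction between "the UI" and "a script".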

Or see how Inferno and Limbo interact with each other; don't stop at Plan 9.

I think the closest you can get to these ideas today is Windows, with COM/.NET/DLLs and PowerShell as the orchestrator.


Much of what you're describing reminds me of Dynamic Windows, the UI on Symbolics workstations in the 80s, and CLIM (Common Lisp Interface Manager), an attempt to embody the same ideas in a vendor-neutral standard.

DW/CLIM record text+graphical output in a DOM-like tree structure with references to the underlying application objects, plus a "presentation type" and parameters for how the object was presented.

The interaction window presents a stream interface rebinding your application's standard-input and standard-output streams, but also implements drawing operations, and the functions 'accept' and 'present' do structured IO against the stream in terms of actual objects. The presentation-type parameter determines both how to parse/print objects and which objects on screen may be selected with the mouse as input. Everything being live objects, the lisp machine usually let you select any object on screen and pop it into an inspector.

Commands defined the presentation types of their parameters, again defining how to parse/print arguments and enabling mouse selection. Presentation translators let you define how to convert an object of some type into a context where the UI wanted input of a different type. Commands themselves had a presentation type, so the top-level command processor literally loops printing a prompt, calling 'accept' for a command, then executing the command.

Consistently, presentation (to command) translators let you trigger commands directly via clicking objects, drag-and-drop gestures, etc., permitting the UI to function in terms of commands even if the textual 'interactor-pane' was absent.
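A compressed Python caricature of the accept/present idea (the type names and functions here are invented, and real CLIM does vastly more, e.g. recording presented output for mouse selection):

```python
# Caricature of CLIM-style presentation types: each type knows how to
# print an object and parse one back, and the command loop does
# structured I/O in terms of objects rather than raw strings.

PRESENTATION_TYPES = {
    "integer": {"present": str, "accept": int},
    "pathname": {"present": lambda p: p, "accept": lambda s: s.strip()},
}

def present(obj, ptype):
    """Render an object; a real UI would also record it for mouse selection."""
    return PRESENTATION_TYPES[ptype]["present"](obj)

def accept(ptype, text):
    """Parse user input into an object of the requested presentation type."""
    return PRESENTATION_TYPES[ptype]["accept"](text)

# A command declares the presentation types of its parameters.
def com_add(a, b):
    return a + b
COM_ADD_PARAMS = ["integer", "integer"]

# One turn of the top-level loop: accept each argument by its declared
# type, run the command, and present the result.
args = [accept(t, s) for t, s in zip(COM_ADD_PARAMS, ["2", "40"])]
print(present(com_add(*args), "integer"))  # 42
```

The key point is that parsing, printing, and mouse-selectability all hang off the same type declaration, so a command works identically whether its arguments arrive as typed text or as clicks on previously presented objects.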

It's a fascinating paradigm, with some shortcomings. Despite CLIM defining some more traditional GUI widgets, how to integrate them with presentations and the command processor is a bit half-baked. This is a solvable problem. A free implementation called McCLIM is still around and worked on by one or two of the devout.

One of these days I hope to see a good implementation of these ideas in Javascript. A browser DOM is the perfect substrate on which to implement a modernized CLIM-like UI, rather than reinventing everything the hard way on top of X11 like McCLIM attempts. It would certainly do wonders toward spreading the ideas.


For macOS users: in case you didn't know, similar functionality is built into macOS (props to Apple for this!).

Cmd-Shift-/ (a.k.a. Cmd-?) opens the Help menu's search field, where you can just type and it filters all of the menu items matching the query. You can press Return to execute an item, or use it to discover the shortcuts for items.


Awesome. This is something I like about macOS, and a feature that Unity had before Canonical switched to GNOME, that I wish were adopted by more desktop environments.


Looks quite interesting!

Kudos for using Vala, it needs a bit more love, instead of slowing down GNOME with GJS everywhere.


It is available in Arch. I got it to compile on Ubuntu without much hassle. On Nix, it is available but install conflicted with another package. Didn't look into it further.

The keybinding is already in use by some GTK+ applications (Firefox, IIRC), and it's also painful that some apps I use aren't GTK+ but, say, Qt... which makes me wonder: is there something like this for Qt?


> On Nix, it is available but install conflicted with another package

Huh. I was sort of under the impression that nix was supposed to make that impossible.


I suppose improbable rather than impossible.

Here is the error:

  error: The unique option `environment.variables.XDG_DATA_DIRS' is defined multiple times, in:
  - /nix/var/nix/profiles/per-user/root/channels/nixos/nixos/modules/programs/plotinus.nix
  - /nix/var/nix/profiles/per-user/root/channels/nixos/nixos/modules/config/shells-environment.nix.
  (use '--show-trace' to show detailed location information)
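In case it helps anyone hitting the same error: one workaround pattern in NixOS (untested sketch; the `XDG_DATA_DIRS` value below is a placeholder you'd adjust to your setup) is to force a single definition of the conflicting option with `lib.mkForce` in your own `configuration.nix`:

```nix
# Sketch: override the conflicting environment.variables.XDG_DATA_DIRS
# definition so only one (higher-priority) value remains.
{ lib, ... }:
{
  environment.variables.XDG_DATA_DIRS =
    lib.mkForce "/run/current-system/sw/share";
}
```

This only works if both conflicting definitions are at default priority; the cleaner fix is for one of the two modules to stop setting the unique option.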


Most of this is certainly possible with Qt. KDE Plasma ships with a global menu, and Qt >= 5.7 exports its menus to DBus, so that covers those. As for other UI elements (buttons, etc.) I'm not sure - I've seen it done through dll* injection, but I don't know if there's a cleaner way.


This is just awesome. To my regret, many GTK applications don't care much about keyboard shortcuts any more. That's why I personally often use terminal applications. This project could actually make many applications usable again.


Last commit: two years ago...


It's because gtk+ 4 will no longer support general-purpose loadable modules – the very feature which makes the palette possible. Also the maintainer is disappointed with the attitude of GNOME to third party developers: https://github.com/p-e-w/argos/issues/75#issuecomment-475844...


Man, this is terrible! I had warmed to GNOME 3 (I like the experience, unlike most of the loud crowd), but this is inexcusable. It literally breaks the experience for hundreds of third-party applications, for some reason I just don't understand.


This looks useful although I feel like it needs a better default shortcut. Ctrl-Shift-P is emacsish in its contortions.


Ctrl-Shift-P is also the command palette in vscode


And Sublime Text, Fman, Atom, ...


Perhaps chosen to (EDIT: try to) avoid conflicting with other shortcuts?


Great! Will this enable creation of a useful voice interface?


It does look like a good first step towards that. If I ever find that one service I used that one time to build context-aware intent recognition, I'll definitely try that.


Seems dead, last commit was in 2017.


Nice! I love GUI projects.



