Project Mage is an effort to build a power-user environment in Common Lisp (project-mage.org)
190 points by zdw on Jan 16, 2023 | hide | past | favorite | 71 comments



Hi! I am the author of the project. There's a discussion on Emacs is Not Enough that has been taking place today here:

https://news.ycombinator.com/item?id=34375137

The Emacs is Not Enough article is quite ranty, and its style isn't to everyone's taste (well, it is a rant, after all).

If you want the meat, please see any of the other articles on the homepage, or, for an overview of the project, this:

https://project-mage.org/the-power-of-structure

Thank you.


I hate to taint this, but reading up on Fern I have to point to JavaFX. It bears a lot of similarity to what is being talked about here. There may be other frameworks as well.

But JFX captures much of this. It doesn't have your schemas (it's obviously based on Java's class and object system), but it does have constraints, and it does have bindings (not as sophisticated as yours, but they exist and are quite useful in practice). It has the scene graph, it handles contextual styling via CSS. It has its layout components and GUI components. It even has 3D ;).

I will say about the bindings that when the graph gets large it can be fun to try to keep in your head, and, unfortunately, there's no real way to visualize it. So, when something changes (or changes in a way you don't like), it can be an endeavor to source the change and navigate the relationships.

The primary reason I mention this is just to lift it up as an example of many of the concepts you're discussing, but in "production" code, used by many, for quite some time. There may be insights there that can be mined as to decisions that were made that you could apply (or not) to your work. Are JFX's limitations by design? Limited by Java? Limited by performance? Limited by "haven't got to it yet"? I don't know, but, anyway, just something you may want to study a little.

I wish you luck on your project!


> Java's class and object system

I can't stress enough how important and helpful prototype OO has been to the current design. I couldn't imagine doing what I want to do without it. For example, some of the concepts, such as configurations and contexts, rely on dynamic inheritance.

> There may be insights there that can be mined as to decisions that were made that you could apply (or not) to your work. Are JFX's limitations by design? Limited by Java? Limited by performance? Limited by "haven't got to it yet"? I don't know, but, anyway, just something you may want to study a little.

Sure, I will take a look. Thanks!


I haven't finished reading everything but so far I like what I read.

In a way, it reminds me of the software research project STEPS, which aimed to reinvent computing with much less code than normal software. I consider this project a failure, not because its funding got cut before the end, but because nobody managed to reproduce their achievements (non-reproducible research isn't research).

I wish you more success!


STEPS is a very interesting project, indeed. I don't think it was quite a failure; after all, they did Nile and Gezira, which was pretty cool (powerful vector-graphics rendering in a few hundred lines of code). Someone might reuse those in the future, I don't see why not.

But from the limited things that I saw, I got the feeling there was no overarching system. And there was no thinking about flexibility in general (which I consider to be essential to computing). I could be wrong about that, though, I didn't study it very thoroughly, only watched a presentation or two and skimmed the paper. Not that it wasn't ambitious enough already.

> I wish you more success!

Thank you!


Actually, there was a lot of thought given to flexibility; I think a lot of the concepts were not exposed well due to the large project scope. Maru (the underlying STEPS runtime), for instance, was designed to be completely replaceable (see the COLA papers) using a composition mechanism similar to fexprs. Maru is also very easily bootstrapped, as the VM/runtime and compiler come in at under 2 KLOC with all the facilities.


Huh, interesting. I should try to learn more about STEPS in general some time...


What I want to know is why that implementation (Frank) that Kay uses for all his presentations isn’t available anywhere. Is he the only one that has a license to use it? I mean after all that research could you not zip up the result and throw it on GitHub?


This (STEPS) sounds interesting, do you have a pointer? (It's not the easiest word to google...)


Research out of VPRI: http://www.vpri.org/

The "writings" page has a long list of publications you can check out.


> non-reproducible research isn't research

Toxic mindset, so committed to this particular shibboleth!


Well, we're not talking about multibillion-dollar particle-accelerator research here, but about software research!


When is that not true?


Isn't It Obvious That C Programmers Wrote Git is also a bit ranty. It was never made clear how Mage would make what-git-does better. And isn't it obvious that C programmers also wrote Unix, kernels, and userspaces?


The way I read this, you wrote that assuming the reader cherishes Unix, kernels, and userspaces, as opposed to integrated live Lisp environments assuming the role of Unix, kernels, and userspaces, yet without C. Now try looking at the world as someone who does not cherish Unix, kernels, userspaces, and C programmers' view of the world...

[note: I do cherish unix, kernels and userspaces; AND lisp. Oh and C...]


and I cherish lisp, and scheme too... and, of course, C programmers wrote a lot of lisps also :)


My point isn't so much to reinvent Git's model (aka tree of blobs/commit/commit-graph), but rather to make it a part of a programmable interactive environment, and then just do the right thing from the user perspective.

That would include:

(1) Develop a UI for navigating the model,

(2) Develop a set of commands that act on the objects of that model.

I do want to append to the Git model by introducing a staging step for the whole tree (this, unlike with commits, won't have to have a history). I also think it could be possible to make it more flexible. So, you could say that there are blobs. But instead of blobs you could have some other kind of object too: for example, the kind that would lazily compute itself from the definition of some other similar object. Or the kind of object that would have knowledge about its contents and, maybe, an efficient way of storing it. So, I don't really want blobs (binary large objects), but rather a general kind of object. Such objects would obviously need to conform to some API spec. Such objects could be written by the user (although they would then have to supply the definitions to others who want to use them). A tree could hold different kinds of such objects.
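To make the "generalized blob" idea concrete, here is a minimal Common Lisp sketch of what such an object API could look like (all names here are hypothetical illustrations of the idea, not the project's actual protocol):

```lisp
;; A generic protocol that every repo object must conform to.
(defgeneric object-contents (object)
  (:documentation "Return the contents of a repo object."))

;; The ordinary case: a plain blob, Git-style.
(defclass blob ()
  ((bytes :initarg :bytes :reader blob-bytes)))

(defmethod object-contents ((o blob))
  (blob-bytes o))

;; A lazily computed object: contents are derived on first access
;; (e.g. from the definition of some other similar object) and cached.
(defclass lazy-object ()
  ((thunk :initarg :thunk)
   (cache :initform nil)))

(defmethod object-contents ((o lazy-object))
  (with-slots (thunk cache) o
    (or cache (setf cache (funcall thunk)))))
```

A tree could then hold blobs and lazy objects side by side, since commands only ever go through the generic functions of the spec.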

There's definitely some design that will have to go into the commands. The commands would have to be taught to be context-sensitive (e.g. to selections or to the current state of the graph or commit).

A session-time undo graph for the repo-staging tree could be built as well. Easy implementation: copy the whole damn thing to memory on every change (only works for small repos, but that's a start). Harder implementation: define the inverses for the commands and build an undo graph.

I will admit: I am by no means an expert on version control systems. So, thanks for expressing the fact I didn't write about this, there's definitely some challenge to this, and I don't have any sort of final design in my head quite yet.


I enjoyed Emacs is Not Enough; I think it nailed its point about why modern computing is so user-hostile. I've sent it to Emacs users I know, and they, too, have enjoyed it.


It was a bit of one of those train wrecks that's impossible to tear one's eyes away from. It had me thinking "wait, Emacs could be better?" (I've only been using it for a few years). The Emacs is Not Enough post[1] certainly drew more onlookers with its title than this post though many complained about the length and the focus. Admittedly I think I wouldn't have been intrigued if my first interaction with this project was just the root of the site as linked here. The author has certainly done a good job providing multiple avenues to explain this concept.

[1]: https://news.ycombinator.com/item?id=34375137


Have you thought about how your system might be adapted for power users that have accessibility requirements, such as blind people? I'm guessing you'd rather not implement the current GUI accessibility APIs, but would rather have something like Emacspeak that directly implements the alternative UI (e.g. speech output) inside the Lisp environment.


Update: I have updated the power-of-structure article with a section on accessibility.

Copypasting:

It's important to think about issues such as accessibility in advance.

I am basing my knowledge about this issue on this article about Emacspeak.

So, Rune will ask lenses to implement the surface-level text commands. Whatever text is shown or assumed to be on screen, the user will be able to copy that text programmatically. So, you could ask a spec-conformant lens stuff like "get the surface level text of the semantic unit at point". Such functions are of general utility.

Next, Runic functions will be required to return some information about the results of their semantic actions. So, if you issue a delete-semantic-unit command, it will return what it has removed (and that would include the info about the surface level text as well as the data-structural unit itself).

The list of such functions and their return-value properties will all be a part of the spec. So, it will be possible to automatically advise all these spec functions for any given lens to invoke the speech functionality.
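As a rough illustration of that advising idea, here is how a spec function could be wrapped to also speak its result, in plain Common Lisp. All names here (speak, delete-semantic-unit, add-speech-advice) are hypothetical; a real implementation would presumably use a proper advice facility rather than redefining functions by hand:

```lisp
;; Stand-in for the actual speech backend.
(defun speak (text)
  (format t "~&[speech] ~a~%" text))

;; A spec-conformant command returns info about what it removed,
;; including the surface-level text.
(defun delete-semantic-unit ()
  (list :surface-text "some word" :unit :word))

;; Wrap an existing function so its surface text is spoken as well.
(defun add-speech-advice (function-name)
  (let ((original (fdefinition function-name)))
    (setf (fdefinition function-name)
          (lambda (&rest args)
            (let ((result (apply original args)))
              (speak (getf result :surface-text))
              result)))))

(add-speech-advice 'delete-semantic-unit)
(delete-semantic-unit) ; speaks "some word", returns the plist unchanged
```

Because every spec function is required to return structured information about its semantic action, this wrapping can be applied mechanically across the whole spec.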


You know, I really haven't thought about that, but now that you mention it, I have heard before that in Emacs, the advice facility is very useful for this goal.

Well, all I can say is that I see no problem in adding speech output or in adding advice. The system is going to be very flexible to the end user.


I think the trick to all accessibility issues is actually logging/tracing. Similar to Kafka, you can have consumers watching the UI log and acting upon it.


thanks for this! have you thought about hierarchical data structure manipulation in emacs style cl environments? any tools to look at?


I am not sure I understand the question, but, for instance, Lem is an emacs-inspired CL editor. Not structural or even hierarchical though.


I admire your bravery for even trying to tackle this. Your rant about Emacs spoke to my soul. It's such a pain to use, but nothing else even comes close to playing the same game. I've been mulling over similar thoughts for years and there's clearly some space in the overlap between Emacs, Excel, Smalltalk and web browsers for a true power user computer interface. I'll be keeping an eye on this for sure.


Thanks : ) I will do my best to deliver.


I can't edit the top post, as it was posted a few days back, and the thread has resurfaced.

Currently, I am preparing an "elevator pitch" (as a few people have suggested I do), and I hope to publish it in a few hours. So, if the "Power of Structure" article is too long to consume and you just want to get the central idea, there will be such a summary soon.

Also: the project needs funding. I will work on it regardless, but it will be an awful lot easier for me to get this done with some help from the community. Please, see https://www.patreon.com/projectmage for the details.

Thank you.


your elevator pitch still assumes quite a slow elevator doing quite a long ride...


Time is going to go by whether we like it or not.

Here it is, by the way: https://project-mage.org/elevator


What I was getting at is that IMO you failed to boil down the essence of the proposal to a length that could be called 'elevator pitch'. By the time you start preaching the essence, everybody's arrived at their floor already.


Oh, boy, how did I miss that one... heh.

You have a point there, I have thrown in a TLDR.


Flexibility interacts poorly with composability. When everything is specialised at the whim of the developer, tying different pieces together is difficult. A lot of the rigidity in programming languages is there because it gives programmers (and programs) common ground to work from.

Flexibility usually interacts poorly with performance as well. Specialisation to a domain is usually faster and can be dramatically faster, but the plumbing that makes the specialisation possible has a cost everywhere.

I'm curious whether lisp's macros are sufficient to splice together dissimilar code and provide the domain specific optimisations one would want to eliminate the plumbing overhead. I would expect not but it'll be interesting to be proven wrong.

It's interesting that you're starting with the user interface instead of the sufficiently smart compiler. "I'll rebuild the world in lisp" projects sometimes (usually?) stall in the first stage of writing their compiler. I think it's quite encouraging that you're building on an existing language implementation.

That would be a lot to build in five years. Good luck and happy hacking.


> Flexibility interacts poorly with composability.

> Flexibility usually interacts poorly with performance as well.

I disagree fundamentally. I believe there's such a thing as the flexibility of specialization.

In fact: the reason why everything is so slow is because we don't have /enough/ flexibility in our systems.

https://project-mage.org/on-flexibility

> It's interesting that you're starting with the user interface instead of the sufficiently smart compiler.

I think it's because I care more about the applications than anything else. Yes, they require the platform, but my motivation is to get something I can use, not just to build a tool.

> That would be a lot to build in five years. Good luck and happy hacking.

Thanks : )


I've been reading the The Power of Structure in chunks. Your Rune section was really beginning to sell me, but I'm confused about something.

I'm trying to imagine my existing Notes as a tree of objects instead of text parsed into an ad-hoc tree. Ok, that sounds good. I could imagine a leaf node being a "blurb", which itself could be something easy and familiar like a string. I could imagine it sitting beside a table, and nested under a bullet point.

I can imagine my up/right/left/down keybindings getting smarter, as I move from "left" or "outside" of a table then it might highlight itself, indicating that I'm selecting it like I would select any heading and can drill down into it.

I can imagine instead navigating up, to the text, and there being a seamless transition so that I'm editing that text just like I normally would in Emacs.

I can imagine navigating "out" and into the tree structure of the bullet points / headings. I already use packages to easily navigate my headings, and so I like the idea.

But when I read this:

> See, even words have structure. Are words objects? We wouldn't have a word for them if they weren't significant somehow. And that's how I would want to analyze and work with them: as words. For the plain text editor, a word is just a cluster of characters somewhere in a string, and there's a whole pile of mess built on top of that string to convince you otherwise.

and this:

> Your word was supposed to be an object that carries information about itself.

I start to worry about performance. Is this something like the old Flyweight Pattern from GoF? Is this level of granularity no longer expensive?


Good question. (Incidentally, it provides a good example for my words in the comment you have replied to.)

Words could be objects. But I will tell you more: you can start breaking words into characters and making those objects too, if you needed to. The key to performance here is to do the necessary stuff on the fly, on demand, dynamically. In other words, a word may become an object only when it needs to become a separate entity. Suppose you use some data structure for storing sentences or paragraphs. Let's assume it's just a list. So, before a word needs to become an object that stores some information about itself, it can just be a string:

    '("This" " " "is" " " "a" " " "sentence" ".")
Now, if you wanted to tag the word "This" with some info (like stylization), you could turn that word into a more complex object:

    '(("This" :style bold) " " "is" " " "a" " " "sentence" ".")
In fact, if your application requires such an optimization, you could even store it as a simple string:

    "This is a sentence."
You can choose whatever storage unit you like, as long as you can make your editor work with it. The structural approach allows specialization on the fly: you can adapt the granularity when you need it, since there's full programmatic access.
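To sketch that "promote on demand" idea in code (hypothetical names, and just one of many possible representations):

```lisp
;; A sentence starts life as a plain list of strings ...
(defparameter *sentence*
  (list "This" " " "is" " " "a" " " "sentence" "."))

;; ... and a word is promoted to a richer object only at the moment
;; we first need to attach information to it.
(defun tag-word (sentence index &rest properties)
  "Replace the string at INDEX with a (string . plist) object,
or merge PROPERTIES into an already-promoted word."
  (let ((word (nth index sentence)))
    (setf (nth index sentence)
          (if (stringp word)
              (cons word properties)            ; promote on demand
              (progn (setf (cdr word)
                           (append properties (cdr word)))
                     word)))
    sentence))

(tag-word *sentence* 0 :style :bold)
;; *sentence* is now (("This" :style :bold) " " "is" ...)
```

The untouched words stay cheap plain strings; only the tagged one pays for its object-hood.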


Do you have examples of other "rebuild the world in lisp" projects? I'd be curious to see how and why historical attempts have foundered


Not great examples. Hazy memories of readme files for projects that sound interesting but fizzle out, sometimes decades ago.

Emacs is probably a success story in that genre. It's possible to use it as the first process booted iiuc. I certainly have days where it's the only process I run after gnome has booted.

Guix has replaced a lot of Linux userspace infra with scheme. Successfully as far as I can tell.

I think there are (non-emacs...) operating systems in various states of progress written in lisp.

Sometimes one finds a compiler written in lisp which doesn't have a runtime in C. Jitawa, Maru are the only two I know of. All the others I've looked at seem to be built on top of C. I think they're both finished/abandoned.

My rebuild everything project is still in the better compiler phase so I might be projecting. Hopefully I'll remember better anecdotes later.


Medley Interlisp appeared on HN a few days ago.

https://journal.paoloamoroso.com/my-encounter-with-medley-in...


Me: this task is big, I’m not at all sure whether it’ll take a week or two, let’s better subdivide it into subtasks and estimate each one in turn

This guy: I am pretty sure I can get done in ~5 years

Seriously, though, fingers crossed! I no longer use CL professionally, but it will always have a dedicated spot in my heart. I’ll be keeping a close eye on this.


Planning long and talking big is simple, because you don't have to face the harsh reality of the actual challenge. You just define some goal, assume it works, and call it done. But when you plan short, you are already in front of the problems, and you see the cracks and must build your plan around them. This is a bit similar to how everyone can build and promise some elaborate framework, but still fail on the small important parts.

So not planning big might actually be a hint that you are more grounded in reality than in wishful thinking. Which can be seen as good.


I feel like your campaign would benefit from explaining a little bit about who you are, what you've written before, and why we should believe you can actually accomplish this (understandably big project). People like to feel like they're paying a person, not an idea...


Haha, alright. I have a github profile

https://github.com/some-mthfka

I am not particularly proud of most of those projects. Many of them were strictly done for fun. Some of them I did because I didn't know any better.

The most sophisticated project I did was a game. You can grab it here for free:

https://ottrta.itch.io/ottrta

I don't promise it will bring you any joy, though. It was written in C++, the source isn't available, and there are only Windows binaries. Sorry, I published it a few years back. But, easily, that was the most interesting project I did, and the only usable one I have ever delivered.

Otherwise, I am just some motherfucker (you guessed it).

But you know what I believe? If you can think it, you can make it... That's all I want to say.


The other way around is more meritocratic (avoid cult of personality or credentialism). Yeah good if it includes portfolio etc


It was unclear to me, but some code was written.

https://project-mage.org/the-power-of-structure#org63e88da

> Fern is the most technical section of the bunch. Perhaps that's because much of it is implemented already (KR, constraints, basic geometry, and linear algebra).

and it doesn't start everything from zero; it's based on Garnet

> Fern is a spiritual successor to Garnet, a Common Lisp project for building graphical user interfaces (GUIs). Garnet originated at CMU in 1987 and became unmaintained in 1994. It's a great toolkit which was used for many research projects, some of which are still active.

https://project-mage.org/Code

with some unit tests. This makes me want to explore more. But what does the code do? An exercise for the reader.


The linear-algebra code provides a usability layer over April (an APL compiler). The geometry is just geometry with intersections (it's pretty cool in that it has planes, infinite sections, etc.). Then there's KR with formulas thrown out, and the constraint solver straight from multi-garnet. Some utils. It's a start.

You are right, I will update that page with more explanations on the particular subsystems.


I'm looking forward to seeing this appear on Savannah with a corresponding mailing list. Or maybe on Sourcehut? Since the author did leave this tidbit on the Code page:

> As for Savannah being not entirely modern, well, I like the fact that it has a mailing list.


Thanks for mentioning Sourcehut; somehow it just didn't come up when I was researching mailing-list options. So far, it looks like it has a really streamlined interface and is meant to integrate with a mailing-list workflow, which I didn't quite see on Savannah.


If I'm understanding this correctly, with your Lisp IDE you could have an image sub-editor which may expose functions for image modification (eg. cropping), which could be called from the parent-editor. Is this accurate?

Also, what is it about Lisp specifically that makes it suitable for this undertaking?


> If I'm understanding this correctly, with your Lisp IDE you could have an image sub-editor which may expose functions for image modification (eg. cropping), which could be called from the parent-editor. Is this accurate?

Yep, and, presumably, you could then interact with it with your mouse, like draw something in it.

(One correction though: it doesn't have to be a Lisp IDE, but just any Runic document.)

There are a few facts at play: 1. Lenses are cells, which means they are just graphical objects responsible for their own graphical output and input handling (among many other things). 2. An image editor would be a cell as well. 3. A lens could, at runtime, dynamically, inherit from the image editor via an :is-a relationship, and, thus, become an image editor too.

Of course this would require some UI programming to get right, but that's the idea.
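That dynamic :is-a idea can be sketched with a toy prototype-delegation mechanism. This is illustrative only (a few hash tables, not Fern's actual KR-based API), but it shows how an object can gain another object's behavior at runtime simply by acquiring it as a parent:

```lisp
;; A "cell": its own slots, plus parents to delegate lookups to.
(defstruct cell
  parents
  (slots (make-hash-table)))

(defun cell-get (cell key)
  "Look up KEY in CELL, delegating to parents if absent."
  (multiple-value-bind (value found) (gethash key (cell-slots cell))
    (if found
        value
        (some (lambda (p) (cell-get p key)) (cell-parents cell)))))

(defun cell-set (cell key value)
  (setf (gethash key (cell-slots cell)) value))

;; A lens becomes an image editor at runtime: dynamic :is-a.
(defparameter *image-editor* (make-cell))
(defparameter *lens* (make-cell))

(cell-set *image-editor* :crop (lambda () :cropped))
(push *image-editor* (cell-parents *lens*))   ; inherit dynamically
(funcall (cell-get *lens* :crop))             ; the lens can now crop
```

Removing the parent again would just as dynamically take the behavior away, which is the flexibility that class-based inheritance makes awkward.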

> Also, what is it about Lisp specifically that makes it suitable for this undertaking?

Please, see: https://project-mage.org/why-common-lisp

TLDR: It's an image-based language, and interactivity is a top-priority for power use. For instance, if something goes wrong, you don't want the application to crash. Incremental development for GUIs in general is pretty crucial. So, the only other candidate could be Smalltalk, but I like Lisp better.


Very interesting! My understanding is that you're thinking about this more in terms of an 'application environment for power-users' than in terms of a 'multi-faceted IDE with lensing'.

To clarify, the specificity of Emacs is that it fully exposes its internal function sets to the world. This could be done by other applications in an organized way. For example, in the picture-editing app example, it would amount to allowing scripting over the features that the app exposes. The scripting feature would come from the environment, not from anything specific the app itself does (apart from being built in that environment). The previously mentioned IDE could then be thought of as simply the multi-tasking environment in which such generic applications are running.

Does this roughly correspond to what the project is about?


You are absolutely on point, yes. The building blocks themselves are what the users will extend upon and use. That plays a very big role in composition and reuse. There will also be configurations and contexts (which are quite simple mechanisms, really) that will factor into this, too (for the purposes of customization, so that users don't have to modify the original code to change some behavior or slot). Of course, prototype OO itself has a key agency here.

I also like to think about this in terms of "building blocks", not just an exposition of API. So, Emacs has the notion of a buffer for its building block (the only one, I believe). Cells and lenses will be building blocks.


It makes me sad to see these kinds of efforts in 2023. It just seems very likely that we will have various advancements in AI (for example, Hey GitHub) for input methods/text editing before this reaches an MVP stage. While even 5 years ago I would have chosen Emacs and split ergonomic keyboards over any other cool new solution, I feel like recent developments in AI are just too good.


How exactly do you expect advancements in AI to make this kind of project redundant?


It's not about making it fully redundant, but more about taking away a good chunk of the critical mass of users.

If you could manipulate code by just voice or if you can write entire programs by natural language descriptions, would you really need a very capable and extensible text editor most of the time?

Just for context, I'm an Emacs user for almost my entire programming career. And I enjoy crafting my Emacs for speed and efficiency.


As a user who is annoyed by some of the things you mention: will I need to learn Common Lisp to use Project Mage?


The thing about power-user software is that you can customize and package it to the point where it's indistinguishable from any other software, a polished end-user experience -- the user accepts the workflow of the given configuration and rolls with it, without ever touching the config. That's how some people use Emacs -- they just learn the bindings of the default distribution. But, for Emacs, there are also some distributions like Doom Emacs and such. They are just different user-friendly distros.

What's more, if you have good GUI capabilities (which Mage will have), you could build visual user interfaces for configuration (I don't plan to make such UIs btw, but someone with a good-enough UI sense and interest in this stuff could attempt it at some point, I guess).

If all you do is stick to some distro with some visual config tool, then you won't even have to know it's common lisp (unless some error is signalled and the interactive debugger pops up).


Someone says: "I'm making a new OS/desktop environment/text editor that works for power users."

I hear: "I'm making this for me."

There's absolutely nothing wrong with that. Many powerful tools came about because people didn't like the way the old one worked.


Could all this be done in a smalltalk environment like squeak? I was wondering because I am modestly proficient in ObjC


Absolutely, it's image-based. I have considered Smalltalk, but I like macros a bit too much. I am pretty sure CL code will have better performance too, but don't quote me on this. A point worth making: if I were to do it in Smalltalk, I would choose the standardized one, as the most stable one. Stability is pretty important for a project like this, as you would imagine.


I think you mean you like Emacs but got hit by autocorrect. I think the key is to duplicate a mid-nineties office suite that is hackable, with a programmer's IDE thrown in. Something like AppleWorks or MS Works, with a database that is universal across all the documents in the suite and structures that allow reuse.


I think he meant Common Lisp macros, a powerful feature of the language (and missing from Smalltalk) that allows one to easily write code that writes code.


Are macros performant enough to let a power user replace a scripting language like Elisp? If I remember correctly, the main issue with using Emacs over CL was a long file of unstructured strings.


Common Lisp macro expansion happens at compile time, so they wouldn't affect runtime performance.
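For instance (with-timing is a hypothetical user macro, just to illustrate that the expansion is ordinary code and no macro machinery is left at run time):

```lisp
;; A user-defined macro: expanded once, at compile/expansion time.
(defmacro with-timing (&body body)
  `(let ((start (get-internal-real-time)))
     (multiple-value-prog1 (progn ,@body)
       (format t "~&took ~a ticks~%"
               (- (get-internal-real-time) start)))))

;; The expansion is plain Lisp; by run time the macro is gone:
;;   (macroexpand-1 '(with-timing (compute)))
;;   => (LET ((START (GET-INTERNAL-REAL-TIME))) ...)
(with-timing (+ 1 2)) ; prints the timing, returns 3
```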


...but it's a fair thing to think about if the "poweruser" interface encourages creating new macros on the fly. Common Lisp can "compile" at the REPL or otherwise in the live environment.


Just a few additional comments from our last exchange, based on https://project-mage.org/elevator. I prefer this elevator pitch because it gives some insight on how to proceed, beyond the stylistic choices. (If not a rant, what follows is a mixed bag of everything I could think of on the subject, since I never really discussed these ideas with anyone, so I'm interested in your thoughts as well.)

The example of splitting "Hello world" into a list of words is a pretty bad example; structure in the sense of a list of words is something text editors already manage well (delete word, inner, etc.). It contrasts with syntactic information, which gives you structure that expands the range of what your editor can do. If you know something is a noun, you can restrict a replace to a list of nouns, and any replace would be able to ignore a verb that is written the same way (I'm not sure which IDEs that do scoped replace manage variable shadowing correctly, but this is something you should be able to do with structure).

The idea of recursive, specialized micro-editors sounds like a real good idea. There should also probably be a way to specify that something is an encoding of a structure. For example, escaped shell commands, or SQL written as strings (not everything has DSLs, and even then it would still be useful). Ironically the delta diff program would correctly highlight syntax of SQL strings because it only analyzes the text, as opposed to most IDEs. There is also some prior art in the field, which reminds me of [0].

I'm currently working on knowledge management, which I think you have to split in different subfields; there's a great deal of overlap with project and document management, yet these would be mainly distinct workflows. I think logseq [1] and org-roam [2] tick all boxes for knowledge management, strictly speaking, but other tools are needed nonetheless. (digression: If you split it into plenty of different subsystems with another query system on top of it, you might just call your knowledge management a computer and the shell that querying tool. The issue you might have is that every program has its own API, so interfacing requires a custom facade, unless you directly write/eval shell commands.)

There was another idea between code editing and knowledge management, called literate programming, which had a fairly high profile advocate (Donald Knuth). The idea would be to explain the code in the way that made sense for humans, and have the system (something in between the text and compiler, if not the compiler itself) assemble these blocks in 'computer order' for execution. He explicitly said that literate programming is not 'documented code', which is what people usually understand by it, but does not represent any paradigm shift. There's the Leo editor [3] that took some ideas from it, but I think it's also something that never really took off.

Again, good luck etc.

[0] https://tratt.net/laurie/blog/2014/an_editor_for_composed_pr...

[1] https://github.com/logseq/logseq

[2] https://github.com/org-roam/org-roam-ui

[3] https://leo-editor.github.io/leo-editor/


> eco

The eco article is quite interesting, it's a cool proof-of-concept. I don't know exactly how it compares, but there's also tylr, with an online demo you can check out [1].

> The example of splitting "Hello world" into a list of words is a pretty bad example;

I just wanted to set up some very quick, easy-to-grasp context with it for the discussion that follows. You are right, of course, the normal editors don't have much trouble with that level of detail. Maybe I will come up with something better later on, though not too complex...

> I'm currently working on knowledge management, which I think you have to split in different subfields;

My view on this is that you can't generally predict that, but what you can do instead is let the user compose the structure and features of custom documents, thus creating custom workflows suitable for the task at hand, whatever it may be. I will be generally taking that approach with Kraken.

> literate programming

I think computational notebooks take the core idea and make it practical, and I think it's fair to say those are literate programs, albeit without the web-tangle aspect.

> Again, good luck etc.

Hey, thanks for the feedback!

[1] https://tylr.fun/


missed opportunity in ANT not being called aint, imo.


Ha, maybe : )


Good luck!


Hey, thanks, I have recognized your nick, nyxt is a cool project : )



