Why Concatenative Programming Matters (2012) (evincarofautumn.blogspot.com)
93 points by rrampage on Dec 1, 2020 | hide | past | favorite | 18 comments



Sick article. I'd say I'm a reluctant advocate of this style, and the author touched upon the two points that I always present to my colleagues:

1. The focus on verbs can make things clearer.

2. Under ideal circumstances, it's no more complex than Unix pipes.

But I'm still not 100% convinced it makes for more readable code. Sometimes I still need variable names for documentation purposes. This isn't so bad in Haskell, since the type signature can make the intention known, but something like J...
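For what it's worth, the pipe analogy can be made concrete even in Python; the helper names below are invented, just to contrast point-free chaining with naming intermediates:

```python
# Point-free "pipeline" in Python: each hypothetical helper takes and
# returns a list, so they chain left-to-right like a Unix pipe.
from functools import reduce

def compose(*fns):
    """Left-to-right composition: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

strip_blanks = lambda lines: [l for l in lines if l.strip()]
lowercase    = lambda lines: [l.lower() for l in lines]
dedupe       = lambda lines: sorted(set(lines))

clean = compose(strip_blanks, lowercase, dedupe)
print(clean(["Foo", "", "bar", "FOO"]))  # ['bar', 'foo']
```

Note that `clean` itself never names its data, which is exactly the readability trade-off being discussed: the verbs are clear, but there is no variable name to document what flows through.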


I've been trying to write a bunch of Forth, and I've gotta say stack comments (essentially type signatures, though typically not checked) are invaluable; that, and making definitions small enough that you can verify that they're obviously correct within a few seconds. (Typically, most definitions are one-liners.)


Having created a small concatenative language myself[1], no, the pure concatenative style with no variables and stack combinators is just too much for a normally-wired brain.

BUT! I tried to cheat a little bit to make things more readable. Take this example from the min front page, which showcases its concatenative style:

     . ls-r 
     (mtime now 3600 - >) 
     filter
Can you guess what it does? You might, but what about this:

     . ls-r :files
     ((mtime > (now - 3600)) ><) =changed-in-last-hour
     @files #changed-in-last-hour filter
Now OK, I went insane with sigils and weird operators, but:

- creating variables helps

- using infix notation via the infix-dequote (><) operator makes expressions much more readable

Why would I use this rather than a traditional language? It helps reasoning in terms of point-free function composition, and in some cases it is faster to work with, e.g. processing rules, pipelines, task sequences, etc.

[1] https://min-lang.org
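For comparison, here is a rough Python analogue of the min snippet above (the function name is mine, just to show what the concatenative version is doing):

```python
# Recursively list files under `root` modified within the last hour --
# a conventional-language sketch of the min example above.
import os
import time

def changed_in_last_hour(root):
    cutoff = time.time() - 3600
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                hits.append(path)
    return hits
```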


> Sick article

A little red-faced here, but I don't know if you're saying it's good or bad (I suspect the former, but I'm not sure). It matters, since whether I read the article depends on it, so... help me out? :)


Sick = good


Shows your (lack of) age.


I really made an effort to get into https://factorcode.org/ about a decade ago. It's an incredibly impressive project, but I found that the cognitive overhead of manipulating the stack stubbornly failed to go away over time. I say this as someone who is reasonably comfortable writing moderately complex Haskell code, so it's not that I'm unable to adjust to unusual programming paradigms. I think naming local variables just turns out to be an incredibly good idea.

That said, I do appreciate the conceptual simplicity and ease of implementation of concatenative languages. They certainly have their place.

In case there are any Factor evangelists out there, let me add in fairness that the language makes it easy to use named variables if you want to.


Someone a few days ago shared a stack-based VM he’s building in C++. Quite interesting. He also has a C version on his GitHub.

https://news.ycombinator.com/item?id=25243084
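For a sense of how little machinery such a VM needs, here's a minimal Python sketch (hypothetical instruction set; not the linked C++/C project's):

```python
# Minimal stack-machine sketch: integer literals push themselves,
# named instructions pop operands and push results.
def run(program, stack=None):
    s = stack if stack is not None else []
    for instr in program:
        if isinstance(instr, int):
            s.append(instr)                 # literal
        elif instr == "add":
            s.append(s.pop() + s.pop())
        elif instr == "mul":
            s.append(s.pop() * s.pop())
        elif instr == "dup":
            s.append(s[-1])
        elif instr == "swap":
            s[-1], s[-2] = s[-2], s[-1]
        else:
            raise ValueError(f"unknown instruction: {instr}")
    return s

print(run([2, 3, "add", "dup", "mul"]))  # [25]
```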


I have to say the fact that it makes function evaluation associative is quite appealing.
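Concretely: because each word denotes a function from stacks to stacks, a program is just a composition, so you can split the word sequence anywhere without changing its meaning. A small Python sketch with hypothetical words:

```python
# Each word is a function from stacks to stacks; a program is their
# composition, so any split of the word sequence evaluates the same.
dup = lambda s: s + [s[-1]]
mul = lambda s: s[:-2] + [s[-2] * s[-1]]
inc = lambda s: s[:-1] + [s[-1] + 1]

def run_words(words, stack):
    for w in words:
        stack = w(stack)
    return stack

program = [dup, mul, inc]                  # x -> x*x + 1
whole = run_words(program, [4])
split = run_words([inc], run_words([dup, mul], [4]))
print(whole, split)  # [17] [17]
```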


I love the fact that this makes tuples a language level thing rather than a data type level thing.

Every other language makes tuples a data type. Some languages special case the data type a lot for convenience. This language makes the concept of 'multiple values' something neatly handled by the language itself.


Example from the article: f = drop dup dup × swap abs rot3 dup × swap − +


    ?- E = "pop dup dup * swap abs rollup dup * swap - +", joy(E, Si, So).
    E = "pop dup dup * swap abs ro...p - +",
    Si = [_56410, int(_56422), int(_56432)|_56428],
    So = [int(_56454)|_56428],
    _56422 in 0..sup,
    _56500+_56422#=_56496,
    _56422+1#=_56520,
    _56422^2#=_56544,
    _56500+_56544#=_56454,
    _56544 in 0..sup,
    _56496 in 0..sup,
    _56432^2#=_56496,
    _56520 in 1..sup ;
    E = "pop dup dup * swap abs ro...p - +",
    Si = [_57392, int(_57404), int(_57414)|_57410],
    So = [int(_57436)|_57410],
    _57404 in inf.. -1,
    _57482+_57404#=0,
    _57404+1#=_57502,
    _57404^2#=_57526,
    _57482 in 1..sup,
    _57578+_57482#=_57574,
    _57578+_57526#=_57436,
    _57526 in 1..sup,
    _57574 in 0..sup,
    _57414^2#=_57574,
    _57502 in inf..0 ;
    false.

It's a Prolog query with two solutions (because the input to abs could be positive or negative) showing the input and output stack effects (the type signature) and the CLP(FD) constraints between the input and output integers.

It's a little hard to read, I know, but the code that performs the type inference and constraint generation/recording is very brief and elegant.

( It's a work-in-progress: https://git.sr.ht/~sforman/Thun/tree/master/source/thun.pl )

- - - -

FWIW I messed about with a Python implementation of Joy (another concatenative language) and I have some notebooks that might be interesting: https://joypy.osdn.io/notebooks/index.html
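The stack effect can also be checked operationally; here is a direct Python transcription of the article's word sequence (assuming rot3 rotates the third element to the top of the stack):

```python
# Direct transcription of: drop dup dup * swap abs rot3 dup * swap - +
# (rot3 assumed to rotate the third element to the top).
def f(x, y, z):
    s = [x, y, z]                       # top of stack is the right end
    s.pop()                             # drop -> [x, y]
    s.append(s[-1])                     # dup  -> [x, y, y]
    s.append(s[-1])                     # dup  -> [x, y, y, y]
    s.append(s.pop() * s.pop())         # *    -> [x, y, y*y]
    s[-1], s[-2] = s[-2], s[-1]         # swap -> [x, y*y, y]
    s.append(abs(s.pop()))              # abs  -> [x, y*y, |y|]
    s[-3:] = [s[-2], s[-1], s[-3]]      # rot3 -> [y*y, |y|, x]
    s.append(s[-1])                     # dup
    s.append(s.pop() * s.pop())         # *    -> [y*y, |y|, x*x]
    s[-1], s[-2] = s[-2], s[-1]         # swap -> [y*y, x*x, |y|]
    b, a = s.pop(), s.pop()
    s.append(a - b)                     # -    -> [y*y, x*x - |y|]
    s.append(s.pop() + s.pop())         # +    -> [y*y + x*x - |y|]
    return s[0]

print(f(3, -4, 99))  # 21, i.e. x*x + y*y - |y|
```

This agrees with the Prolog output above: the dropped argument is unconstrained, and the result is the sum of one input squared and the other input squared minus its absolute value.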


The article literally continues,

  Well…that sucked.

  A Lighter Note

  You’ve just seen one of the major problems with concatenative programming—hey, every kind of language has its strengths and weaknesses, but most language designers will lie to you about the latter.


I see no problem with more paradigms being pushed into the mainstream. Not so long ago, the functional paradigm had its comeback and improved code quality in, e.g., the JavaScript world. The key is not to be a purist, but to mix paradigms in the code to get the best possible solution.


I don't know if you're just mentioning an example that's in the article or making an implicit comment on it, but, like with any other programming paradigm, it's going to be hard to read until you get used to it, and becomes easier once you wrap your mind around it. The question isn't whether it looks funny at first, but whether the effort to make it look familiar is worth it.


The author could've made it somewhat easier, though. Keeping the unused variable (so still keeping the drop), it can be reduced by one operation, and the logic of what's on the stack, and managing it, is simplified:

  drop dup dup * swap abs - swap dup * +

  operation : stack
  (init)    : x y z
  drop      : x y
  dup       : x y y
  dup       : x y y y
  *         : x y (y^2)
  swap      : x (y^2) y
  abs       : x (y^2) |y|
  -         : x (y^2 - |y|)
  swap      : (y^2 - |y|) x
  dup       : " x x
  *         : " (x^2)
  +         : (y^2 - |y| + x^2)
The author's version requires rot_3 in order to deal with performing the computation in an awkward order. This version deals with each variable in order and in a more natural way. And replacing `dup * ` with `square` simplifies it a bit more (which is what you'd do in a language like Forth, you factor common operations into new words):

  : square dup * ;

  drop dup square swap abs - swap square +

  operation : stack
  (init)    : x y z
  drop      : x y
  dup       : x y y
  square    : x y (y^2)
  swap      : x (y^2) y
  abs       : x (y^2) |y|
  -         : x (y^2 - |y|)
  swap      : (y^2 - |y|) x
  square    : " (x^2)
  +         : (y^2 - |y| + x^2)
Down to 9 ops, and reasonably clear at this point.
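A quick check (a Python sketch, not Forth) that the refactored sequence really computes y^2 - |y| + x^2, the function derived in the traces above:

```python
# Tiny word-at-a-time evaluator for the refactored sequence.
def run(words, stack):
    s = list(stack)
    for w in words:
        if w == "drop":
            s.pop()
        elif w == "dup":
            s.append(s[-1])
        elif w == "swap":
            s[-1], s[-2] = s[-2], s[-1]
        elif w == "abs":
            s.append(abs(s.pop()))
        elif w == "square":
            s.append(s.pop() ** 2)
        else:                              # binary arithmetic: * - +
            b, a = s.pop(), s.pop()
            s.append(a * b if w == "*" else a - b if w == "-" else a + b)
    return s

refactored = "drop dup square swap abs - swap square +".split()
for x, y, z in [(3, -4, 99), (0, 5, 1), (-2, -3, 7)]:
    assert run(refactored, [x, y, z]) == [y * y - abs(y) + x * x]
print("ok")
```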


Reading the article I got quite enthusiastic about it in the beginning and started to think I should try this out. But when I saw drop dup dup × swap abs rot3 dup × swap − + I thought "maybe later".

It is great that the author brings the issue up, even saying "every kind of language has its strengths and weaknesses, but most language designers will lie to you about the latter".

I do believe that point-free-form holds much promise. Maybe there could be more support for it in more standard languages and programming support tools.

I think the problem with "drop dup dup × swap abs rot3 dup × swap − +" is that it gets too abstract. Even though it is very elegant and succinct, it takes effort to understand it. And if something is hard to understand it is also hard to find the errors it may have.


> I think the problem with "drop dup dup × swap abs rot3 dup × swap − +" is that it gets too abstract. Even though it is very elegant and succinct, it takes effort to understand it. And if something is hard to understand it is also hard to find the errors it may have.

But this is my point—the effort it takes to understand it is, in some sense, a one-time effort. If you invest the up-front effort to learn how to read concatenative programs, then you gain a fluency that makes it easier to read other programs. (Our more familiar programming paradigms are just as opaque to a beginning programmer; recursion, which is so fundamentally intelligible to us, is a major obstacle to people first learning to program, but I think there are few who would say it's not worth it.)

One of the big goals of concatenative programming is refactorability, which Jtsummers beautifully showed at work in your sibling post: https://news.ycombinator.com/item?id=25261911 . If you can understand enough to refactor, then you can simplify it to the point where errors must be apparent.



