The approach of avoiding explicit variable names is known in computer science as "point-free" or "tacit" programming, and in math as "combinatory logic". Languages like APL have even less boilerplate/ceremony for this than shown in the post. E.g. the examples shown:
[ even? ]
[ even? not ]
[ 4 > ]
in APL are:
2∘|
1-2∘|
>∘4
`∘` is partial/bind, and `|` is modulo.
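For comparison (my addition), rough point-free Haskell equivalents of the same three quotations, with made-up names:

    isEven   = even        -- [ even? ]
    isOdd    = not . even  -- [ even? not ]
    overFour = (> 4)       -- [ 4 > ], i.e. \x -> x > 4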
Many other languages are pretty good for point-free programming. For instance this series goes into the benefits of using this approach in Swift:
https://www.pointfree.co/
Point-free programming has always been a controversial subject. Some people passionately hate it (particularly people that haven't invested in learning to use it) and some passionately love it.
> Point-free programming has always been a controversial subject. Some people passionately hate it (particularly people that haven't invested in learning to use it) and some passionately love it.
And for other people, such as myself, it depends on the particular case.
I think that point-free style can be clearer and more concise in certain cases (particularly when you're chaining a sequence of operations), but you can also really overdo it. I think when you start to use (in Haskell) "flip", "uncurry", etc., the question is whether it wouldn't be more readable to just name the arguments instead. But obviously, people are going to have varying levels of tolerance for such things.
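For instance (a made-up illustration), compare:

    -- point-free, with combinators:
    diff = uncurry (flip subtract)   -- \(x, y) -> x - y
    -- versus simply naming the arguments:
    diff' (x, y) = x - y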
Sure, but some amount of stack juggling is unavoidable once you allow for multiple arguments/return values. It's a rare case when function arguments simply map "naturally" to the top values on the stack.
Point-free style can be useful as an intermediate step when refactoring, since making code point-free (a) is largely mechanical (although judgement can still be useful to avoid horrible results!), and (b) removes many of the abstractions and variable names in the code.
The latter are often historical baggage, telling us what the code used to do, or was meant to do; whilst a point-free version shows us more directly what it actually does.
If the point-free version makes sense as-is, we can leave it; but that's rarely the case. Usually, we'll introduce a few abstractions and variables; but this time they'll be more appropriate for the current codebase.
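A minimal sketch of that mechanical step (all names here are hypothetical stand-ins for legacy code):

    -- hypothetical helpers standing in for legacy code:
    render :: Int -> String
    render = show

    writeLog :: String -> String -> String
    writeLog prefix msg = prefix ++ ": " ++ msg

    -- before: the variable name is historical baggage
    notifyMainframe prefix user = writeLog prefix (render user)

    -- after the mechanical point-free step, the structure
    -- "log the rendered value" is directly visible:
    notifyMainframe' prefix = writeLog prefix . render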
Assembly obfuscates the implementation with memory allocation, garbage collection, etc., which is often worse than even 'the wrong' (i.e. legacy) abstractions in a high-level language.
In my (admittedly newbie) experience, it's particularly useful when application of your function is going to be in a context where η-reduction is natural. For example, your fmap is going to look cleaner without a wrapping lambda inside it.
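E.g. (a trivial sketch):

    -- a wrapping lambda inside fmap:
    fmap (\x -> negate x) (Just 3)
    -- eta-reduced:
    fmap negate (Just 3)   -- Just (-3)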
I guess your examples still need some framing to make them passable around in some form, for example a definition or some kind of closure. Let's say they require no more than square brackets around them.
Now your examples will look like this:
[ 2 partial mod ]
[ 1 minus 2 partial mod ]
[ > partial 4 ]
And, magically, they are not more terse than what was expressed in Joy. Not at all. Quite the opposite.
A one-character operator is still an operator. Even if it does not require spaces around it most of the time, for the sake of comparison between languages it should have them.
What I'm reading in the parent post is an assertion that any terseness gained by simply replacing a word with a weird symbol (one that's going to be spoken or thought of as a word anyway) is not true terseness but just an illusion of it.
Some languages express the same thing with more or less concepts, and more or less explicit complexity, and that is a true difference in verbosity, but symbol-vs-word is just a superficial difference that can and should be ignored for a useful comparison.
APL/K and concatenative languages are not covered there, but Haskell is. What is fascinating is that Haskell wins by a large margin because it uses an implicit "apply" operation in place of the space between an expression's parts. Something similar may be the case for APL and Joy/Forth as well.
It appears to me that APL/K construct operations using an implicit stack of operations. In the case of Haskell, there would be an (infix) operation to perform something like that, much like the "implicit apply" discussed in the paper above. In the case of Joy, there is a stack of stacks (or a list of lists).
The link you provided doesn't seem to work for me. I don't think it's a particularly useful metric anyway, as the actual code matters a lot, not just the semantics.
Also, those examples were very simple ones, and APL has a lot of handy composition features[1] that make function composition better than in any language I've tried, including Haskell. For example (+/÷≢) in APL is (liftM2 (/) (foldl1 (+)) (fromIntegral . length)) in Haskell.
Why did you use (foldl1 (+)) instead of sum? Also, (fromIntegral . length) is (genericLength) - and that means you used implicit type conversion which may add terseness and also lead to errors. After all that, your example becomes (liftM2 (/) sum genericLength). Not much longer than APL's in terms of actual symbols used.
APL fails differently, I think. It may very well fail you if you try to express parser combinators, especially when context sensitivity and/or non-determinism are important.
I used foldl1 (+) because that's the literal translation of +/. I used (fromIntegral.length) instead of genericLength because genericLength requires an import (as does liftM2 actually, so I should've used that).
'Not much longer than APL's in terms of actual symbols used.' - well, there's liftM2, and for longer examples, chaining it is even worse - (f g h i j) in APL becomes liftM2 g f $ liftM2 i h j -> quickly becoming unwieldy and unclear.
A lot of people also seem to say 'oh well if each character was a word it would be the same', and while it seems like that might be true, in practice it isn't, and the use of symbols allows for pattern recognition that you just don't get with words.
You used the monadic interface; you could use the applicative interface instead.
liftM2 then becomes liftA2, and what you exemplify above (if I understand it correctly) renders as (g <$> f <*> (i <$> h <*> j)). It is unusual, because "g" goes before "f", but not that unusual. Note the explicit pure-function application in the use of <$> and the explicit apply <*>. I would like you and other readers to compare that to the implicit apply in the "Are ours smaller than theirs?" paper I linked above.
That translation shows what APL hides in its implicits.
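For readers who want to run it, here is the average fork from upthread as a complete snippet (a sketch using the function applicative):

    import Control.Applicative (liftA2)
    import Data.List (genericLength)

    -- APL's (+/÷≢) as a "fork": apply sum and genericLength
    -- to the same argument, then combine the results with (/)
    average :: [Double] -> Double
    average = liftA2 (/) sum genericLength
    -- equivalently: average = (/) <$> sum <*> genericLength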
Let me repeat: APL will certainly help you with computations over arrays but will most probably fail you big time with parsing something like Ada. Haskell won't fail you in either case, at the expense of slightly more verbose program text.
Tacit programming exposes the need to "normalize" programs: to express them with as little rearrangement of function arguments as possible. In stack-based languages this is guided by the dependence on the stack(s) (data, return, control, whatever you invented). In point-free programming in Haskell, this is guided by argument capture and propagation. In K (the one I mostly studied), as I understand it, tacit programming is guided by the implicit operations upon the stack of actual operations.
These variants of tacit programming are mostly the same, just some cases are more easily expressible or some operations conveniently assumed to be implicit.
I did exactly the same - I changed the lexical structure of the language, maintaining most of the syntax.
Programmers, just like most other people, read words as whole entities, quite like hieroglyphics [2]. If we talk about "people are not computers" we should recognize that and allow for a longer (yet atomic) lexical structure in the same language.
All they did was do C preprocessor style text replacement to illustrate that, semantically and syntactically, they are very similar, the only difference being that APL chose to use super terse operator symbols instead of names. Assuming that the language allows Unicode identifiers, you could just as easily define even? or whatever to have a symbolic name and the factor/joy/forth code would look almost the same as APL.
Now I’m not saying that APL isn’t interesting or that its choices aren’t useful, just that the comparison here shows the difference is in many ways a surface-level, visual one.
Of course syntax does matter but we could argue all day about whether replacing everything with symbols as APL does is more or less readable, for non-trivial code, than what factor/joy/forth do (once familiar with either style).
What would be useful is the definition of abstractions, breaking long chains of combinators and functions into manageable pieces, instead of playing one-liner code golf. Meaningful function names rather than superfluous parameter names.
For example (adapted from Wikipedia and untested):
arrayaverage =: +/ % #
arrayaverage 1 2 3
i.e. defining the average as the ratio (%) between the sum (+/) and the cardinality (#) of an array, with the advantage of being able to replace this function with a better version that doesn't fail miserably for an empty array.
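For instance, a version in Haskell that is total on empty input (a sketch, names mine):

    -- total version: the empty array has no average
    safeAverage :: [Double] -> Maybe Double
    safeAverage [] = Nothing
    safeAverage xs = Just (sum xs / fromIntegral (length xs))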
For comparison, [0] is my page on tacit programming in BQN, which is laid out a lot like the OP. It even begins with a filtering example, taking positive entries of a list l with 0⊸<⊸/ l. For odd entries it would be 2⊸|⊸/ and for evens (¬2⊸|)⊸/. APL does have a negation function ~ (not sure why jph hasn't used it), but it has trouble with filtering because / is overloaded to be both a function and an operator[1]. BQN fixes some problems like these in APL, and I'd consider it to be an improvement on both J and APL for tacit programming.
I’ve written a gentle intro to APL for experienced programmers if you’re interested in making it a bit less incomprehensible: https://xpqz.github.io/learnapl
For people without a functional language background, it makes sense to think of point-free style with an example they understand: Unix pipes.
The controlled producer-consumer flow in pipes is a subset of the declarative combinatorics of values in a point-free function composition. Approaching it as an ongoing process that generates values and consumes them at the same pace can make it easier to understand.
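For instance, a shell pipeline like sort | uniq corresponds directly to a point-free composition (a Haskell sketch, read right to left):

    import Data.List (group, sort)

    -- roughly `sort | uniq`:
    sortUniq :: Ord a => [a] -> [a]
    sortUniq = map head . group . sort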
The given Haskell example doesn't support their claim that
> This is shorter than the equivalent in an applicative language, because we have to name the input argument, that is only used once.
It's also wrong in that the predicate (4 >) is equivalent to (\x -> 4 > x) and thus tests for numbers being less than 4.
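A quick check makes the mix-up visible:

    filter (4 >) [1..10]   -- [1,2,3]          (less than 4)
    filter (> 4) [1..10]   -- [5,6,7,8,9,10]   (greater than 4)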
It's strange to argue that concatenation is structurally simpler than application, when you then need to add a quoting operator to add back the needed structure, while introducing a subtle distinction between evaluated and unevaluated code.
But there is a strong benefit to adding quotation: higher-order programming becomes explicit, and you can restrict it to prevent the need for garbage collectors.
"Linear-style" programming becomes obvious to implement in a point-free style, but obtuse when you have variables, adding arbitrary usage restrictions.
> The given Haskell example doesn't support their claim that
>> This is shorter than the equivalent in an applicative language, because we have to name the input argument, that is only used once.
Are you saying that in your eyes the Common Lisp and Haskell snippets are equally terse? To me the Haskell snippet looks a lot shorter and easier to read. Counting tokens also supports that judgement.
The article seemed to be claiming that the Factor example was shorter than both the CL and Haskell ones. While the CL one is clearly the longest, the Haskell example seems at least as terse as the Factor example (`( 4 > )` in Haskell, vs `[ 4 > ]` in Factor - not sure if the parens in Haskell and the square brackets in Factor have the same amount of semantic weight though).
Personally, I attribute more "semantic weight" to the brackets in Factor. They do something much heavier, creating a quotation (a lambda), whereas Haskell's parentheses only group operations.
(But I'm not sure if I prefer terseness to that level. Explicitness has value too.)
The fact that you originally swapped /f/ and /g/ in your comment and then edited it to fix the mistake is a perfect example of why this is a dangerous and bug-prone approach to writing programs.
While I understand curry and compose, I just don't consider them simple or intuitive in usage, because normally I get a cleaner result by not using them. I don't understand your recipe explanation, unfortunately.
Just to be clear, I don't use it in a pure form you'd see in many Haskell, etc. environments. I still use them when appropriate, with context-specific names and full arguments, because .then and "x => x<4" are more readable even if less terse.
> whip /f/ the butter /x/, then whip the result while adding sugar /y/
That's not a valid explanation, since curried functions don't "act" until they've got all their arguments; i.e. we do not whip any butter until the sugar has been added.
> shape the cookies /x/ on a tray first /g/, then bake them /f/
Whilst this is accurate, I think introducing a notion of time ("then") makes things much harder than they need to be. (That's one reason I find imperative programs hard to understand.)
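Composition can instead be read as pure substitution, with no notion of time (a tiny sketch):

    -- (f . g) x is *defined* as f (g x); no sequencing is involved
    example :: Int -> Int
    example = (* 2) . (+ 1)   -- example 3 == (3 + 1) * 2 == 8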
This approach also extends to embedding languages with different disciplines in a faithful way. With Haskell's rich type system you can embed a concatenative, effectful DSL using regular functions[0].
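A minimal sketch of that idea (my own toy encoding, not the one from the linked post): represent the stack as a nested pair, make each word a plain function from stack to stack, and compose words left to right:

    import Control.Arrow ((>>>))

    -- each "word" is a function from stack to stack
    push :: a -> s -> (s, a)
    push x s = (s, x)

    dup :: (s, a) -> ((s, a), a)
    dup (s, x) = ((s, x), x)

    add :: Num a => ((s, a), a) -> (s, a)
    add ((s, x), y) = (s, x + y)

    -- the concatenative program "2 3 + dup +" leaves 10 on top:
    prog :: s -> (s, Int)
    prog = push 2 >>> push 3 >>> add >>> dup >>> add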
Retaining the source code inside a compiled function is a waste of space.
Lisps support traditional ahead-of-time compiling. Propagating source code into the compiled binary is not only a waste of storage, but also something that some people don't want, namely those who regard compiling not only as an optimization but as a way of protecting IP and getting paid.
Clojure has built-in macros and functions that allow you to do similar composition - without having to name arguments.
One of them is to use the threading macros (-> value fun1 fun2), and another is to use the comp function. Both will require you to name arguments for anonymous functions, but this is likely not an issue, as you have even better ways to get a lambda, like using partial, comp itself, or other higher-order functions.
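For comparison (my example, not from the parent), Haskell gets a similar left-to-right pipeline from (&) in Data.Function:

    import Data.Function ((&))

    -- roughly Clojure's (-> 5 inc (* 2)):
    example :: Int
    example = 5 & (+ 1) & (* 2)   -- 12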