Forks and hooks interest me. I always thought they were a cute little feature of APL and J, but I've actually found real-world uses for them writing Haskell at work. The little APL knowledge I've picked up has made me much more comfortable with tacit programming in Haskell, probably to the annoyance of my colleagues.
I think I’m officially never going to be smart enough to get APL. Perhaps that day will come, but until then I find it hard to decide between these two viewpoints:
(1) APL is another form of Perl golf, showing off basically. The fact that so few people would be able to read it makes code essentially write-only and any enthusiasm for it as a serious language for communicating ideas between people is quackery; or
(2) APL truly is readable, and if only I had received the right upbringing from an early enough age then I would have a mind capable of becoming APL enlightened; but it’s ok that I don’t, because at least the next generation will be able to revel in the language’s brevity. Just because I can’t look at (+\%^,)•[x,}]• and instantly recognize it as a JPEG parser doesn’t mean APL is a language without virtue.
Perhaps a better analogy would be with character languages (eg CJK) where it’s quite apparent that billions of people function (no pun intended) productively using them, and yet these writing systems will almost certainly remain forever inscrutable to me.
(A counter-point to that: I believe reading and writing problems stemming from illegible handwriting are a much more significant societal issue in China than they are in the West.)
As I like to put it when I have to come up with a “fun fact” for corporate icebreaker activities, “I have programmed in APL for money”. It’s not really that hard.
The notation is certainly a stumbling block if you’re trying to read these articles out of context. How do you even “read” something like “5 4⍴⍳20” in your head when it’s that unfamiliar? But that’s the same as anything else: what does “for (i = 0; i < 10; i++)” mean to someone who’s never seen it before? Once you can see it as “5 4 rho iota 20”, at least you can pronounce it. And when you’re actively using the language, you know what ⍳20 means as easily as you know what “1 through 20” means. “Reshape 5 4” is just as familiar. So it takes only a glance at 5 4⍴⍳20 to visualize a 5x4 matrix of the numbers from 1 to 20.
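If numpy is more familiar, the same idea reads roughly like this (my own gloss, not a literal translation):

    import numpy as np

    # roughly what 5 4⍴⍳20 denotes: the numbers 1 through 20,
    # reshaped into 5 rows of 4
    np.arange(1, 21).reshape(5, 4)
    # array([[ 1,  2,  3,  4],
    #        [ 5,  6,  7,  8],
    #        [ 9, 10, 11, 12],
    #        [13, 14, 15, 16],
    #        [17, 18, 19, 20]])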
Once you start thinking that way, the majority of what might seem like show-off tricks or being clever are just the paradigm of the language. The idea of creating a vector of things, applying a true/false test to every element to create another vector of 1s and 0s, and summing those or using them to pick elements out of another vector is a super powerful tool that lets you express a lot of ideas in a compact form that describes the algorithm without getting bogged down in writing explicit loops. What’s the average of these numbers? The sum divided by the count. In APL you just write that down. In many other languages, you write “I wish to declare a variable ‘i’ of type 32-bit-integer. Set ‘i’ to 0. Compare ‘i’ to ...”. What does any of this have to do with an average?
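For anyone who'd rather see that paradigm spelled out in numpy, it fits in a few lines (my own sketch, nothing APL-specific about it):

    import numpy as np

    v = np.array([3, 1, 4, 1, 5, 9, 2, 6])

    mask = v > 2        # true/false test applied to every element
    mask.sum()          # count how many passed: 5
    v[mask]             # pick those elements out of v: [3 4 5 9 6]
    v.sum() / v.size    # the average, literally "sum divided by count": 3.875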
Now of course in an article where the author is actually showing off how clever you can get, and the topic of many items is some obscure mathematical concept, it’s going to be harder to read than the average program.
> The idea of creating a vector of things, applying a true/false test to every element to create another vector of 1s and 0s, and summing those or using them to pick elements out of another vector is a super powerful tool that lets you express a lot of ideas in a compact form that describes the algorithm without getting bogged down in writing explicit loops.
Exactly. APL's syntax and terminology are a bit odd. But what you're describing there should be a concept known to anyone familiar with a functional language. It seems you've just described what I would call mapping a function over an array. Indeed, a very powerful construct, often lacking from more procedural languages. Or, sadly, often eschewed by programmers even in the languages that do have it.
APL makes me think of Forth. Because of its idiosyncratic terminology (e.g. functions are "words") and unusual syntax (strict RPN), it is considered more exotic than it really is.
> But what you're describing there should be a concept known to anyone familiar with a functional language. It seems you've just described what I would call mapping a function over an array.
The other powerful notion is that arrays become mapping functions over their index.
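In numpy terms (my reading of that notion): indexing one array with another is exactly mapping the function i -> a[i] over the index array:

    import numpy as np

    a = np.array([10, 20, 30, 40])    # view a as the function i -> a[i]
    idx = np.array([3, 0, 0, 2])
    a[idx]                            # a mapped over idx: [40 10 10 30]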
You're smart enough if you can already program. In a lot of ways APL is easier. It's just frustrating for you as you probably can't remember how long it took to learn your first language.
Every subsequent language you've learned has been built off of that (e.g. once you know Perl, Python isn't that hard to pick up in a couple of days if you have StackOverflow to help you translate). You still think in lists, hashes, for loops, while loops, etc. You just need to learn syntax and idioms. With APL, you have to start from scratch and realize knowing Ruby is of limited help.
Here's the cool part: the symbols, what they do, and even their location on the keyboard are easy to learn. It is especially important to actually write code as you're going through the Dyalog book. Similar to Forth, all the individual pieces of APL fit together seamlessly. With Python you have Lego blocks (for loops, dictionaries, lists, etc.) that are the fundamental building blocks you can use to assemble any computation. With APL, you need to learn all those symbols, and then you have many methods to combine them.
My understanding is that APL was initially designed to compete with math notation. Math formulas often require a two-dimensional language (consider matrices, roots, sums, products). Iverson tried to completely redo math notation in a more consistent and simpler way.
Knuth designed TeX, which contains an ASCII language to render two-dimensional math. Is TeX easier to read than APL?
"I was appalled to find that the mathematical notation on which I had been raised failed to fill the needs of the courses I was assigned, and I began work on extensions to notation that might serve. In particular, I adopted the matrix algebra used in my thesis work, the systematic use of matrices and higher-dimensional arrays (almost) learned in a course in Tensor Analysis rashly taken in my third year at Queen’s, and (eventually) the notion of Operators in the sense introduced by Heaviside in his treatment of Maxwell’s equations."
"I did not join IBM until 1960, at which time I was just finishing up my A Programming Language, published in 1962. It included a chapter (called Microprogramming) on the formal description of the IBM 7090 machine."
Well, considering I met Iverson, attended many of his lectures during conferences in the early ’80s, actually used APL professionally for about ten years —including using it for a system involved in the race to decode the human genome— and even published a paper and gave a lecture at an APL conference...
Still, I appreciate the misinformed down-votes. Always good fun.
The number of people on HN who talk about APL as though they actually understood what they are talking about seems to increase with time. It’s like watching people have opinionated discussions about SQL when they’ve only read about it, googled a few articles and tried it for half an hour.
You can pick nits off of that nit, too. Korean can use a mix of phonetic writing and Chinese characters, just like Japanese does. Its usage is dwindling rapidly, but people writing software arguably still have to be prepared to handle this kind of text.
I recall opening some book in Korean pertaining to linguistics: almost every word had its hanja version in parentheses. This was quite ridiculous: the mixed-script version would have been almost half as long, and the content was not understandable without the hanja anyway.
Every language (without notable secondary dialects) with a recently defined orthographic system is "simpler" because its orthography seamlessly matches its phonetic structure. Give it another few hundred years of phonetic drift -- especially combined with dialect profusion -- and you'll find that this "beautifully simple" orthographic system is every bit as warty as English's is.
Just remember, there is an old enough English (dialect) where "knight" is pronounced "keh - nih - ch - t": where the orthography perfectly matches the pronunciation. It's just not any of the spoken dialects of modern English.
You are smart enough. These articles (both this one and the longer "A history of APL in 50 functions") get posted here from time to time. They are showing off and include some examples of very clever code (which made perfect sense in the context in which they were written), but they are not introductory material.
In my opinion, APL lacks a good tutorial. Iverson's book and his Turing Award lecture teach you the principles, but they are outdated with respect to current implementations. Dyalog has a very nice online tutorial, but it does not teach you how modern APL is written (using dfns and tacit expressions), and it is a bit too long. They have lots of other information scattered around, exhaustive reference material, and great tools like tryapl.org, but there is not (IMO) a good introduction for a total newbie. Learning derived languages like J and K also helps (and both are great languages in their own right) but can be difficult too. Moreover, there are subtle differences between all of them that are not easy to digest at first.
So, you have to be willing to make a considerable effort to get started. You do not need to be very smart, but may need some hand holding to not get lost. The good thing is that there is a quite active community that, although not very big, is helpful and engaging. Once you pass the bump at the beginning of the learning curve, articles like the one posted here will make your day. I think it is worth it, but YMMV.
Coincidentally, the author of this article writes code and does math mentally in Cantonese (because you can store more in your head at one time in Cantonese than you can English), so your CJK comment might be closer than you'd think.
However, APL is pretty simple, though it requires a different mindset than ALGOL derivatives. There are a lot of resources for J that are made for beginning programmers. Try glancing at these:
If you really want to get a pure conception of APL, though, try the book it's named after: A Programming Language. Or Notation as a Tool of Thought (here's a semi-well-digitized copy: https://www.jsoftware.com/papers/tot.htm).
APL is generally readable; however, a common mindset is that if you come back to it in a day, a month, a year, or a decade, it's faster to rewrite it than to try to figure out what you were thinking: if you can't understand what it's doing, you must have done it wrong the first time.
For me it seems that the big disconnect is between people who read code vs. people who pattern match code.
I never read a piece of code start to finish to understand it - I glance at it and expect to see patterns, and then explore based on pattern recognition. I only read details piecemeal.
That doesn't work if there are no clear visual patterns, and as a result I'm extremely picky about the syntax and layout of code: not so that the code is readable in the usual sense but, on the contrary, so that I can glance at a piece of code and have intuitions about it from the shapes laid out spatially.
I'm not sure if there are reasonable ways of reconciling those two ways of looking at code.
In my experience working with other programmers, it is very common for people to only learn to visually pattern match on code formatting and stop there. This is the code reading equivalent of functional illiteracy. Using = instead of ==, missing break statements, all kinds of semantically significant mistakes become completely invisible when you look at the shape of the code instead of reading to see what it does. In my experience, people who can only read code by visual layout do very poor code reviews.
Another annoying thing, as you point out, is that this makes people get hung up on code formatting style to an unreasonably extreme degree. People who only learn to look at the layout literally cannot "read" an unfamiliar layout, which tends to frustrate them, and in a lot of cases seems to hurt their ego, because their conception of themselves as competent programmers is challenged.
The way to "reconcile" that is to learn to read code for its meaning.
What you find 'annoying' is, for me, about speeding up processing. I can 'pattern match' code far faster than I can read the text, so the alternative, where I read every token, dramatically slows down my mental processing of the code.
To me, reading every token is the functional illiteracy, the way only new learners spell their way through each word of a natural language.
I've never seen people make it to the point where they pattern match without being capable of reading the language properly.
I slow down and read everything when I need the details, the way I slow down when I read an unfamiliar word, or when I need to ensure I get every detail of a complex paragraph.
But most of the time reading at character or even word level is a waste of time. Unless you're dealing with a language like J or K etc. that doesn't lend itself to visual matching.
To me that is a fundamental problem with those languages.
But I get that people who program them tend to read token by token instead, and if that is what you want to do it needs to be compact.
> I can 'pattern match' code far faster than I can read the text
No you can't. You are getting the impression that you understand what the code does from its shape. This works only when the code does what you already assume it does before you start reading the code.
I very explicitly used pattern matching as distinct from reading because the entire point is that when you pattern match you don't need to understand fully what the code does.
It's a faster way to narrow down a location in a code base, not a way of removing the need for reading entirely.
So you are expressing violent agreement, even though you opened with "No you can't".
Yes, it "only" works when the code matches my assumptions.
But that is most of the time.
On the few occasions I'm wrong I end up reading a bit more code than I otherwise would.
But overall I end up spending far less time reading code than if I didn't visually pattern match, because most of the time I only need to carefully read the pieces of code I need to understand the precise details of at that moment.
None of us ever read all the code we depend on and understand every aspect of it.
We depend on documented functionality and testing to ensure that the pieces we don't have time to read (or access to) behave how we expect, and we otherwise code to deal with failures to conform. Reading everything and understanding it only works for tiny programs. Even then, recognising patterns helps, the same way we don't spell our way through sentences.
What you are describing is the code-reading equivalent of skimming or speed reading. Yes, you can gain some information from it. At best you are going to miss things, at worst you will learn the wrong information. No one is saying "read all the code we depend on." What I am saying is that people do this for code they need to work on, or to review (because this is the only time most people ever bother to read code), which is why most people do such a poor job of understanding programs or spotting issues. Reading code by looking at its shape is a lazy acquired habit that is inappropriate in the most common code reading circumstances.
My view is pretty much the opposite of yours. The biggest problem I see is developers who struggle to keep up because they don't know how to quickly navigate a code base to understand the big picture, and end up bogged down reading way more than they need. Outside of languages designed to be read, lacking the ability to work by seeing structure is massively debilitating.
Meanwhile, I rarely if ever see people "reading code by looking at its shape", because recognizing code by shape doesn't tend to be something less experienced developers are any good at; if anything, caring about shape comes from caring about detail and wanting the detail to stand out. You need to be good at carefully reading code to be good at recognizing which aspect of the code matters and should stand out to be good at using code structure and shape as a tool to communicate.
> The biggest problem I see is developers who struggle to keep up because they don't know how to quickly navigate a code base to understand the big picture, and end up bogged down reading way more than they need.
That indicates a lack of a complementary skill set involving cross-referencing tools, grep, and note-taking and diagramming. Similar to what you would do when trying to get an overview of a subject from books (citations, indexes, diagrams, and notes). Skimming by relying on code formatting is indeed a useless and counterproductive skill here, because you will miss the important details while wasting a lot of time. It is no surprise that you are seeing this in people if you and they think the cause and solution is a dichotomy between reading code and reading the code layout.
APL espouses the mindset that, like a human language, it can be learned, and it does away with the resemblance to some other (human) language that characterizes a lot of languages in the FORTRAN/COBOL/Algol/C lineage. Even within those languages, "readability" is highly subjective. For example, I'm not an APL developer (I mainly use C), and I find the "modern" trend of ExtremelyLongIdentifiersNamedLikeThis to be irritatingly "fluffy" and verbose, preferring the "humble" and more concise style exemplified by early UNIX and current BSD, while a lot of others find the latter "unreadable" and prefer the former.
I wonder if people with CJK as their primary (human) language find APL easier to understand than those whose human language is basically within Latin-1.
Coincidentally, the first one is also valid C and a lot of C-like languages too, and I recognised it at once.
So there are two distinct aspects to APL that make it mysterious - the semantic philosophy of array manipulation, and the syntactic philosophy of terse characters and very simple grammar. These are no longer unique to APL, though the combination is unique: the semantics have been copied very closely in the form of Numpy (which even adopts some of its terms, like 'roll' and 'reshape'), and terse character syntax is found in many DSLs like regex and basic math syntax (a DSL so ubiquitous that it's easy to forget that it is one).
APL makes perfect sense if you think of it as 'regex for numpy'. I'm actually quite surprised no one's made an APL->numpy syntax shim.
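A few rough correspondences such a shim would have to cover (my own translations; the APL is Dyalog-flavoured):

    import numpy as np

    v = np.array([3, -1, 4, -1, 5])

    # APL         ->  numpy (rough equivalents)
    # ⍳9          ->  np.arange(1, 10)                 index generator
    # 3 3⍴⍳9      ->  np.arange(1, 10).reshape(3, 3)   reshape
    # +/v         ->  v.sum()                          plus-reduction
    # 2⌽v         ->  np.roll(v, -2)                   rotate left by two
    # (v>0)/v     ->  v[v > 0]                         compress / filter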
An APL is a very compact programming language, from Wikipedia:
> APL (named after the book A Programming Language)[2] is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array. It uses a large range of special graphic symbols[3] to represent most functions and operators, leading to very concise code.
Took me some time to realise what I was even reading about, let alone the content!
hm. "An APL" sounds fine to me! i'd say that, grammatically, "APL" (and other acronyms) become independent from their expansions.
e.g. would you say that expressions like "the FBI's policy" or "multiple DUIs" are grammatically incorrect? expanding the acronym gives "The Federal Bureau of Investigations's policy" and "multiple Driving Under the Influences", which sound wrong, but the original expressions are fine.
my english is fine, thanks. my point was: splicing the expansion of "DUI" into a sentence will often make it ungrammatical, but that doesn't matter, because the acronym functions as a separate thing. analogously, expanding "an APL" into "an <A Programming Language>" and calling it incorrect is a weird way to judge a sentence.
anyway, i guess i was just in the mood to argue about something pointless online! have a nice day :)
Looking at Advent of Code solutions, I'm not sure if APL is really so much shorter, or if it is rather a matter of programming style.
Some day2 solutions from reddit [0]:
APL:
0{⍺⋄s←⍵⋄c←s[f+⍳4]⋄99=0⌷c:0⌷s⋄i a b l←s[c]⋄s[3⌷c]←(0⌷c-1)⌷a(+,×)b⋄(⍺+4)∇s}{j←⍵⋄j[1 2]←12 2⋄j}⍎¨','(≠⊆⊢)(⎕NGET 'data')
Python:
    def R(I,x=12,y=2,O=__import__('operator')):
        q=p=[*map(int,I.split(','))];p[1:3]=x,y
        while p[0]!=99:o,a,b,c,*p=p;q[c]={1:O.add,2:O.mul}[o](q[a],q[b])
        return q[0]
    R(I),next(100*x+y for x in range(100)for y in range(100)if R(I,x,y)==19690720)
I think it makes better sense to compare how APL looks in the wild versus Python/Ruby/Java whatever. Rosetta Code shows this pretty well. There are dozens of examples where the APL program is one line and the Python/Perl/Ruby/C#/Whatever is ~1/3 page if I recall correctly.
So that might prove your point better now that I think about it lol. Python can be written like the user did above, but I've never written it that way or seen anybody at work that has. Most of the APL I've seen (except for the really old school kind) looks like oneliners.
Actually, my experience is that APL in the wild (as practiced by those who are just trying to do their job) is not very terse at all. It looks like an old-school procedural language. Since the APL community is incredibly proprietary, it can be a bit hard to find public examples of "industrial" APL style, but this looks similar to what I remember being written at places like SimCorp: https://github.com/Dyalog19/SA3/blob/953e591eace72bb1c147d19...
With a handful of helpers you can get close to APL/J/K in any language.
The evidence for that is that the 'famous' first draft implementation of J in C is about a page of code.
The difference tends to be more that the APL way of thinking is hard for people to get used to, and so most non-APL-ish languages tend to lack facilities supporting the bits that make the APL family's solutions brief. But adding naive versions of those facilities is easy.
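As a sketch of what I mean (the helper names are mine, and the implementations are deliberately naive):

    # naive APL-ish helpers, enough to support array-style one-liners
    def iota(n):             # like APL's monadic iota: 1 through n
        return list(range(1, n + 1))

    def compress(mask, xs):  # like APL's mask/xs: keep elements where mask is true
        return [x for m, x in zip(mask, xs) if m]

    # e.g. the sum of the even numbers up to 10:
    v = iota(10)
    print(sum(compress([x % 2 == 0 for x in v], v)))  # 30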
Why didn't Dijkstra like this?! I get that APL is an acquired taste, but these programs are beautiful. It pains me to read of such a visionary being so closed-minded to a radical and succinct way of expressing algorithms.
Dijkstra seemed to think that it's a mistake to define operations directly on arrays, as opposed to on individual elements of arrays. Of course, this would also rule out nearly every modern language, since map/reduce/list comprehensions all fall into this category. Then again, Dijkstra hated almost every language out there. At least he's consistent.
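For concreteness, each of those constructs really does operate on the collection as a whole rather than scripting the element-by-element loop (a quick Python illustration of my own):

    from functools import reduce

    xs = [1, 2, 3, 4]
    list(map(lambda x: x * x, xs))       # map over the whole list: [1, 4, 9, 16]
    reduce(lambda a, b: a + b, xs)       # reduce over the whole list: 10
    [x * x for x in xs if x % 2 == 0]    # comprehension: [4, 16]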
See also the portion of "The Humble Programmer" starting with "The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. In the case of a well-known conversational programming language..."
The character set is not Dijkstra's complaint. His preferred language, ALGOL 60, also uses characters like × ÷ ≤ ¬ ⊃. ASCII wasn't really the definitive standard it is now when Dijkstra wrote that APL was a "mistake" in 1975.
> The fact that the printed or written word is apparently not the proper medium for the propagation of APL may offer a further explanation for its relative isolation; at the same time that fact may be viewed as one of its major shortcomings.
Brace yourself. Dijkstra was complaining about the fact that APLers wanted a computer available in order to write programs! He thought the only sufficient way to develop programs was to first conceive a mathematical proof of the program's correctness, then write the proof and the code together. I think this is part of what he means when he talks about the future of computer science, although it's not clear whether he actually thought this methodology would become common. I probably have more sympathy for this viewpoint than most, but I think it's obvious that it has not held up well.
Like many of Dijkstra's complaints about APL, this one is ironic, because APL today is one of the best embodiments of the idea that code should be communicable without a computer. APLers will gladly discuss ideas with each other by writing APL on paper or a blackboard, or even by speaking it out loud. A few people have remarked that this is a unique feature of array languages: other programmers will write things down, but only array programmers use their own language to do so.
"To our surprise, the two teachers worked at the blackboard in their accustomed manner, except that they used a mixture of APL and conventional notation. Only when they and the class had worked out a program for some matter in the text would they call on some (eager) volunteer to use the terminal. The printed result was then examined; if it did not give the expected result, they returned to the blackboard to refine it."
"... the initial motive for developing APL was to provide a tool for writing and teaching. Although APL has been exploited mostly in commercial programming, I continue to believe that its most important use remains to be exploited: as a simple, precise, executable notation for the teaching of a wide range of subjects."
The antipathy between Dijkstra and Iverson makes no sense to me. Was Dijkstra just misinformed about APL?
APL is like if we assigned a number to every word in the English language. There are about 60,000 words: sixteen bits would do it. We could use base 64, so every word would be reduced to three characters. Wouldn't that be an improvement? </sarcasm>