> This is known as composability, or the UNIX philosophy. In Lisp a procedure tends to accept many options which configure its behaviour.
Check out the UNIX man pages for tail, grep, ... to see how strange the above quote about the 'unix philosophy' is. In reality, Unix commands are programs with an obscene number of configuration options, sometimes piping data around as text, sometimes glued together by strange shell languages...
Unix commands may not be the pinnacle of minimalism, but the most important thing UNIX taught is composability. Yes, I did recognize the irony that the Common Lisp example was much more like a Unix command line than the Haskell example, but if you're talking about composing (relatively) small commands, Unix is still a pretty good comparison.
For being a pinnacle it is really unsatisfying. It tells you very little about how or what programs can be composed. It's either trial and error or reading man pages. In addition every program has to be written to take in anything as there are no constraints.
Function composition via types is a much more satisfying take on this problem. It's too bad Unix is regarded as the holy grail of this technique when it barely provides a passable implementation of it.
People seem to have forgotten that when Perl evolved from being a better AWK to the paradigm example of the modern "scripting language", Larry Wall explicitly described this as a rejection of the Unix small-tools philosophy. http://www.linux-mag.com/id/322/ ("But Perl was actually much more countercultural than you might think. It was intended to subvert the Unix philosophy. More specifically, it was intended to subvert that part of Unix philosophy that
said that every tool should do only one thing and do that one thing well.") http://www.wall.org/~larry/pm.html The fact that getting things done with a Perlesque scripting language is now seen as the height of purist Unix propriety only shows how far gone the original Unix ideal now is. But moving to the scripting-glue model doesn't really get rid of small tools that endeavour to do one thing well, it just reimplements them inside the scripting-language universe as functions/objects, though with a more expressive and less burdensome common language that makes it easier for them to stay small while being correct and effective. The more expressive their shared language, the smaller a set of tools can be.
> Text just isn't a great medium for IPC.
Yes, in retrospect Unix's determination to know about nothing but binary or plaintext blobs and streams looks like an adolescent rebellion against the (apparently - I haven't used them) clunky record structures of '60s operating systems.
Unix commands happily pass around binary streams of any format. The problem is, as you noticed, there is no schema, no uniform (let alone machine-readable) description of accepted / emitted formats. Composability is possible but not entirely trivial, often with a dose of `grep` / `sed` / `cut` between commands.
It's also pretty hard to pass a function to a Unix command. Either your command supports its own syntax (grep, find), or you make do writing a loop or a temporary file.
"It's also pretty hard to pass a function to a Unix command."
On this and a few other points, it would be interesting to allow a process to call out to the shell that it's running below ("... if any" being one of several issues with the idea).
I've hit a number of small "oh, I could do that if..." use cases over the years; none is really immediately springing to mind.
There are things that a shell function can do that a spawned program cannot - mostly to do with affecting shell state. Updating shell variables and opening file descriptors are obvious examples.
This is one of the reasons shell languages feel kind of functional. But they're not really designed like a functional language so they feel kind of clunky.
On the subject of "oh, I could do that if..." I know exactly how you feel. I really want the ability to pass multiple filestreams to a command, so you could, say, use cat on the outputs of two commands. Assuming the syntax was @`<pipeline>`, it would look something like this:
foo|grep bar|cat @`baz|grep bar`|sort -rn|tail
The above script assumes that the commands foo and baz are generating lines with timestamps at the start, and then checks for the most recent entries. I chose the @` syntax for its similarity to the Lisp ,@ syntax, but it could be anything. Unfortunately, I think the only way to do this would be to monkey-patch either open(2) or fopen(2), probably both, which would be a Really Bad Idea (tm).
the <(...) syntax seems to be what I was looking for, passing a stream/pipeline to a function where a filename was expected. Hey, you learn something every day.
EDIT: Unless you're just piping both of them to STDIN. piping two files to STDIN is useful, just not as useful as the thing that I wanted. This seems to be what you're doing.
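Concretely, the earlier hypothetical example can be written with process substitution; a bash sketch, still assuming the made-up commands foo and baz emit timestamped lines:

    # <( ) makes the baz pipeline appear to cat as a filename;
    # '-' tells cat to read its stdin (the foo pipeline) first
    foo | grep bar | cat - <(baz | grep bar) | sort -rn | tail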
I don't know if the problem is text as much as it is the pissweak type system that the whole POSIX environment supports. The other big beef I have is with the total donkey circus that are the interfaces to the various tools. They're ridden with terrible special cases and mutually contradictory parameters and only make sense to poor bastards like myself who've spent years using them to build ad-hoc programs.
I can't find anything to support that in the IEEE POSIX ISO spec from the Austin WG, but the GNU toolset definitely supports your assertion. I think it's important to recognize that POSIX is not SCO UNIX (UNIX(tm) is effectively a trademark and nothing more at this point) which is not GNU/Linux which is not LSB(Linux Standard Base).
POSIX itself is a huge PCB of bodge-wires due to legacy compliance. On the upside for systemd haters, they can appeal to the 2004 ISO standard as init.d being the "right way". But if you read the rationale behind a bunch of utilities especially in Section B, you'll see why it's so crusty. "This ..worked in SysV so we're going to keep it". The Linux Standard Base specification isn't much better.
The whole argument as to 'what is UNIX' is so contrived at this point, but anyone making the argument that GNU coreutils are about composability is correct (see page 230 of https://www.gnu.org/software/coreutils/manual/coreutils.pdf). They assert that the Bell Labs UNIX philosophy was effectively "one tool to do one thing well, orthogonal to other tools, to facilitate composability" (the authors even dedicate an entire following section to demonstrating it), but that's without any direct references to previous specifications.
Anyways, I'm with you, and apparently the GNU guys claim the Bell Labs guys were with you as well.
>POSIX is not SCO UNIX (UNIX(tm) is effectively a trademark and nothing more at this point)
UNIX is more than just a trademark, more importantly it is the test suite you have to pass to be allowed to use the trademark. And the trademark doesn't belong to SCO, it belongs to The Open Group, a not-for-profit industry consortium (successor to X/Open and the Open Software Foundation.)
Does it matter that much? I'm sure it makes it easier to port a package to a certified UNIX, since you know a certain collection of APIs must be there (putting aside the fact that some parts of the UNIX standard are optional)
The Bell guys are so much with me that they created Plan 9, the system I'd rather be running. It's very elegant, but garnered no support, and its following is INSANE, in that they are incredibly strict in their interpretation of the UNIX way, and think that anything that isn't is awful. I like composability and small systems as much as the next man, but sometimes, you just want LISP.
That `butlast` removes the last three elements, but here you were supposed to remove the first three elements. So I guess `cdddr` is what you need.
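For reference, a quick illustration (any conforming CL should agree):

    (butlast '(1 2 3 4 5 6) 3)   ; => (1 2 3)   drops the last three
    (cdddr   '(1 2 3 4 5 6))     ; => (4 5 6)   drops the first three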
Also, the way the article uses `remove-if-not` is all wrong. The `:start` argument to `remove-if-not` doesn't mean it starts taking elements after the third one, rather it means it starts filtering after it. So
(remove-if-not #'evenp '(1 2 3 4 5 6) :start 3)
doesn't produce (4 6) as the article assumes. It produces (1 2 3 4 6).
And the `:count` argument doesn't mean it returns 5 elements, rather it means it removes at most 5 elements.
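The example here seems to have been cut off; a quick reconstruction (my sketch, evaluable in any conforming CL):

    (remove-if-not #'evenp '(1 2 3 4 5 6 7 8 9 10 11 12 13) :count 5)
    ;; => (2 4 6 8 10 11 12 13)
    ;; the first five odd elements (1 3 5 7 9) are removed; the remaining
    ;; odd elements 11 and 13 are left alone, so the result has 8 elements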
> Check out the UNIX man for tail, grep, ... to see how strange above quote about 'unix philosophy' is.
The "UNIX philosophy" is good, it's just that UNIX doesn't really follow it.
The C and shell programs that make up UNIX commands are actually UNIX's own little programming language, complete with ad-hoc data types.
Look at `cat` for example. It might as well be `str + str...`. Or look at `grep`, it's basically `filter` or `reduce` at heart. But it has to do this per-line, and it has no metadata about the line, it literally just works on string contents. The `head` and `tail` commands aren't much different than `take` and `drop`. Heck there's even a `sort` command.
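Roughly, in Haskell terms (my sketch of the analogy, nothing more):

    -- grep as a line-wise filter; head/tail as take and drop-from-the-end
    grepLike :: (String -> Bool) -> String -> String
    grepLike p = unlines . filter p . lines

    headLike, tailLike :: Int -> [String] -> [String]
    headLike = take
    tailLike n xs = drop (length xs - n) xs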
That's why shells that actually use a programming language (e.g. eshell) don't seem so crazy to me. To be honest, I use eshell way more than I use bash, and half the functions I use there are written in lisp rather than bash or C.
> The "UNIX philosophy" is good, it's just that UNIX doesn't really follow it.
It never did, beyond some examples in beginner books.
> Look at `cat` for example. It might as well be `str + str...`. Or look at `grep`, it's basically `filter` or `reduce` at heart.
Then look at the options of `cat`. On my Linux system cat has a -n option, which numbers the output lines. If it were following the 'UNIX philosophy', this option would not exist.
UNIX may have been about small commands ("cat -v", anybody?), but its most important property is COMPOSABILITY. You could use small programs to build larger programs, allowing you to do things by gluing together code that you would previously have to write new programs to do. It made this very easy, and you could also do it from within C, so it wasn't an either/or situation. This is something your namesake system could have learned to do. The Right Thing isn't always the right thing.
That was already available in Xerox PARC systems, by making use of function call composition in Interlisp-D REPL, Builders in Smalltalk transcript or live debugger in Mesa/Cedar.
The UNIX composition is only a novelty for those that never saw other OSes that surfaced at the same time. After all UNIX just adopted the idea from MULTICS.
"The UNIX composition is only a novelty for those that never saw other OSes that surfaced at the same time. After all UNIX just adopted the idea from MULTICS."
One could certainly chain programs through intermediate files. My understanding has it that pipelines were new in Unix, and Wikipedia agrees: "The pipeline concept was invented by Douglas McIlroy and first described in the man pages of Version 3 Unix."
Yes the concept was popularized by UNIX, but anyone that spends time doing computer archeology will find similar patterns in other OSes, like the ones I listed.
Don't forget that back then computing was developed in silos, with knowledge only shared at conferences or when researchers switched universities/companies.
So it was quite common that researchers across the globe would come up with similar discoveries.
"On my Linux system cat has a -n option, which numbers the output lines. If it were following the 'UNIX philosophy', this option would not exist."
Gnu's not Unix. "The GNU coreutils version of this on my Linux box behaves this way" is not a good way of getting at finer points of what is or is not Unix-y.
That said, -n exists in BSD cat as well. It's not required by POSIX, though.
Not only "Gnu's not Unix", it's also not that much interested in following "Unix philosophy". Which isn't that bad, from a user perspective: users want more capable tools, even if it impedes composability a bit.
> The "UNIX philosophy" is good, it's just that UNIX doesn't really follow it.
Well you know what they say. Talk is cheap. Ideas are basically free. Following through is hard.
Having an abstract ideal is worthless if you are just going to squirm and excuse yourself when you don't follow through on them. Which is basically what Unix does when you question whether it follows what it preaches.
"Check out the UNIX man for tail, grep, ... to see how strange above quote about 'unix philosophy' is."
I think a lot of that is due to the untyped nature of unix. Everything is text, so you constantly have to specify details about what the actual input is.
Is this really a philosophical difference? Granted, I've only used Clojure as far as Lisps go, but composability seems to be something that's emphasized.
Rather than
(remove-if-not #'p xs :count 5 :start 3)
It seems to me like most Clojure users would do something like
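The example appears to have been lost; presumably something along these lines, using Clojure's ->> threading macro (my sketch):

    ;; thread-last: xs flows through drop, filter, take in reading order
    (->> xs
         (drop 3)
         (filter p)
         (take 5))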
My first contact with programming (besides BASIC) was (auto)lisp. It was kind of a first love: every language I tried after Lisp was a disappointment. Then I found Haskell..
A bit off topic: interesting link with project euler solutions written in haskell and clojure (and sometimes other languages). With execution time log.
One could also change the Haskell:

    import Data.Function ((&))
    import Control.Arrow ((>>>))

    -- the pipe operator seems to be all the rage these days
    (|>) = (&)

    -- forward application + forward composition
    xs |> (drop 3 >>> filter p >>> take 5)
I never understood the appeal of the pipe operator. It's a kludge to work around the need to eta-expand chained function compositions, which in turn is a kludge to work around the value restriction, which in turn is a kludge to prevent the combination of polymorphism and storage effects from breaking type soundness. Haskell doesn't need any of this.
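For the curious, the eta-expansion issue being referred to, sketched in OCaml (my example, not from the thread):

    (* point-free composition of two polymorphic functions is a partial
       application, not a syntactic value, so the value restriction gives
       it a weak, non-generalized type *)
    let compose f g x = f (g x)
    let drop2_weak = compose List.tl List.tl       (* '_weak1 list -> '_weak1 list *)
    (* eta-expanding restores full polymorphism *)
    let drop2 xs = compose List.tl List.tl xs      (* 'a list -> 'a list *)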
I agree with you. I'm going to write a post about function application and composition (right-to-left / left-to-right) so other communities can stop discussing non-essential matters.
I do like the fact that the `pipe operator` is often mentioned alongside the bash '|', which is used in a pointfree style.
Well it might make sense sometimes, so it's good to know one's options :)
It's not the same: in Lisp, :count 5 means that you remove at most 5 elements, so the result can be a list of 100 elements, while take 5 will always produce a sequence with at most 5 elements.

    (remove-if-not #'evenp (loop for i below 10 collect i) :count 3 :start 0)
    ;; => (0 2 4 6 7 8 9)

Three elements are deleted.
Funny how when I think about Lisp I think about that kind of code, while other people see elisp imperative, autolisp and common-lisp full-featured system.
Common Lisp has these kitchen-sink functions and macros because it was a standard developed by a committee whose goal was to incorporate several popular implementations of Lisp that each had several decades' worth of cruft. Lisp was big enough business at the time that having several mutually incompatible versions of Lisp was making knowledge sharing and business difficult. And having a standard was important for many reasons.
The real philosophical differences between Haskell and Lisp are basically apples and oranges. Lisp's composition strategy is the recursive application of symbolic expressions... the famous EVAL and APPLY. I don't know enough about the theoretical underpinnings of Haskell to make any kind of apt comparison but my intuition suggests that to do so would be moot. They're just too different.
The theoretical foundation of Haskell is a typed lambda calculus called System F-omega. GHC Haskell has some additional features that are inexpressible in System F-omega.
Lisp is far from any lambda calculus, even the untyped one. Lambda calculi don't have variadic functions, or any means to inspect their own syntax. Lambda abstraction is called abstraction for a reason - you can't inspect the expression inside. Lisp is really its very own kind of thing, which can be both very interesting (if you care about extensibility) and very irritating (if you care about abstraction). Racket, a dialect of Scheme, is the only serious attempt so far at providing an extensible language that protects user-defined abstractions.
Nice article, but the language embedded here is exactly the untyped lambda calculus, not a Lisp. You can tell because there are no tags: pair is given as \x y sel -> sel x y, which is indeed the textbook way to encode pairs but does not provide enough to write CONSP.
If you wished to implement a subset of Lisp in LC faithfully you would need an encoding for sums - which would not be all that hard, but does reflect the distance between Lisp values and the raw abstractions of the lambda calculus. (There is also the more substantial matter of RPLACD, etc, but let's just pretend that particular rabbit hole is not there.)
Yes, that's a very good point. That's why I said, "If you want to explore the relationship between Lisp and ULC..." and deliberately not, "If you want to see how to embed Lisp in ULC..."
Writing CONSP actually makes a very interesting exercise.
You know that Haskell was developed by a committee and was designed to bundle the various research streams on lazy/statically typed/ purely functional programming languages?
From the Haskell history:
> ... to discuss an unfortunate situation in the functional programming community: there had come into being more than a dozen non-strict, purely functional programming languages, all similar in expressive power and semantic underpinnings. ... It was decided that a committee should be formed to design such a language...
The core of Common Lisp was designed by mostly five people (the 'gang of five': Scott Fahlman, Guy Steele, David Moon, Daniel Weinreb, and Richard P. Gabriel) in 1981/82. This group was supported by various Common Lisp implementors - around 30 people. Steele wrote the book on Common Lisp the Language and published it in 1984.
The ANSI Committee to expand and clean-up the language was set up years later.
> incorporate several popular implementations of Lisp that each had several decades worth of of cruft
That's also nonsense. Common Lisp is mostly based on a single language: Lisp Machine Lisp (aka Zetalisp), which is an extended version of Maclisp. Groups developing other successors to Maclisp joined: NIL, Spice Lisp and S1 Lisp. Lots of things in Zetalisp were new in Lisp and it took quite some time to get it into Common Lisp, which itself also brought new things into Lisp (like type annotations, sequences, ...).
So Common Lisp is directly based on Lisp Machine Lisp, which only was a few years old. NIL, S1 Lisp and Spice Lisp were brand new and morphed into Common Lisp. Spice Lisp was morphed into CMU Common Lisp, which forked the now popular SBCL.
The style to use keyword arguments in parameter lists was brand new in Lisp. It appeared in Lisp Machine Lisp just a few years before. LML did get it from an obscure Lisp dialect called MDL ('Muddle').
> Lisp was big enough business at the time that having several mutually incompatible versions of Lisp was making knowledge sharing and business difficult.
Lisp was emerging as an implementation/research language for the military. DARPA called for a Lisp standard, because they got several Lisp applications and each had its own version of Lisp...
As always, thank you for the clarification. My 6 month old is teething and waking me every 40 minutes starting around 3am. I'm mostly recalling from hazy, caffeine fueled memory. I like your posts -- so informative.
I admit to knowing little about Haskell's history. I had assumed it was an outgrowth of ML -- itself having an interesting history intertwined with the LCF theorem-proving system. Thanks for the clarification.
>> it was a standard developed by a committee
> That's basically nonsense.
In my much-abridged comment, yes, I understand it could sound that way. I was referring to the entire process but mainly the ANSI committee. I was also under the (apparently mistaken) impression that there was more than one Lisp involved.
In April 1981, after a DARPA-sponsored meeting concerning the splintered Lisp community, Symbolics, the SPICE project, the NIL project, and the S-1 Lisp project joined together to define Common Lisp. Initially spearheaded by White and Gabriel, the driving force behind this grassroots effort was provided by Fahlman, Daniel Weinreb, David Moon, Steele, and Gabriel. Common Lisp was designed as a description of a family of languages. The primary influences on Common Lisp were Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme. Common Lisp: The Language is a description of that design. Its semantics were intentionally underspecified in places where it was felt that a tight specification would overly constrain Common Lisp research and use.
Quite right.. it's all there in ye old Hyperspec. Thank you for pointing that out.
I'm not sure I agree at all with the premise of this article. Languages are a syntax; they don't have "philosophies." They may have design elements that reflect or support the philosophies of their designers (e.g. functions as first-class citizens), but the author doesn't provide critique on what they believe these to be, instead looking at a few stdlib functions that some yahoo implemented.
Look at something like Java. Java has a core set of design elements. It was built by a person with some philosophical leanings ("Everything is a class", "checked and unchecked exceptions are different things", "no first-class functions"). Then it has standard libraries (I/O, collections, threading/synchronization), each of which was built by a person with their own set of biases and understandings. The original Collections implementation has lots of mutable data structures; then later Josh Bloch decided that he didn't like that anymore and started adopting immutability as a core design philosophy. Immutability was not previously considered important when evaluating a Java implementation. What you end up with is a mishmash of different opinions that only gets more divergent as you go.
Some 3rd party libraries like Guava didn't jibe with the Philosophical leanings of the language itself and looked for work arounds. They went as far as to create their own implementation of functions as a first class citizen in a language that was expressly designed to omit them. Some commonly used Android libraries do this as well.
My point here is that "language" can mean lots of things. It can refer to the language itself, its run time, the syntax+runtime+community. What is idiomatic and on, on, on. People and groups of people have the philosophies. I'd like to see the author point to something a little more critical about the differences between these two concepts. This premise is a little muddled.
Some of the Lisp and Haskell code examples aren't doing the same thing. For example, these two do completely different things:
(remove-if-not #'p xs :count 5 :start 3)
take 5 . filter p . drop 3
I don't like Haskell, and it's not immediately obvious to me how to achieve what the Common Lisp is doing, so I won't bother, but one way of writing the Haskell in CL would be:
(loop for x in (subseq xs 3)
      when (p x) collect x into result
      until (= (length result) 5)
      finally (return result))
Another option would be to use subseq and remove-if-not, etc, but without lazy evaluation, the loop version will be more efficient. And though some Lisp people dislike LOOP, I like that it's easy to read, if not always easy to write ;-)
About the topic of the article, though, to me this seems like less a philosophical difference than a result of Haskell not having easy to use default and optional parameters. There's currying, but it's not a great substitute, and it's a little awkward to use.
I like the Common Lisp way, even if it's crufty at times, because the keyword arguments to functions like remove-if-not and sort are easier for me to use than chaining a half dozen functions. I don't have to think about whether I need to call filter before or after take or drop, etc. At the end of the day, it's a personal preference, though.
Another advantage is that I don't have to "roll my own" for common idioms.
IMHO whoever wrote this article should change the title to "A philosophical difference between Haskell and Common Lisp". There are a lot of LISPs and not all of them follow Common Lisp's philosophy.
(λ (pred lst)
  (take (filter pred (drop lst 3)) 5))
And yes, in most schemes, you can use λ as a synonym for lambda. Sometimes you have to define it first, though. Anyways, that looks a lot like the Haskell to you, doesn't it? It doesn't have the laziness, but other than that...
Oh! Oh! I almost forgot! you can also use the thrush combinator, if you use the clojurian package:
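The example got lost in transit; presumably something like this, a sketch assuming CHICKEN's clojurian egg for the threading/thrush macro, srfi-1 for take/drop/filter, and srfi-26's cut to paper over filter's argument order:

    ;; thread-first: the list is inserted as the first argument of each form
    (-> lst
        (drop 3)                  ; (drop lst 3)
        ((cut filter pred <>))    ; (filter pred ...)
        (take 5))                 ; (take ... 5)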
And yes, I think this is all the ways you can do this in scheme. CHICKEN specifically. With some various macros packages.
Oh, wait! I forgot we have function composition, too, but you'd have to use lambda or cut constantly to make the thing work because we don't have currying. So these are all the ELEGANT ways to make this in chicken scheme. With the cut SRFI. Or clojurian. And srfi-1, which is basically standard. and it's better than the examples given, because pred and list aren't pre-specified.
It's easier for people propping up one language over another language family to ignore the similarly easy ways to do things in prominent members of that language family. ;)
If anything, this has made me rethink possibly going over to common lisp, which I was thinking of for the larger stdlib, because SBCL is very fast, supports TCO, and you can disable case insensitivity.
Common LISP was made to smooth over the differences between the ancient LISPs plus standardize their situation a bit. If those don't matter to you, then the alternatives are often better. SBCL being one of them. If I get back into LISP, I think the Racket Scheme community looks like it's delivering the most bang for the buck in terms of what the tools can do after investing time learning them.
Most schemes don't have a lot going for them in terms of libraries, which is the primary reason I'd want to go over to Common. I really don't like racket, for mostly aesthetic reasons. It seems a bit over complicated, trying to stuff a lot of cutting edge research into its core, and forcing you to learn about 6 different models at once just to understand what its documentation is saying. Its OO system is apparently lackluster, especially since most schemes fork tinyCLOS for OO, and syntax-case is too complicated for its benefits.
I'll stick with CHICKEN for now. Its module system may be a bit awkward at times (you can't cleanly bind all your dependent modules into one executable), but it's much more minimal, and while its documentation may not be the best, at least I can understand what it's saying.
I'm not entirely sure that this is due to philosophical differences. The fact that Haskell is lazily evaluated makes writing functions that do only one thing much easier, since there is no performance hit for writing code like:
take 5 . filter (not . p) . drop 3
In a strictly evaluated language, this would involve iterating over the list three different times. (Kind of not really, since take 5 isn't going to be that expensive.)
> there is no performance hit for writing code like:
> take 5 . filter (not . p) . drop 3
That doesn't seem to be the case in my testing. Writing this out as an explicitly recursive function sped things up 3x on GHC -O3. (Dropping 10^8, taking 10^7.)
FWIW, I also tested on Rust, Python (PyPy3) and Java (OpenJDK) with iterators, iterators and streams, respectively. Python and Java were about as fast as the manually recursive version, and Rust was two orders of magnitude faster than that. And half of the time in Java was spent boxing, because Java can't reify types away like Haskell.
It's true these are all using opt-in laziness, unlike Haskell's lazy-by-default, so it's not a fair comparison. But I also see little reason to believe that there's "no performance hit", given that that's exactly what I saw.
Not really. It doesn't matter that Haskell is lazy by default, so long as you called the function on a lazy stream data structure. Clojure is strict but it can do this efficiently as well, I think.
The difference is not one of "philosophy", but rather of using the right types. Haskell's so-called "list" type constructor is actually a type constructor of streams. Unsurprisingly, streams are much better than lists if you want to do stream processing.
>Haskell's so-called "list" type constructor is actually a type constructor of streams.
Most papers consider streams to be lists without a terminal constructor. Haskell lists are a superset of streams. All streams have infinite lengths, but only some lists have infinite length. (Again, this just comes down to definition, but this is how most functional papers describe it.)
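In Haskell-ish notation, the distinction being drawn (a sketch, not from the thread):

    -- a list type has a terminal (nil) constructor; a stream does not
    data List a   = Nil | Cons a (List a)
    data Stream a = SCons a (Stream a)
    -- Haskell's built-in [a] is lazy, so its values include both finite
    -- and infinite lists; in that sense it subsumes both shapes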
That's one way of doing it, but more formally I think it has to do with whether you're considering the list type in a "data" sense or a "codata" sense. In either case you might be talking about the same "shape" of data, but streams are defined by observation rather than by construction.
In chapter 4 of his book, Okasaki defines a type of both finite and infinite streams (which is really just the type of Haskell "lists"), and, in the remainder of the book, he only uses finite ones.
A stream of infinite length, which you call "stream" without qualification, is what I call "a function on the natural numbers". (Well, up to isomorphism.)
Speaking of things that the Haskell type system lets you make explicit, the isomorphism between streams and functions from the natural numbers means that streams are "representable functors" in the jargon of category theory. [1] Knowing that a data type is representable allows you to immediately build a bunch of other interesting structures on the data type. [0]
Yeah, people talk about "functional languages" as if they're a unified thing. IMO the differences between the strongly typed ML/Haskell tradition and the Lisp tradition are as big as the differences between either and "OO languages" or "imperative languages", but that's not the way it's usually presented.
That's nonsense. Lisp in the functional style and Haskell are two different implementations of what are at their essence the same ideas. However, Lisp is multi-paradigm, which complicates the comparison somewhat...
They're both primarily based on the lambda calculus, although Haskell on a typed lambda calculus. They both emphasize purity over mutation. They both are designed to make higher-orderisms idiomatic.
But the point isn't so much the features they share. I'd be the first to admit that Lisp and Haskell are very different. But functional programming in both is very much based on the same ideas. Claiming that they're as different as OO vs imperative is like saying that washing dishes by hand is as different to using a dishwasher as football is to baseball.
"They both allow you to pass functions as arguments to other functions."
" Is it because Lisp is seen as the birthplace of functional languages (because of point 2)?"
That was my feeling. Some tenuous connection that leads to meaningless comparisons. They have such different styles, cultures, and attributes that direct comparisons don't even make sense. People might technically be able to do something that's unnatural in one in the other easily. And vice versa. Question: would any idiomatic user of either really want to or waste time doing it that way? Probably not...
Clisp is the most popular lisp bearing the name, but the LISP, Lisp, or lisp family includes Scheme, and Clojure. if you just refer to Lisp, you may be referring to Clisp, the lisp family, or maybe even LISP 1.5, the lisp equivalent of V7 unix.
Clisp is an implementation of Common Lisp. Common Lisp includes a core of the original Lisp. Lisp programs from the 60s can either be run or ported to Common Lisp with little or no effort.
Clojure is FULLY incompatible with any other Lisp dialect or Lisp derived language. Porting code means 'rewrite'.
Oh. Sorry. I was abbreviating Common Lisp. Whoops. Anyways, that's like saying that Go, Plan 9 C, D, Java, and Cyclone aren't in the C family, because you can't run ANSI C '99 on them and have it work. The Lisp family is diverse.
No. The lisp family is a real thing. They all share homoiconic syntax, singly linked lists as a primary data structure, and most of them support syntactic extension through macros.
While the validity of some of my examples may have been questionable, how is Plan 9 C not in the C family? and just TRY to compile ANSI C on a P9C compiler without significant modification.
Significant modification is something different than complete rewrite.
The Lisp languages share a common core language, code, ideas, literature, community.
Derived languages like Logo, Dylan, Clojure, Racket, ... have their own core language, their own code, their own ideas, their own literature and their own community.
Yes, but they also share a history and a set of syntax and ideas... Well, except Dylan, syntax-wise. But ultimately this is an argument about terminology. And the majority of people would say that scheme is a lisp. A lisp. Not a language derived from lisp, that is independent. A LISP. in the LISP family, because that is a thing. And if you want proof that that is what most people think, you need only check Wikipedia: https://en.wikipedia.org/wiki/Category:Lisp_programming_lang...
I assumed it to be a typo; i.e., it should say "less than".
`takeWhile` is different to `filter` in that it returns the prefix of the input list up to the first element that fails the predicate, whereas `filter` returns all elements that match the predicate. For example, if the predicate is `(< 5)` and the input was `[9, 2, 3, 6]`, then `takeWhile` would return an empty list (9 fails immediately), but `filter` would return `[2, 3]`.
He should have used a better input, though, maybe 1, 2, 3, 4, 5. With the given input it's a bit confusing, it's equivalent to filter . even [1..4]. Also, the Lisp loop should have been checking for (>= i 5), to be equivalent to the Haskell example.
The biggest difference between Haskell and Lisp is that Lisp is multi-paradigm, while Haskell is not. Haskell is more opinionated, and makes a bunch of decisions for you (that you can choose to work around/sugar/hack until Haskell looks like something else/does what you want).
All the other things that Haskell comes with - strong typing, monads, lazy evaluation - can be written into Common Lisp, but whether you need them is often questionable.
They can't be written in Lisp well, or at all. Strong typing with inference can't be bolted on. Monads that take advantage of this typing can't be bolted on. Changing Lisp to have lazy semantics across the board ain't gonna happen.
You might be able to write a Haskell interpreter from scratch that interacts with the Lisp environment, but you can't feasibly transform Lisp itself.
Nobody told Dr. Tarver[1]. He built a series of lisps with tight integration with the underlying Common Lisp platform featuring a strong typing system rooted in the propositions of Sequent Calculus[2][3]. Type declarations are only on top-level forms, with all locals inferred. There seems to be work to improve this in Shen Professional's new compiler. Shen offers tight integration with the underlying platform (although it has been ported to a number of runtimes (SBCL, CLISP, and V8 amongst others)[4]).
Usually when people say "you can do X in Lisp", they mean that you can integrate a feature into the language.
Don't have list comprehensions? Write a macro.
Don't have "thrush" syntax? Write a macro.
Don't have lazy evaluation? Write a... err. You can't write a macro[1]. You can't write something which completely changes the entire calling semantics of the language. What you can do is make a little sub-language which uses different values and objects from Lisp, but you can't magically make the standard library support lazy evaluation.
Same with types and type inference (a la Haskell).
Same with many other things.
Of course, obviously, you can write a compiler that compiles to Common Lisp, or C, or Scheme, or COBOL. But writing a compiler -- while Lisp is good at that -- isn't really taking advantage of what it usually means for Lisp to be extensible.
[1] For those who are ready to call out the many "lazy" libraries in Common Lisp, perhaps even my own, you're missing the point. You can make a new version of DEFUN and a new version of FUNCALL. But that doesn't make REMOVE-IF, SUBSEQ, MAPCAR, LIST, CONS, etc. lazy. As such, many, if not most, of the benefits that laziness brings, don't get applied to the language at large.
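To make the footnote concrete, a minimal sketch of that kind of opt-in, library-level laziness in standard CL (my example):

    ;; DELAY builds a memoizing thunk; FORCE runs it.  This gives you
    ;; lazy *values*, but MAPCAR, REMOVE-IF, SUBSEQ, etc. still
    ;; evaluate their arguments eagerly, which is the point above.
    (defmacro delay (expr)
      (let ((forced (gensym)) (value (gensym)))
        `(let ((,forced nil) (,value nil))
           (lambda ()
             (unless ,forced
               (setf ,value ,expr
                     ,forced t))
             ,value))))

    (defun force (thunk)
      (funcall thunk))

    ;; (force (delay (expensive))) => the value, computed at most once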
What are macros if not custom compiler extensions?
Sure, Shen is enough of a compiler extension to be its own language, but I can call Common Lisp code from Shen and call Shen code from Common Lisp. Doing either isn't very different from calling a native function (calling Shen from Shen is similar to calling CL from Shen).
As far as [1], I don't really see this as an issue with CL. CL has not made the effort to rewrite its core in terms of generic functions. If this were to be done, drastically changing CL's behavior for random new types would be relatively painless. But the way that the current CL standard chose for its library and core functionality isn't to use generic functions everywhere. As such, it is expected that if you drastically change implementation, you end up with separate functions. I think this is a huge draw of something like Clojure/JVM with its interfaces/protocols as a base level over a far more mature system like CL/SBCL.
Got to this late, but other commenters have already pointed out why this is wrong.
While this is unfounded (I have no sources), LISP is quite possibly the most flexible high-level language ever invented, so I would caution against betting against it in the future.
Saying that it's the most flexible high-level language ever invented might not actually amount to much -- I'm sure there's actually a small subset of features/points-in-the-language-that-allow-for-user-modifications that you need to achieve such high flexibility, but I figured it was worth saying.
[EDIT] - My wording of the original post is sloppy. Where I say Lisp, I mean the dialect of Lisp that is Common Lisp.
Your argument doesn't add anything really. You can make anything in any reasonable language.
When people say "you can add X to Common Lisp" or "you can make X in Common Lisp", they usually mean that you can integrate such a feature in Lisp.
If you mean "you can write a compiler in Lisp", then, obviously. But you can do that in C++ too.
Not a single reply has demonstrated that you can effectively and reasonably add those features (to their fullest extent) to Lisp. They all replied saying that you can do things like write a new language (e.g., Shen), perhaps by using Common Lisp as your implementation language.
You cannot write a compiler that integrates into your language in C++ or Haskell. And it is trivial in Lisp, just wrap a compiler into a macro.
For example, my typical workflow heavily depends on Prolog blended into Lisp. I also use an ML implementation, also blended into Lisp seamlessly, and a subset of a Haskell-like language (lazy and strictly typed), also tightly integrated with Lisp, to a degree that it can access Lisp or ML local variables defined around the Haskell code block.
You cannot do this in any language which does not have proper macros.
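A toy illustration of "wrap a compiler into a macro" (my sketch, much smaller than the ML/Prolog/Haskell-like embeddings described above): a macro that compiles a tiny postfix language into ordinary Lisp at macroexpansion time.

    ;; (rpn 1 2 + 3 *) expands to (* (+ 1 2) 3) before the Lisp
    ;; compiler ever sees it
    (defmacro rpn (&rest tokens)
      (let (stack)
        (dolist (tok tokens (first stack))
          (if (member tok '(+ - * /))
              (let ((b (pop stack)) (a (pop stack)))
                (push (list tok a b) stack))
              (push tok stack)))))

Because the sub-language's compiler runs inside the macroexpander, its output is ordinary Lisp and integrates with the surrounding code, which is the property being described for the larger embeddings.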
> In Lisp a procedure tends to accept many options which configure its behaviour. This is known as monolithism, or to make procedures like a kitchen-sink, or a Swiss-army knife.
This is the case in some areas of the Common Lisp language; it is not true of Lisp as a family of dialects.
There are plenty of examples of Lisp functions or operators that just do one thing: `cons`, `car`, the `lambda` macro operator.
;; CL
(remove-if-not #'p xs :count 5 :start 3)
;; Haskell
take 5 . filter p . drop 3
;; TXR Lisp: a dialect with ties to CL:
(take 5 [keep-if p (drop 3 list)])
;; Compose the functions using opip macro
;; (result is a function object):
(opip (drop 3) (keep-if p) (take 5))
TXR Lisp's library functions don't have the :count, :start and whatnot. In fact, there are no keyword parameters, only optionals. If you want to default an optional on the left and specify a value of one on the right, you can pass the colon keyword to explicitly default:
(defun opt (a : (b 1) (c 2)) ;; two optional args
)
(opt 4 : 5) ;; b takes 1, c takes 5.
The colon is just the symbol whose name is the empty string "", in the keyword package. It makes for nice sugar and has a couple of uses in the language. Note how in the defun it separates required args from optionals.
(Anyone else cringe at "UNIX philosophy"; how silly! This is the Unix philosophy: let's reduce everything to a string in between processing stages and parse it all over again, with simplifying assumptions that it always has exactly the format we are looking for without actually validating it.)
That's the JWZ perspective, but if you add strong typing you can basically turn the "string" into "SomeType" and process records like that. If you look at languages with a |> operator they often act like that.
Real general question about function composition - when you're comfortable with it, how do you think it to yourself, or say it to yourself, like what's your shorthand mental model?
I was able to really intuitively get comfortable with unix piping a long time ago, so instead of:
take 5 . filter p . drop 3
I'd be thinking, ok, take a thing, drop the first three, grep (or whatever), now take the first five. It felt intuitive because each step solved a problem, and then I'd move forward into the future, and have to only think of one additional solution (head -5) assuming I did the previous steps right.
Meanwhile, take 5 . filter p . drop 3 is in the opposite order, from right to left.
Maybe I'm just saying that left association is easier to think about than right association. Don't you feel that weird recursive bump in your head, the increasing mental stack, when you are dealing with function composition and right association?
I like imagining functions as various pipes, the real world ones. Compose is that short pipe that connects two others. I imagine that the data comes from the right and needs to be output on the left of the function. I then can read such line (take 5 . filter p . drop 3) twice. Once following the order of creation (ie. we're laying our pipes first) and then the second time in reverse, following the data and function applications. That did the trick for me when I first learned about function composition coupled with (auto)currying.
Haskell has strong typing and lazy evaluation, which makes it easy for functions to take only one argument at a time. Although a function could take a tuple parameter, it's usually rewritten to take each component of the tuple as a separate parameter, which makes the strong typing and built-in currying simple, higher structures like monads possible, and a syntax to suit this style. Lisp functions and macros OTOH must be variadic to enable the homoiconicity of the language. It's therefore much more difficult for parameters to be typed, or to curry them.
These two styles make Haskell and Lisp mutually incompatible, unless they use clunky addons like Template Haskell macros or Typed Clojure annotations. The pure form of each language, however, is based on two mutually exclusive foundations, i.e. strongly typed auto-curried parameters vs variadic untyped macro parameters. The poster child of each language, i.e. monads and macros, thus also don't mix well with each other.
> Lisp functions and macros OTOH must be variadic to enable the homoiconicity of the language.
That makes no sense at all. Homoiconicity is a syntactic concept, variadicity is a semantic concept. They have nothing to do with each other.
The feature that enables Haskell's programming style is laziness, not currying. Lisp can be trivially curried, but it cannot be trivially made lazy, so idioms like "take 5 . filter p . drop 3" -- i.e. (take 5 (filter p (drop 3 arg))) -- are much harder to compile efficiently.
> Homoiconicity is a syntactic concept, variadicity is a semantic concept. They have nothing to do with each other.
It's far more difficult to define macros when you don't have variadicity. A large number of syntactic constructs you'd want to define would even be impossible.
Many functions and macros in Clojure rely on reducing variadic parameters using recursion, e.g.
(defn max "Returns the greatest of the nums."
([x] x)
([x y] (. clojure.lang.Numbers (max x y)))
([x y & more] (reduce1 max (max x y) more)))
You'd be limited to 2 params if you didn't have variadicity. Some macros, like the switch construct (which in Clojure is called `condp`), would lose their elegant single-parens appearance if variadicity wasn't allowed.
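For reference, condp's variadic shape (standard Clojure; my example):

    ;; one flat, variadic form: test-expr/result pairs plus an
    ;; optional trailing default
    (condp = x
      1 "one"
      2 "two"
      "many")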
Again, you are confusing syntax and semantics. All you need to do to have the exact same surface syntax with single-argument functions is to stipulate that the arguments to a macro expander are passed in as a (single) list.
What is it about the features that you mentioned that makes monads possible? Lisp (or at least Scheme and Clojure, which I'm familiar with) make monads trivially possible -- just as they are in Haskell. They're not as useful because those languages have other mechanisms that make monads largely unnecessary, but they're no less easy to express.
Well you can't have implicitly resolved typeclasses without a static type system. You could pass around explicit dictionaries with all your values or some such, but most of the value of explicitly sequencing relatively minor effects is only there if you have an extremely low-overhead way of doing so, and a system that can verify the correctness of that sequencing at compile time.
It sounds like racket generics are at least partway on the road to a type system. And those placeholder values seem a bit greenspunny - I'm not sure how they'd interact with native racket features (e.g. macros). (I mean, you are right, but in a degenerate sense you could implement typeclasses in any dynamic language by having your program construct strings and writing a Haskell compiler in that language that executed those strings at runtime. So the more relevant question is whether monad techniques can be used effectively in an idiomatic program).
The big difference is that generic dispatch is resolved dynamically.
I haven't yet found myself reaching for monads in Racket (or writing much non-Redex Racket code) since tonyg wrote that post, so I don't know if there are any odd interactions to watch out for.
> Well you can't have implicitly resolved typeclasses without a static type system.
Yes, but that wasn't the feature mentioned in the GP comment. He mentioned lazy evaluation, with currying, static typing and monads as a consequence. But if monads are just a consequence of the type system, then I understand what he meant.
> but most of the value of explicitly sequencing relatively minor effects is only there if you have an extremely low-overhead way of doing so, and a system that can verify the correctness of that sequencing at compile time.
Well, extremely low-overhead of just about anything is certainly achievable with modern JITs[1].
> and a system that can verify the correctness of that sequencing at compile time.
Why is that necessary? You might as well presuppose the necessity of types :) or, alternatively, say that you need a type system to verify that your monads are truly monads (i.e. obey the monad laws) at compile time, something Haskell doesn't do, either.
> Well, extremely low-overhead of just about anything is certainly achievable with modern JITs[1].
Can't read twitter where I am, but to be clear I was primarily thinking of the code-readability overhead.
> Why is that necessary? You might as well presuppose the necessity of types :)
Well for me the value of using a monad to capture an effect is to be able to verify that I've sequenced that effect correctly at compile time. E.g. I use a monad to represent a database action that has to happen inside a transaction and the type system can enforce that I wrap it in a transaction. In a language without types it would be technically possible to do that, but entirely pointless, because if I get a mistake it's a runtime failure on that codepath either way. Maybe that was your point, but while monads are technically possible in dynamic languages, the technique that yields a lot of the value from using monads isn't.
> or, alternatively, say that you need a type system to verify that your monads are truly monads (i.e. obey the monad laws) at compile time, something Haskell doesn't do, either.
If I have a given defect rate in the sequencing of some effect, then isolating the fallible heart of that sequencing in a short piece of code is a big win (and I can e.g. flag that critical section for additional review). I do prefer to have a type system which verifies the monad laws at compile time, and I do think that the stronger the enforcement around the monad system, the more worthwhile it becomes to manage a given effect explicitly.
> Well for me the value of using a monad to capture an effect is to be able to verify that I've sequenced that effect correctly at compile time.
> I do prefer to have a type system which verifies the monad laws at compile time, and I do think that the stronger the enforcement around the monad system, the more worthwhile it becomes to manage a given effect explicitly.
That's an advantage of having a type system, not of having monads. There are other ways to type-check effects without monads (at least not explicit monads, and the monad laws don't need to be checked): http://blog.paralleluniverse.co/2015/08/07/scoped-continuati...
But effects do need to satisfy the laws if the "natural" way of writing them is to make sense, so I'd rather that were enforced (compare the confusing results one gets when using Set in a scala for/yield, because it doesn't obey the monad laws). Unless you're restricting the "interpreters" such that the implementation of a given action is necessarily monadic?
(Have you seen the "freer monad, more extensible effects" paper? I think they end up in a similar place, though from the opposite direction - using a single structure typed by a (type-level) list of possible effects to represent an effectful operation, and then you supply an interpretation for each effect to run it)
> But effects do need to satisfy the laws if the "natural" way of writing them is to make sense, so I'd rather that were enforced (compare the confusing results one gets when using Set in a scala for/yield, because it doesn't obey the monad laws). Unless you're restricting the "interpreters" such that the implementation of a given action is necessarily monadic?
I haven't thought too much about it, but I believe that if you structure effects as continuations, then they are monadic by construction (though I may well be wrong) because the continuations themselves are monadic. But I got interested in continuations because I don't like Haskell's purity, and I wondered if there is an imperative construct with the same expressive power as the PFP monad, and it turns out that not only that there is one, but that it composes much better than monads.
> Have you seen the "freer monad, more extensible effects" paper? I think they end up in a similar place, though from the opposite direction - using a single structure typed by a (type-level) list of possible effects to represent an effectful operation, and then you supply an interpretation for each effect to run it
Yes, and it isn't a coincidence. Oleg Kiselyov started out trying to bring the composability of continuations to Haskell (he built at least one continuation library for OCaml). As I wrote in that blog post, that was an open question by Phil Wadler: whether monads could be made as composable as continuations, and Kiselyov showed that they could.
> I haven't thought too much about it, but I believe that if you structure effects as continuations, then they are monadic by construction (though I may well be wrong) because the continuations themselves are monadic. But I got interested in continuations because I don't like Haskell's purity, and I wondered if there is an imperative construct with the same expressive power as the PFP monad, and it turns out that not only that there is one, but that it composes much better than monads.
I don't see it as imperative - if anything passing continuations around is more functional. They were originally a scheme thing, no?
How does the stack aspect work? It's a continuation style so you never return, so if you keep the stack frames around it'll overflow very quickly, right? The usual solution to that is tail call elimination, but then you don't have even the immediate stack trace. Are you doing some kind of dynamic "keep the last x stack frames"? (I can imagine that being useful for a trampolined monad style too).
I think that's the really interesting part. As you say the styles are equivalent (even clearer when one's using something like scalaz-stream), so it's a question of which is easier to read/understand. I do think the monadic style as implemented in scala has a big runtime-inspectability problem (one idea I keep toying with is a free monad style where you represent every step with a named object or class, purely so that debuggers handle them correctly). I find call/cc style incomprehensible at the code-reading level - I'd rather have the control flow captured in a value I can inspect in a debugger than as what are effectively gotos in the code text. Maybe if you're more used to macro-based programming then it makes more sense.
Sure, but scheme is an imperative-functional language (as is OCaml).
> It's a continuation style so you never return
No, I'm talking about delimited continuations. No need for TCO. The continuation contains only a portion of the stack (off the top).
> As you say the styles are equivalent
They're only equivalent in the sense that they have the same expressive power. It's been proven that continuations can be implemented with monads and that any monad can be expressed as a continuation, but continuations really do compose better (unless you use Kiselyov's effect handlers).
> I find call/cc style incomprehensible at the code-reading level
Right, but we're talking about delimited continuations, i.e shift/reset, not call/cc. You can simply treat them as threads. They're not at all like gotos (you can read my blog post). My claim is that they fit much better with imperative (including functional-imperative) languages than monads.
> No, I'm talking about delimited continuations. No need for TCO. The continuation contains only a portion of the stack (off the top).
I don't think that resolves it. It's quite normal to want to chain together a few thousand or million monadic operations even within a single method (e.g. making a number of network calls, writing lines if you're managing I/O monadically). If doing that results in a stack overflow then that severely limits the utility of this approach.
> Right, but we're talking about delimited continuations, i.e shift/reset, not call/cc. You can simply treat them as threads. They're not at all like gotos (you can read my blog post). My claim is that they fit much better with imperative (including functional-imperative) languages than monads.
I did read it; I'm sorry if I've got the terminology wrong. I found e.g. the "class Foo" example really hard to follow the control flow; in that sense it seemed goto-like to me. I find threads really hard to reason about, so "it's as easy as threads" doesn't really appeal.
I've not seen the term "functional-imperative" used that way, and usually the two are contrasted - can you clarify what class of languages you're talking about?
> If doing that results in a stack overflow then that severely limits the utility of this approach.
There is no stack overflow. A chaining of monads translate to a blocking of the continuation. The familiar imperative sequence of blocking operations is an instance of the continuation approach, and it isn't susceptible to stack overflows.
> I find threads really hard to reason about, so "it's as easy as threads" doesn't really appeal.
Let me make it clearer: it's not "threads", as in multiple threads, but "as easy as threads" as in the simple, familiar blocking thread. There is no concurrency involved. If you can understand:
a = read()
print("hello " + a)
you understand how to work with delimited continuations (that code is actually written as a continuation, which blocks on read()); there's nothing more to it. It is simply a formalization of what it means to block a thread, and a mathematical name for what we've always done.
When you block a kernel thread, the OS takes care of all the continuation stuff -- but it does exactly that (the OS does run your single-threaded program in a continuation). Just as the OS mechanics of continuations don't get in the way of your understanding of the above code, language-level implementation of continuations shouldn't either. It is completely transparent to you -- your thread blocks, some handler does something, and then your thread resumes.
> I've not seen the term "functional-imperative" used that way, and usually the two are contrasted - can you clarify what class of languages you're talking about?
Well, it's sort of my made-up term that I use because "functional" doesn't really have a definition (try to come up with a definition that encompasses all of: Lisp, ML, Haskell, and Erlang but none of Java, Ruby, Smalltalk, JS and you'll see why), so I use the term to denote languages which we call functional (whatever exactly that means) which aren't pure, so: the Lisps, MLs, Erlang.
> There is no stack overflow. A chaining of monads translates to a blocking of the continuation. The familiar imperative sequence of blocking operations is an instance of the continuation approach, and it isn't susceptible to stack overflows.
So wait, you are doing TCO? When my code "unblocks" it will look like it's continued directly from the previous line, when actually the previous line called into some other code which then called into a continuation for the "tail" of this method, right?
> When you block a kernel thread, the OS takes care of all the continuation stuff -- but it does exactly that (the OS does run your single-threaded program in a continuation). Just as the OS mechanics of continuations don't get in the way of your understanding of the above code, language-level implementation of continuations shouldn't either. It is completely transparent to you -- your thread blocks, some handler does something, and then your thread resumes.
If it's the kind of thing that can be implemented transparently then I don't think I'd want to be managing it at all (and automatic CPS-ization is a valid alternative to threading (indeed as you say threading can be viewed as just a crude form of it) that can have performance advantages, but if it looks the same at the code level then it's "just" a performance optimization). The point of the monadic style is for when I do want to manage the sequencing explicitly.
More concretely I think the big advantage is that a monadic effect can be captured as a value. So if I want to e.g. perform some effectful operation on a list, and I don't know about traverse, I can just do the operation in the normal way with map, and then as long as I use those values in a way that makes sense according to the normal rules of the language then I'll do something that makes sense. You draw an analogy with checked exceptions in Java - but most Java developers avoid using them, regard them as a failure, and I think a big part of that is you can't safely factor out the result of a call to a possibly-exception-throwing method as a value.
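E.g. (in Haskell spelling, with made-up file names):

    -- mapping an effectful call over a list just gives a list of effect-values;
    -- the ordinary rules of the language still apply to them
    fetches :: [IO String]
    fetches = map readFile ["a.txt", "b.txt"]   -- nothing has run yet

    runThem :: IO [String]
    runThem = sequence fetches                  -- same as: traverse readFile [...]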
> Well, it's sort of my made-up term that I use because "functional" doesn't really have a definition (try to come up with a definition that encompasses all of: Lisp, ML, Haskell, and Erlang but none of Java, Ruby, Smalltalk, JS and you'll see why), so I use the term to denote languages which we call functional (whatever exactly that means) which aren't pure, so: the Lisps, MLs, Erlang.
Do you just mean non-Haskell languages?
I don't think "pure" is binary in that sense: all languages have some things that are managed implicitly by the compiler/runtime/infrastructure and some things that the developer is expected to manage implicitly. Haskell explicitly sequences I/O but the infrastructure manages laziness, whereas in most MLs it's the other way around. Which is more pure is a matter of perspective.
> So wait, you are doing TCO? When my code "unblocks" it will look like it's continued directly from the previous line, when actually the previous line called into some other code which then called into a continuation for the "tail" of this method, right?
No TCO. Your code doesn't "unblock"; it unblocks (no quotes). The handler code doesn't "call into the continuation" (i.e., there is no stack-growing call), it just resumes the continuation. Like unparking a thread (or not "like"; unparking a thread is resuming a continuation). There is a relationship between continuations and TCO in that they both deal with stacks, but in this case it only serves to confuse.
> The point of the monadic style is for when I do want to manage the sequencing explicitly.
But how is `a = read(); print(a)` not explicit?
> More concretely I think the big advantage is that a monadic effect can be captured as a value.
Continuations are the same in that they are also a "value", except that it is a mutable value. A continuation has a state (its current program counter). The calls to read and print trigger an OS handler. When working with continuations, those are programmable handlers, so their function is pluggable. You can run the continuation calling read and print with a handler that actually does IO, or with one that does something else -- just like with monads.
> You draw an analogy with checked exceptions in Java
That's only to explain the typing. Java's checked exceptions are an effect system, and the list of thrown exceptions can be viewed as a list of effects a function may have. That's how you can ensure that a function calling read and print will only be called within a context that implements a handler for those effects.
> most Java developers avoid using them, regard them as a failure
Off-topic, but that is a myth (there was an unscientific poll about this in the last JavaOne conference). If there is a failure, it is not with the concept but with the standard Java libraries' overuse of checked exceptions (this may be addressed). Most Java developers like the concept (according to the poll).
> and I think a big part of that is you can't safely factor out the result of a call to a possibly-exception-throwing method as a value.
Don't think of checked exceptions in this context as actual exceptions, but as a list of effects. The same "problem" applies to monads, and it is actually a good thing -- as you said yourself -- to have the type system ensure that monadic methods are called within the appropriate monad only. This kind of effect system also ensures that a function with effects is only called when the needed handlers are in context. The difference between continuation-effects and monads is that a function making use of continuations can use multiple effects without requiring nasty monad-transformers (and that's exactly the problem Kiselyov wanted to solve with his handler system).
> Do you just mean non-Haskell languages?
Basically yes (but Idris, Agda and Coq are also pure).
> I don't think "pure" is binary in that sense: all languages have some things that are managed implicitly by the compiler/runtime/infrastructure and some things that the developer is expected to manage implicitly.
I certainly agree in theory: what constitutes an effect (except IO) is indeed up to the language to define, but those languages I listed allow all effects (including IO, which is an "absolute" effect) without declaring them explicitly, hence they are not pure.
BTW, there is another absolute effect (i.e. an effect which is externally observable, and thus must be considered an effect regardless of the language's preferences) -- time. Haskell (and Idris etc.) don't have a good mechanism of handling that effect, which is one of the reasons I am not fond of the Haskell-style pure-FP.
As a nice demonstration of that problem, there are two ways in Haskell to write a function that sleeps for a second: one that is effectful (monadic) and one that is pure. Also, many people believe that Haskell has only one function inhabiting the type forall a. a -> a (a notion that is at the core of Haskell's equationality myth), when in fact there are infinitely many, all different from one another.
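For instance (a rough sketch; the constant is arbitrary and what it costs is machine-dependent):

    import Control.Concurrent (threadDelay)

    -- the effectful way: the type records the effect
    sleep1 :: IO ()
    sleep1 = threadDelay 1000000            -- about a second

    -- a "pure" way: burn roughly a second of CPU; the type says nothing about it
    slowId :: a -> a
    slowId x = sum [1 .. (50000000 :: Int)] `seq` x

    -- slowId, slowId . slowId, ... are all distinct inhabitants of  a -> a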
It doesn't tell me which of the calls is effectful and which is pure. It's clear enough at the direct level, but if I'm calling methods that I don't know the implementation of then the =/<- distinction when using monadic style is very helpful.
> That's only to explain the typing. Java's checked exceptions are an effect system, and the list of thrown exceptions can be viewed as a list of effects a function may have. That's how you can ensure that a function calling read and print will only be called within a context that implements a handler for those effects.
So if I do "a = read()" what is the type of a? "(String effects IO)" or some such? If it's "string, but only allowed inside the scope of a method with IO as an effect" then it's the same problem as checked exceptions.
> Off-topic, but that is a myth (there was an unscientific poll about this in the last JavaOne conference).
Hmm. I wouldn't expect conference-goers to be a representative sample, and it doesn't match my experience.
> The difference between continuation-effects and monads is that a function making use of continuations can use multiple effects without requiring nasty monad-transformers (and that's exactly the problem Kiselyov wanted to solve with his handler system).
Since you're emphasizing it I'll ask: how do you express effects that don't commute? EitherT[Writer[Log, ?], Error, Value] behaves very differently from WriterT[Either[Error, ?], Log, Value].
> It doesn't tell me which of the calls is effectful and which is pure.
But it does because their type says so:
String read() effect IO
(where `effect` is just like Java's checked `throws`)
> what is the type of a? "(String effects IO)" or some such?
Right. It is as I said, a type on read, not on a. `a` is still `String`.
> only allowed inside the scope of a method with IO as an effect then it's the same problem as checked exceptions.
But that's the opposite of a problem. That's like using `IO a` outside of an IO monad. You want the function to only be called in the right context (an appropriate handler is in effect).
Sure. But I remember doing this with checked exceptions, mousing over each method call one by one to figure out which ones could or couldn't throw. It's much nicer if you can see which methods are the throwey ones immediately - =/<- is a real sweet spot in terms of having the difference be visible but not overly intrusive.
> But that's the opposite of a problem. That's like using `IO a` outside of an IO monad. You want the function to only be called in the right context (an appropriate handler is in effect).
Disagree. I want to be able to have a value of type IO a outside a method - e.g. in a static field. More generally it's really huge to be able to just treat these things as ordinary values that I can manipulate using the ordinary rules of the language.
> I'm not sure if the same effects would make sense, though. For example, Either is a monadic value which doesn't make sense in an imperative setting.
The two effects are "possibly skip the rest of the computation, returning an Error instead" and "record a programmatically-accessible Log" - I think those both make sense. The point is the reason you have to use the horrible monad transformers is that the order in which you evaluate these effects matters - if there's an early error, are later log statements recorded, or not? There are more complicated cases when e.g. catching I/O errors.
> It's much nicer if you can see which methods are the throwey ones immediately - =/<- is a real sweet spot in terms of having the difference be visible but not overly intrusive.
Well, I guess that's a matter of taste, but 1/ what information does "effectful" convey if you don't know what the effect is until you mouse-over? 2/ I think that's a very, very low price (if it is a price at all) to pay for something that fits much better with non-pure languages.
> I want to be able to have a value of type IO a outside a method - e.g. in a static field
But continuation effects don't have `IO a`, just `a`. The effect comes with the context. An `IO a` would translate to a lazy value, namely a function, which could be a continuation. There's nothing stopping you from storing a continuation in a field. You can manipulate the `a` just like any value, and the "IO" with handlers -- just as you do with monads.
> possibly skip the rest of the computation, returning an Error instead
Yes, but that's not all Either says. It says "return an error or a value of type a". In imperative code that makes less sense. What you want to say is: "throw an exception or don't (and return whatever value it is that the function returns)". The type of the normal return value is not recorded in the Either.
> The point is the reason you have to use the horrible monad transformers is that the order in which you evaluate these effects matters
You need monad transformers not because of ordering, but because monads don't generally compose. Or it's a bit reversed: because monads don't compose they are sensitive to accidental ordering (whether you care about the order or not), which makes monad transformers tricky. With continuations you have the choice of when to be sensitive to order and when not to.
For example, in the imperative case, you could encode the information "if there's an early error, are later log statements recorded, or not?" directly in the type (I think), but I don't see any reason to. In Haskell you must, because the types need to be nested in one particular way according to the transformer.
In any case, my claim is as follows: if you generally prefer the imperative way of doing effects (like in Java, OCaml, F#, Scheme, JS, Python, Clojure, Erlang etc.) -- as I do -- then continuations are far more natural than monads. If you prefer the PFP way, you should stick to monads (or if using Haskell, must stick to monads).
> An `IO a` would translate to a lazy value, namely a function, which could be a continuation. There's nothing stopping you from storing a continuation in a field. You can manipulate the `a` just like any value, and the "IO" with handlers -- just as you do with monads.
I think the Java world is finally accepting that methods are not a substitute for first-class values; one of the biggest remaining problems with Java lambdas is that it's awkward to use them with checked exceptions (e.g. do you have to create a named @FunctionalInterface for each input/output/exception combination you want to store? You can use generics but they don't let you abstract over arity). Handling effects this way would presumably have the same problems?
Put it this way: Either[MyError, A] is much more useful to me than a checked MyException, because I can use the former as a value whereas the latter I can only throw from methods. That's the difference I'm worried about here.
> You need monad transformers not because of ordering, but because monads don't generally compose. Or it's a bit reversed: because monads don't compose they are sensitive to accidental ordering (whether you care about the order or not), which makes monad transformers tricky. With continuations you have the choice of when to be sensitive to order and when not to.
What does the choice look like in code then? Under monads I'd write:
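Something like this, roughly (I'll sketch it in Haskell with mtl since the types are shortest there; the names are illustrative):

    import Control.Monad.Except
    import Control.Monad.Writer
    import Control.Monad.Trans.Class (lift)

    -- 1. Either on the outside: a failure short-circuits past the writer entirely
    first :: WriterT [String] (Either String) Int
    first = do
      x <- lift (Left "boom")       -- fails here,
      tell ["got " ++ show x]       -- so the writer is never called
      return x
    -- runWriterT first  ==>  Left "boom"   (no log at all)

    -- 2. Writer on the outside: the failure itself can be logged
    second :: ExceptT String (Writer [String]) Int
    second = throwError "boom" `catchError` \e -> do
      tell ["caught " ++ e]         -- here the error gets logged
      throwError e
    -- runWriter (runExceptT second)  ==>  (Left "boom", ["caught boom"])

Then which semantics do I get, and how do I get the other one?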
> Handling effects this way would presumably have the same problems?
First, I didn't suggest actually using checked exceptions as they are now to model "typesafe continuations" in Java. I'm perfectly fine with continuations not being fully type-checked (I like types, but I'm far from a type zealot).
But more to the point, such an effect system (or actually even Java's checked exceptions) won't pose such a problem at all. On the contrary. Just like you can't store a monadic value in a variable of a "plain" type, it makes sense to reject `int foo() throws IO` from a computation that doesn't support IO. It is a problem in Java with checked exceptions because `int foo() throws InterruptedException` would work where `int f()` is expected, and having it rejected (and requiring a wrapper) is indeed a nuisance. That's not the same issue if effects are involved.
> I can use the former as a value whereas the latter I can only throw from methods. That's the difference I'm worried about here.
I'm not sure I understand the distinction. You can always use something like Optional (Maybe) if you want a value. Alternatively, no one is stopping you from using the monadic Either even in an imperative context (although you may lose the stack trace). You can still use it this way if that's how you like doing things. I don't.
> then which semantics do I get, and how do I get the other one?
would log the exception, as in your second example.
You don't even need to think about it. It works just like plain imperative code. Continuations are the general (mathematical if you will) formulation of how imperative code already behaves.
> I'm not sure I understand the distinction. You can always use something like Optional (Maybe) if you want a value.
I mean that the "value or error" that I get back from a function that returns Either is itself a value, that I can use in the normal way I would use a value. I can pull it out into a static field. More importantly I can pass it back and forth through generic functions without them having to do anything special about it (e.g. as a database load callback), because it's just a value. I notably can't do this with exceptions (or e.g. if I'm emulating Writer with a global or ThreadLocal variable), though checked exceptions will at least fail this at compile time.
> would never call the writer (like your first example, I believe)
Hmm. If you're willing to fix the order in which effects happen (and explicitly handle when you want to change it, as in your example) then you can write something much simpler than the usual monad transformers, so make sure you're comparing like with like.
(1) Monadic programming gets really hard to understand if you don't have a strong type system correcting you when you screw up the difference between `x`, `m x`, and `m (m x)`. The tendency in weak type systems is to autopromote `x` to `m x` with `return`, but this can make it impossible to see if you need to deconstruct `m (m x)` to `m x` with `join` in any given application, unless you just auto-flatten everything, which sometimes (especially in the list monad!) is not what you want.
(2) Monadic programming gets hard to understand if you can't cleanly split out functions into having just one argument and one output.
Translation for people who don't know what a "monad" is: occasionally there are type-adjectives (I'll use the example "blue") where there is a canonical way to make any object into a blue object (called `return`), and a canonical way to make a blue blue object into a blue object (called `join`), and a canonical way to take some function from objects to objects and turn it into a function from blue objects to blue objects (called `fmap`).
One example is the adjective "either an X or a ____", which is written in Haskell as `Either x`. The canonical way to take a Y and `return` it into "either an X or a Y" is to say "it's a Y -- the type on the right." In Haskell we'd say that `data Either x y = Left x | Right y` and that this `return` function is `return y = Right y`.
Similarly the canonical way to map a function `y -> z` over such an `Either x y` to get an `Either x z` is "if the value is an X don't do anything, otherwise do the function to the value and collect the end result." This is written `fmap zy exy = case exy of Left x -> Left x; Right y -> Right (zy y)`. Finally, the canonical way to take an "either an X or an (either an X or a Y)" into "either an X or a Y" is just: if you've got an X, it's an X; if you've got a Y, it's a Y:
join ex_exy = case ex_exy of
  Left x    -> Left x
  Right exy -> case exy of
    Left x  -> Left x
    Right y -> Right y
So it's adjectives which are (a) universal [we can apply them to anything], (b) collapsible [we can take an `m (m x)` and turn it into an `m x`] and (c) mappable [we can take an `x -> y` and apply it to an `m x` to make an `m y`].
Haskell became obsessed with them because this structure describes computer programs really well. Suppose you build in a data type for "a program which produces a...". Then the basic "program composition" operator looks like this:
First, take two inputs: (1) a program which produces an X, (2) a function which takes X and produces a program which produces a Y. Then, yield a program which will (when run) produce a Y by first running program (1), then using the X generated as an input to the function (2) to compute a program which produces a Y (3), then runs (3) to produce the y.
The type signature here is `(IO x, x -> IO y) -> IO y`. This curious function is called `bind` and it can be more easily thought of as a combination of two separate steps: `fmap` the `x -> IO y` on the `IO x` to produce an `IO (IO y)`, then `join` the programs to produce an `IO y`. "A program which produces a ..." is thus a special case of a monad, an adjective which supports these three operations (`return` is "produce a program which does nothing and yields this Haskell value").
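Concretely, the two-step description above is just (a minimal sketch using the standard names):

    import Control.Monad (join)

    -- fmap the (x -> IO y) over the IO x, then join the nested programs
    bind :: IO x -> (x -> IO y) -> IO y
    bind px f = join (fmap f px)

    -- e.g.  bind getLine putStrLn  reads a line, then prints it back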
And you care about all of this because being all indirect-like about computer programs allows you to metaprogram up an impure program in a pure language, which is how Haskell does I/O.
So the translation of the above points is that (1) this whole process gets confusing if you don't know the difference between a wagon, a blue wagon, and a blue blue wagon at compile time, and the tendency in a Lisp is to say "I saw a wagon, I wanted a blue something, so let's just quietly use the `(paint-blue wagon)` subroutine to keep going," and "I saw a blue blue wagon, I wanted a blue something, so I am happy." You can force things to always use some `(coalesce-blue-paint ___)` function to always flatten blue-blue Xs into blue Xs, but this is usually counterintuitive. (2) You can't even clearly articulate what `fmap` does unless you can cleanly say "I have a function from wagons to wheels, and I want the corresponding function from blue wagons to blue wheels." If your functions are taking a bunch of extra parameters, like a wheel-index of "front-left, front-right, back-left, back-right" then it gets hard to say "here's the wagon input that I want to become a blue wagon input, please leave my wheel indexes alone!"
But when that language has variadic parameters, you must explicitly invoke the currying to distinguish it from a normal function call that uses up all the variadic parameters (just as you can't specify different types for the variadic parameters). In a language where each function has only one parameter, the currying can be implicit (just as all parameters defined in the program can be statically typed).
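For contrast, in a one-argument-per-function language the partial application is implicit (a tiny Haskell sketch):

    add :: Int -> Int -> Int      -- really Int -> (Int -> Int)
    add x y = x + y

    increment :: Int -> Int
    increment = add 1             -- no explicit curry/apply step needed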
I should have written "You can, but..." in front of my comment above because you weren't disagreeing with me. In my original comment I italicized "must be variadic" which was my original emphasis.
But if you eliminate variadicity and macros from a dynamically-typed, strictly evaluated language, you might as well add optional static typing so you can extend that implicit currying into monads, etc.
Having worked on one of the largest Common Lisp projects I have to say this is spot on. And the monolithism is not just visible at the level of functions; it's visible at a higher level too. Common Lisp nudges you to write monolithic applications, and it's crucial to keep many details in one head (in the case of a big project, many heads).
Also, I think Clojure is a Scheme, so the overall approach is different.
Also, yeah, Common Lisp probably would push for monolithism. Lisp Machine Lisp, upon which Common Lisp was based, was very much built on The Right Thing, the monolithic tradition of MIT, ITS, and all of the Lisp machines.
That wouldn't be the same. The point was filtering a list (which could contain anything, not just a sequence from 1 to 4).
The example seems a bit strange to me. Why does it have the check for i being less than 5, when the problem is "get all elements _greater_ than 5, then just the even ones of that set"? Was it supposed to be "remove all elements greater than 5..."? (edit: although the code would then only work for sorted lists; I think it'd be better to just loop over the whole list and have "(and (< i 5) (evenp i))" as the condition for collecting)
You have to be clear about which Lisp. Clojure and Scheme are Lisps, and their focus is very much towards simplicity and the use of combinators.
The complex, do-it-all-in-one-huge macro is a Common Lispism, not a Lispism.
Secondly, Clojure, like Haskell, is focused on sequence abstractions, not lists. Sequences can include collections, streams, observables, sockets and many other kinds of process that can be modelled as an event stream.
Basically new languages with strong Lisp influence.
> very much towards simplicity
Scheme was, maybe until the early 80s. Later it grew to Common Lisp's size and beyond. See Scheme R6RS, which is a complex language with tons of features and is still underspecified.
> The complex, do-it-all-in-one-huge macro is a Common Lispism, not a Lispism.
Not really. Macros appeared in Lisp in the early 60s and many Lisp dialects have used them in complex ways. For example the 'famous' LOOP macro of Common Lisp actually was developed in Interlisp in the early 70s (as a part of 'Conversational Lisp' / CLISP), redesigned in Maclisp/Zetalisp and then brought into Common Lisp. The original Common Lisp in 1984 did not even have that macro in the language description; it was standardized several years later, after a search for a better alternative failed.
R6RS was basically DOA, with only 3 or 4 implementers (Guile, Ikarus, Ypsilon, and (partially) Racket) actually going along with it. Most of the other implementers, most notably CHICKEN's Felix Winkelmann, refused. R7RS is now split into a small core and a large standard library for practical (as opposed to teaching) use, whose development STILL isn't done.
> This is the home page for R7RS, the Revised⁷ Report on the Algorithmic Language Scheme. This version of Scheme has been divided into a small language, suitable for educators, researchers, and users of embedded languages; and a large language focused on the practical needs of mainstream software development.
> The report on the small language was finalized on July 6, 2013. It is available in PDF format and as LaTeX source code. There are errata.
> The development of the large language is still in progress. For details on the development process of both languages, see the Scheme Reports home page and the Working Groups home page.
Calling Scheme a "new language with a strong Lisp influence" is a bit rich. It's certainly not new, and in many respects it's more representative of Lisps-of-Old than Common Lisp.
Lisp started in 1958. Popular Lisps in the 60s/70s were Lisp 1.5, Lisp 1.6, Interlisp, Maclisp and Standard Lisp.
Scheme grew out of research from 75-80 (the Lambda Papers), investigating Actors and bringing ALGOL and Lisp together.
Thus Scheme is the newer language.
>it's more representative of Lisps-of-Old than Common Lisp.
Common Lisp contains a direct core of the original McCarthy Lisp. One can develop in Common Lisp in the typical styles of the 60s and 70s: with PROGs and GOTOs, symbols and property lists, dynamic binding, procedural macros, ...
Scheme was different: it introduced a more functional style, lexical scope, built-in closures, a move away from symbols and dynamic binding, continuations, hygienic macros, TCO (recursion instead of direct iteration constructs), and so on. For some years people tried to bridge the gap (for example by providing a Scheme in Common Lisp, by reusing the concepts, or by writing to a compatibility layer), but nowadays there is very little sharing.
Common Lisp isn't technically focused on lists as you imply. There are plenty of libraries which provide "sequence abstractions" if that's what you need. Common Lisp provides structs, arrays, and even a hash table implementation. The methods of abstraction are transparent in Lisp.
The main difference is: it is trivial to build an efficient Haskell implementation on top of Common Lisp, keeping interoperability with the rest of the system. And it is impossible to do it the other way around, to build a Lisp on top of Haskell.
Shen is like C++ in that it's bolted on top of a less typeful language (except this time it's Common Lisp, rather than C), and is "type safe" as long as you don't deliberately use a number of escape hatches.
Shen is a PLATFORM INDEPENDENT (not clisp based) language with an emphasis upon functionalism, a novel and very powerful type system based on sequent calculus, and OPTIONAL type checking, IF you want it. It is platform independent because it is built upon an incredibly simple lisp that you can build an interpreter for on top of almost any platform, so long as you can guarantee TCO.
Wrong. Sometimes type checking is unneeded. In addition it can make prototyping easy: write the code, add the annotations later. Of course, you need some discipline to make it work, but of what good practice isn't that true?
And if you really hate it, just type (tc +) at the start of your code. Magic!
The version with small functions, similar to the Haskell version, uses four different functions, each doing one task. For different approaches see Common Lisp libraries like Series or Iterate; Iterate is LOOP on steroids.
http://series.sourceforge.net
https://common-lisp.net/project/iterate/