Shen/Qi was the first Fluchtpunktish lisp I ever spent time with, and I had not used any type system outside of Typed Racket and some other hand-made implementations. I remember being amazed that the Shen type system isn't more widely used across more domains. Once you start using it, you feel like the type problem is 'solved' in Shen. The author is right on about this language offering something special--Shen is one of those addictive hammers that you will want to hit every nail with. More importantly, it's a type system that can be used in conjunction with existing systems or even implemented in a different language.
A good test of how powerful a new means of abstraction actually is: try using it outside of its original implementation.
I agree. In my opinion the whole notion of a baked-in "type system" is silly. Type systems themselves aren't silly, but including one in the language proper seems to be stopping at the wrong abstraction.
Let's say we see:
int add_two(int a, int b)
We're so used to seeing those "ints" as types, but why? There's a better abstraction hiding here.
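Roughly something like this (a runnable toy in Common Lisp; the names are made up and this is not a proposal for real syntax):

    ;; Toy only: the "int"s in the signature become ordinary code that runs
    ;; while the call is being compiled (here via macro expansion), rather
    ;; than magic annotations.
    (defmacro add-two (a b)
      (flet ((check (x)
               (when (and (constantp x) (not (integerp x)))
                 (error "ADD-TWO expects integers, got ~S" x))))
        (check a)
        (check b))
      `(+ ,a ,b))

    (add-two 1 2)        ; => 3
    ;; (add-two 1 "no")  ; errors at macro-expansion/compile time, not at run time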
I'm not arguing for that horrific syntax; I'm saying that type systems are just another part of your program, running at a separate time. There's a LOT more you can do with that concept than just enforcing and deducing the tags and acceptable conversions that go with values. Contracts, powerful dispatch mechanisms, ad-hoc optimization strategies, complex lambda cube stuff like dependent types... all of these things can be served with the right abstraction.
And there is no reason that your editors, documentation browsers, code completion engines, syntax highlighters, debuggers, and every other part of your environment can't use it!
That was, of course, just an example. I'm also assuming that any "real" type system implemented this way would do this with something like typeclasses or higher kinds or something else.
The larger point is that baked-in type systems see the word "int" as a type, but what that's hiding is the higher abstraction: what we're really doing is specifying a function which should be run when determining the validity of the program.
But that was exactly my point: what IS this "higher abstraction" to which you allude? What are you actually advocating here? Typeclasses? Sequent calculus? Something else? Details matter.
I'm advocating something that typeclasses, sequent calculus, contracts, etc. could be built on. Sure, the language may have some standard libraries that would do much the same things that these do in other languages, but those libraries could be improved, removed, or replaced by the programmer.
What exactly is the point of a type system? I think there are really two big ones: program verification and metadata tagging. To simplify, the former is why Haskellers love types, and the latter seems to be the main reason that F#'s type providers exist (and why C# programmers love creating lots of little types rather than using typeless object literals a la Ruby).
Intellisense is an incredibly seductive thing, and many programmers in the enterprise world would argue that any language that can't show you that little "box" after you press Ctrl+Space is dead on arrival.
But both of those things are a subset of the things you COULD do if the phases of compilation were in your control. Maybe you want a function that says "throw an error at compile time if someone uses this deprecated parameter on this API call", or maybe you'd like to have a function where if you call it with a hard-coded URL, the program fails at compile time if the URL can't be reached.
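To make the first one concrete, here's a hypothetical sketch in Common Lisp terms (API-CALL and PERFORM-REQUEST are names I just made up); the point is only that the check runs while the file is being compiled, not when it runs:

    ;; A deprecation check expressed as ordinary code that runs at compile time,
    ;; because macro expansion happens at compile time.
    (defmacro api-call (endpoint &rest args)
      (when (member :legacy-mode args)
        (error "API-CALL: the :LEGACY-MODE parameter is deprecated."))
      `(perform-request ,endpoint ,@args))   ; PERFORM-REQUEST is a placeholder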
These are obviously a bit silly, but the point is that, to many of us, WHENEVER a compiler is doing something with our AST, that should be fully exposed and not a black box. Another example: why is Idris[1] its own language? Why isn't it just a Haskell library? Granted that also gets into programmable syntax and editor support thereof, but that's part of my larger point as well.
One reason Idris is its own language is that you can't just layer dependent types atop an existing language: that's the whole point, the types interact with the values and the values with the types. This blurs the boundary between compiler phases.
Sometimes you get type erasure, but for really interesting programs to arise (the kind that I'm sure Idris is aiming at, though I don't know for certain) you really want to operate in that blurry divide.
Anyone with experience with both Shen/Qi and an HM type system (OCaml, Scala, Haskell family) able to comment? I'd love to learn a bit more about what Shen/Qi do.
Scala's most significant fault is also its greatest virtue: there's no way to integrate Java's type system into a proper HM type system. As Scala's (mostly) seamless, bidirectional Java integration is its prime selling point (otherwise, why not just use Haskell?), this complaint mostly misses the point.
Hi fogus, I really enjoyed the article. I just want to point out a couple of typos: a) s/tern/term/ in footnote 9; b) in German nouns are always capitalized, so it should be "der Fluchtpunkt".
Can anyone here defend Lisp-2? Any time I read about it I seem to instinctively recoil in disgust. To me, the hygiene argument doesn't make up for the extra characters and loss of generality.
What I took away from it was that macros get trickier to write when function bindings can be captured as well as value bindings, and we humans seem to be a little more blind to function capture than to value capture.
Item 18 contains the compatibility concerns for existing code, which the guys writing the standard seemed to think critical to keep the various vendors on board.
Lisp-2 is much saner. What it adds is just funcall, and that's not such a big deal in practice, because most of the time in higher-order situations you'd be using apply, reduce, or map anyway, just as you would in a Lisp-1. What it buys you is that you have to worry about name clashes much, much less. The classic example is list vs. lst.
What I personally like is that you always know what you're referring to in a higher-order function call: a variable name or a function name. It removes the need for the extra intellectual effort of remembering which is which (in other words, #' is a nice annotation).
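Concretely, in plain Common Lisp:

    ;; The function and variable namespaces are separate, so a parameter named
    ;; LIST doesn't shadow the standard function LIST:
    (defun pairs (list)
      (mapcar (lambda (x) (list x x)) list))

    ;; Calling a function held in a variable is explicit, via FUNCALL and #':
    (defun apply-twice (f x)
      (funcall f (funcall f x)))

    (apply-twice #'1+ 3)   ; => 5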
Interestingly, the Ur-Lisp (Lisp 1.5) actually had both. There were two ways to define a function. There was one form, LABEL (the ancestor of CL's LABELS), which bound a function like a variable, and another form, DEFINE, that put the function on the symbol's property list. APPLY would check the property list first, then look for a variable binding.
From the Lisp 1.5 manual (1962):
"In actual practice, LABEL is seldom used. It is usually more convenient to attach the name to the definition in a uniform manner... The fact that most functions are constants defined by the programmer, and not variables that are modified by the program, is not due to any weakness of the system. On the contrary, it indicates a richness of the system which we do not know how to exploit very well."
I find the reasoning specious. In particular, files and programs are the same in MS-DOS and Unix. You can, for instance, ls the ls command or cat the cat command. He's also completely ignoring the superiority of the file system's namespacing (by directory) to just having two global buckets like in elisp.
The execute bit comparison is especially specious. If I try to execute a non-executable file, it fails. It doesn't go look for an executable file with the same name. Same as if I try to invoke 0 in Clojure. e.g. (0)
He's right that the symbol resolution of the first entry has different rules but ignores the fact that the shell isn't very good at higher order commands. Which is probably okay for a shell but not much use for a modern programming language.
Finally, he's right that multiple namespaces are a good idea, but that's Accepted Truth these days. It's just that we think namespacing should be under the control of the dev, not dependent on the type of the variable.
you can simply cat cat or ls ls by cd-ing to /bin first... with the caveat that some distros have moved them to /usr/bin or somewhere, hence the 'which'
Exactly. File systems resemble modern namespaces, not namespacing by type. You can put an executable and a data file in the same directory, or not. Your choice.
I have much the same reaction. My theory is that it comes from an extinct Lisp tribe. Those who think like that these days use different languages (Java?). We can only really hunt for someone who dates from that time and remembers what they were thinking. Maybe Richard Stallman? :)
I've used both Lisp-1 and Lisp-2, and really, this is a tempest in a teapot. It makes zero difference to how I think about code and very little difference to how I write it.
The paper that coined the term http://www.nhplace.com/kent/Papers/Technical-Issues.html lists arguments on both sides. IIRC the main argument in favour of Lisp-2 is that it makes CL-style macros easier to write by making it easier to avoid some kinds of variable capture.
Perhaps that was worded wrong, but what I meant was the idea that you have names that map to things, and that you have an evaluation order for those things. So that if you see
(foo a b c) and it isn't quoted, we know that foo will be resolved and evaluated, and so will a, b, and c, and then the object to which foo resolved will be applied to the objects to which a, b, and c resolved.
So, if I then type (a foo b c), we get the same behavior with different object resolutions.
Hmm...one symbol that points to both values and methods...maybe that could be the foundation upon which some types of useful abstractions could be built...nah, it would never be successful.
This really is a fantastic newsletter/post. Was excited to see it in my mailbox, and am enjoying it (taking some detours to look at the referenced articles).
Here is my brief review of the articles as a whole. As a little background, I've been writing Lisp professionally for a while at many companies, both start ups and top-tier tech companies.
I like writing about Lisp[1], and have gone so far as to write blog posts ranging from why parentheses are a good thing to what kinds of data types Common Lisp has. I think there is still a place for writing about Lisp, but I don't think this e-zine accomplished the task of making it interesting.
Lisp unfortunately attracts a lot of fluff writing. Writing about how wonderful, deep, rich, or sophisticated it is. Writing about how it completely blows one's mind and takes you to this baroque, fluid land of programming literacy. I must admit that I was even sucked into the vortex at one point in my Lisp career.
Then there are particular topics that get rehashed over and over again: what makes a Lisp? which is the best Lisp? which is the most practical? lisp-1 vs. lisp-2? was r6rs a failure? why are macros so great? why does Lisp subsume every programming language ever?
I found in this e-zine—whose title I can't repeat without bloating the paragraphs I write—that I didn't really gain anything after reading it all. What did I learn about T? I just learned that it was some language with interesting concepts, backed up with a paltry offering of examples. What did I learn about Apple? To me, it seemed like an opinion: Since Apple isn't doing wacky Lisp things, it no longer is catering to those with ideas. (This is something I disagree with.)
Regarding topics, I also found this e-zine to be somewhat haphazard in its arrangement. Maybe it's just the visual layout, but the topics didn't really flow, and it read more like a long blog post with randomly selected topics interjected with philosophical ideas about categorization.
I have a hard time really seeing, after reading, who this e-zine is for. It doesn't seem like it'd interest the general programming enthusiast, because it's just about a class of languages that few people actually use. It constantly shows code and uses jargon that the general programmer probably isn't accustomed to. It barely seems to appeal to veteran Lisp programmers because there's little depth and lots of the aforementioned fluff. The only group I can think it appeals to are people who have casually brushed up against Lisp at some previous time and wish to be re-injected with a shot of the apparent beauty and mysticism they once saw but never had the time or patience to fully reach.
My suggestion for future articles: instead of trying to talk about all Lisps all the time, and about beaten-to-death topics that pop up on Usenet every day, talk about something more technical and be comprehensive. What is something really interesting about Common Lisp? Though not exactly in the format I'd have for a magazine article, I wrote about "linear random access sequences" in Lisp here:
What about an overview of how the object system in T works? How about examples of how Lisp has solved problems in the industry that people don't know about (i.e., people already know about ITA, the space probe, the Jak & Daxter game, etc. because they're the examples that continue to propagate)?
How about where Lisp falls short? Lisp is a language that was ahead of its time, but there are language features now that seem beyond Lisp's grasp. Where has it failed?
All in all, I was somewhat disappointed, especially given the well-qualified and articulate author. I think there's potential, but I don't think Read-Eval-Print-Love has successfully converged to what it should or could be.
I do wish Code Quarterly[2] was successful. Fogus even seemed to contribute to it with an interview with Rich Hickey. But, as outlined here[3], there wasn't enough motivation from the writers to make it successful.
The interesting thing about putting things out into the world is that people feel compelled to have an opinion. The fact of the matter is that this only exists because it's something that I feel motivated to write about. I am under no contractual obligation to produce anything, so the fact that it exists at all is the result of a labor of love. You're probably right that there is nothing ground-breaking here and that the same information is available elsewhere, but where? Two dozen blog posts here, a few dozen Usenet posts there, a few books over here and a few papers over there. The value that I'm attempting to bring is through curation -- value to myself and, as a side effect, to others. The information is free, first and foremost, and is created in my free time.
As an author I have very little control over my readers' expectations or desires. The topics that you list are definitely interesting, so I hope to see you one day write about them on your blog (which I like BTW). I think that if this newsletter does not meet your expectations then maybe you will not like future installments. I appreciate your constructive criticism, but if I am to write a newsletter then I should stand true to the topics and styles that I would like to read and hope there are others who feel the same.
I, on the other hand, really liked it. I suppose I viewed it like an old school hardcore/punk fanzine in which the purpose is not to blow minds or push boundaries necessarily but to express and share an affinity for something you are passionate about. I also got the feeling that this is just the beginning of a long running process which required initialization.
Lisp's literary canon, like R&B, cop shows, and Buddhist monk anecdotes, has its traditional forms. The love song is foremost among them - and the newsletter's title is more than fair warning.
In the end, any piece may be dead to a reader, but writing without passion will almost certainly be stillborn.
Type systems: This is the biggest issue in my opinion. Most Lisps don't really have any formal notion of a type system. Common Lisp kind of does; it's pretty baroque, but if you look deep enough, you'll see it's way behind the systems offered by ML derivatives, Scala, or Haskell. Such a thing would be incredibly hard to bolt on. Shen sort of offers a richer system in very weird syntax, but the compiler just throws that info away and doesn't make it useful. Typed Racket is another approach.
Polymorphism: In Common Lisp, I can't really make efficient, generic data structures. In Haskell, I can, by making the data structure polymorphic. Haskell will know the types at compile time and can optimize accordingly. In CL, I must do ugly things like provide equality predicates to functions, as opposed to having them associated to the data structure itself. François-René Rideau has been trying to patch this up with something called the "Lisp Interface Library".
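For example, with the standard sequence functions the predicate travels with every call site instead of with the structure:

    ;; The container doesn't know how to compare its elements, so you say it
    ;; again at every call:
    (member "foo" '("foo" "bar") :test #'string=)      ; => ("foo" "bar")
    (find 2 '((1 . a) (2 . b)) :key #'car :test #'=)   ; => (2 . B)
    ;; Hash tables are the partial exception: the test is fixed at creation
    ;; time, but only from a small built-in set (EQ, EQL, EQUAL, EQUALP).
    (make-hash-table :test #'equal)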
Functional optimizations: In any Lisp, you typically need a special library for doing optimization of functional code. Deforestation and so on can only be done with special packages like reducers in Clojure or SERIES in Common Lisp. Again, they aren't broad enough to cover the language as a whole.
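A hand-made illustration of the kind of fusion those libraries automate:

    ;; The composed version conses up an intermediate list...
    (reduce #'+ (mapcar #'1+ '(1 2 3 4)))   ; => 14, via a temporary list
    ;; ...and the "deforested" version has to be fused by hand:
    (loop for x in '(1 2 3 4) sum (1+ x))   ; => 14, no intermediate list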
Immutable/persistent data structures: Clojure has this pretty covered. It is possible to implement these data structures in other Lisps, like Common Lisp, but they're not bound to be very efficient.
OS integration: Not much of a comment. For Common Lisp at least, the language was designed without POSIX or Windows in mind. So it has really weird pathname conventions, poor ways of targeting the user environment, a weird idea about files, etc.
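A small taste of the pathname machinery (result shown for a typical Unix implementation):

    ;; Pathnames were designed to abstract over many pre-Unix filesystems, so
    ;; even a plain Unix path goes through this:
    (make-pathname :directory '(:absolute "usr" "local" "src")
                   :name "foo" :type "lisp")
    ;; => #P"/usr/local/src/foo.lisp"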
Code organization and packaging at the language level: This is an issue with CL and Scheme. Lisp doesn't really have the concept of an explicit API, or modules of code. There's no concept of a compiled shared library. Code is almost always distributed and integrated by source.
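For reference, the closest CL gets is symbol-level exports (FROBNICATE below is a placeholder name), which say nothing about signatures, versions, or compiled artifacts:

    (defpackage #:my-utils
      (:use #:cl)
      (:export #:frobnicate))   ; the "API" is just a set of symbol names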
...
The list goes on. You can implement lazy data structures in Lisp, but it's hard to really integrate them into the language. Lazy data structures provide tons of benefits, especially by moving the boundaries for abstraction, but there seems to be little hope of making this a part of Lisp.
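The usual bolt-on looks something like this minimal DELAY/FORCE sketch; it works, but nothing else in the language or its libraries knows these thunks exist:

    ;; DELAY wraps a form in a memoizing thunk; FORCE evaluates it at most once.
    (defmacro delay (form)
      `(let ((done nil) (value nil))
         (lambda ()
           (unless done
             (setf value ,form
                   done t))
           value)))

    (defun force (thunk)
      (funcall thunk))

    (defparameter *p* (delay (progn (format t "computing~%") 42)))
    (force *p*)   ; prints "computing", => 42
    (force *p*)   ; => 42, not recomputed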
A big problem is that even if some of the above concepts are implemented in various languages (and as I stated, some of them have been), they're usually implemented as part of a toy language (even if it's not intended to be a toy), and are never really integrated well with what exists. Because of this, I don't think it's fair to say Lisp has all of these features, even if there exist dialects of Lisp that implement some of them.
> In CL, I must do ugly things like provide equality predicates to functions, as opposed to having them associated to the data structure itself.
> Immutable/persistent data structures: Clojure has this pretty covered. It is possible to implement these data structures in other Lisps, like Common Lisp, but they're not bound to be very efficient.
You really should take a look at FSet[0]. Not only does it provide fairly efficient persistent implementations of functional datatypes -- not as fast as Clojure's, perhaps, but not bad -- but it allows you to associate your equality predicates with your datatypes rather than passing them to every operation explicitly. (In fact, FSet currently provides no way to pass the predicate explicitly. Some people, including Faré, think this is an unfortunate omission, but I've never needed to do that.)
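For instance, you hang the ordering off the type itself by specializing FSet's generic COMPARE function (a rough sketch; see the FSet documentation for the exact contract):

    (defstruct point x y)

    ;; COMPARE returns :LESS, :GREATER, :EQUAL, or :UNEQUAL; FSet's sets and
    ;; maps then order POINTs with it automatically.
    (defmethod fset:compare ((a point) (b point))
      (let ((c (fset:compare (point-x a) (point-x b))))
        (if (eq c :equal)
            (fset:compare (point-y a) (point-y b))
            c)))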
I know about FSet and have used it in production. It indeed is a library that has an immutable data structure.
FSet's solution to the equality "issue" is to have a generic function COMPARE, which returns :LESS, :GREATER, and whatnot.
FSet's documentation goes to some length describing the issues with embedding immutable data structures in an inherently mutable language. Things like RENAME-PACKAGE screw up invariants that FSet expects.
Lastly, FSet admits that it isn't exactly performant, just good enough, which was the point I was trying to make.
Aside from a few gripes I have, it's a good library to use, especially in multithreaded environments where you don't want to deal with locking and whatnot.
Glad to hear you like FSet! (Not sure if you noticed: I am the author.)
I protest, though, that there's nothing about Common Lisp that makes FSet slower than Clojure. Rich Hickey used a very clever data structure called a Hash Array-Mapped Trie to implement Clojure sets and maps, and (I think) an Okasaki-style sequence implementation. FSet, in contrast, uses balanced binary trees for all these types. That's the source of the performance difference, not anything about CL.
> Shen sort of offers a richer system in very weird syntax, but the compiler just throws that info away and doesn't make it useful.
While I share your belief in the importance of type systems, I don't understand this remark.
I haven't used Shen (or Qi), but it's at the top of my list of interesting languages to try. As I understand, the optional type system is a core Shen feature. The syntax may be unlike other programming languages, but is taken directly from type theory. Can you go into more detail, perhaps contrasting with Haskell?
Thanks. I don't partake in any newsletters and don't really write about languages in general, though I am interested in them. As I said in my parent post, there are ramblings on programming/Lisp/etc on [1]. There are topics related to the above that I plan on writing about, including the somewhat messy state of type/data-driven programming in Lisp, and so on.
Agreed with the polymorphism efficiency concern. I was doing some farting around with heaps-the-data-structure earlier this year, and changing my DEFUN to/from DEFMETHOD made a decent difference.
`(defmethod insert ((heap heap)) ... )` is distinctly slower than `(defun insert-heap (heap) ... )` (even after as much tuning of SBCL as I could manage).
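For anyone curious, a toy stand-in for that comparison (not the original heap code):

    (defstruct heap (items '()))

    (defun insert-heap (heap item)        ; plain function: direct call
      (push item (heap-items heap))
      heap)

    (defmethod insert ((heap heap) item)  ; generic function: goes through CLOS dispatch
      (push item (heap-items heap))
      heap)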
One nit: in T, (object ...) doesn't create an anonymous class; it creates an object implementing the given operations, just as lambda creates a procedure. I agree that it's a lovely feature worth highlighting.
Many languages claim to be "lisps" for various reasons, and there's always someone who disagrees.
Also, what exactly do you mean by "homoiconic"? People love this word because it sounds fancy and marks them as someone who geeks out on programming, but the fact is that this word is, if not ambiguous, at least prone to abuse. Let me give a specific example.
The new language Julia has a macro system and Julia's designers claim that the language is homoiconic. Is it? It has a way to quote expressions such that
a + b
is just the result of a + b whereas
:(a + b)
returns the expression tree for a + b. This can then be modified, eval'd, and otherwise sliced and diced. But what does that expression tree look like? When you print it in Julia, you get something like:
Question: does that look like a normal snippet of Julia code you could paste into your text editor and eval? No. The "real" syntax for that code is something like:
{:call, {:+, :a, :b}, Any}
(Julia-ish pseudocode, not precise)
In any event, while you CAN write a Julia program using the literal syntax for the data structures that represent Julia programs, people usually don't do that.
So what does homoiconic mean? Does it mean that you can get to the data structure representing your code and then manipulate it at will? Or does it mean that PLUS the fact that programs in your language always take exactly the form of the data structure that represents that code?
If the former, then a lot of languages (including C# with its expression trees) can claim to be partially or fully homoiconic. If the latter, then the list is much smaller.
So would a homoiconic Haskell be a lisp? I think what you'd end up with after taking things far enough wouldn't be Haskell at all. It'd be a lisp with a REALLY good type system.
After all, things like monads and morphisms and applicative functors aren't Haskell, they're mathematics.
Yeah, the word is admittedly a bit problematic. If one wanted to be overly pedantic, any language with strings might be called homoiconic. The better question is how commonly used the data structures for storing the expressions are. What makes Lisp so powerful is that these data structures are lists and lists are used all the time; this enables a large amount of code sharing between macros and regular functions.
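A tiny example of that sharing:

    ;; The body of the macro is just a list of lists, so plain MAPCAR works on
    ;; it; no special "expression tree" API is needed.
    (defmacro with-logging (&body forms)
      `(progn
         ,@(mapcar (lambda (form)
                     `(progn (format t "~&running: ~S~%" ',form)
                             ,form))
                   forms)))

    (with-logging (+ 1 2) (* 3 4))   ; logs each form, returns 12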
Now this makes it very straightforward to evaluate the usefulness of Julia's claim of homoiconicity: how often would you expect to use the data structures Julia uses to represent its expression trees? If the answer is not very often, then the language suffers for it because you're going to have to write special-purpose code for macros instead of reusing the libraries you'd use anyway.
The Land of Lisp is a fantastic book. Absolutely fantastic. I am not a Common Lisp fan AT ALL, but if you want to learn Lisp without feeling like you're trying to gain admission to some Hacker Buddhist Monastery high in the mountains where a bearded guy named Master Foo may just condemn you forever to Visual Basic if you so much as ask the wrong question, then it's a great start. :)
FYI, I didn't click the link to look at flower paintings, so I left. If you're trying to pull people into a tech article, it might help to start off with some text.