Just to put this paper in an appropriate historical context, I wrote it back when I was trying to convince NASA that flying Lisp on a spacecraft would not be a completely insane thing to do. The Powers That Be had already mostly accepted that it would be OK to fly Java, but they thought Lisp was too big and slow. This study and paper were designed specifically to address those concerns.
Yet to this day, I still find people claiming that Lisp is an interpreted language and that the garbage collector introduces disastrous pauses in runtime. As if it was still the same Lisp people were using back in the 1980's.
The 1980's state of the art was a lot better than that, depending on how long a pause you could afford for incremental GC: fractions of a second, fine for humans, but not for a variety of applications (though OK for piloting bulldozers, as one company did).
Interpreted only ... that's a narrow, four-year window; the first Lisp compiler was developed in 1962.
Probably the key thing in this is that most any good programmer could hack out a Lisp implementation in a week or two. Making one perform as well as contemporary languages, strangely enough, took a similar amount of effort, and most Lisps back then didn't.
Well I was born in '84 so it's all history to me ;) I've only ever known compiled, high-performing Lisps with unnoticeable GC. So it's weird when I still hear people trotting out the old "bloated, interpreted, slow" line.
I sometimes think that the AI winter took computing in a weird direction, not just Lisp, but computing in general. It feels like we stopped trying to make computers do all the work for us and decided that humans should be doing the work for the computer. So we ended up heading down the path of manual memory management in the C-style languages, verbose constrained programming in Java, and static typing instead of the easier dynamic typing. Whilst I agree that at one point hardware performance was a serious consideration and we needed to squeeze everything we could out of computers, that is no longer the case. We're overwhelmed by hardware performance in most cases. Perhaps that's why we're seeing a resurgence in Lispy languages nowadays?
In the '80s, a common startup setup that was (for a while) more powerful than a PC or than ASCII terminals hooked to a UNIX(TM) box was three or so el-cheapo Sun workstations sharing one disk over an Ethernet (or so I've read; I might well prefer the ASCII terminal to a UNIX box, which is what I tended to use).
Although also think about the PC, starting with the ones using the 386 (true 32 bits; the 286 was misbegotten and all but useless except as a fast 8086): both of these configurations were cheap but not all that capable; memory was at a premium, as was virtual memory in the workstation case.
So using machine-efficient but not programmer-efficient C and the like was the zeitgeist. Then, about or a bit before the time engineering workstations based on more capable microprocessors could also run Lisp quickly (though with hardly as much error checking), the expert system bubble collapsed. And those promoting it had to find something to blame, and Lisp became it. Or so goes one narrative of it.
There was certainly a ... Slough of Despond in programming languages we mortals could use to earn a living, about the time you would have started programming seriously (before college, I assume). For a Lisper and programming-language guy like myself it was very depressing. I think what got us out of that, besides Moore's Law, was the dot-com bust: it upset a lot of the conventional wisdom, coupled with a paucity of resources for development. No longer could you, at the extreme, do a wild IPO with an idea and a plan to hire a bunch of blub programmers using a consensus language running on expensive Sun or whatever hardware. You probably get the idea of what lost favor (ask if not).
Also, as Paul Graham pointed out in essays whose influence I can't judge, the above will at best get you average results, and the average result of a startup is failure. He also noted that technical failure was a frequent cause of corporate failure. To take one stark example, compare Facebook with its competitor Friendster. Whatever promise Friendster had, which the company's board used to wheel and deal, didn't matter when the site was just too slow to use. Of course (if I remember correctly), its technical founder was purged by that board.
Which also leads into issues of control. We technical people don't need so much money, and the all too common downside of losing control is awful for us, the company, and of course the people we hired for it.
So all in all we tend more to call the shots technically, and are able to pull it off. And wild successes like Facebook with PHP and Twitter with Ruby on Rails didn't hurt.
Now that the dominant languages seem to be dynamic languages like Python, Ruby & JavaScript, I wonder if Lisp's edge in development speed is as pronounced; OTOH, I wonder if it has an increased edge in stability and/or security.
For the first time this month the top 5 languages on Tiobe add up to over 50% -- last month it was just under 50%. And those top five are the 5 "C+ languages", i.e...
C (plus is implied)
C++
Java (3rd to rise, with syntax based off C++)
C# (4 pluses joined together to make a sharp sign)
Objective C (5th to rise, tho around for longer)
Whereas all the "dynamic" languages in the top 20 (i.e. Basic, PHP, Python, Perl, JavaScript, VB.NET, VB, Ruby) make up only 18% in total.
VB is more dynamic than C# is. Instead of having the "dynamic" type, which lets you enable late binding for specific references, it has "Option Strict Off". That one enables late binding for every single reference of type Object in the file in which it appears.
And it's had that behavior for much longer than C# has; I think since the very first version of the language. Though I suspect that the implementation of late binding has become more efficient since the DLR came along.
There is always this last bias: Lisp programmers, with their strong academic background, tend to be more educated CS-wise than JS, Ruby, or Python programmers, which gives them a better understanding of a CS problem. It's almost tautological.
You can't just take 20 programmers and give them a problem to solve; that's not how surveys work.
The dominant languages are still C-family strictly typed imperative languages (see e.g. Tiobe). The up-and-coming languages are those with even more powerful type systems, such as F#.
The Tiobe Index front page used to run a monthly historical comparison of statically typed languages against dynamic ones, but removed it about 6 months ago. There was a clear reversal in the trend, away from dynamically typed languages, 5 to 10 years back. I see dynamic languages as the 2000's version of the 1990's visual programming fad. Only one or two of those visual programming products (Visual Basic and maybe Delphi) survived, and the same will probably happen with dynamic languages: I'm picking that Python and JavaScript will be the only ones around in another decade.
I don't think the RAD languages really died; their most useful features just became so standard that they stopped being notable. Visual Basic .NET and C# have more-or-less exactly the same tooling for developing UIs, for example, and on the Apple side of things there's still Interface Builder.
I could see something similar happening with dynamic languages. Over the past decade, the things that once irritated me about working in static languages have largely been addressed as features like generics, type inference, and interface polymorphism became more common. But I can't imagine dynamic languages will ever go away, if only because they're still so successful in the scripting domain.
My experience with Common Lisp, Python, and JavaScript is that Common Lisp (more through intrinsic properties than through a specific choice of tools) provides a better development environment.
It seems that Python's development speed really comes down to its well-rounded libraries. And IIRC Lisp is still not on par with that (Quicklisp users, feel free to correct me). Other than that, I still think that linguistically, Lisps still have an edge (and a sharp one).
Common Lisp certainly doesn't have as many libraries as Python or Ruby, but let's not exaggerate. The ecosystem is broad and deep enough for the average programmer -- me, here -- to be perfectly productive.
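For what it's worth, pulling libraries in is a one-liner with Quicklisp. A small sketch (the libraries named here are just examples of what the distribution carries):

    ;; Load an HTTP client and a regex library from Quicklisp.
    (ql:quickload '("drakma" "cl-ppcre"))

    ;; Quick-and-dirty: fetch a page and pull out its <title>.
    (let ((html (drakma:http-request "http://example.com")))
      (cl-ppcre:register-groups-bind (title)
          ("<title>(.*?)</title>" html)
        title))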
But Ruby is an acceptable Lisp[0], and it also has a lot of libraries. ;)
So, how will Lisp compete?
Maybe if we look at it pragmatically, the solution lies neither in performance nor in development time, but in the ease of maintaining the software over the long term, a field where Lisp is not particularly brilliant.
The article's first premise about Lisp's virtues is dead wrong. First, Lisp is NOT a functional language. A functional language is one that maintains referential transparency, i.e. where I can replace a reference/variable with its value and its semantics are unchanged. Common Lisp is a multi-paradigm language and pervasively uses the concept of places to setf state. And in a more general Lisp sense, macros are opaque to the runtime and directly oppose referential transparency.
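To make the point about places concrete, a minimal sketch (the names here are just illustrative):

    ;; Common Lisp is not referentially transparent: SETF mutates "places"
    ;; (variables, slots, array cells, ...) in place.
    (defparameter *counter* 0)

    (defun bump ()
      ;; (bump) returns a different value on each call, so the expression
      ;; cannot be replaced by its value without changing the program.
      (incf *counter*))

    ;; Any place works with SETF, not just variables:
    (defparameter *cell* (list 1 2 3))
    (setf (second *cell*) 99)   ; *cell* is now (1 99 3)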
His second statement, that Ruby gives you 80% of what you need from macros, implies that we know what macros are for. Macros are not a solved problem; you can see Racket is actively researching macros. For example, how does one do meaningful error reporting in macros? And given that macros, generally, are programming at compile time, the statement is analogous to saying we know what we are programming for, a statement I couldn't disagree with more.
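To show what one crude answer to the error-reporting question looks like: have the macro validate its own arguments at expansion time. WITH-RANGE below is made up for the example:

    ;; A macro that checks its syntax when it expands, so the user gets a
    ;; macro-level error instead of a confusing expansion failure.
    (defmacro with-range ((var from to) &body body)
      (unless (symbolp var)
        (error "WITH-RANGE: first element must be a variable name, got ~S" var))
      `(loop for ,var from ,from below ,to
             do (progn ,@body)))

    ;; (with-range (i 0 5) (print i))        ; prints 0..4
    ;; (with-range ((+ 1 2) 0 5) (print i))  ; clear error at macroexpansion time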
The crux of Lisp, I would argue, is not specifically macros but its 'meta-circular semantics', that is to say, the capacity of Lisp to speak of itself in a meaningful way[0][1]. For example, in C++ we can speak of C++, but not in C++ itself, only in its template language, that is to say a meta-language. Lisp is its own meta-language. We could also speak of Python in Python, for example express Python's for loop in Python[2], except there is no way to replace Python's definition of for with my own.
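To make that contrast concrete, here is a minimal sketch of exactly that in Common Lisp: an iteration construct defined in user code that, at the call site, is indistinguishable from something built in (FOR-EACH is just an example name):

    ;; A user-defined iteration construct; nothing distinguishes it from a
    ;; built-in form where it is used.
    (defmacro for-each ((var list) &body body)
      `(dolist (,var ,list)
         ,@body))

    ;; (for-each (x '(1 2 3))
    ;;   (print (* x x)))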
BTW, that doesn't mean Ruby isn't a good language. Lisp is not an ideal standard against which to measure how good all other languages are. For example, Smalltalk is far from being a Lisp; that doesn't mean it is not good.
[1]: "Informal "design patterns" are only for inexpressive languages. In Lisp, you can always express the "design pattern" formally, through a function or a macro.
(...) No half-assed informal descriptions of repetitive patterns needed; if you can actually identify and express a redundancy using English language, you can also code a formal function or macro to get rid of it"
— Fare in http://fare.tunes.org/files/fun/fibonacci.lisp
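As a concrete (if simplified) illustration of that quote: the informal "retry it a few times" pattern, which would otherwise live as copy-pasted boilerplate, becomes an ordinary macro. WITH-RETRIES and FLAKY-CALL are made-up names:

    ;; The "retry up to N times" pattern, written once instead of being
    ;; repeated informally throughout a code base.
    (defmacro with-retries ((n) &body body)
      (let ((i (gensym)) (blk (gensym)))
        `(block ,blk
           (dotimes (,i ,n)
             (handler-case (return-from ,blk (progn ,@body))
               (error () nil)))          ; swallow the error and retry
           (error "Giving up after ~D attempts" ,n))))

    ;; (with-retries (3) (flaky-call))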
I forgot to mention that the main Lisp implementations (SBCL, for example) are fast. Compared to languages like these, it's a done deal.
Good point about long-term maintenance. Ruby and Python aren't that great either; people say static typing is the way to go. Lisps are weird because they let you hot-swap most things, which makes changing a system easy, but at the same time it leads to spaghetti images.
My experience is that long-term image-based development isn't really that common in Lisp, maybe outside of some specialized use cases. You can develop by mutating/saving/reloading an image over an extended period of time, but in practice people mostly develop the "usual" way, with source code that's loaded into a fresh image periodically. The use of image-oriented development then gets limited to prototyping and debugging. Being able to redefine functions/etc. on the fly (even during restarts and the like) is useful while experimenting and debugging. But then the "real" code gets written into a source file, which is reloaded into a fresh image to make sure it works. For modern multi-developer Lisp this is pretty much the only way things are done, since the expectation is that you check Lisp source into svn or git, not images.
That's pretty much exactly how I develop. I rarely type anything directly into the REPL; instead I type into the source file in Emacs and then C-c C-c it over to the REPL. The advantage is that I get a canonical copy of the entire source code, whilst still having the ability to inspect and hotfix things through the REPL. If I then need to reboot the image or deploy to a new server, the entire source base should be completely stable and consistent.
I often connect to production servers using SLIME and fix little things like typos this way, then check the source into GitHub and have it permanently saved. It removes a lot of the stress of doing a full deployment each time you need to patch something!
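A minimal sketch of that redefine-in-a-running-image workflow (GREET is just a toy example):

    ;; First version, compiled into the running image (C-c C-c in SLIME):
    (defun greet (name)
      (format nil "Hello ~A" name))

    ;; Later, change it and recompile just this form; every existing caller
    ;; picks up the new definition immediately, with no image restart.
    (defun greet (name)
      (format nil "Hello, ~A!" name))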
> Now that the dominant languages seem to be dynamic languages like Python, Ruby & JavaScript,
That's incorrect. Regardless of the metric (TIOBE, job boards, StackOverflow, ...), you will see that the top five languages are 90% statically typed (usually Java, C, C++, C#).
Even for 2000, this paper is hilariously biased, starting with the observation that Lisp wins over Java in every single respect. If I order an evaluation paper and the paper comes back with 100% for choice B and 0% for choice A, I'm going to disregard it completely because the author is obviously too biased.
One of the strengths that Java has over Lisp is that it's statically typed. At the time, the direction of the industry wasn't too clear cut but this is clearly the trend today.
Another example of the bias is that the benchmarks compare development times (and Lisp wins, surprise surprise) but not maintenance time, which is admittedly harder to measure but a big liability that dynamically typed languages have.
TL;DR is that many CL implementations will check types at compile time, but for appropriate speed and safety settings, they will also generate code that checks types at runtime, too.
That depends on what you mean. CL declarations are promises to the compiler. They permit the compiler to make optimizations but do not require it to signal compile-time errors. Specific implementations may give you compile-time errors, but in general you can't count on it the way you can in Java.
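A small sketch of what that looks like in practice (exact behaviour varies by implementation; SBCL, for instance, will warn at compile time when it can prove a mismatch):

    ;; A declaration is a promise: "these arguments will be fixnums".
    (declaim (ftype (function (fixnum fixnum) fixnum) add2))
    (defun add2 (x y)
      (declare (type fixnum x y)
               (optimize (speed 3) (safety 1)))
      (+ x y))

    ;; (add2 1 "2") may be flagged at compile time by SBCL, but a conforming
    ;; implementation is not required to reject it; with SAFETY > 0 you
    ;; typically get a runtime type error instead.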
> One of the strengths that Java has over Lisp is that it's statically typed.
With the caveat that Java's type system can be subverted and isn't expressive enough to catch any more than the most trivial kinds of errors a type system can catch.
Most type systems (including Haskell's) can be subverted. Java does catch many common errors, typically of the form "I refactored, and now this function takes 4 args instead of 3" or "this function returns an X now".
The latter sort are, in my experience, the most common form of type error, and catching them is awesome.
"Expressive" tends towards "complex". I would argue that Java's type system is pretty simple which is a good thing - a pragmatic choice. You sound like you're making "perfect" the enemy of "good"
It's unfortunate that languages like these can't gain much traction in the mainstream: Rust, Haskell, Julia, Go, Clojure, Scheme...
I remember the fight that it took before Python and Ruby were accepted. There's the big catch-22, of course, in order to obtain critical mass.
Maybe if we all adopted a language for our weekend projects, we'd contribute enough back through blogging and StackOverflow questions, adding enough knowledge to the net that it becomes increasingly easier for the next person.
I'd say Go is becoming mainstream pretty quickly. I think Clojure isn't far away, and Rust is showing big promise. I doubt, based on probably under-informed personal observation, that Haskell, Scheme or Julia will ever become widely adopted.
The thing with niche languages such as Haskell is that not everybody has to use them, but we ideally would get to enough critical mass that the ecosystem is sustainable, new and better tools are continuously being developed, and it's pretty beginner friendly. Clojure is probably at that stage right about now, Haskell doesn't quite feel that way, but it's getting better (more docs, guides, books, services like Stackage etc)
Basically it's very nice when a language gets enough traction that the lack of users is no longer a concern for its long term prospects.
Ditto. If the JVM is good enough for, say, Twitter, it's clear that the performance of Java programs has gotten better and that "Run time" and "Final image size" from Figure 1 in the OP's PDF would look different today. OTOH I'm not in Lisp land, but I guess there was probably some improvement there too.
Standard JVM: slightly smaller, much faster. This is compared to Java 1.2, before JIT/HotSpot and a decade of performance work. (Also, I doubt anyone had 7.7 years of experience with Java in 2000, so that figure must be general experience.)
If we consider JavaCard or Java ME a Java dialect, then yes, it has gotten a lot smaller! I think neither of these was available in 2000.
Running Java also does not need a JVM; there are AOT compilers for Java. Like Lisp, there are a lot of different implementations with lots of differing trade-offs.
In any case it's a very flawed comparison, like all programming-language comparisons of this kind, as it is so hard/expensive to do them correctly.
But an interesting read anyway.
I think of languages as equivalent (they are all Turing complete, right?), so something buildable in one must be buildable in the other. The question is what tradeoffs the languages make for the humans in the loop.
Lisp and Java make different tradeoffs there, and C makes yet another set of tradeoffs. And which languages we use is more often dependent on historical accidents than on technical excellence (like most human languages); see JS as an example.
I believe dynamic languages are better for small teams and short project lifetimes. I believe languages with automatic memory management by default are better for the vast majority of projects than manual-memory-management options.
I think a written language like Traditional Chinese (Java) is better for running an empire than one like Latin (Lisp). But a more modern idea like Hangul (Clojure?) might do even better.
In the end, to get the best CPU performance one needs to know the CPU and architecture, as well as have an ability to think in bits. And as in human languages, one can write poetry in all of them; it's just a bit more difficult than the office memos we tend to write and read in our jobs ;)