I used to work with a number of LISP machine believers at the MIT AI Lab/CSAIL. They all had more modern computers for day to day tasks, but used the lispm for most of their programming. This wasn't that long ago (I left in 2010), and I suspect that those machines will remain in active use for as long as people can keep them running.
They all believed that the loss of the lisp machine was a serious loss to society and were all very much saddened by it. I never used the system enough to come to my own conclusions in that regard, but it was interesting food for thought. As somebody for whom Linux/POSIX is very deeply entrenched, would I even recognize a truly superior system if it was dropped in my lap? More importantly, would society in general? The superior technology is rarely the "winner".
It was by far the most productive programming environment I have ever used. The level of integration of the editor, debugger, IO system, and interpreted and compiled code is unparalleled. Interestingly it philosophically descended from MACLISP development on a machine (PDP-10) that was designed with Lisp in mind and that had an O/S (ITS) whose "shell" was a debugger, so you could also do pretty tightly coupled development with EMACS (in TECO) and your code in a mix of interpreted and compiled Lisp. In theory this deep level of integration need not be Lisp-specific, but I haven't seen it that often.
The closest I've used were the three environments at PARC when I was there: Smalltalk, Mesa/Cedar and Interlisp-D. When I use Xcode or Eclipse I feel removed from the machine. In these other environments I felt simultaneously able to think at a higher level and yet more tightly coupled to the hardware.
I've used various GNU Emacs modes and the coupling between them and the runtime environment is not tight enough. Today I use SLIME+SBCL and it's OK. It too lacks the tight coupling of the lispm. However, for production we'll end up re-coding in C++ for performance.
PS: A good friend of mine scorns the lispm-style of development as "programming by successive approximation." There's some truth in that.
Well, with exploratory programming you tend to build up a couple of data structures, add some functions to manipulate them, and then extend out from there. In a more structured way you think up front about a lot more things. In the second you probably doodle some stuff out on the whiteboard or on paper before coding; in the first you probably simply start with an empty buffer and type straight into the REPL. There's a time and place for each, and certainly not a well-defined dividing line between the two (except in some highly structured UML/TDD processes which may not even exist any more outside aerospace).
For example, I talked about the Lisp implementation of the code I'm working on: for deployment it looks like it'll be an implementation in C++, informed by what we end up learning about performance and by the implementation decisions that were particularly good or particularly bad (or that we iterated on several times before settling on something good), and one that handles memory management more directly.
It was fun although at that point in my life I was not comfortable with statically typed languages. So it was good for me as well.
I really just experimented in it and the (more welcoming to me) Smalltalk environment. I used InterLisp-D as my "day job" language (actually we implemented 3-Lisp in it, with some custom microcode).
BTW there was a good paper from the Mesa group which I can't find online (my copy must be buried in a box someplace) comparing the performance of counted strings vs delimited strings (e.g. [3, 'f', 'o', 'o'] vs ['f', 'o', 'o', \0] in C syntax). According to the paper the counted strings were much faster. All three languages (Smalltalk, Mesa and Lisp) used counted strings.
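A rough sketch (in Python, purely illustrative, not the Mesa code) of the core of the performance argument: with a counted string the length is a stored field, O(1) to read, while a delimited string must be scanned to its terminator, O(n) per query.

```python
# Illustrative sketch: counted vs. NUL-delimited strings.

def delimited_length(buf):
    """C-style: scan until the terminator byte -- O(n) per call."""
    n = 0
    while buf[n] != 0:
        n += 1
    return n

class CountedString:
    """Mesa/Smalltalk/Lisp-style: the length is stored up front -- O(1) per call."""
    def __init__(self, data):
        self.count = len(data)
        self.data = data

delimited = b"foo\x00"           # ['f', 'o', 'o', \0]
counted = CountedString(b"foo")  # [3, 'f', 'o', 'o']

assert delimited_length(delimited) == 3
assert counted.count == 3
```

Every operation that needs the length (concatenation, bounds checks, copying) pays the scan cost again with delimited strings, which is presumably where the paper's measured difference came from.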
Thankfully Rich Hickey & co wrote Clojure so we can program on a modern Lisp in the Java Virtual Machine and in the browser! (ClojureScript) (even the .NET CLR is supported)
Though it kind of sucks that Tail Call Elimination is such a difficult task on the JVM.
Scheme kind of gets you thinking in a way that works iteratively, but gets expressed recursively. It's easy to read, and can make for some great optimisation without being premature.
The JVM does not really support this style of programming - despite LISP's syntax leading towards it.
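Python, like the JVM, lacks general tail-call elimination, so it can illustrate the standard workaround (which Clojure exposes as `trampoline`): a tail call becomes "return a thunk," and a driver loop keeps the stack flat. The names here are illustrative, not any particular library's API.

```python
def trampoline(f, *args):
    """Call f, then keep invoking returned thunks until a plain value appears.
    What would have been a chain of tail calls runs in constant stack space."""
    result = f(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    # Instead of making the tail call directly (which would grow the stack),
    # return a zero-argument thunk describing the next step.
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)

print(trampoline(countdown, 1_000_000))  # far beyond the native recursion limit
```

The cost is that every "call" allocates a closure, which is part of why JVM Lisps prefer explicit constructs like Clojure's `loop`/`recur` for the common self-recursive case.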
I wouldn't say that it was appropriate there, either.
I really wish I could get my hands on a LISP Machine.
I love the idea of LISP being so close to the metal, but that power means some design tradeoffs.
Scheme makes sense with TCE.
I'm not sure InterLISP and the like need it - memory is limited and you work without so many of the system overheads I'm used to with modern systems. Iteration is less costly here, and that makes it easier to not need recursive design (which is more expensive).
In simple terms:
Why worry about blowing up a stack you don't need, when you've thought long and hard before you allocated it?
The designers argued that a million-line system (the OS + basic applications) was easier to debug/develop without having TCO everywhere. It makes stack traces useless, unless one thinks of clever ways to keep tail calls recorded, which makes it complex/tricky. The basic machine architecture is a stack machine with compact and nice instructions. Stack traces were useful then. The compiler also was not very sophisticated when it came to optimizations.
I've long thought the right thing for Common Lisp would be to provide a way to declare sets of functions such that any tail call from one member of the set to another would be TCO'd; all other calls would push stack. This lets you write sets of mutually tail-recursive routines without forcing TCO on the entire system.
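That "declared TCO set" idea can be sketched in a few lines; this is a hypothetical illustration (in Python, since it too lacks TCE), not an actual Common Lisp proposal. Calls between members are deferred as explicit tail-call markers and unwound on a trampoline; calls from outside the set push stack as usual.

```python
class TailCall:
    """Marker: a tail call to another member of the declared set."""
    def __init__(self, fn, args):
        self.fn, self.args = fn, args

def member(fn):
    """Wrap a set member. Entering from outside the set runs a driver loop
    that unwinds TailCall markers in constant stack space."""
    def wrapper(*args):
        result = fn(*args)
        while isinstance(result, TailCall):
            result = result.fn(*result.args)
        return result
    wrapper.raw = fn  # members tail-call each other via the unwrapped function
    return wrapper

@member
def even(n):
    return True if n == 0 else TailCall(odd.raw, (n - 1,))

@member
def odd(n):
    return False if n == 0 else TailCall(even.raw, (n - 1,))

assert even(100_000)  # deep mutual recursion, no stack overflow
```

Only tail calls written as `TailCall(...)` between members get the constant-stack treatment; everything else keeps its frame, so stack traces outside the set stay intact, which is the point of scoping the declaration.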
It's an entirely reasonable tradeoff, which I understand. But not having TCO is a strong negative from the language perspective, IMHO. Not an impossibly bad one, but it's really nice.
Funny enough, it seems JavaScript will get TCO before Java. JavaScript, for f*ck's sake! This will provide even more argumentative power to the people who claim JavaScript is 'Scheme in C clothes'.
However, a named let in Scheme is not a loop. It's still a lambda.
Which means you can have nested lets with TCE, and you can construct them on the fly, depending on what your needs are, using patterns like currying.
That flexibility just doesn't work on the JVM. You can force it, but it'll be slow and horrible compared to another pattern - and I don't think a LISP should tell me how to do something. That's Python's ideology.
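For readers who haven't met a named let: it binds a procedure and immediately calls it, so "the loop" is just a lambda you can nest and pass around. A rough Python analogue (illustrative only; Python, like the JVM, will grow the stack where Scheme is required to eliminate the tail call):

```python
# Scheme:
#   (define (sum-to n)
#     (let loop ((i 0) (acc 0))
#       (if (> i n) acc (loop (+ i 1) (+ acc i)))))
# Here `loop` is an ordinary (recursive) lambda bound by the let.

def sum_to(n):
    def loop(i, acc):
        # In Scheme this self-call is a tail call and costs no stack;
        # in Python (and on the JVM) each call pushes a frame.
        return acc if i > n else loop(i + 1, acc + i)
    return loop(0, 0)

print(sum_to(10))  # 55
```

Because `loop` is a first-class value, you can nest such loops or build them dynamically, which is exactly the flexibility that a compiled-to-iteration construct like Clojure's `loop`/`recur` restricts.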
I believe the whole point was to target the JVM, because of reuse and maturity. And actually it's not stuck just in the Java ecosystem; Clojure got wings years ago. You can find it inside a browser today too :)
I wonder if there has ever been talk of a native Clojure? I guess it may not be very usable without the JVM ecosystem, though. Frankly, I find JVM library calls from Clojure to be quite ugly; they really stand out in the code (mostly because of the mix of Clojure's lower-case, dash-delimited variable and fn naming convention and Java's mixed-case/camelCase naming style).
The problem with all languages that decide to implement their own runtime, instead of building on top of the JVM or .NET ecosystems, is that their native code generation and GC implementation are always going to be worse.
Also there is the issue of having to implement the whole set of third-party libraries from scratch, just like PyPy and JRuby have issues using libraries that rely on the CPython or Ruby FFI.
So unless you get a set of developers really committed to going through the effort of making it succeed, everyone will ignore it.
The closest thing to a native Clojure is Pixie[1]. As the authors note, it's a "Clojure inspired lisp", not a "Clojure Dialect".
>Pixie implements its own virtual machine. It does not run on the JVM, CLR or Python VM. It implements its own bytecode, has its own GC and JIT. And it's small. Currently the interpreter, JIT, GC, and stdlib clock in at about 10.3MB once compiled down to an executable.
Yeah, but for many of us the Java ecosystem is a feature.
You only get tooling comparable to VisualVM or Mission Control in commercial Common Lisps.
Also I think many that bash Java don't realise it is the only language ecosystem that matches C and C++ in availability across OSes, including many embedded ones.
It is a consequence of being an enterprise language.
I imagine you never had the pleasure of doing enterprise distributed computing projects via CORBA, DCOM, SUN-RPC, DCE in C, C++, Visual Basic and Smalltalk.
Guess where those enterprise architects moved on.
EDIT: Should have mentioned Delphi and Objective-C as well.
Indeed. It's actually my preferred language for writing code. It may not have as many libraries, and it might not be as mature as, say, CL, but it's just so pleasant to program in.
The blub paradox would indicate that you wouldn't recognize it.
Richard P. Gabriel's famous "Lisp: Good News, Bad News, How to Win Big" (aka "Worse is Better") discusses this very thing. I'd recommend reading it.
But frankly, it's hard to say that an environment is objectively better. What one person may view as a step up, another may view as a step down, and we all have a kneejerk reaction to unfamiliar environments. The lispm is a really nice environment, to be sure, but I'll likely never know if it's better. Emacs will have to be good enough (which it certainly is).
There are still significant things on the list below you can't do with Linux, C, etc. So, yeah, I'd say a modern version of Genera would give you a worthwhile experience.
For a very early overview of the technology I would recommend the 3600 Technical Summary. The 3600 was the first machine that was mostly designed by Symbolics. It was followed by three more generations of CPUs (gate array processor, microprocessor and a virtual machine) with something like 20+ further models.
I once used a 3600. :-) It greets with "Yes, master" when you turn it on.
The Alpha port that wanders around on the torrents doesn't seem to have that.
Either I didn't see it, or it's something specific to a particular version of the 3600?
Symbolics made some really cool tech. I just wish everybody would stop complaining about it. Yes, it was amazing. Yes, nothing modern can ever compare, not even Emacs, an environment arising from the same culture. Yes, we who experienced The Glory of the Lispm must eternally genuflect before it, condescending to anybody who didn't.
Instead, go look at what Symbolics did (or try: it's quite hard to get the emulator running), and learn. That system lost, and it's never coming back, but you can learn from what they did well when building your own system.
But when you have, don't complain about the inferiority of our systems. It may be true (I can't get the emulator running to find out), but it gets annoying pretty fast. Take the energy you would use doing that, and put it into making your system that much better.
You would not know that GNU Emacs has a shitty UI if people did not tell you.
> Take the energy you would use doing that, and put it into making your system that much better.
It's not easy, but people are/were doing it: CMUCL, SBCL, McCLIM, Mezzano, the commercial Lisp systems like LispWorks and Allegro CL, ... all of them were/are efforts to bring some of the ideas/features. People were even developing emulators for the various Lisp Machines to make some of the old software runnable. MIT open sourced their original Lisp Machine software.
One of the most important things currently is to preserve the history of these and other machines / operating systems / applications. So that they are not lost and people can study them and learn. Lots of stuff is already lost.
I was plenty aware that Emacs had a crummy UI. I just don't think it matters as much as you do. I said as much.
>It's not easy, but people are/were doing it...
That's actually great. I'm tremendously excited by those systems. I tend to prefer Scheme to CL (less cruft), but I'm excited all the same. Those systems and environments are really cool.
Although it is telling that SLIME (which is excellent, by the way) is one of the most popular ways to write Lisp code.
> I just don't think it matters as much as you do. I said as much.
There is a lot of software out there which needs powerful and/or different IDEs to be written.
> Although it is telling that SLIME (which is excellent, by the way) is one of the most popular ways to write Lisp code.
Guess where the inspiration for some of the SLIME features is coming from... Slime also does many cool new things, but it is still missing a lot of features and usability from the old environments. Part of the reason is GNU Emacs. It is what it is: a programmable editor with zillions of features. But its base, its UI, its interaction model, its integration idea is limiting in many ways. Many of the defining/leading IDEs were very different: Smalltalk 80, Interlisp-D, Symbolics Genera, ... Turbo Pascal, Visual Basic, Hypercard, ... NeXT ..., IntelliJ, ... (and a lot of others)
I am well aware of where SLIME's inspiration comes from.
As for IDEs, I have a marked distaste for special-purpose tools: I want to be able to use one tool to edit all text. And not all IDEs are good: IntelliJ is pretty rubbish (although that's more to do with Java than IJ itself...)
Emacs's limitations tend to line up with how far you can go before you become a special-purpose tool, tightly tied to your environment, or before the Unix model stops working and you have to build your own little world (like Smalltalk did). Emacs certainly isn't perfect, and could go a bit farther (something work is being done on, last I checked), but it usually works well enough, and it's so extensible, and has had so many extensions written for it, that it can beat just about any other general-purpose editor, and sometimes even some of the special-purpose ones.
> I am well aware of where SLIME's inspiration comes from.
Obviously not. SLIME's inspiration comes from various other integrations of Lisp into Emacs editors, especially Emacs Lisp and ILISP. SOME of its inspirations comes from the Lisp Machine, direct or indirect. SOME. Not ALL and not even MOST. For example the Lisp Machine does NOT use Zmacs as a Lisp REPL, like ILISP/SLIME/... does. The Lisp Machine has a REPL, which is called a 'Listener', which is a separate application and which is not based on a Zmacs editor buffer. The Lisp Machine listener has a very different feature set and look&feel, from an ILISP/SLIME repl.
The LispWorks listener is a mix of both: it is based on an Emacs substrate, but offers slightly more Lisp Machine like interaction. SLIME though adds a simple presentation system, which LispWorks does not use in the listener.
> I want to be able to use one tool to edit all text.
A Lisp system is not text-based.
Lisp is based on data. Code is data and data can be code.
Using an editor to work with text is only half of the story. A good Lisp IDE lets me more or less directly interact with the data.
Interlisp-D worked with Lisp data throughout the IDE. That's a whole different interaction.
> IntelliJ is pretty rubbish
It isn't. It's actually quite good at what it does.
...And what makes you say that? I was actually aware, in any case.
>A Lisp system is not text-based.
>Lisp is based on data. Code is data and data can be code.
>Using an editor to work with text is only half of the story. A good Lisp IDE lets me more or less directly interact with the data.
I'm aware of that, but text is a fairly convenient representation of Lisp data. In fact, I'm unsure what you mean by directly interacting with the data, as you can't have bytes fly from your fingertips, AFAIK.
In addition, files are a pretty good metaphor as well: If you want to store your code as something textual, they're indispensable. And sure, you can use image storage and navigate in other ways, but Lisp isn't Smalltalk: The way Lisp is written isn't as unified, so that wouldn't work as well, AFAIK.
And sure, IntelliJ does what it does well, but I don't think that what it does is especially good, and most of it is just making up for Java's sins, things that don't exist in other languages: A good development environment is imperative, but if it's painful to write code in the language without the tools IDEs provide, then there's something wrong with the language.
> I'm aware of that, but text is a fairly convenient representation of Lisp data.
Some Lisp data does not have a textual representation, it might have a graphical representation or the textual representation may not be very helpful (Lisp data being a graph might be better displayed as a 2d or even 3d graph, than as a textual representation). The Symbolics UI of the REPL/Listener would allow you to interact with the 2d/3d graph as it were Lisp data, which it actually is underneath.
> In fact, I'm unsure what you mean by directly interacting with the data, as you can't have bytes fly from your fingertips, AFAIK.
GNU Emacs pushes characters around.
S-Edit manipulates S-expressions.
GNU Emacs lets you pretend that you edit s-expressions, by pushing characters in a buffer around. But you don't. All you do is push text around.
S-Edit directly manipulates the data. It's a structure editor.
Symbolics Genera uses 'presentations' to record for every output (graphical or not) the original Lisp data. When you interact with the interface, you interact with these data objects. For example in a Listener (the REPL) programs and the Listener itself display presentations, which then can be acted on. It also parses text into data objects and then runs the commands on the data objects and not on text.
SLIME provides a very simple and partial reconstruction of that - which is a nice feature.
> In addition, files are a pretty good metaphor as well: If you want to store your code as something textual, they're indispensible. And sure, you can use image storage and navigate in other ways, but Lisp isn't Smalltalk: The way Lisp is written isn't as unified, so that wouldn't work as well, AFAIK.
Smalltalk does not use data as representation for source code. It uses text as representation for source code.
Interlisp-D is more radical than Smalltalk 80. The Smalltalk editor edits text. The Interlisp-D editor S-Edit edits s-expressions as data.
Interlisp-D treats the files as a code database similar to Smalltalk, but the source code loaded remains data and you can run the program from that data directly via the Lisp interpreter. Smalltalk execution does not provide something like that. The so-called interpreter in Smalltalk executes bytecode. This is different from a Lisp interpreter, which works over the Lisp source data.
The combination of a s-expression editor with an s-expression interpreter (plus optional compilation) is very different, from what Smalltalk 80 did.
When you mark an expression in Smalltalk 80 and execute it via a menu command, then the expression gets compiled to bytecode and the bytecode interpreter then runs it.
If you mark an expression in S-Edit, the s-expression data is extracted from the data structure you edit and you can run that s-expression data with an interpreter, walking directly over that s-expression data.
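The distinction can be made concrete. If code is kept as data (nested lists), an interpreter can walk the structure directly, and "editing" is structure manipulation rather than character manipulation. A toy sketch (in Python, and nothing like real Interlisp-D internals; the mini-language here is hypothetical):

```python
# Toy sketch: s-expressions held as plain data, evaluated by walking them.
# A structure editor like S-Edit mutates structures like these directly;
# there is no text round-trip and no bytecode in between.
import operator

ENV = {"+": operator.add, "*": operator.mul}

def evaluate(expr, env=ENV):
    if isinstance(expr, (int, float)):
        return expr                      # self-evaluating atom
    if isinstance(expr, str):
        return env[expr]                 # symbol lookup
    op, *args = expr                     # (op arg1 arg2 ...)
    return evaluate(op, env)(*(evaluate(a, env) for a in args))

# (+ 1 (* 2 3)) held as data, not text:
program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))  # 7

# "Editing" is an operation on the structure, not on characters:
program[2][0] = "+"       # now (+ 1 (+ 2 3))
print(evaluate(program))  # 6
```

The contrast with the Smalltalk pipeline described above is that nothing here is compiled: the same list the editor mutates is the thing the interpreter walks.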
BTW while pretty much every Interlisp-D user used D-edit, I was never comfortable with it because it required use of the mouse. By the time I started using Interlisp I had about six or seven years of Emacs wired into my fingers and didn't like being slowed down by taking my fingers off the keyboard.
There was an emacsy interface made by Kelly ??? in the office next to me, which I extended into a real Emacs clone (with modes and everything) that a bunch of us used. In retrospect I should have gone native and adopted D-edit directly, but what can I say, I was a snot-nosed 20-year-old kid.
Thanks for the information. It is legitimately interesting. But while it is a neat idea, the difference seems to be largely inside-baseball. Sure, we're dealing with text, but we can still inspect our code quite well, and we can do much of the same sort of stuff.
In any case, I don't really want my editor editing sexps. The moment my editor is working with sexps rather than characters, it's no longer a general-purpose tool.
Imagine the editor and the runtime work on the same data:
EDIT <-> data <-> EVAL
In GNU Emacs it looks like this:
Emacs EDIT -> text -> Emacs SAVE FILE
SBCL LOAD FILE -> SBCL READ -> SBCL EXECUTE
or
Emacs EDIT -> text -> Emacs TRANSFER to SBCL
-> SBCL READ -> SBCL EXECUTE ->
SBCL generate TEXT -> SBCL TRANSFER to Emacs
-> Emacs DISPLAY -> text
With presentations, it looks to the user as if the Emacs side knows the data behind the text.
That's a lot of indirection, a lot of conversions, different runtimes, etc.
S-Edit feels like working with clay. A text editor feels like working with instruments, manipulating something which then manipulates the clay and you are watching the result through goggles.
Neat. But actually, I find that mechanism quite unpleasant, and one I wouldn't like working with. I'm sure it's quite powerful, but so is Vim, and I never really "got" Vim either. I would, in fact, argue that manipulating text has advantages over direct object manipulation: the first of which is that you can more directly edit text, whereas the DEdit interface is all about executing commands on objects. Secondly, you can apply useful transformations to text which don't necessarily make sense to apply to raw data structures (search + replace, regexes, other handy transforms). Finally, many of the advantages of manipulating a pure data structure can be had in text as well: see paredit.
But that's just me. You use your cool lispm tools, I'll resign myself to never achieving ultimate productivity.
Text is a raw data structure; it's just not the one that's used behind the scenes. Everything is a data structure. And actually I do touch the data by manipulating text.
From the sidelines, your approach is "all I have are nails and all I need is the biggest hammer you can get me".
From my perspective, my approach is "that development environment sounds really unpleasant, and while I can appreciate the elegance, I'm having trouble finding the relative upshot."
When your editor's cursor hovers over "transistor T1" do you in fact touch the transistor on the board over your desk, the transistor in the schematic diagram, or only the text that happens to be "1T rotsisnart" spelled backwards (with no underlying meaning that is)?
Data representation is not the data. It's just this: "re" presenting. And when this representation is disconnected from the source by means of showing only the raw text and not allowing access to the source you get what you describe as "unpleasant".
You may find it enjoyable to work only with raw text but you're missing out on working with what that text is supposed to represent.
It's all representations. Some are closer to the data they represent than others. I am aware that text is a representation. It is also a data structure: a data structure which we are better able to manipulate directly with our human senses, and a data structure which tools like DEdit still render to.
>And when this representation is disconnected from the source by means of showing only the raw text and not allowing access to the source you get what you describe as "unpleasant".
First off, pleasantness of an interaction is a matter of personal opinion. If you like DEdit, great: I'm not stopping you. But I don't enjoy DEdit and its like. That's my prerogative.
Secondly, I'm not working "only with raw text." Paredit &co let me manipulate the sexpr data structure more directly (or less directly, depending on how you look at it). Geiser and SLIME let me evaluate and manipulate the code as code. But at the end of the day, I'm also operating on text, so I have all of the textual data tools as well, and because Paredit is actually operating on text, it can be used outside of the context of Lisp: As Perlis said, "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
There's no reason it can't come back. It requires a massive amount of programming work and possibly new hardware but it's definitely possible and people are already working towards it. It's actually quite possible to build upon it and even surpass it.
People are working on new Lisp-based systems (which is awesome!) But Symbolics systems will never come back, and it's unlikely that any of these systems will gain traction in a world where Unix dominates.
...Unless, of course, one of the popular embedded or mobile systems companies (Apple, etc.) suddenly rewrote everything in Lisp, and forced all 3rd-party devs to do the same, creating a massive market only reachable by Lisp programmers.
Apple did that once upon a time, while Steve Jobs wasn't there. In a 'perfect' world you would use an iPhone which runs a Lisp OS. Before the iPhone and before the Newton MessagePad, Apple had an ARM-based Newton-like machine with really, really tiny hardware (roughly a 20 MHz ARM CPU, around 1 MB RAM, a few MB ROM) running a real Lisp OS, and almost made it a product. Almost...
It's not a problem, really, I just think it's a good idea to tell people if you're citing your own pages, in general. It's by no means awful to do so, but it's better to let people know, so they don't think that your claim is corroborated by an external source when it isn't.
If you think I have a problem with you, or something, I don't. So don't worry about it.
The source is the well-known and long-time Lisper Mikel Evins, who worked for Apple on various projects, including the ones where Lisp-based operating systems were developed. Mikel has mentioned and described this work multiple times. You can even find it here on Hacker News, where he is a user. Just google for it. I see that he is even participating in this thread now.
Sorry for giving you grief over... everything, but I can't say I entirely regret it. I learned a lot that I wouldn't have otherwise, which is why I use HN in the first place.
Honestly, from the look of it, I'd like earlier versions of Dylan better than the language we ended up with. I never really liked the non-sexpr syntax. But that's just me.
The IDE was really written in CL? Huh. I'd have thought that they would have gone the self-hosting route.
The s-expression version of Dylan was my all-time favorite programming language. I've tried to be interested in the present version of Dylan, but it hasn't worked.
The first version of the OS for Newton was written in a language called Ralph (named for Ralph Ellison, the author of Invisible Man). Someone persuaded John Sculley that writing an OS in Lisp was a crazy idea, so he ordered Larry Tesler to redo it in C++. Larry asked a small group of us to see if we could find a role for Ralph on Newton. We probably took that mandate more broadly than Larry intended; we wrote another whole OS (except for the microkernel and the graphics kernel; we reused the same ones that the C++ effort was using). It was probably the most intense and rewarding 18 months of my programming career. There were some really cool things that happened, and some really tragic ones.
Later it was decided that "Ralph" was not a good name for the language, and we chose "Dylan" in a process that yielded a few mildly funny stories.
Ralph was basically a subset of Scheme with a few special forms renamed, plus a subset of CLOS, plus a few additional extensions. All built-in Ralph datatypes were CLOS-style classes.
The native-code compiler was written in Macintosh Common Lisp. The development environment was an extended version of MCL called "Leibniz". Besides the usual MCL features, it additionally had a parallel set of tools--for example, in addition to the Lisp Listener, there was a Ralph Listener. Common Lisp code was compiled to native code and ran on the Mac's CPU. Ralph code was compiled to native code and ran on the Newton's CPU. At first the Newton motherboard was a big honking PC board jammed into a Nubus slot, so big that I had to leave the top off my Macintosh IIfx when it was installed. Later a ribbon cable connected the Nubus to a prototype Newton tablet-type device.
Making Ralph self-hosting on Newton would have been a lot more work, we would have had to live without a host of useful development features for a long time, and it would have required special target hardware with much more storage than Newton was intended to have. Using a cross-compiler written in Common Lisp neatly solved all those problems, plus it also gave us all of MCL's rather nice tools plus whatever extensions we chose to add.
Apple did do a bunch of stuff in Lisp around that time. For example, it did GATE, a knowledge-based automated testing system invented by Matt Maclaurin (I worked on that for a while), and SK8, the fabled "HyperCard on Steroids" invented by Ruben Kleiman (I worked on that for a while, too).
However, Lisp was never really a mainstream tool at Apple, except in certain groups in ATG (the Advanced Technology Group, which no longer exists). In fact, the only mainstream languages at Apple were, in rough chronological order, 6502 assembler, 68000 assembler, Pascal, C, C++, Objective-C, and now, of course, Swift. All kinds of languages were used here and there, but only the ones on the mainstream list had any great general acceptance.
Thanks for the tremendous amount of info. Now I get the lack of self-hosting.
Yeah, I'm a pretty big fan of Scheme, so I would have probably liked Ralph. We do actually have CLOS-type classes over in schemeland (although not everything is an object by default, of course) now, and by "subset of scheme," most people mean "no call/cc," which I likely wouldn't miss too much.
Ralph lacked call/cc, as you suggest. It did support upward continuations, or "exits", using a special form called bind-exit.
Scheme implementations of CLOS are much more common now than they were in 1992. The only ones I remember from around then are RScheme, Oaklisp, and of course Gregor Kiczales' tinyclos.
As far as language features, I still like Ralph the best of any language I've used, though there are things from newer languages I would want it to incorporate if it still existed today. But the actual usable language I like best today is Common Lisp.
There are a handful of other languages I like a lot--Scheme, Haskell, and ML are at the top of the list--but when I use them seriously, I miss Common Lisp. I don't miss any of them when I use Common Lisp.
I didn't miss Common Lisp or any other language when I was using Ralph.
BASIC was a product for Apple customers; as far as I know it was never a mainstream choice for writing system or application software at Apple. The first 6502 assembler I ever used was written in BASIC, but that was on a Commodore machine, not an Apple machine.
No, it's not just you. Most Lispers thought that way. But the targets weren't Lisp developers; they were C++/Apple Pascal developers. Basically the same purposes Java was designed for: general 'mainstream' application/OS developers, from mobile systems upwards. Management did not think that an s-expression-based syntax would be a success with developers used to Pascal, C and C++.
> The IDE was really written in CL? Huh. I'd have thought that they would have gone the self-hosting route.
The new language was emerging and targeting the new hardware platforms.
You can think of it as similar to the PlayStation games from Naughty Dog - for example Crash Bandicoot. The platform for the software was the Game Oriented Object Lisp on the PlayStation, and the development environment was desktop computers running an IDE based on Allegro Common Lisp.
The dev environments were Macs. The developers were often Lispers, and Apple bought the technology and the people. Even the later product version's development environment, 'Newton Toolkit', was originally developed in Common Lisp.
There were stranger things back then. I once saw a version of Microsoft Word for Macintosh on a developer CD, written in Common Lisp. It was a relatively sophisticated user interface mockup. It looked like the real MS Word for Mac, but lacked much of the functionality. But when you looked at the application file at the bit level, you could see that it was a Macintosh Common Lisp runtime/image. It was written by or for Microsoft. Long ago.
I should probably look into Swift at some point, but I've never found the time, and I find languages like Rust, Go, Python, Ruby, JS, Haskell, CL, and Scheme far more compelling.
Jobs' Apple would never allow such a product. A user programmable machine for which people could release software without paying Apple a cut? Zero chance of that getting off the ground, for the same reason that Hypercard had to die.
If we limit our perspective to the duration of Jobs's lifetime, the Apple I, II, III, and the entire Macintosh line never required a tithe to Apple for releasing software. (You can argue about the cost of developer tools / the developer program for the Mac, but you'd be wrong, since (a) those were never profitable for Apple and (b) people could and did use third party development tools.)
HyperCard development floundered before Steve came back partly because of bad management decisions and partly because it required a total ground-up rewrite to bring it up to even 1988's graphic standards (it was obsolete pretty much as soon as it was released, thanks to the appearance of the Macintosh II, which supported larger screens and RGB color).
"Obsolete" is overstating it. You could do a lot with HyperCard. People did.
Apple management started talking about cutting it pretty much immediately. It didn't directly generate any revenue--Apple gave it away for free. It didn't fit into any familiar product category. It wasn't on anybody's list of must-have features. Basically, Apple management didn't understand what it was or why it existed.
Apple's programmers and other makers were an entirely different matter. They used it for all sorts of things. So did third-party developers. So did people who discovered how to make software through working with HyperCard. Projects sprouted in Apple that either extended HyperCard or implemented new software in the same spirit with expanded capabilities--but all those projects had the same problem as HyperCard: management didn't see what they were supposed to be for. Furthermore, the other projects didn't benefit from Apple management's promise to Bill Atkinson that HyperCard would be given away to Apple customers.
HyperCard's days were numbered from the beginning; that much is true. But to call it "obsolete" from the beginning misunderstands a lot.
Well, but you initially said that HyperCard "had to die", presumably because it was user-programmable. You seem to be implying that Apple had a conscious policy of opposing user programming.
That definitely was not the case at the time HyperCard was being developed. I know because I was there. I even worked on the HyperCard team for a while.
HyperCard's troubles had nothing to do with an Apple policy against user programmability, and everything to do with the fact that management couldn't figure out how to make it into a product that paid for itself.
If you want to argue that Apple has a policy against user programming now, well, maybe they do. I don't know. The last time I worked there was in the 1990s. But they didn't have any such policy when HyperCard was being shipped.
I'm not sure what your point is... If you're saying I'm resistant to new ideas, perhaps, to some degree. My point was more that if you want to make people receptive to a new idea, slamming them as ignorant fools for not knowing about it already, or condescending to them for not immediately genuflecting before it isn't the best approach.
Built on top of Lucid Common Lisp infrastructure, as Lucid pivoted their business to C++.
IBM had a similar one, VisualAge C++ Professional 4.0, which imported C++ into a database and offered a Smalltalk-like development experience for C++.
Fair enough. And it is irritating, I won't lie. We're too perf-obsessed for our own good. But complaining about it, or condescending the way many do, won't do any good: Either you're preaching to the choir, or you're irritating people, thus making them less likely to listen to you (sometimes both in one person, like me. But then, I took a while to come around on a lot of this stuff).
If you want a future, build it. If you're already doing it, keep doing so, and find ways to win people over, not alienate them.
By the way, almost none of this is aimed directly at you...
Looking at software today, I'm not sure we're perf-obsessed enough, actually. Fortunately, simpler, faster, and more powerful all go hand in hand if you pick the right abstractions.
And I am building something better: think AS/400 with an APL-inspired Forth dialect. It is certainly alienating, but (hopefully) the system will have enough fun demos to hook people, and the language runs on other OSes to tempt people in. ;)
Not a whole lot, no. There are some cryptic documents on the language and various experiments I've done with the seL4 microkernel. Currently I'm working on getting the language solidified and completing the implementation (and removing all the usual dependencies, so that it should be relatively straightforward to port to a fairly bare microkernel environment).
Nobody does that. But for certain areas this is true and you should not ignore that.
Grady Booch once at an Eclipse conference:
"For those of you looking at the future of development environments, I encourage you to go back and review some of the Xerox documentation for InterLisp-D."
We all do, to a greater or lesser degree. Lispers, in particular, are prone to it. I do it, and I've seen you do it too.
This isn't, by the way, a slight on you. Nor do I dislike you in any way. I do tend to argue with you a lot, but that's because I disagree with you, and I am of the firm belief that you can learn a lot by discussing subjects with people who disagree with you regarding them.
I get the nostalgia for these machines. I have great nostalgia for my old computers, the ones I learned on.
I have watched some videos of open genera. I don't see the appeal. It was neat for the time, but I have a hard time seeing what problems it solves for today. I'm not trolling, if someone can give some specifics, I'd be interested to read them.
I'm distantly related to the original founder, and have been told this confusing version of the story: Symbolics was bought out by a different company that then renamed itself Symbolics, though none of the people remained and the new company just pulled government contract money for "maintenance." Eventually, the IP was owned by a single person named Andrew Topping, who was known to the rest of the world pretty much only by name; no one knows his history. Then he died. Now nobody has any clue who owns the IP, where any useful documentation is, etc...
I thought that symbolics was still running in a limited capacity and pulling in money from government contracts. At least that was the case five or six years ago. I know licenses for Genera were being sold as late as 2010 for around 5k. I would be interested to know if this IP changed hands.
I used the Symbolics refrigerator (the 3600) briefly, but found Franz LISP on a Sun II more useful. But then I'm not into editing from a control break.
I have an old MacIvory (Symbolics board set housed in a Mac II) which doesn't want to boot anymore. IIRC Googling indicated that the Mac IIs are like cars - they won't turn over if the motherboard battery is dead. I changed the battery - no joy. Anybody have any recommendations, whether for a repair place (New England, USA) or further DIY things to try?
...yes, it is. Lisp is a multiparadigm language. Most lispms were programmed using a mixture of OO and procedural code, with a bit of FP here and there.
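For the curious, here is a minimal Common Lisp sketch of that mix of styles (the class and function names are invented for illustration, not taken from any lispm codebase): a CLOS class with a generic function, a procedural loop with mutation, and a functional version of the same computation.

```lisp
;; Object-oriented: a CLOS class and a generic function with one method.
(defclass circle ()
  ((radius :initarg :radius :accessor radius)))

(defgeneric area (shape)
  (:documentation "Return the area of SHAPE."))

(defmethod area ((c circle))
  (* pi (radius c) (radius c)))

;; Procedural: an imperative loop that mutates an accumulator.
(defun total-area (shapes)
  (let ((sum 0))
    (dolist (s shapes sum)
      (incf sum (area s)))))

;; Functional: the same computation via higher-order functions.
(defun total-area-fp (shapes)
  (reduce #'+ (mapcar #'area shapes) :initial-value 0))
```

All three styles interoperate freely in one image, which is part of what made the lispm environments feel so fluid.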
They have a couple, they're on display and in the collection. Better that it would go to someone who would restore the whole thing (not just the exterior) and actually use it.