I've been fascinated by the 'Lisping at JPL' piece (even though I tend to prefer Scheme). However, I've been trying to find something substantial about the 'remote agent' and I can't really find much. Sure, there are some articles describing at a very high level how it worked, but not much more than that. At least, I couldn't find anything.
Do you think anything survived that would shed more light on its workings? I'm not even talking about the code (although that would have been wonderful, and maybe it should be provided - taxpayer funded and whatnot).
The canonical references (e.g., http://www.ai.mit.edu/courses/6.834J-f01/Williams-remote-age...) are pretty good. If you are interested in diving very deeply into the topic, I think you'd be better off going through the work that Brian Williams has been doing since he worked on Remote Agent (https://people.csail.mit.edu/williams/Web%20site/papers.shtm...). While this more recent work reflects his individual perspective and not the ideas of his Remote Agent collaborators, it is a rich and detailed body of work that can be studied.

Brian's collaborators at NASA Ames mostly left Ames and went on to work on a wide variety of other interesting projects. At JPL, some of the ideas in Remote Agent went into a large project called the Mission Data System. I only know about this from a distance, but my impression is that this was a rethinking of how to build spacecraft flight software from the ground up, to have the flexibility that Remote Agent sought while also sticking close to JPL's software development and testing methodology. It's described here: https://mds.jpl.nasa.gov/public/
I have no idea what documentation for the RA was preserved, but I'd be surprised if the complete source isn't archived somewhere. I would contact the JPL PIO (public information office) and ask them.
Sorry to disappoint you more, but there was even a Scheme dialect that another JPL researcher had invented, which never came out as far as I know. (I contributed a bit to it, but left for Silicon Valley, and I think at that point it got mothballed. Wish I had a copy.)
You mention debugging the live Deep Space probe using a LISP REPL, and I remember reading somewhere else how people troubleshot an earlier Voyager (?) probe via a FORTH REPL. Were those schools of thought in contact at all?
What do you think about the efforts of some Haskell folks to make embedded languages that would generate code for a more limited system but at the same time leverage the beefy Haskell stage as both an extensible static analyzer and an overgrown macro processor? There were a number of those (I think shaders for earlier-generation GPUs, real-time control code, and even hardware synthesis were all among the attempted targets) and I don’t know that they got any adoption at all, but this sounds similar to what you’ve described doing on less capable machines with LISP.
A Forth REPL on Voyager would be news to me. It's possible, because I do know that Forth was used to program some flight instruments on later missions [1] but even then it didn't include a REPL.
I don't know anything about what is happening in Haskell-land, but from your description it sounds like what they are doing is very similar. Haskell shares a lot of intellectual DNA with Lisp so it's not surprising that it would end up being used this way.
A little late to the discussion! The Johns Hopkins University Applied Physics Lab (JHU/APL) uses or used to use FORTH for non-critical satellite flight software (and Ada for mission-critical software). Back in 2009, I came across this 2006 JHU/APL paper about compiling a subset of Haskell down to FORTH for execution on a special-purpose processor: Andrew J. Harris and John R. Hayes, "Functional Programming on a Stack-Based Embedded Processor". ( http://home.iae.nl/users/mhx/Forth_functional.pdf )
I don't know if they pursued the use of Haskell any further or even if they still use FORTH. In 2009, when I read this paper and others from them, I discovered that the Lab had hosted a by-then-defunct FORTH Users Group, not exactly an auspicious sign.
Did you ever try Smalltalk? Is there any area where you would prefer Smalltalk over LISP?
If you had to start from scratch, which of the two would you learn today, and why? (Let's assume that getting hired is not your concern, but only the ability to build powerful systems and the ability to reason about difficult problems.)
I've never tried Smalltalk so I can't really speak to that. But I have done some coding in Objective-C, which is a direct descendant, so I can tell you that I strongly prefer Lisp, and Common Lisp specifically, because I prefer generic functions to message passing. Message passing privileges the receiver object and thus introduces an asymmetry which is totally unnecessary. It does nothing but cause confusion. The whole point of having a high-level language is to reduce the cognitive load on the programmer, and message passing does the exact opposite. Message passing forces me to think, every single time I call a function, about whether to write a.f(b, c, ...) or f(a, b, c, ...) or that absolutely horrific syntax that ObjC has, [f a:b] or something like that. There is no reason why I should have to think about any of that. That's the compiler's job. All function calls should use the same syntax -- and that syntax should be (f a b c ...) because all those commas are unnecessary too.
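To make the contrast concrete, here's a minimal Common Lisp sketch (the class and function names are made up for illustration). Generic functions dispatch on all their arguments, and every call site looks the same:

    ;; No argument is a privileged "receiver": methods can specialize
    ;; on the classes of *all* their arguments.
    (defclass ship () ())
    (defclass asteroid () ())

    (defgeneric collide (a b)
      (:documentation "Dispatches on the classes of both A and B."))

    (defmethod collide ((a ship) (b asteroid))
      (format t "ship hits asteroid~%"))

    (defmethod collide ((a asteroid) (b ship))
      (format t "asteroid hits ship~%"))

    ;; Both calls use the same uniform (f a b) syntax:
    (collide (make-instance 'ship) (make-instance 'asteroid))
    (collide (make-instance 'asteroid) (make-instance 'ship))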
...or a b c f because all the parentheses are unnecessary too (Forth nudge ;-) )
I think that Smalltalk deserves a little bit of an open mind here. In many respects, it is the closest thing to the Lisp machine experience you can get today.
> It's just not a good impedance match to my brain.
I think that's the answer to every language war ever. "To my brain." If you claim "X is easier to understand", you get a war. But if you claim "for my brain, X is easier to understand", that's a lot harder to have a war about. All someone can reply is "well, my brain is different", which... OK, your brain is different. Fine. (Or someone could claim that nobody could actually have a brain such that X is easy to understand, at which point it's pretty clear who the unreasonable zealot is.)
In fact, I think this is also the answer to the editor wars. It may even be the answer to the political wars.
Well, not quite. Some languages are by design good impedance matches to brains that don't know very much. BASIC is the canonical example. It's really easy to learn, which makes it attractive to beginners, but it has some pretty serious limitations that you only become aware of after discharging a certain amount of ignorance. IMHO this makes BASIC objectively worse than other languages.
To me, Lisp makes every other language I know feel like BASIC in the sense that the distinguishing features of the language feel to me like they are designed to appeal to ignorance rather than to empower.
Not really. All quality metrics are subjective. But if you accept a particular quality metric, then you can objectively assess things against it. For example, C is objectively better than Brainfuck if your goal is to write software easily. If your goal is something else, like challenging yourself to overcome adversity, then Brainfuck might be objectively better than C.
> But if you accept a particular quality metric, then you can objectively assess things against it.
I agree, but the scope of your assessments will be very limited, and you also end up writing tautologies (if your objective quality metric is A, then you've redefined "better" to mean exactly A). In particular, the scope of your assessments will be far more limited than in the example you give: words such as "easy" and "simple" are unlikely to enter the statement without some rigorous, but therefore also highly restrictive, definition underlying it.
> For example, C is objectively better than Brainfuck if your goal is to write software easily.
I would not take a bet that, if you presented Brainfuck and C to all >7 billion living humans, there would not be at least one person who found it easier to write software in Brainfuck than in C. Certainly I can imagine a hypothetical human for whom it would not be true.
> In Forth it is impossible to tell syntactically where the function calls are.
To be fair, this is true of Common Lisp as well: macros aren't functions.
Forth doesn't really have functions, it has words. They're broadly useful like functions, but they're a different, more literal sort of memory object than a Lisp function defined with cons cells.
So you can't know where the function calls are (there is no such animal), but you do know where the word boundaries are: the whitespace. And in Forth, that's what you care about.
That's true, and it is arguable that Lisp could be improved by making macro invocations have a distinguished syntax.
But in practice it's usually pretty clear what is a macro and what is a function because of naming conventions, and because macros commonly admit syntax that would be illegal in a function call, e.g. (let ((x 1)) ...). But however bad this problem is in Lisp, it's much worse in Forth.
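A tiny illustration (not from any particular codebase):

    ;; At the call site, a macro invocation and a function call are
    ;; syntactically identical...
    (defun twice-fn (x) (* 2 x))
    (defmacro twice-mac (x) `(* 2 ,x))

    (twice-fn 5)   ; => 10 (function call; 5 is evaluated as an argument)
    (twice-mac 5)  ; => 10 (macro expansion happens before evaluation)

    ;; ...but a macro like LET gives itself away by admitting syntax
    ;; that would be illegal in a function call: in (let ((x 1)) x)
    ;; the form ((x 1)) could never be evaluated as an argument.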
> Forth doesn't really have functions
That is debatable. If I write 1 2 + 3 * then I think of the + and the * as function invocations that take their arguments off the stack and leave their results on the stack. It's true that this is not exactly the same as a function call in Lisp, but it's similar. In any case, if I write:
a b c d e f g h i j
there is absolutely no way to tell what that is going to do without knowing what every letter means. By way of contrast, if I write:
(a (b c d) (e f) (g h (i j)))
then it's a pretty good bet that a is a function that takes three arguments, b and g are functions that take two arguments, e and i are functions that take one argument, and c, d, f, h, and j are variables.
What you don't get in FORTH is arity information from the syntax, because everything is simply treated as a function from stack to stack. Unfortunately it's not feasible to keep the cleanliness of the FORTH syntax while providing that info (one would need a 2D view for that, where multiple arguments to the same function can be seen in parallel) so a parens-based syntax is arguably the next best choice.
The other thing you can't do (straightforwardly at least) is push a word onto a stack or apply it against some kind of compound structure.
That's what I mean by Forth words not really being functions: they're subroutines for sure, and you can do some surprisingly powerful metaprogramming given that what you have is a glorified linked list, something a little more powerful than GOSUB, and two stacks. But functions?
When you compare it with a visually similar concatenative language like Factor or Joy, which inarguably have functions, Forth words look pretty different.
Both examples are about as meaningless as each other to me with just letters. In Forth you build a sort of minimal vocabulary for the problem at hand, and so you have a better idea of what the words are doing. Proper documentation (or at least very good naming) is needed in Lisp and Forth IMHO.
Of course you actually have to know what the words mean. But it's harder in Forth than in Lisp or conventional C-like languages because Forth takes away all of the visual cues that help you figure out what is going on.
You must have run Clozure a while back. I went to the App Store, and the version of CCL there hasn't run properly since Sierra, according to reviews. I myself can't run it on Mojave or Catalina.
Yes, the app store version hasn't been updated in a long time but if you build from source it will run on Catalina. Maybe later too, I haven't tried it.
I would say the same thing (or more exactly, the dual) from a Smalltalk perspective: I just want to send a message to an object, I shouldn't have to figure out what function to call or what to parenthesize. The problem people get into with OO (particularly Java, C#, C++) is thinking in terms of method-function calls rather than message sends.
Although I do like Lisp generics - I did a bunch of programming in rscheme many years ago, and really liked it.
Why? Seriously, why is that something you want? Why do you want sending a message to be syntactically and semantically distinct from calling a function?
> The problem people get into with OO (particularly Java, C#, C++) is thinking in terms of method-function calls rather than message sends.
Well, IMHO the problem people get into with OO is thinking that there is something special about sending a message that is different from calling a function. Sending a message is an implementation technique, not a semantically distinct action that should be exposed in the language semantics, and certainly not in the syntax.
Where it matters 'who' gets what powers or responsibilities, sending messages is a powerful way to think about it, not an implementation detail.
In Java it is pretty much just a function call, because the code in the callee has the ability to wreak all kinds of havoc in real Java programs, so the pattern of "messages" is quite imperfectly related to the authorities you're bounding.
In most cases it really is only an implementation technique, but it can be different. In my eyes, sending a message to a probe on Mars is very different from evaluating a function with the probe as an argument. To be sure, in this case a Lisp program would transform the function call into a kind of message send anyway, with the expectation that the probe is a standalone computational system. The OOP approach makes this assumption implicit for everything.
> sending a message to a probe on Mars is very different from evaluating a function with the probe as an argument
Sure. But why do you want that difference to manifest itself in the syntax of your language? What is the benefit of writing, say:
[probe message]
over
send_message(probe, message)
?
> The OOP approach makes this assumption implicit for everything.
And that is exactly the problem IMHO because this model is not appropriate for everything. It makes sense when you are dealing with actual physical objects like a Mars probe, but that is rare. Much more commonly you are dealing with abstract objects. If I want to add two numbers A and B, who do I send the message to? A? B? The adder in the ALU? As a programmer, 99.9% of the time I don't care how the operation gets carried out. I just want to write (sum a b) or a+b and let the compiler sort out the details.
In Smalltalk you just write a + b, which is the message + sent to a with the argument b. You have message-passing syntax for everything (Smalltalk has a minimal syntax, comparable to Common Lisp in size). IMHO it hit the sweet spot of simplicity/readability/expressiveness.
But you cannot judge Smalltalk only from the perspective of its syntax. You need to take into account the context in which it is used - the environment built around it. Then everything starts to make much more sense and forms a vital, well-balanced system. It is Lisp, but different, and I strongly recommend downloading Pharo and trying to figure out how.
I strongly admire you; I've read "Lisping at JPL" at least a dozen times over the years. I really like Lisp, Forth and Smalltalk, and I know that knowing each of them well is worth every penny, even if it may not be obvious at first sight.
> which is the message + sent to a with the argument b
Yeah, I get that. What I don't get is why I should care that "a+b" means "send the message + to a with argument b" rather than just "add a and b". I see no benefit of the first formulation over the second.
It's not even an implementation detail. Both use, ultimately, JSR and RET instructions.
It really comes down to fetishism, which is why I don't use either of them. I am not a Fetish-Oriented Programming enthusiast.
It is also why I am not all in on Rust. Memory safety is pretty important, but it is far from the only important thing in programming. The amount of attention Rust insists I devote to it is attention I don't have available for the other things that, frankly, deserve it more.
Considering that in the last 10 years I have spent strictly less time debugging memory usage errors than preparing compiler bug reports against GCC, putting memory correctness guarantees front and center, ahead of all else, seems to me a mis-allocation of my extremely scarce attention.
Attention is by far the scarcest resource available to any programmer. Allocating and applying attention where it is most needed is the central problem of programming. A language that overrules your judgment as to where your attention must be thus interferes with the core business of programming.
C++ does demand just a little more attention to memory management than some other languages, but less all the time. It pays that back by eliminating most taxes on performance, a swap I agree to.
I don't know what Lisp on the BEAM is (and neither does DDG).
With regard to LFE, I don't really see the point. But that might just be because of the kinds of coding I do. I've never written high-performance massively parallel code. Message passing might have a benefit there (that is what the big win with Erlang is supposed to be) but I'm generally skeptical of designing the language semantics around the needs of the compiler rather than the programmer.
In particular, I don't see what Erlang could possibly do that could not be done by a CL compiler that noticed when a method was dispatching only on the first argument. The semantics of that are equivalent to the semantics of message passing, and so the code that such a compiler emits could be identical in both cases. But I don't actually know Erlang so it's possible I'm missing something.
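Concretely, the kind of thing I mean (a sketch with made-up names):

    ;; A generic function that happens to dispatch only on its first
    ;; argument. Semantically this is a message send, so a compiler
    ;; that noticed the pattern could emit message-passing code for it.
    (defclass probe () ())

    (defgeneric send-command (receiver command))

    (defmethod send-command ((receiver probe) command)
      (format t "probe received ~S~%" command))

    (send-command (make-instance 'probe) :take-picture)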
With Erlang it's necessary to separate the compiler from the VM: you are working around what the VM wants, but the compiler is not the problem. The VM acts a bit like an operating system that can schedule its own processes, and processes can send each other messages. In order to receive a message, a process has to have a receive block and be capable of receiving the message (with an applicable pattern match). When a process is waiting for a message, it is set aside by the VM so it consumes very few resources (just a bit of memory). It is woken up when it receives a message.
This way a single 'program' can have many things going on at once and it is never blocking.
From the programmer's perspective it feels like you are writing synchronous code but you are getting async behaviour 'for free'.
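If it helps to see the mailbox idea in Lisp terms, here is a rough sketch using ordinary OS threads and the bordeaux-threads library from Quicklisp. This is only an analogy: BEAM processes are far lighter than OS threads, and this ignores selective receive.

    ;; Assumes (ql:quickload "bordeaux-threads") has been run.
    ;; One mailbox per "process"; a receiver blocks until a message
    ;; arrives, then handles messages one at a time.
    (defstruct mailbox
      (queue '())
      (lock  (bt:make-lock))
      (cv    (bt:make-condition-variable)))

    (defun send-msg (mb msg)
      (bt:with-lock-held ((mailbox-lock mb))
        (setf (mailbox-queue mb)
              (append (mailbox-queue mb) (list msg)))
        (bt:condition-notify (mailbox-cv mb))))

    (defun receive-msg (mb)
      (bt:with-lock-held ((mailbox-lock mb))
        (loop while (null (mailbox-queue mb))
              do (bt:condition-wait (mailbox-cv mb) (mailbox-lock mb)))
        (pop (mailbox-queue mb))))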
wrt LFE specifically: Robert Virding, one of the founders of Erlang, really likes languages, and Lisp in particular, so he wrote one for the BEAM. It just has some special accommodations to be able to send/receive (and I think pattern matching too).
> This way a single 'program' can have many things going on at once and it is never blocking. From the programmer's perspective it feels like you are writing synchronous code but you are getting async behaviour 'for free'.
I wish people would stop saying this. Writing concurrent systems in BEAM requires thought if you want to avoid deadlocks, races, bottlenecks, and performance problems. It also can be a bit tricky because processes serve many roles at once: fault isolation, GC isolation, the unit of concurrency, and the unit of synchronization.
Yes, a process can unleash concurrency and parallelization. But you also use processes to do things synchronously since a process goes through its mailbox one message at a time. You can accidentally bottleneck your system because of this.
Regardless, it does make it easier to write certain types of programs.
That actually sounds pretty cool as a runtime environment. Is the VM programmed in byte code, or is it native code? What back ends are available? What is the runtime written in?
The BEAM (runtime?) is written in C. There is also an effort to rewrite it in Rust (https://github.com/lumen/lumen). Some functions are built into the VM but most of the supporting 'standard library' (OTP / Open Telecom Platform) is written in Erlang. The (main) compiler is written in C. So it's all C or Erlang afaik.
It is ported to every major flavour of OS.
I don't know what 'back end' means in this context.
You can compile a high-level language down to a byte code which looks like machine code but which does not correspond to any actual hardware. Instead, the byte code is interpreted. Python and Java both work this way.
You can also compile a high-level language down to machine code that runs on actual hardware, like an x86 or an ARM. For languages that run on more than one processor, the compilation process usually consists of an architecture-independent phase which produces some sort of intermediate representation (which may be a byte code, or it might be something else, like LLVM IR) and then an architecture-specific pass that transforms the intermediate representation into machine code for the target architecture. The code that implements that second pass is called the "back end."
I have no idea whether Erlang compiles to native code or byte code.
In addition to all that, there can also be a run-time environment that is required to run the resulting code. For byte code, this environment necessarily includes an interpreter for the byte code, and might also include other things. For native code this environment might include things like a garbage collector or a standard library that provides an interface to an operating system or something like that.
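To make the byte-code idea concrete, here is a toy stack-machine interpreter in Lisp. The instruction set is invented for illustration and has nothing to do with any real VM's:

    ;; Each instruction is either (PUSH n) or a bare operator symbol.
    ;; The "VM" is just a loop over the instruction list.
    (defun run-bytecode (code)
      (let ((stack '()))
        (dolist (op code (car stack))
          (case (if (consp op) (car op) op)
            (push (push (second op) stack))
            (add  (push (+ (pop stack) (pop stack)) stack))
            (mul  (push (* (pop stack) (pop stack)) stack))))))

    (run-bytecode '((push 1) (push 2) add (push 3) mul))  ; => 9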
There are multiple passes, the last of which is byte code although at one point, native code was (is?) an option with HiPE (High Performance Erlang). HiPE seems to have been passed over by the development team in favour of JIT.
The lines begin to blur in fun ways with JITs, I think, too. I believe Erlang with BeamAsm does bytecode compilation for the BEAM VM, which JITs it to native code (via AsmJit, a neat C++ library it seems?)
Hi Ron, I'd recommend Joe Armstrong's thesis https://erlang.org/download/armstrong_thesis_2003.pdf to get an idea what's special about Erlang. Briefly, it's designed around keeping systems running live even when they have bugs and other faults. Straight Lisp with single-dispatch wouldn't isolate effects, though of course you could build something in Lisp that did. Initially Erlang was built on top of Prolog.
(This is Darius. I never really learned your system with related goals for robots -- as you know, I didn't stick around at JPL. About 10 years later I did a bit of Erlang for Yahoo.)
Another thing about the BEAM that may resonate with you is that its abstract syntax tree (AST) is very reminiscent of Lisp. Some people call Elixir, a popular reworking of Erlang that also runs on the BEAM, a secret Lisp, since it gives direct access to the AST via macros.
Mostly retired. I tried to do some startups but they all failed. Now I have a part-time gig at a chip manufacturer who uses Common Lisp for one of their internal design tools, so I'm helping them maintain that. Besides that I'm doing a little writing, a little hacking, a lot of traveling, and the odd podcast interview :-)
>> have a part-time gig at a chip manufacturer who uses Common Lisp
Is this design tool being actively developed or does the company want to phase it out eventually? Also sorry to ask as it may not be in this spirit of this thread but are there any open roles for a CL developer on this project?
> Is this design tool being actively developed or does the company want to phase it out eventually?
The tool is called Meta [1] and it was originally developed at a startup called Barefoot Networks which was recently acquired by Intel [2]. Whether Intel keeps it or not is still TBD.
> Also sorry to ask as it may not be in this spirit of this thread but are there any open roles for a CL developer on this project?
Good questions. I have been away from NASA for almost 20 years now so I can't really say what I think they should do differently because I don't really know what they are currently doing. A lot can change in 20 years, and I'm sure a lot has, but I'm no longer in the loop so I can't really say anything constructive about this.
I've never used Racket, but from what I can tell it's a fine system. I'm partial to Common Lisp myself and never really resonated as much with Scheme, though I do admire the elegance of the language. But if it works for you, go for it.
I think Lisp is still used in various nooks and crannies of NASA. As a small example, I was the payload software engineer for the LCROSS mission (a robotic mission in 2009 that demonstrated there was water ice near the moon's south pole). The software that ran on the payload computer was in C, but there was a very simple, ad hoc scripting language provided by the company that supplied that payload computer. Because this language was very simple, I wrote a higher-level DSL and a simulator of the computer in Common Lisp, and then wrote all of the instrument command sequences in that DSL. Lisp's simple, flexible syntax and macros made it easy to express patterns of commanding and timing for this (a simplified sketch of the flavor follows at the end of this comment). What the commands did is described in the last (topmost) blog entry here: https://blogs.nasa.gov/lcrossfdblog/

These days, I'm working on the VIPER rover, and the commanding approach for it is very manual, as Ron described. VIPER will be a moon rover, so the turnaround time for commanding is much shorter than it would be for a Mars rover, and teleoperation from Earth is a very reasonable and lower-risk solution than autonomy. Still, some of the systems used in self-driving vehicles today to evaluate the environment around the vehicle will be used by VIPER, but to provide advice and situational awareness for the drivers. No Lisp involved.
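Here is that sketch (the macro, command names, and timing syntax are all invented for illustration; they are not the actual LCROSS DSL):

    ;; Expand a time-tagged command sequence into a function that
    ;; returns (time command args) triples, which a simulator or
    ;; code generator could then consume.
    (defmacro defsequence (name (&key (start 0)) &body steps)
      `(defun ,name ()
         (let ((base ,start))
           (list ,@(loop for (nil offset cmd . args) in steps
                         collect `(list (+ base ,offset) ',cmd ',args))))))

    (defsequence warmup-nir (:start 100)
      (at 0  power-on   :nir-camera)
      (at 5  set-gain   :nir-camera 3)
      (at 30 take-image :nir-camera))

    ;; (warmup-nir) => ((100 POWER-ON (:NIR-CAMERA))
    ;;                  (105 SET-GAIN (:NIR-CAMERA 3))
    ;;                  (130 TAKE-IMAGE (:NIR-CAMERA)))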
That is a very difficult question to answer succinctly. From a technical point of view, Common Lisp has never been in better shape, particularly since the advent of Quicklisp. Just about any functionality you would want is available as a CL library through QL. Programming in CL today is easier than it ever was. But on the other hand, it does not seem to be attracting a lot of young blood. The cool kids all seem to be using Clojure or Haskell or Rust or, heaven help us, Javascript.
So I hope Lisp has a bright future, but I wouldn't bet my life savings on it right now.
> it does not seem to be attracting a lot of young blood.
In my experience "space" sells to the kids more than money, fast cars and fame. So thanks for your inspiring writing about your work - I'm looking for ways to convince the dean to let me switch some courses from Python (which has jobs) to Lisp (which has excitement, space adventures and really wild things).
Personally I did all my development in Macintosh Common Lisp (now Clozure Common Lisp). I'm sure some people used Swank and Slime. There was no reason for everyone to be on the same page about that so we didn't discuss it much.
Interesting ... I just assumed that SLIME didn't exist yet. Now I'm curious about CCL on the Mac ... I think I'll install CCL on my lone Mac device just to see a different take on connecting to a remote image, as well as checking out a non-Emacs IDE for CL ...
Ron's story about working on the Mars Rover prototypes is mind-blowing to me.
He used DSLs written in LISP and compiled to run on embedded devices, and did self-driving tech, all back in the 80s. Also, debugging software running 150 million miles away is something I'm sad yet relieved I'll never get to do.
Also the photos he dug up and scanned are neat to see.
Probably the best "debugging in space" story is Don Eyles hacking the Apollo moon lander program that was in "core rope ROM" to work around a hardware failure on Apollo 14.
It is findable online, but he published a book "Sunburst and Luminary" (Fort Point Press, 2019) about the whole process of getting the moon landing code ready in time to use, and the hack.
As I understand it (I just got the book, haven't read it yet) the Apollo Guidance Computer, one each in the Command Module and the LEM, was programmed by Margaret Hamilton (inventor of Software Engineering as we know it today) with a real-time executive and interpreter emulating a saner machine, and Eyles coded the landing to the interpreter. Because it was interpreted code, it was patchable, and he came up with a patch on the fly that the astronauts punched into the AGC by hand, and saved the mission.
Margaret Hamilton's real-time executive itself saved the day when the Apollo 11 crew left some extra stuff turned on by accident, stuff that the system had not been tested with and that burned excess CPU cycles during the landing. When it trapped a scheduling failure, it checkpointed important state and was able to resume the important tasks where they left off. That happened several times during the landing.
The cases I've seen deploy relatively full binaries, not patches.
And they don't typically even have a REPL, as doing brain surgery on a satellite (that's already in a bad way if you're debugging something) is rarely the best option. So lots of telemetry logging, with 'feature flags' to turn on different logs. Then a change and code push once root cause is found, or a new build with new telemetry gathering if it's still not clear.
I'm sure just about every model for code structure has someone doing it though.
>> And unfortunately at the time, I was not very skilled at playing politics and so I was a little bit more blunt about this than I should have been
If you were to handle it today, how would you do it differently? Any advice to an engineer to learn the art of selling more powerful tech against a lesser one when the latter is more popular?