Wow, that brings back memories. I remember when it first came out, it seemed like such a small step from having a system that could converse about a simplified universe to one that could converse about a much closer to "real" universe. And here we are with computers that are literally over a million times more powerful and nope, you still can't ask Alexa or Google Home where you left the car keys.
For the same reason we still need classical physics and chemistry and biology and psychology and sociology, even though in principle it should be possible to model everything ab initio with just quantum mechanics. In practice we can just barely do that for single molecules before the complexity becomes insurmountable.
For anyone wondering, I assume the name is a reference to the frequency distribution of English characters - when sorted by average frequency, you get "ETAOINSHRDLU..."
Person: DOES THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS SUPPORT ANYTHING GREEN?
Computer: YES, THE GREEN PYRAMID.
Person: WHAT COLOR IS IT?
Computer: BY "IT", I ASSUME YOU MEAN THE SHORTEST THING THE TALLEST PYRAMID'S SUPPORT SUPPORTS.
Computer: RED
I got confused by the instruction myself. That's... it's hardcore.
This program is discussed amongst a whirlwind of beautiful insight in Douglas Hofstadter's "Gödel, Escher, Bach". I doubt I'm in the first 1,000 people to recommend that book on HN, but I think it would be remiss to have a thread about SHRDLU without it being mentioned!
The first AI to really blow my mind as a kid and - between that and Conway's Game of Life - no doubt responsible for my first 5 years of obsessive all-night hacking and sneaking into university libraries to supplement the lackluster CS education in high school at the time.
I remember watching SHRDLU being demonstrated on a NOVA TV program 30+ years ago. At the time I was programming in BASIC on a microcomputer-class machine, and it completely blew my mind that someone had created a program that could parse and reason that well.
This reminds me a lot of WordsEye[1], but not even WordsEye lets the human user interrogate the machine about the created environment using natural language.
Would a modern Prolog (those things + automatic backtracking) or Erlang (those things + cheap actors) have been seen as even better for such purposes, if they had existed at the time?
Having looked at the source for SHRDLU (which I found very hard to understand without more context), I think a lot of it was written in Micro-Planner - a Prolog-like logic language (a subset of Hewitt's Planner, embedded in Lisp) that actually predates Prolog.
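To give a flavour of what that buys you, here's a minimal toy sketch (entirely my own, nothing to do with SHRDLU's actual Micro-Planner code) where Python generators stand in for the automatic backtracking a Prolog-style language gives you for free when answering questions like the ones in the transcript above:

```python
# Toy blocks-world facts: (name, shape, colour, what it rests on).
# Made up for illustration; SHRDLU's real database was far richer.
world = [
    ("b1", "block",   "red",   "table"),
    ("p1", "pyramid", "green", "b1"),
    ("b2", "block",   "blue",  "table"),
]

def supported_by(supporter):
    """Enumerate every object resting on `supporter` (a crude stand-in for backtracking)."""
    for name, shape, colour, on in world:
        if on == supporter:
            yield name, shape, colour

def green_things_on(supporter):
    """Roughly: 'does <supporter> support anything green?'"""
    return [name for name, _shape, colour in supported_by(supporter) if colour == "green"]

print(green_things_on("b1"))  # ['p1'] - yes, the green pyramid
```

In a real Prolog you'd just state the facts and pose the query, and unification plus backtracking would do the enumeration for you - which is why SHRDLU's "search over a database of assertions" style maps so naturally onto logic programming.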
I think it was more about the way the lambda calculus encodes ideas from category theory and so on. GOFAI was all about using logic to achieve AI, so languages like LISP (and, of course, logic programming languages like Prolog) were a natural choice for that sort of thing.
No, Eugenio Moggi realized that you can use monads to model computational effects like IO, in his paper "Computational lambda-calculus and monads" (1989); Haskell picked the idea up later. The first person I can think of to make a connection between categories and the lambda calculus is Joachim Lambek, in his paper "From lambda calculus to Cartesian closed categories" (1980).
Category theory, the lambda calculus and the principle of compositionality have been employed in linguistics since the 1930s, for instance in categorial grammar. I don't think we're talking about the same thing.
What kind of Category Theory are you talking about? [Category Theory](https://plato.stanford.edu/entries/category-theory/) in the mathematical sense, and in the sense that it can be used to describe Lambda Calculi, appeared in 1945, in the field of Algebraic Topology and Homological Algebra. I'd be really surprised if it was employed in linguistics in the 1930s.
I guess you're right and I'm wrong - the lambda calculus is not my specialty, really. I know for sure that it's used in semantic and discourse analysis in computational linguistics [1] (which again are not my specialty) but, admittedly, I am a bit fuzzy on the dates.
What I had in mind in particular was the application of concepts from formal language analysis to natural language processing, which Wikipedia confirms started in the 1970s with the work of Richard Montague [2]. Somehow I had convinced myself this work hails from much earlier, around the time of (Chomsky's) Syntactic Structures - but that again is from the late '50s. Anyway, this kind of analysis does make use of the lambda calculus and category theory _today_, but perhaps Montague's early work didn't specifically do so.
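To make that concrete, here's a toy, made-up sketch of the lambda-based, compositional style of semantics I mean - nowhere near Montague's actual fragment, just the general flavour of "word meanings are functions, sentence meaning is function application":

```python
# A tiny invented model: three entities and a few one-place predicates.
entities = ["socrates", "plato", "fido"]
human  = lambda x: x in {"socrates", "plato"}
mortal = lambda x: True          # in this toy model, everything is mortal
barks  = lambda x: x == "fido"

# Determiners as higher-order functions over predicates, Montague-style:
# "every N VP" is true iff the VP holds of every entity satisfying N.
every = lambda noun: lambda verb: all(verb(x) for x in entities if noun(x))
some  = lambda noun: lambda verb: any(verb(x) for x in entities if noun(x))

# Sentence meanings then fall out of plain function application.
print(every(human)(mortal))   # "every human is mortal"  -> True
print(some(human)(barks))     # "some human barks"       -> False
```

That's the compositionality story in miniature: the syntax tree tells you which of these functions get applied to which arguments.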
Like I say, the lambda calculus is not my specialty, and I'm always a bit surprised when I realise how far back Church's original work goes - considering how long it took for the relevant concepts to percolate down to the mainstream of programming languages, that is. That should help explain my confusion with the dates.
As a side note - a lot of what I know about semantic analysis is just "lore" - things I've picked up here and there while looking at structure, which is my main dish usually. Sometimes I struggle to point out how I know some things I'm sure I know :/
Edit: I think my reply to the original question, about why Lisp was used in computational linguistics, is mostly correct even if I got my dates wrong: using formal logic tools to analyse meaning in natural languages was popular around the time Winograd wrote his thesis, and he would have leaned heavily on that for SHRDLU. I'm not getting the dates wrong here, either - Alain Colmerauer's work on logic grammars (which led to Definite Clause Grammars) goes back to at least 1972, and he didn't just come up with the idea out of nowhere; there was a lot of activity in logics and grammars around that time.