IIRC, Erlang was originally written in Prolog.
The "in zero lines" part doesn't really make sense, because most language runtimes use existing functionality in the language of implementation.
Also, the "3842534 Lips" bit on the timer lines is a typical Prolog measurement, 'logical inferences per second'. ("Infinite Lips" is pretty silly, though.)
Erlang and Oz are both from Sweden, and Sicstus Prolog (of the Swedish Institute of Computer Science) is one of the most prominent Prolog implementations. Perhaps Prolog is/was more popular there than elsewhere?
Back in 1987 I wrote a toy interpreter for Scheme in Prolog. Well, a Scheme-like language with a Prolog-like syntax. I kept the files, which are at http://umbc.edu/~finin/sip/.
Neither of the Prolog-in-Lisp examples I've seen (in "On Lisp" and "Paradigms of Artificial Intelligence Programming", both of which I highly recommend) seems to be much about language one-upmanship. Rather, they seem like interesting ways to really understand the important bits of Prolog: how they work and what they might be useful for.
Prolog is to Lisp what assembly is to C: used occasionally for specific purposes, where absolutely necessary.
Prolog is a fun language on its own, but using it feels like having your arms cut off and your legs made 100 times more powerful: it empowers you in some respects but completely disables you in others. So many people opt to get it in a form where it's embedded in Lisp, to compensate for its shortcomings and exploit its powers.
That's because the design of the language is so strongly skewed towards unification and search. It's great as a database / ruleset query language, for example, but it's not as general-purpose as e.g. Lisp or ML. It makes a lot of sense to embed it in something else. (I wouldn't try to write a web server in Make, either, but Make is quite useful in its specific domain.)
My Prolog is certainly rough, but am I correct in thinking that this doesn't do closures? Or is there some aspect of the state-passing that I am missing?
I don't have Prolog on this computer, but it looks like the interpreter environment (state) is passed around in two a-lists.
Prolog can do neat tricks (difference lists!) with infinite data structures by passing around unbound variables, and then "retroactively" binding them, perhaps partially, further in the execution. It's conceptually different from lazy evaluation (more like dataflow variables), but has many similar applications. I wouldn't be surprised if it does handle closures automatically during unification of the code tree, though I can't check at the moment.
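For illustration, here is a minimal difference-list append, a standard sketch of the trick described above (the predicate name dl_append is my own):

```prolog
% A difference list represents a list as Front-Hole, where Hole is the
% still-unbound tail of Front. "Appending" just binds one hole to the
% next front, so it takes constant time instead of walking the list.
dl_append(Front1-Hole1, Hole1-Hole2, Front1-Hole2).

% ?- dl_append([1,2|H1]-H1, [3,4|H2]-H2, List-[]).
% List = [1, 2, 3, 4]
```

The "retroactive binding" is exactly the unification of Hole1 with the second list's front.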
Prolog works by filling in all the blanks in a way that makes sense as a whole, but in this case it happens that the check if something could have been passed in as an argument can be used to set the argument to it, if valid. This probably sounds like weird quantum physics stuff, but Prolog is capable of testing whether the variables in the goal being tested are bound, explicitly binding and revoking them, collecting the list of possible valid bindings, etc.
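A small sketch of those introspection tools in standard Prolog (var/1, ground/1, and findall/3 are built in; the example queries are mine):

```prolog
% var/1 succeeds if its argument is currently an unbound variable;
% ground/1 succeeds only if the term contains no unbound variables.
% ?- var(X).            % true: X is unbound
% ?- X = 1, var(X).     % false: X is now bound
% ?- ground(f(1, Y)).   % false: Y is still a hole
%
% findall/3 collects every binding that makes a goal succeed:
% ?- findall(X, member(X, [a, b, c]), Xs).
% Xs = [a, b, c]
```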
It's a somewhat misleading way of stating that if you try to build a Lisp in Prolog, most of the necessary constructs already exist, you just need to tie them together.
Prolog is also homoiconic, has macros, and is built out of atoms and arbitrarily nestable lists. (Its design is very strongly tilted towards pattern matching and search, though, so it's less of a general-purpose language.)
That's rather pedantic... the title should really just be "Lisp in Prolog". As it stands, I go to the page, see a large (compared to zero lines) chunk of source, and conclude link-bait.
"""
Some online books show how to implement a simple "Prolog" engine in Lisp. They typically rely on a representation of Prolog programs that is convenient from a Lisp perspective, and can't even parse a single proper Prolog term. With this approach, implementing a simple "Lisp" in Prolog is even easier ("Lisp in Prolog in zero lines"): Translate each Lisp function to a Prolog predicate with one additional argument to hold the original function's return value. Done. This is possible since a function is a special case of a relation, and functional programming is a restricted form of logic programming.
"""
Right: this actually parses Lisp. (Parsing is supposedly another thing Prolog is particularly good at, but I haven't gotten to that chapter in _The Art of Prolog_ yet.)
Parsing Prolog is actually pretty easy (the syntax is almost as simple as Lisp's!), but IIRC the Prolog-in-Lisps I've seen in PAIP and On Lisp just use native Lisp sexps. While the Prolog-y syntax strikes me as a bit weird in Erlang, it actually makes a lot of sense for Prolog.
2.) Now point me to the Prolog interpreter in Lisp in 1 line.
3.) Now for the rant: looks like we're slowly recognising how silly it was to hop on the OOP/imperative programming track. All we got from it: everyone and his mom programming, code so explicit it destroys the entropy balance of the universe, and mankind's software library mostly non-reusable trash.
OOP and FP are very similar. You can bet that if the average Enterprise Java Programmer used Haskell instead of Java, the code would be just as confusing and unmaintainable. The problem is not programming paradigms, but bad programmers.
(Programming is difficult to teach, learn, and practice, but is also in high demand. This leads to bad programs. OOP has nothing to do with it.)
I certainly agree that immutable data is better than mutable data, but that again has nothing to do with FP or OOP. I can mutate data in Lisp, and I can have fully-immutable instances in Java (not my choice, btw, but definitely possible). It comes down to recognizing the value of immutability, which is the programmer's job.
One of my private fantasies is to ship a task to an outsourced (in the worst sense) organization and specify that they must use Haskell. I can't even imagine what monstrosity would come back, but I bet it would be hilarious.
Personally, I rate the perversity of bad coders as being well above Haskell's ability to prevent perversities, but the fight would be truly epic and produce months worth of tdWTF fodder. I don't know which would be worse: Them discovering unsafePerformIO, or them not.
It's likely to go against the flow of the language. While I don't agree with all the design trade-offs in Python, there's a lot to be said for having a clear idea of what style(s) fit the language (what's "Pythonic") and sticking with it. Python code stubbornly written as if it were Haskell is often inefficient and hard to read. Writing idiomatic code that works with the strengths of the language at hand is a better idea. When/if somebody else works with your code, it's unlikely they will have exactly the same background as you, though they will probably know the language you wrote the project in.
(I'm assuming it's not in a language like OCaml or Oz that can comfortably fit both styles.)
works, because Prolog can unify the former with its result ("does the Lisp expression provided have a result we can determine? Yes, and it's [a, b, 3, 4, 5]"). If you try to unify with the opposite variable unbound ("Is there a valid Lisp expression that results in [a, b, 3, 4, 5]?"), it starts generating a depth-first traversal of every possible valid Lisp expression according to the grammar * (with logic variables; templated, sort of), and could e-ven-tu-a-lly find the append expression. For me, on SWI-Prolog, it just runs out of local stack. If the program had been designed with that use in mind, there would be changes to the design to greatly reduce the search space. Regardless, "Generate every possible syntactically valid program and test each of them until one makes all my test cases pass" is probably a pretty slow way to write code. :)
I found Prolog really confusing when I thought about it as "going backwards". Really, it tries to determine if a completely instantiated version of the expression you give it exists (according to the facts and rules you've provided), like trying to fill in a sudoku puzzle. If there isn't a valid solution, it says "no."; otherwise it searches (depth-first) and pauses to present any solutions it finds along the way. "Fill in the missing pieces, backtracking to give multiple solutions" is a fundamentally different style of evaluation than "apply this function to these arguments, return the result", and ideas of forwards/backwards don't directly apply. (Unsurprisingly, doing I/O in Prolog gets a bit weird...)
In some cases, this means rules have several uses. member(Item, List) can be used both to get each value from a list one at a time ("member(X, [1, 2, 3])" -> 1, 2, 3) and to get all generic lists that contain a value ("member(3, L)" -> "[3|_G242], [_G241, 3|_G245], [_G241, _G244, 3|_G248], ..."; those are gensyms). This doesn't always work (e.g. add(3, 5, X) succeeds with X=8, but add(3, X, 8) needs extra declarations), in part because Prolog doesn't know to try integers unless you tell it (in the above case, with "between(1, inf, X)").
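One way to supply those "extra declarations" is to enumerate candidate integers with between/3 (note the standard argument order is between(Low, High, X)); this is a sketch of mine, not code from the linked page:

```prolog
% add/3 made usable "backwards" over non-negative integers by
% generating candidates before doing arithmetic. Beware: with more
% than one argument unbound, backtracking can run forever.
add(X, Y, Z) :-
    between(0, inf, X),
    between(0, inf, Y),
    Z is X + Y.

% ?- add(3, X, 8).
% X = 5
```

CLP(FD) libraries solve the same problem more cleanly, but plain between/3 shows the generate-and-test idea.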
* Based on a quick glance at the code and my understanding of Prolog, thus far.
Wikipedia:
"""
Greenspun's Tenth Rule of Programming is a common aphorism in computer programming and especially programming language circles. It states:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Prolog follow-up
Any sufficiently complicated LISP program is going to contain a slow implementation of half of Prolog.
"""