I recall trying a few mods that made Kerbal Space Program programmable so you could automate your rocket (some have used it to make SpaceX-style reusable boosters: https://www.youtube.com/watch?v=sqqQy8cIVFY), and mods that provided a domain-specific language were more convenient than mods that used an existing conventional programming language. However, the kOS language was still ultimately an imperative script, and too inflexible to be a complete programming language. It got me thinking about what kind of language would be best suited for writing controllers (I was also writing PID controllers at the time), and I noticed there wasn't a language for it.
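For anyone unfamiliar, the kind of PID controller I mean is only a few lines; here's a minimal discrete-time sketch (gains and setpoint are hypothetical, and nothing here is KSP- or kOS-specific):

```python
# Minimal discrete-time PID controller sketch. Gains and setpoint are
# hypothetical; nothing here is KSP- or kOS-specific.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. hold a 10 m/s ascent rate; feed the output into the throttle
controller = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=10.0)
throttle_correction = controller.update(measurement=8.0, dt=0.1)
```

Most of the work is tuning the gains per craft, which is exactly the kind of tight feedback loop a controller-oriented language could make pleasant.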
I also recall a Minecraft mod called ComputerCraft where you program a "Turtle" (a la Logo) to perform automated digging actions, but the task of programming it in the easily embedded Lua proved somewhat inconvenient for simple actions, if only for the ALGOL-style syntax. Another mod tried Forth, and I get the sense that while one could be productive in Forth, it's a lot to learn.
The developers have already incorporated it into a proof-of-concept game: https://bobthesimplebot.github.io/, but I could imagine RTS-style games or mods where you not only have to scavenge for resources, you also program your bots to do your bidding, like Dwarf Fortress with programmable drones instead of independently-minded Dwarves.
I've actually been really impressed by Shenzhen I/O (http://www.zachtronics.com/shenzhen-io/), which is a game where you have to program microcontrollers to build weird products. The built-in programming language is an ultra-simple assembly language, with either one or two registers, and about a dozen operations. Program length is limited to about 10 or 15 statements. There are external ROM and RAM modules which store about 20 values. Every challenge comes with a pre-made test suite.
Something about these extreme restrictions and the simplicity makes for excellent, puzzle-style gameplay. And I think you could add a Shenzhen I/O-like "microcontroller" to a lot of automation games (like Factorio: https://www.factorio.com/) and it would work very well.
I think that one of the problems with adding traditional programming languages to games is that there are no constraints to make it challenging, and that you can write fairly large programs. But a deliberately limited and simple language could be fun.
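To make the "deliberately limited" point concrete, here's a toy accumulator machine in that spirit; the instruction set and the length cap are invented for illustration and are not Shenzhen I/O's actual language:

```python
# Toy accumulator machine in the spirit of Shenzhen I/O's chips. The
# instruction set and the length cap are invented for illustration;
# they are not the game's actual language.
MAX_PROGRAM_LENGTH = 14  # the deliberate constraint that makes it a puzzle

def run(program, inputs):
    if len(program) > MAX_PROGRAM_LENGTH:
        raise ValueError("program too long for this chip")
    acc, out, pc = 0, [], 0
    stream = iter(inputs)
    while pc < len(program):
        op, *args = program[pc]
        if op == "mov_in":
            acc = next(stream)        # read the next input into acc
        elif op == "add":
            acc += args[0]
        elif op == "mul":
            acc *= args[0]
        elif op == "mov_out":
            out.append(acc)           # emit acc
        pc += 1
    return out

# Challenge: double every input value.
program = [("mov_in",), ("mul", 2), ("mov_out",)] * 3
print(run(program, [1, 2, 3]))  # -> [2, 4, 6]
```

The fun comes from the cap: once a challenge no longer fits in 14 instructions, the player has to get clever rather than just write more code.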
> I could imagine RTS style games or mods where you not only have to scavenge for resources, you also program your bots to do your bidding, like Dwarf Fortress with programmable drones instead of independently-minded Dwarves.
Sounds sort of like Screeps [1]. It's an MMO game where you write scripts in JS to control different robots (so, not logic programming, but still pretty fun with lots of depth). Parts of it are also open source [2].
The concept underlying Screeps basically amounts to my dream RTS.
While I haven't played it yet, the only thing that worries me is the tick rate. The trailer videos all use the history replay feature so as to appear real-time, when in actuality you're looking at a simulation tick rate of anywhere from 1 Hz (private server) to something like 0.25-0.5 Hz on regular/MMO servers.
A normal RTS would probably sim in excess of 25 Hz, though Supreme Commander sims at 10 Hz and is perfectly fine.
While the idea is fantastic, it kind of sounds like it's a bit similar to watching paint dry in practice.
(Please correct me if I'm wrong; going off of Steam reviews alone here.)
Screeps is a lot of fun. (I published my whole code on GitHub when I retired after about four months.) The tick rate on the main server is more than adequate. You can run your own server locally to test things at a faster tick rate.
Logic programming and production rules are a wonderful idea that I studied in undergrad back before 2000. These days, people seem hostile towards them; people just don't like the idea of specifying rules. I couldn't find modern intro textbooks on the topic, or classes that cover it on MOOCs. I'm very surprised and confused.
The trendy way to approach the problem these days is to throw massive amounts of data at a deep learning model and let the model try to learn the rules. And my irony detector can't help but squawk out the famous Minsky koan:
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
Machine Learning has its use cases, that's for sure, but I can't help but laugh at the person who eschews a well-understood model of a well-understood system, built by an experienced and trained human expert, in favor of a magic black box that at its best might converge on the expert's understanding after enough time and data. There is no shame in building off of what is already known and understood.
Any tips for introductory/intermediate material on the topic? In undergrad, we used Russell and Norvig. Surely there are more advanced books or courses.
On expert systems specifically, no. With the AI winter of the early '90s, not only did expert systems research die, but so did publisher interest in the topic. "Expert Systems" is effectively a dead field of study.
That being said, the field of Operations Research, at least philosophically speaking, has picked up where AI researchers left off. They've fully embraced the idea that human experts can model many systems extremely well, and have built incredible tools to do so: mathematical modeling and optimization, constraint programming, Boolean SAT, graphical models, etc. Effectively speaking, if you think expert systems are cool, the next logical step is to delve into constraint programming, which is a sort of evolution of logic programming. I'd recommend the MiniZinc tutorial for a practical introduction with a nice DSL. Constraint Processing by Rina Dechter is a great intro with a more academic bent. I'd say mathematical modeling has been Operations Research's greatest success, and I'd definitely recommend Model Building with Mathematical Programming by Paul Williams.
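To give a flavor of what sits underneath constraint programming, here is a deliberately naive backtracking solver; the solvers behind MiniZinc add propagation, clause learning, and search heuristics on top, and the problem and variable names here are made up:

```python
# Deliberately naive backtracking constraint solver, just to show the
# declarative flavor: you state the domains and constraints, and a
# generic search finds all solutions.
def solve(domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(domains):
        yield dict(assignment)
        return
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(ok(assignment) for ok in constraints):  # prune early
            yield from solve(domains, constraints, assignment)
        del assignment[var]

# x, y, z in 1..5 with x < y and x + y = z; each constraint passes
# until all of its variables are assigned.
domains = {"x": range(1, 6), "y": range(1, 6), "z": range(1, 6)}
constraints = [
    lambda a: "x" not in a or "y" not in a or a["x"] < a["y"],
    lambda a: not {"x", "y", "z"} <= a.keys() or a["x"] + a["y"] == a["z"],
]
solutions = list(solve(domains, constraints))
print(len(solutions))  # -> 4
```

The appeal is the same as with expert systems: the human states the model, and the machine does the search.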
I'd also say that Bayesians have embraced these ideas (that of building upon human expertise) within the field of Machine Learning far more than other ML researchers have. My recommendations here are probably less helpful... I've only ever toyed with Bayesian learning models, but never employed them professionally. But I would recommend Doing Bayesian Data Analysis by Kruschke. It was very helpful as an introductory material.
Today we have production rule systems such as ILOG and Drools which are head and shoulders better than OPS5 and other "expert system shells" from the golden age of A.I.
These are in widespread use for a few applications. Probably every bank has at least one ILOG instance running for enforcing business rules. Another one is "complex event processing", where a RETE engine makes it easy to aggregate small events into larger events. Also, a few efforts, such as Inform 7 and Clara, have tried to push the boundaries of programming for non-experts and real-life applications.
It is interesting, however, that the technology is not further applied, and a deep analysis of that could be worthwhile. For instance, RETE networks can eat "callback hell" situations for lunch, much like the complex event processing case. Instead, however, we are seeing one awful Javascript framework after another, and coroutines pushed as a very narrow answer to the problems on the server.
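As a sketch of the programming model (not of RETE's incremental matching, which is what makes Drools and ILOG fast), here is naive forward chaining that aggregates small events into a larger one, CEP-style; the facts and rule are invented:

```python
# Naive forward chaining over a working memory of facts. A real RETE
# engine matches incrementally instead of rescanning, but the
# programming model is the same: rules fire whenever their conditions
# hold, until nothing new is derived.
facts = {("click", "user1", 1), ("click", "user1", 2), ("click", "user1", 3)}

def rule_triple_click(wm):
    """Aggregate three clicks by the same user into one 'burst' event."""
    derived = set()
    for fact in wm:
        if fact[0] == "click":
            user = fact[1]
            clicks = [f for f in wm if f[0] == "click" and f[1] == user]
            if len(clicks) >= 3:
                derived.add(("burst", user))
    return derived

rules = [rule_triple_click]
changed = True
while changed:  # run to fixed point
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

print(("burst", "user1") in facts)  # -> True
```

Compare this with the callback version, where the "three clicks" state would be smeared across handlers and timers; here it's a single declarative condition.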
Note that all forms of A.I. have ties to optimization. For instance, usually when you train a neural net you are minimizing some kind of an error function. Drools (and iLog) both have optimization frameworks, etc.
The recent discussion of "superintelligence" has been marked by both a lack of imagination and a lack of awareness of previous work on the subject. For instance, rules engines are a fairly direct answer to the "AI Safety" and "AI Ethics" problems that there is so much handwringing about. Most areas that require computers to be "creative" amount to some kind of multi-objective optimization, and even if rules can't make a system good, they can at least prevent the worst abuses.
I'm in complete agreement. I didn't mean to give off the idea that you couldn't use Expert Systems anymore, merely that the research and publishing interest died. Implementations have definitely improved. Supposedly, Charles Forgy has continued researching and implementing improvements to his original RETE algorithm, but both the research and implementations are entirely proprietary and mostly inaccessible.
I likewise find it sad that the idea has died so much that we continue to build square wheels that were made obsolete by rule engines. I remember a futile attempt at Amazon to get a team to scrap their system in favor of a RETE-based rule engine. Theirs was a poorly performing and fragile homemade "rule engine" that compiled XML rules into if/else statements, which after tens of thousands of rules had slowed to a snail's pace. Somehow the magic of O(1) escaped them, and they practically required me to reimplement their system from scratch in order to convince them. So I let it go.
I had never considered the possibility of killing callback hell with a rules engine but I can definitely see it now. I personally have toyed with building a compiler that eschews the pipeline architecture with a rules engine where both analyses and optimizations are implemented as rules. I definitely think there is a world of possibilities out there, but maybe we'll need another AI winter before people consider them again.
Getting RETE right isn't trivial. I tried looking into the fundamentals a few months ago, and was very unsatisfied - it felt like people explaining it knew parts of it but not the whole thing. I guess it is good that we have some open source implementations that can be reverse engineered by someone motivated enough.
Constraint logic programming indeed evolved from logic programming, but it encompasses such diverse topics as finite-domain propagation, integer interval propagation, SAT solvers, linear and restricted polynomial optimization, discrete planning, and constraint satisfaction strategy meta-languages that the only commonality seems to be the more or less Prolog-like syntactic presentation. These vastly different formalisms were cast into terms such as CP(x) (meaning constraint programming over x, for x the reals or other domains), but this doesn't give you a solution strategy (the solution strategies being as varied as math itself).
> in a lot of domains
Right, but not in some domains, where expert systems are not only simpler to build but perform better. They can also be audited and evaluated for correctness.
empath75 says:
> "But I think experience has already shown that machine learning models blow away expert systems in a lot of domains."
I don't think there have been many head-to-head comparisons. Academic researchers simply ceased using "expert system" in their research grant applications and began to use "neural network".
For real-world applications, your choice of model would likely depend on the data available (e.g., human expert vs. historical data on electronic media).
I think this Quora posting characterizes well the status of expert systems vis-a-vis neural networks today:
"Artificial Intelligence: Are Expert Systems outdated?"
Some of what logic programming represented became absorbed into "semantic web" technologies (RDF, SPARQL, OWL/OWL2), and went down with them, though I think OWL2 (description logic) isn't half bad and has lots of vertical use in, e.g., bibliography systems, taxonomic meta-knowledge bases for medical and other research support systems, and backends for graph databases like OpenGraph. I remember RDF being used for metadata in Linux desktop search software, and folks hated it.
Prolog, OTOH, is as minimalistic and pragmatic as ever, with many implementations around, even based on an ISO standard. There's a slowly evolving initiative to come up with an extended standard library across Prolog implementations (e.g. Prolog Commons). Though as you might know, Prolog doesn't solve hard problems by itself; rather, it gives you a Turing-complete language based on backtracking, negation-as-failure, closed-world reasoning, and extra-logical mechanisms as primitives to build more interesting reasoning or planning/optimization solvers on.
The "semantic web" stuff has not really disappeared, but it hasn't thrived either.
OWL2 points to a world where a production rules or logic language is fronted by a macro language that lets you say something like
"x is a transitive property"
rather than write
"x(a,b) and x(b,c) implies x(a,c)"
In fact, many RDFS and OWL implementations work exactly that way. RDFS/OWL is designed to support data integration in the sense of "this predicate is an alias of that predicate", but it isn't revolutionary because it lacks the ability to do things like
which ordinary production rules engines do easily. Many semantic web tools have such a production rules engine hidden inside (Jena/GraphDB/...) but production rules have resisted standardization.
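The transitivity macro above amounts to a rule run to a fixed point over the fact base; here's a minimal sketch of that expansion (not an actual OWL reasoner, and real engines index facts instead of rescanning all pairs):

```python
# The macro "x is a transitive property" expands to the rule
# "x(a,b) and x(b,c) implies x(a,c)", run to a fixed point over the
# known facts. Naive O(n^2) rescan per pass, for illustration only.
def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

ancestor = {("alice", "bob"), ("bob", "carol")}
closure = transitive_closure(ancestor)  # derives ("alice", "carol")
```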
Note that "Datalog", a subset of Prolog that is purely logical and doesn't have cuts and all of that awful stuff, has caught on, but there has never been a "Turbo Datalog" or "Common Datalog" because frequently Datalog-equivalent functionality gets built into semantic web tools, Datomic, or systems that don't claim any formal compatibility with anything else.
RDF suffered a lot because of RDF/XML where it was really unclear where RDF ended and XML started. Today there is Turtle and JSON-LD, both of which are pretty ergonomic.
One issue I see is that many companies have their own "semantic web"-like technologies which are their own secret sauce and competitive advantage (thus not shared), whereas there seems to be a huge amount of fear and loathing of the W3C standards process, especially among people who are veterans of it.
Yes, and I'd like to add that W3C's semantic web efforts (in the form of EU-sponsored papers of top researchers in the field) have brought pretty well-understood description logic subsets formalized as OWL2 profiles. But IMHO, in W3C's semantic stack, these pearls get somewhat lost under layers of syntax - not just RDF/XML but also the (IMHO) mediocre SPARQL language syntax/protocol and needless variant syntaxes for description logic axioms such as functional syntax, etc. As you probably know, this all started as a variable-free syntax for logic axioms (such as your example for transitivity) as early as 1991, but I'm arguing not much was won (and in fact, things were made less clear intuitively) by using variable-free ad-hoc syntax for logic.
While many people like JSON-LD, if you see actual uses of it such as Google's corporate contacts ([1]), I can't help but think it merely adds to the already heavy "syntacticity" of the semantic stack. For me, JSON-LD feels like an appeasement to the JSON and pragmatic web developer crowd, but then its JSON gets very complicated/counter-intuitive and squeezes @type, @context, and other meta-meta attributes into JSON, which won't make pragmatists happy.
Personally, my go-to stack for logic has become Datalog and Prolog once again (unmatched in terms of minimalism, elegance, and power IMHO).
I'm not hostile to them per se, but I don't see much usefulness in systems like this one these days.
For some kinds of video game agents, traditional logic programming might be fine.
But for anything that I would consider using in the real world, I would want to use probabilistic knowledge representation and reasoning.
Even for many kinds of video games you'd want that instead. Like in a first person shooter where the agents have limited knowledge of the world state, you want to be able to reason about the other player's position and status without cheating, so that the agent can be more realistic and fair.
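A minimal sketch of what that could look like: the agent keeps a belief distribution over the opponent's position and does a Bayes update on a noisy observation (the cells and likelihoods are made-up numbers):

```python
# Sketch of probabilistic reasoning for an FPS agent: keep a belief
# distribution over the opponent's position instead of reading the
# true position from the game state (i.e., cheating). Cells and
# likelihoods are made-up numbers.
cells = ["hall", "stairs", "roof"]
belief = {c: 1 / 3 for c in cells}  # uniform prior

# P(hear footsteps | opponent in cell)
p_heard = {"hall": 0.7, "stairs": 0.2, "roof": 0.1}

# Bayes update after the agent actually hears footsteps:
unnorm = {c: belief[c] * p_heard[c] for c in cells}
total = sum(unnorm.values())
belief = {c: p / total for c, p in unnorm.items()}

best = max(belief, key=belief.get)
print(best)  # -> hall
```

The agent then acts on its belief, not on ground truth, which is what makes it feel fair to play against.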
I work for a living in a company that creates products to calculate timetables and schedules for transport companies.
The schedules and timetables need to be exactly defined, according to a large number of requirements (hard constraints and weighted preferences).
While these exact schedules could be created through probabilistic means, the systematic approach of logic search provides more consistent results than a stochastic search, and the imperative programming style allows finer control over the search algorithm than what would be easy to achieve in a pure logic language.
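A toy version of that setup, just to show the shape of the problem: hard constraints filter candidates, weighted preferences are summed into a penalty, and a deterministic enumeration gives the same optimum on every run (all data is hypothetical; real timetabling needs far smarter search than brute force):

```python
# Toy timetabling: hard constraints filter candidates, weighted
# preferences become a penalty to minimize, and a deterministic
# enumeration returns the same optimum on every run.
from itertools import product

drivers = ["ann", "bo"]
shifts = ["early", "late"]

def hard_ok(assignment):
    # hard constraint: the two shifts must go to different drivers
    return len(set(assignment.values())) == len(shifts)

def penalty(assignment):
    p = 0
    if assignment["early"] == "bo":
        p += 3  # weighted preference: Bo dislikes early shifts
    if assignment["late"] == "ann":
        p += 1  # Ann mildly prefers not to work late
    return p

candidates = [dict(zip(shifts, combo)) for combo in product(drivers, repeat=2)]
feasible = [a for a in candidates if hard_ok(a)]
best = min(feasible, key=penalty)  # systematic and repeatable
```

A stochastic search over the same model might land on different local optima from run to run, which is exactly the consistency problem the parent describes.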
Computational Logic and Human Thinking: How to be Artificially Intelligent
"This earlier draft of a book of the same title, published in July 2011 by Cambridge University Press, presents the principles of Computational Logic, so that they can be applied in everyday life. I have written the main part of the book informally, both to reach a wider audience and to argue more convincingly that Computational Logic is useful for human thinking."
A very long time ago I was trying to program many matroid algorithms (roughly, matroids are a generalization of vector spaces and linear independence) in Prolog. I could program those algorithms in many languages, for example in Lisp, but programming them in Prolog was very painful. I think Prolog-style programming is a useful tool when the program is similar to those solved by forward chaining, but in other cases I can't see the benefits; the dismissal of Prolog in day-to-day programming seems to reflect that sentiment.
At http://blog.ruleml.org/post/32629706-the-sad-state-concernin... Kowalski summarizes his theory of the classification of rules (within production logic systems) into a triple typology, and suggests that the last of these types, which he terms reactive rules, are more fundamental, and "the driving force of life". These are rules which define a subsequent action or state from current conditions, i.e. "if hungry then eat".
To fully grasp the idea it is important to understand the different natures of logic programming versus imperative/procedural programming... there is not really an if-then-else-style (i.e. control-flow-oriented) program state. Rather, the programmer tends to throw a bunch of formal rules at a 'solver', which simply determines the result.
I've always thought that someone armed with an efficiently implemented Logic Programming Language could sweep through the first few rounds of competitions like Google Code Jam.