
LLMs aren't expert systems. A hallmark of expert systems is that they encoded human-readable, human-checked knowledge with explainable reasoning. This was usually done with if-then rules; others used logic programming. Inference was forward and backward chaining over the rules. They usually held specialist knowledge for one use case.
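To make the contrast concrete, here's a minimal sketch of forward chaining over if-then rules, the classic expert-system loop. The facts and rules are made up for illustration, not taken from any real system:

```python
# Forward chaining: repeatedly fire if-then rules until no new facts appear.
# Facts and rules here are hypothetical, purely for illustration.

rules = [
    ({"has_feathers"}, "is_bird"),              # IF has_feathers THEN is_bird
    ({"is_bird", "can_fly"}, "nests_in_trees"), # IF is_bird AND can_fly THEN ...
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule, record the new fact
                changed = True
    return facts

print(forward_chain({"has_feathers", "can_fly"}, rules))
```

Every derived fact can be traced back to a specific human-written rule, which is exactly the explainability property the comment describes.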

LLMs are trained without supervision, produce probabilistic and unpredictable results, and don't explain every step of their reasoning. They're the opposite.

You might argue Cyc was one. It was also more complex than any expert system I had ever seen. We just called stuff like that a reasoning engine, or just Cyc, to avoid confusion.



An expert system is just a system based on repeated application of declarative rules. CYC was certainly an expert system - the ultimate scaling experiment of expert systems. I believe CYC also had a variety of inference/reasoning engines in addition to its set of rules.

The rules (some prefer to call them a world model) in an LLM are deduced, via gradient descent, from the training samples, but are still there. The transformations effected by each layer of a transformer are exactly those it has learnt - the rules it is applying.

As with CYC, people seem to be hoping that some external scaffolding (better inference engines) will elevate LLMs from just a set of rules into something more general and capable, but I tend to agree with Chollet that this active inference (reasoning) is actually the hard part.



