The other big question regarding Rete is, today, at what point the complexity of the algorithm is worth the performance gain.

A primary goal of Rete is to narrow the set of rules to be applied based on changes in the working memory. That's quite important if you have a large rule set, compared to a more naive, perhaps brute-force, approach of applying every rule.

But modern machines are pretty stupid fast. On the very slow hardware of the day, Rete was very important; today, I don't know. Or, simply, the use cases perhaps narrow, particularly for smaller rule sets.

"Here's a bunch of rules (say, anonymous JS functions). Run each one against this working set (which is little more than a fancy map). If any of the members of the map change, do it again."

Cheap hack 2B. Register with each rule what variables it's interested in:

    addRule(['weight', 'height'], function(map) {
      var w = map.get("weight");
      var h = map.get("height");
      var bmi = w / (h * h);
      map.put("bmi", bmi);
    });

    addRule(['bmi'], function(map) {
      var bmi = map.get("bmi");
      if (bmi > 30) {
        map.put("weight_status", "obese");
      }
    });
(No, this isn't a rule development environment. Yes, rule engines can be very complicated, and rule interaction, precedence, etc. are a Thing. This is a trivial example; that doesn't mean it's not potentially useful for some scenarios, however.)
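And a sketch of what the 2B machinery itself might look like, tracking which keys changed so only the rules watching those keys re-fire (again, illustrative names, not a real engine's API):

    var rules = [];

    // Register a rule together with the keys it watches.
    function addRule(deps, fn) {
      rules.push({ deps: deps, fn: fn });
    }

    // Working set that records which keys changed since the last drain.
    function makeWorkingSet() {
      var data = new Map();
      var changed = new Set();
      return {
        get: function (k) { return data.get(k); },
        put: function (k, v) {
          if (data.get(k) !== v) {
            data.set(k, v);
            changed.add(k);
          }
        },
        drainChanged: function () {
          var out = changed;
          changed = new Set();
          return out;
        }
      };
    }

    // Fire only the rules whose watched keys changed; repeat until quiet.
    function run(ws) {
      for (var keys = ws.drainChanged(); keys.size > 0; keys = ws.drainChanged()) {
        rules
          .filter(function (r) {
            return r.deps.some(function (d) { return keys.has(d); });
          })
          .forEach(function (r) { r.fn(ws); });
      }
    }

Feeding it the two rules above:

    var ws = makeWorkingSet();
    ws.put('weight', 100);   // kg
    ws.put('height', 1.75);  // m
    run(ws);                 // fires the bmi rule, then the weight_status rule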



I've always meant to hack up a Rete implementation but never found the time. Your example here makes it look like a reactive program, though, which is something I have done. In functional reactive programming, you topologically sort the update graph to ensure that nodes are updated in the correct order so no glitches occur, and the reactive graph only updates the subgraphs affected by a change. Is that basically what Rete is doing?
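Concretely, the glitch-free part I mean is just updating in dependency order, something like this (toy sketch, borrowing the names from your example):

    // Depth-first topological sort: deps[n] lists the nodes n reads from.
    // Updating nodes in the returned order guarantees each node runs
    // after all of its inputs, so no node ever sees a stale value.
    // (No cycle detection here; a real FRP runtime would need it.)
    function topoOrder(deps) {
      var order = [];
      var seen = new Set();
      function visit(n) {
        if (seen.has(n)) return;
        seen.add(n);
        (deps[n] || []).forEach(visit);
        order.push(n);
      }
      Object.keys(deps).forEach(visit);
      return order;
    }

    topoOrder({
      weight: [],
      height: [],
      bmi: ['weight', 'height'],
      weight_status: ['bmi']
    });
    // -> ['weight', 'height', 'bmi', 'weight_status']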



