May I ask why there are NNs in your project at all? Just to heat up the planet and make Nvidia shareholders even more happy? :-)
I mean, what I've seen in the Readme makes sense. But doing basic computer stuff with NNs just makes the resource usage go brr by astronomical factors, imho for no reason, while making the results brittle, random, and often just completely made up.
Also: Do you know about the (already 40-year-old!) project Cyc?
https://en.wikipedia.org/wiki/Cyc
This software can indeed "reason". And it does not hallucinate, because it's not based on NNs.
> May I ask why there are NNs in your project at all? Just to heat up the planet [...]
> Also: Do you know about the (already 40-year-old!) project Cyc?
Has Cyc accomplished anything so far? Or is it just to heat up the planet? The Wikipedia page makes it sound pretty hopeless:
> Typical pieces of knowledge represented in the Cyc knowledge base are "Every tree is a plant" and "Plants die eventually". When asked whether trees die, the inference engine can draw the obvious conclusion and answer the question correctly.
> Most of Cyc's knowledge, outside math, is only true by default. For example, Cyc knows that as a default parents love their children, when you're made happy you smile, taking your first step is a big accomplishment, when someone you love has a big accomplishment that makes you happy, and only adults have children. When asked whether a picture captioned "Someone watching his daughter take her first step" contains a smiling adult person, Cyc can logically infer that the answer is Yes, and "show its work" by presenting the step-by-step logical argument using those five pieces of knowledge from its knowledge base.
"AI heats the planet"... really? You mean marginally?
I'll assume you're asking in good faith. Using NNs allows this project to stand on the shoulders of giants: philosophically, mathematically, and programmatically. I also expect this to plug into OSS LLMs and leverage their knowledge, similar to how a human child first learns via Pavlovian/intuitive responses and only later starts to learn to reason.
Wrt inefficiency: training will be inefficient, but the programs can be extracted to CPU instructions / CUDA kernels at inference time. I'm also interested in using straight-through estimators in the forward pass of training, to do this conversion during training too.
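To make the straight-through estimator idea concrete, here's a toy PyTorch-style sketch (my illustration only, not actual neurallambda code, and ste_round is just a made-up helper name): discretize in the forward pass, but let gradients flow through as if the op were the identity.

    import torch

    def ste_round(x: torch.Tensor) -> torch.Tensor:
        # Toy sketch, not neurallambda code.
        # Forward pass sees x.round() (discrete); backward pass treats the
        # op as the identity, so gradients still reach x.
        return x + (x.round() - x).detach()

    soft = torch.rand(4, requires_grad=True)    # soft, differentiable outputs
    hard = ste_round(soft * 9)                  # discrete values 0..9 in the forward pass
    loss = (hard - torch.tensor([3., 1., 4., 1.])).pow(2).sum()
    loss.backward()                             # gradients reach `soft` despite the rounding
    print(hard, soft.grad)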
Cyc looks cool, but from my cursory glance, is it capable of learning, or is its knowledge graph largely hand-coded? Neurallambda is at least as scalable as an RNN, in both data and compute utilization.
That's the "heating the planet part" I was referring to. :-)
> but the programs can be extracted to CPU instructions / CUDA kernels during inference
This just makes my original question more pressing: What are the NNs good for if the result will be normal computer programs? (Just created with astronomical overhead!)
> Cyc looks cool, but from my cursory glance, is it capable of learning, or is its knowledge graph largely hand coded?
The whole point is that it can infer new knowledge from known facts through a logical reasoning process.
This inference process has been running for 40 years. The result is the most comprehensive "world knowledge" archive ever created. Of course this wouldn't have been possible to create "by hand". And in contrast to NN hallucinations, there is real logical reasoning behind it, and everything is explainable.
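To illustrate what I mean by inferring new knowledge from known facts, here's a toy forward-chaining sketch in Python (my own illustration with made-up facts and rules, nothing like Cyc's actual inference engine):

    # Tiny forward-chaining demo: derive new facts from hand-written ones.
    facts = {("isa", "oak", "Tree"),
             ("subclass", "Tree", "Plant"),
             ("property", "Plant", "dies_eventually")}

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:                  # repeat until a fixed point is reached
            changed = False
            new = set()
            for (r1, a, b) in facts:
                for (r2, c, d) in facts:
                    if r1 == "isa" and r2 == "subclass" and b == c:
                        new.add(("isa", a, d))        # isa(x,C) & subclass(C,D) => isa(x,D)
                    if r1 == "isa" and r2 == "property" and b == c:
                        new.add(("property", a, d))   # isa(x,C) & property(C,p) => property(x,p)
            if not new <= facts:
                facts |= new
                changed = True
        return facts

    print(("property", "oak", "dies_eventually") in infer(facts))  # True: trees die eventually

Every derived fact comes with the chain of rules that produced it, so the answer is explainable rather than hallucinated.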
I still don't get how some "dreamed up" programs from your project are supposed to work. Formal reasoning and NNs don't go well with each other (one could even say they're opposites). Imho it's "real reasoning" OR "dreamed up stuff". How could "dreamed up stuff" improve "real reasoning"? Especially as the "dreamed up stuff" won't be included in the end results anyway, where only the formal things remain. To what effect are the NNs included in your project? (I mean, besides the effect that the HW and energy demands go through the roof, ending up a billion times higher than just doing some lambda calculus directly…)
And yes, these are genuine questions. I just don't get it. To me it looks like "let's do things maximally inefficiently, but at least we can put a 'works with AI' rubber stamp on it", which is maybe good for collecting VC money, but what else?
> This just makes my original question more pressing: What are the NNs good for if the result will be normal computer programs? (Just created with astronomical overhead!)
You know how expensive it is to pay humans to write 'normal' computer programs? In terms of both dollars and CO2.