A formal notation for reasoning could solve some of the reasoning issues LLMs have. Perhaps something like Lojban or symbolic logic. We don't have a lot of data for it, but it might be possible to generate it synthetically.
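To make the synthetic-generation idea concrete, here's a rough sketch of what such a pipeline could look like: randomly build small propositional-logic formulas, evaluate them mechanically, and emit (question, answer) pairs as training data. Everything here (the formula representation, the prompt format) is just illustrative, not any real dataset's format.

```python
# Sketch only: synthetically generating propositional-logic training pairs.
# All names and formats here are illustrative assumptions.
import random

ATOMS = ["p", "q", "r"]

def random_formula(depth=2):
    """Build a random propositional formula as a nested tuple."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(ATOMS)
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment env."""
    if isinstance(f, str):
        return env[f]
    if f[0] == "not":
        return not evaluate(f[1], env)
    if f[0] == "and":
        return evaluate(f[1], env) and evaluate(f[2], env)
    return evaluate(f[1], env) or evaluate(f[2], env)

def to_text(f):
    """Render a formula in simple infix notation."""
    if isinstance(f, str):
        return f
    if f[0] == "not":
        return f"¬{to_text(f[1])}"
    sym = "∧" if f[0] == "and" else "∨"
    return f"({to_text(f[1])} {sym} {to_text(f[2])})"

# Emit (prompt, answer) pairs that could serve as synthetic training data.
for _ in range(3):
    formula = random_formula()
    env = {a: random.choice([True, False]) for a in ATOMS}
    prompt = f"Given {env}, is {to_text(formula)} true?"
    print(prompt, "->", evaluate(formula, env))
```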
On a darker note, I wonder if increasing AI reasoning capability could have dangerous results. Currently, LLMs are relatively empathetic and seem to factor the complex human experience into their responses. Would making LLMs more logical, cold, and calculating result in them stepping on things that humans care about?
> A formal notation for reasoning could solve some of the reasoning issues LLMs have. Perhaps something like Lojban or symbolic logic. We don't have a lot of data for it, but it might be possible to generate it synthetically.
There's definitely anecdata supporting this. For some time, ChatGPT was better at a lot of arithmetic/logic-type tasks if you asked it to generate a Python program to do x than if you just asked it to do x directly, for example. I haven't tested this specifically on the latest generation, but my feeling is that it has caught up a lot.
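To illustrate the pattern (the word problem and numbers are invented for the example): instead of asking the model for the answer outright, you ask it to write a program, so the arithmetic is done by the interpreter rather than by the model's token prediction.

```python
# Illustrative only: the kind of small program a model might emit when
# prompted to "write a Python program to solve x" instead of answering
# directly. The task and numbers here are made up.

def seats_for_sale(rows: int, seats_per_row: int, broken: int) -> int:
    """e.g. 'A theatre has 23 rows of 37 seats and 129 seats are broken;
    how many can be sold?' The interpreter does the arithmetic."""
    return rows * seats_per_row - broken

print(seats_for_sale(23, 37, 129))  # 722
```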
Because pre-training requires such an extreme quantity of data, LLMs effectively lock your reasoning language into the lowest common denominator, i.e. Python. Sure, people (maybe very smart people) can come up with a reasonable, effective, efficient notation; the problem is training the model to use it properly.