Maybe this will be a good way for users to communicate with LLMs, but I wonder if it would be better for LLMs to generate code in a more disciplined language: something like Eiffel with pre/postconditions and invariants, or Lean/Rocq for provable correctness. Then a rigorous compiler can check that the LLM emitted what it promised. The verbosity is unpleasant for conventional human programmers, but it's almost a non-issue for an LLM.
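
To make that concrete, here is a minimal Lean 4 sketch (a toy example of mine, not from the article; clampNonNeg and the theorem name are made up): the compiler simply refuses the program unless the stated postcondition is actually proved.

    def clampNonNeg (x : Int) : Int :=
      if x < 0 then 0 else x

    -- Postcondition: the result is never negative. Lean rejects the file
    -- unless this proof goes through.
    theorem clampNonNeg_nonneg (x : Int) : 0 ≤ clampNonNeg x := by
      unfold clampNonNeg
      split <;> omega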


I agree. I feel like the more strictly the problem space is defined, and the more guardrails that can be added and checked, the better the LLM will do. The data derived from those constructs also doubles as additional context for the LLM.

I haven’t had much time to really experiment with it, but I feel that the TypeScript XState library would, in theory, be a great fit for LLMs, since it lets them break a problem into states and state transitions and then focus on one state at a time.
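
Something like this minimal sketch is what I have in mind (assuming XState v5's createMachine/createActor API; the machine, state, and event names are made up):

    import { createMachine, createActor } from "xstate";

    // Each state declares only its own transitions, so an LLM (or a human)
    // can reason about one state at a time.
    const fetchMachine = createMachine({
      id: "fetch",
      initial: "idle",
      states: {
        idle: { on: { FETCH: "loading" } },
        loading: { on: { RESOLVE: "success", REJECT: "failure" } },
        success: { type: "final" },
        failure: { on: { RETRY: "loading" } },
      },
    });

    const actor = createActor(fetchMachine);
    actor.start();
    actor.send({ type: "FETCH" }); // idle -> loading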


Did anyone manage to cross the chasm between floats and reals when doing proofs about algorithms in actual code? It might be a skill issue, but I always get stuck on reals not being decidably comparable.
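
Roughly what I mean, as a toy Lean 4 illustration (assumes Mathlib for the reals; not my actual code):

    import Mathlib.Data.Real.Basic

    -- Comparing reals only gives a Prop; the Decidable instance Mathlib
    -- provides is classical/noncomputable, so it can't be evaluated.
    example (x y : ℝ) : Prop := x < y

    -- Float comparison runs fine, but it isn't real arithmetic:
    #eval (0.1 + 0.2 : Float) == 0.3   -- false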


That's a good question, and I certainly don't know the answer. Floats are a bit messy, but I would guess it's solvable.


LLMs can only predict the next word, so I don't know whether it's an advantage to use a language with provable correctness or dependent types, especially when training data is scarce.



