
In lisp about 50% of the code is just closing parentheses.


Heh, but it can't be that; there's no reason to think LLMs can count how many brackets still need a close any more than they can count words.


LLMs can count words (and letters) just fine if you train them to do so.

Consider the fact that GPT-4 can generate valid XML (meaning balanced tags, quotes, etc.) in base64-encoded form. Without CoT, just direct output.


That's GPT-4, which you wouldn't use for in-line suggestions because it's too slow.

I don't know what model Copilot uses these days, but it constantly makes bracket mistakes in Python.


You don't need a GPT-4-sized model to count brackets. You just need to make sure that your training data includes enough cases like that for the NN to learn it. My point is that GPT-4 can do much more complicated things than that, so there's nothing specific about LMs that precludes them from doing this kind of stuff right.
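For context, the task being debated is trivial for ordinary code: counting unclosed brackets is a one-pass counter. A minimal sketch (the function name and example snippet are made up for illustration):

```python
# Toy version of the task under discussion: how many ')' are still
# needed to balance a Lisp-like snippet. A deterministic one-pass
# counter; the question in the thread is whether an LLM can learn
# to reproduce this behavior from training data.

def unclosed_parens(code: str) -> int:
    """Return the number of ')' still needed to balance `code`."""
    depth = 0
    for ch in code:
        if ch == "(":
            depth += 1
        elif ch == ")":
            if depth == 0:
                raise ValueError("unmatched ')'")
            depth -= 1
    return depth

print(unclosed_parens("(defun add (a b) (+ a b"))  # 2 parens still open
```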


Logically, it couldn't be 50%, since that would imply the other 50% is open brackets, leaving 0% room for macros.


That's just a rounding error ;)



