
Think of it as overly aggressive error correction at the language level.

It has a context of some base64 text. Given that base64 is almost always seen in association with computer code, is "address" or "actress" more likely?

It "knows" the algorithm for decoding base64 and can follow those steps. But it can't overcome its built-in bias toward producing the most likely output given the context.
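The algorithm the model is imitating is fully deterministic, which is what makes the substitution striking. A minimal sketch of the decode steps (the two payloads here are my own illustration, not from the thread): the encodings of "actress" and "address" differ only in a single 4-character group, so a model drifting toward the more "likely" word only has to deviate slightly from the mechanical procedure.

```python
import base64
import string

# The standard base64 alphabet: A-Z, a-z, 0-9, "+", "/".
ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"

def b64decode_manual(s: str) -> bytes:
    """Follow the base64 algorithm step by step: map each character to
    its 6-bit value, concatenate the bits, and regroup into 8-bit bytes."""
    s = s.rstrip("=")
    bits = "".join(format(ALPHABET.index(c), "06b") for c in s)
    # Drop trailing padding bits that don't form a complete byte.
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

# "YWN0cmVzcw==" encodes "actress"; "YWRkcmVzcw==" encodes "address".
for payload in ("YWN0cmVzcw==", "YWRkcmVzcw=="):
    assert b64decode_manual(payload) == base64.b64decode(payload)
    print(payload, "->", b64decode_manual(payload).decode())
```

Following these steps mechanically always yields the right answer; the failure mode being described is that the model's output distribution is conditioned on context, not just on the algorithm.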

(This problem is solvable, but I think that thinking about it like this helps understand why it behaves like it does)



> Given that base64 is almost always seen in association with computer code, is "address" or "actress" more likely?

Sorry, but I don't buy it.

I don't think "address" is a particularly likely word to appear in code, especially the kind of code that uses base64 (usually high-level).

It appears even less often inside base64 encoded content.


https://github.com/search?q=base64+address gives 9M+ results

The original use for base64 was to send binary content to an email address.


This is all irrelevant. A language model should not run code itself; instead, it should have a code execution environment where it can read the error messages and iterate. Running code directly is terribly inefficient and error-prone.
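The loop being proposed can be sketched in a few lines. This is a hypothetical harness, not any particular product's API: `ask_model` stands in for whatever LLM call you use, and the environment simply runs the proposed code in a subprocess and feeds any traceback back to the model.

```python
import subprocess
import sys
import tempfile

def run_python(code: str) -> tuple[int, str]:
    """Execute a snippet in a subprocess and capture its output, so the
    model sees real error messages instead of hallucinating results."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=10)
    return proc.returncode, proc.stdout + proc.stderr

def iterate(ask_model, task: str, max_rounds: int = 3) -> str:
    """Feedback loop: the model proposes code, the environment runs it,
    and any error output is fed back for the next attempt."""
    feedback = ""
    output = ""
    for _ in range(max_rounds):
        code = ask_model(task, feedback)  # ask_model: hypothetical LLM call
        status, output = run_python(code)
        if status == 0:
            return output
        feedback = output  # let the model read the actual traceback
    return output
```

The point of the design is the division of labor: the model does the reasoning, the interpreter does the executing, and the traceback is the interface between them.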

People also code on computers, not on paper.


The point is to develop good intuitions for how large language models behave. If someone can develop a differentiable script runner, that would be great! But the intuition about how the language model is behaving is useful for more than this specific problem.



