With LLM-generated code (and any code, really), the interface between components becomes much more important. It needs to be clearly defined so that it can be tested, and so that it doesn't rely on implicit features that could go away if the component were re-generated.
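As a minimal sketch of what that could look like (the RateLimiter interface and the vitest test runner here are just illustrative assumptions, not anything from a specific project): the contract is written down explicitly, and a test exercises it against whatever implementation you currently have, regenerated or not.

    // Hypothetical contract: the interface is the whole agreement,
    // so a regenerated implementation can't silently drop behavior.
    export interface RateLimiter {
      // Returns true if the call is allowed, false if it should be rejected.
      tryAcquire(key: string): boolean;
    }

    // Contract test, runnable against any implementation of the interface.
    import { describe, it, expect } from "vitest";

    export function rateLimiterContract(make: () => RateLimiter) {
      describe("RateLimiter contract", () => {
        it("allows the first call for a fresh key", () => {
          expect(make().tryAcquire("user-1")).toBe(true);
        });
      });
    }

If the test suite only covers the interface, regenerating the component behind it is a much safer operation.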

Only when you know for sure the problem can't be coming from that component can you stop thinking about it and reduce the cognitive load.




Agreed.

Regarding some of the ‘layered architecture’ discussion in the OP, I’d argue that having many clearly defined modules is not as large a detriment to cognitive load when an LLM is the one interpreting the code. That depends on two factors: each module being defined clearly enough that you can be confident a problem lies in the interactions between modules/components rather than inside them, and sharing sufficient context with the LLM so that it stays focused on those interactions instead of force-fitting a solution into one module or missing the problem space entirely.

The latter is a constant nagging issue, but the former is completely doable (types and unit testing help, see the sketch below), even though it flies in the face of the mo’ files, mo’ problems issue that creates higher cognitive load for humans.
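A rough sketch of the former (module names and the vitest runner are assumptions for illustration): each module is narrow and fully described by its types, and has its own unit test, so when something breaks you can quickly rule the module in or out and look at the seam between modules instead.

    // pricing.ts -- one small, typed module.
    export function netPrice(gross: number, discountPct: number): number {
      return gross * (1 - discountPct / 100);
    }

    // invoice.ts -- a second module that only touches pricing through its typed interface.
    import { netPrice } from "./pricing";

    export function invoiceTotal(
      lines: { gross: number; discountPct: number }[]
    ): number {
      return lines.reduce((sum, l) => sum + netPrice(l.gross, l.discountPct), 0);
    }

    // pricing.test.ts -- if this passes but the invoice total is wrong,
    // the bug is in the interaction, not inside pricing.
    import { expect, test } from "vitest";
    test("netPrice applies a percentage discount", () => {
      expect(netPrice(100, 25)).toBe(75);
    });

That per-module confidence is exactly what lets you point the LLM at the interactions rather than re-litigating every file.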



