
I'm curious how this might apply to your average tech worker/programmer type person ... Are there common missteps that we do which overtaxes working memory, and how can we adjust our working rituals to mitigate?


Open plan offices are one such misstep. We could adjust our working habits to be more privacy focused, erring on the side of assuming that most hours of the day should be quiet, private hours spent on contemplative work in a private setting, even when the work is collaborative.

Hours spent in meetings or group settings that require dynamic, real-time audio should be seen as rare and exceptional. That form of collaboration is needed far less often than asynchronous collaboration, built out of individuals taking a more contemplative, personally adjustable approach to organizing thoughtful work habits that don't disrupt others.

(Note: I'm not saying I endorse this particular study on working memory, just that open plan offices are a problem in this area of productivity, whether or not the study turns out to be mostly clickbait.)


In my opinion, unnecessary abstractions -- especially splitting code up into different files -- are a tax on working memory and make it harder to reach a flow state. Every layer of indirection takes up residence in working memory, so it had better be necessary; otherwise it should be omitted, even at the cost of some code duplication.
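
To make that concrete, here's a rough sketch (all names invented) of the kind of indirection I mean, next to the inline version that says the same thing:

    // Hypothetical: three layers for a one-line idea. Each hop is one more
    // thing to hold in working memory while reading the call site.
    interface DiscountPolicy { apply(total: number): number; }

    class PercentageDiscountPolicy implements DiscountPolicy {
      constructor(private rate: number) {}
      apply(total: number): number { return total * (1 - this.rate); }
    }

    class DiscountPolicyFactory {
      create(): DiscountPolicy { return new PercentageDiscountPolicy(0.1); }
    }

    // Versus keeping the logic at its single point of use:
    function checkoutTotal(total: number): number {
      return total * 0.9; // 10% discount, visible right where it's applied
    }

If a second discount policy never shows up, the interface, the class, and the factory were all working-memory cost with no payoff.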


> In my opinion unnecessary abstractions -- especially splitting code up into different files

It gets especially bad when someone else wrote that code and you're just getting into it. Layers upon layers of abstractions scattered across classes, functions, and files. Each adds very little on its own, but you kinda have to keep it all in your head (or write it down, as I do) if you're planning to grok it and fix that bug.

It's kinda like reading a piece of code not in its final form, but as a pile of diffs being applied to it. Fun. :-)

I find older style procedural code with longer functions generally much easier to get into than any OOP.


As a counterpoint, I find that breaking code up into separate classes and functions helps greatly with minimizing what I need to keep in my working memory.

I don't often need to know exactly what every single line does all at once, nor can I actually keep that much information in my head anyway. I'd rather be able to say "that's the function that frobbles the subductor, it's 5 lines long, and right now I don't care how it does it."

Sure, the bug might be in that function - but if it's only 5 lines long and does one simple thing, I can write a bunch of tests for that one thing and work it out. When the code is broken up like this, I can keep an even larger system in working memory all at once.
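
Something like this hypothetical is what I have in mind (the "frobble the subductor" function is invented, obviously) -- small enough to pin down with a couple of asserts and then forget about:

    // Hypothetical tiny, single-purpose function: average of the positive
    // readings, or 0 if there are none.
    function frobbleSubductor(readings: number[]): number {
      const positive = readings.filter((r) => r > 0);
      if (positive.length === 0) return 0;
      return positive.reduce((sum, r) => sum + r, 0) / positive.length;
    }

    // A couple of focused checks cover it without loading the rest of the
    // system into working memory.
    console.assert(frobbleSubductor([]) === 0);
    console.assert(frobbleSubductor([2, 4, -6]) === 3);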


I think the "functions should be small and do only one thing" maxim is important but needs to be paired with "things should be small and done in only one function." The more you split logic across multiple functions, the more likely that the bug won't be in any one function but in the way that you composed them.


This is one of the most astute criticisms of coding style that I've seen, along with your earlier post regarding locality of reference and duplication of statements vs duplication of intent. I think these are really great principles for writing clear and beautiful code. Are these ideas something you've developed from experience? Is there a wider context and conversation around this kind of thing that you know of?


Locality of reference isn't just important for CPUs!

I worked with a guy once who loved to split everything into tiny classes made of tiny functions, each in its own tiny source file. You'd hop through four or five different files to do the simplest thing, and it was mindbending to try and debug.

Also with regards to code duplication, it's important to distinguish duplication of statements from duplication of intent. If the code's actually doing the same thing then sure, collapse it down into a function. But if the intent of each piece is different and they just happen to involve the same operations, then removing the duplication is adding a dependency to both places, and may very well not be a net win.
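
A sketch of that distinction (names invented): these two checks contain the same statements today, but they exist for different reasons, so collapsing them into one shared function would couple two unrelated requirements.

    // Intent: usernames must fit the users table column.
    function isValidUsername(name: string): boolean {
      return name.length > 0 && name.length <= 32;
    }

    // Intent: display names must fit the UI layout.
    function isValidDisplayName(name: string): boolean {
      return name.length > 0 && name.length <= 32; // same statements, different reason
    }

    // If the UI later allows 64 characters, only the second check should change;
    // a shared helper would force both call sites to move together.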


Maybe this explains the appeal of single-file Vue components. Having the template, logic, and styles together in one file makes the overall component much easier to reason about.
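
For anyone who hasn't used them, a minimal sketch of what that looks like (the component itself is invented): template, logic, and styles all live in one .vue file.

    <template>
      <button class="counter" @click="count++">Clicked {{ count }} times</button>
    </template>

    <script setup lang="ts">
    import { ref } from "vue";
    const count = ref(0);
    </script>

    <style scoped>
    .counter { font-weight: bold; }
    </style>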


We could get rid of git, which is a massive tax on our thought process.

Or rather, we could have a considerably simplified version, with a very small set of commands that are consistent with one another and 'undoable', and a simpler conceptual model - and only allow admins/superusers to touch 'real' git.

I'm deadly serious. I don't care how brilliant a dev is; I find that git's conceptual model, the variety and inconsistency of its command patterns, and the variety of usage models add up to an unnecessary mental tax on developers.

I don't mean to turn this into a git discussion, but I suggest it's an example of one of many 'sneaky taxes' that impose otherwise avoidable complexity on all of us.
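
As a sketch of what I mean -- not a real tool, and the verb names are invented -- a tiny wrapper could expose a handful of consistent, undoable commands and shell out to ordinary git underneath:

    import { execSync } from "node:child_process";

    const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

    // "save": snapshot everything with a message.
    export function save(message: string): void {
      run("git add -A");
      run(`git commit -m ${JSON.stringify(message)}`);
    }

    // "sync": bring in upstream work, then publish yours.
    export function sync(): void {
      run("git pull --rebase");
      run("git push");
    }

    // "undo": reverse the last save with a new commit, so it is itself undoable.
    export function undoLastSave(): void {
      run("git revert --no-edit HEAD");
    }

Everything beyond those verbs -- rebasing, history rewriting, submodules -- would stay behind the admin/superuser line.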


I use git, but almost never anything other than clone, pull, add, commit, and push.



