Maybe his advice suffers from the same problem as many other pieces of apparently good advice: it gets used as an excuse to do whatever the person citing it already wanted to do, hiding their real motivation.
Premature optimisation: oh that means we don't need to think about performance at all
XY problem: I don't need to tell you the answer because you shouldn't be asking the question
If it works don't fix it: we don't need to do any maintenance
Chesterton's fence: we don't need to change anything ever
Your summary is clearly not what he's saying, but I can totally believe that people would use it as justification for doing those things.
They didn't say it requires saints and/or geniuses to follow the advice in TFA, they said that people will often twist good advice in bad directions because they have ulterior motives. This is true regardless of how difficult actually implementing the advice would be.
The advice in TFA basically boils down to "don't pendulum, try to find a good middle ground between extremes", which shouldn't require either a saint or a genius.
The system works for him. Moreover, I expect it works well enough for some people. Humans are a rather heterogeneous lot. Any system of universal applicability is going to be extremely limited in its ability to provide tactical insight.
> If it works don't fix it: we don't need to do any maintenance
My counter argument is “Why don’t you have automated test coverage so you’re free to make maintenance or engineering improvements without fear of breaking things?”
One thing I have noticed with overzealous testers is that to get 100% coverage they mock everything and test implementation details. Then you end up writing 2x the code to get anything done. I think it's a net negative. I have more often had to rewrite tests in these environments than had them actually save me from shipping a bug.
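To make the contrast concrete, here's a minimal sketch in Python (the greeting function, repo, and test names are all invented for illustration, not taken from any real codebase):

```python
from unittest import mock

# Hypothetical example: a tiny service that looks up a user and
# formats a greeting; repo stands in for a database layer.
def greeting(repo, user_id):
    user = repo.get_user(user_id)
    return f"Hello, {user['name']}!"

# Implementation-coupled style: the assertion pins down *how* the
# function talks to its collaborator, so renaming get_user or
# batching lookups breaks the test even when behaviour is unchanged.
def test_greeting_implementation_coupled():
    repo = mock.Mock()
    repo.get_user.return_value = {"name": "Ada"}
    greeting(repo, 42)
    repo.get_user.assert_called_once_with(42)

# Behaviour-focused style: a minimal fake plus an assertion on the
# output, leaving the implementation free to change underneath.
def test_greeting_behaviour():
    class FakeRepo:
        def get_user(self, user_id):
            return {"name": "Ada"}
    assert greeting(FakeRepo(), 42) == "Hello, Ada!"
```

Both tests pass today, but only the first one has to be rewritten when the internals change.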
This is why I very heavily favour functional tests or broad use-case based integration tests that are implementation agnostic. Save your highly coupled unit/property-based tests for algorithmic code or weird edge cases once things are settled. More generally, even many TDD people advocate “spike then stabilise” as an approach.
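As a sketch of where a property-based test does earn its keep, assuming the hypothesis library (the run-length encode/decode pair is a hypothetical stand-in for "your algorithm"):

```python
from hypothesis import given, strategies as st

# Hypothetical algorithmic code under test.
def run_length_encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_round_trip(s):
    # The property only mentions observable behaviour: decoding an
    # encoding returns the input, however encode is implemented.
    assert run_length_decode(run_length_encode(s)) == s
```

The property stays valid across refactors of the encoder, which is exactly the implementation-agnostic quality being argued for.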
That is such a sweet feeling - writing what you feel is a good test suite (maybe not TDD-perfectionist 100%, but good), then a few months later you add a new feature and the tests find some subtle bug that would have affected production.
> Why don’t you have automated test coverage so you’re free to make maintenance or engineering improvements without fear of breaking things?
Because that's equivalent to solving the Halting Problem. Even if you could test your way to quality in a non-interactive context, it would never be possible in a system that includes asynchronous human input.
Good question, ask my entire industry. Games definitely have logic that isn't easily covered by automated systems, but there's plenty of deterministic code that could be put under test. We rarely get the luxury of time to implement that, though.