My point (in that bit) was only that your criticism was incompatible with it being the thing you labelled. I'm not pitching TDD or defending it more broadly.
> But writing the test first means it will always fail once
Right, exactly.
> in my mind that doesn't count. It's like pretending.
¯\_(ツ)_/¯
It demonstrates that your test is actually testing something and that your code is having an impact on that something. In most contexts that's probably not high value, but it isn't zero.
Pretending is when your team has agreed to TDD but you write the passing tests after the code, just so no one can accuse you of skipping TDD while they weren't looking. That does seem to be what "we do TDD" sometimes turns into in practice... and even then, the tests may have communicative value worth their weight; the odds are just getting awfully low.
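To make the "red proves the test is wired to something" point concrete, here's a minimal pytest-style sketch (the `slugify` function and test are made up for illustration):

```python
# red_green_sketch.py -- run with `pytest red_green_sketch.py`
# Step 1 ("red"): start with a stub body, e.g. `return ""`, run the
# test, and watch it fail. That failure is the evidence that the test
# actually exercises this function.
# Step 2 ("green"): swap in the real implementation and re-run.

def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"
```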
I think we're mostly in agreement, but I'm enjoying deconstructing all this.
When electricians are testing whether a wire is dead, they take their multimeter and check it on a known live wire (to make sure the meter works), then test the supposedly dead wire, then check a live wire again (to make sure the meter is still working).
That seems worth it, because electrocution is worth avoiding.
Writing a test against a blank method with no code in it, watching the little light go red, then writing the code you had in your head the whole time anyway, and watching the little light go green, is just superstition.
For something as extreme as a blank method, I agree it's unlikely to be useful. For something subtle in modifying existing code, "did that actually change the thing I thought it did" might be a question worth answering. My best (somewhat devil's-advocate) argument for doing it all the time is that it may be cheap enough that doing it even in the useless cases costs less than deciding each time whether you should, plus the cost of the false negatives when you guess wrong.
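As a sketch of that "modifying existing code" case (hypothetical `parse_price` function, pytest-style):

```python
# Hypothetical change: parse_price currently accepts only "." as the
# decimal separator, and we want it to also accept ",".
# Write the test first and run it against the *unchanged* code: the
# red result confirms the test exercises the exact behavior being
# modified, before you trust the green.

def parse_price(text: str) -> float:
    # The one-line change under test; remove `.replace(...)` to see red.
    return float(text.replace(",", "."))

def test_parse_price_accepts_comma_separator():
    assert parse_price("3,50") == 3.5
```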