
Back in the day, with Smalltalk, a programmer could override everything. If you did something that broke derived classes, you were simply being a bad programmer. You just didn't do that, and if you couldn't deduce whether your change would break things, you either had a badly architected system or you were a bad programmer.

This is how it should work in many production environments: Are you 100% sure about that? No? Then don't do it. Start asking why you can't be sure, then fix that. Rinse, repeat.




"final" allows the compiler to strictly enforce that "don't break things" idea, instead of delegating it to fallible humans. (it also lets the compiler make your code faster)

By using the tools that Swift provides - preferring value types, and falling back on final classes - I can much more easily deduce what my changes will do.

Non-final classes create an additional public API that framework authors need to support - the ability to change any behavior. Reducing the surface for potential errors makes frameworks and their clients more robust.
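To make the "additional public API" point concrete, here is a hedged sketch (the `Cache` example is invented for illustration). A framework class that calls its own overridable method has implicitly promised that overriding it is safe; a client subclass can break an invariant the author relied on without touching a single documented API:

```swift
// A framework class whose internal logic relies on its own overridable method.
class Cache {
    private var storage: [String: Int] = [:]

    // Framework code assumes `normalize` maps equal keys to the same string.
    func normalize(_ key: String) -> String { key.lowercased() }

    func set(_ key: String, _ value: Int) { storage[normalize(key)] = value }
    func get(_ key: String) -> Int? { storage[normalize(key)] }
}

// A client subclass changes behavior the author never promised to support:
// keys no longer round-trip, so set/get silently stops working.
class RandomizedCache: Cache {
    override func normalize(_ key: String) -> String {
        key.lowercased() + String(Int.random(in: 0..<100))
    }
}

let good = Cache()
good.set("Answer", 42)
print(good.get("answer") ?? -1)   // 42: keys round-trip as designed
```

Declaring `Cache` (or just `normalize`) as `final` removes that unsupported extension point from the surface the framework author must reason about.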


By using the tools that Swift provides - preferring value types, and falling back on final classes - I can much more easily deduce what my changes will do.

No disagreement here.

Non-final classes create an additional public API that framework authors need to support - the ability to change any behavior. Reducing the surface for potential errors makes frameworks and their clients more robust.

Since Smalltalkers knew all their code was "surface," there was motivation to keep things very encapsulated. (Perhaps this is part of why the Law of Demeter was so big in that programming culture.) Synergistic with this was the heavy use of the very powerful debugger. If your codebase was mostly stateless or very well encapsulated, you could time-travel with ease in the debugger by unwinding the stack, recompiling your method in place, and continuing on. Conversely, if you wrote code that didn't have those qualities, your fellow programmers would get annoyed at you for making their lives harder and their tools much harder to use.

Increasing the surface makes frameworks more flexible, but it necessitates good design throughout. Is there a trade-off? Sure. The really good Smalltalkers spent lots of time reading code and exploring in the debugger and browsers. And sometimes you could be stymied, because you couldn't rule out bad interactions and risked a blow-up in production. To be fair, in my estimation, Smalltalk projects were less robust -- but they got fixed really quickly.

Nowadays, I think the sweet spot would be a simple language with super-fast compile/edit/test cycles, equally powerful debugging, and type annotations.




