Right, I would say the problem exists within the slider, and not the consumer of its state. With a sensible design, you might even have the slider producing a type that makes strong guarantees about the range of its value. That moves the concern of validating the value range to that type's constructor, and removes it from the rest of the codebase.
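Something like this sketch, just for illustration (C#, since that's the language that comes up later in the thread; `SliderValue` and `ApplyVolume` are made-up names):

```csharp
using System;

// The range guarantee lives in one place: the constructor of the value type
// the slider produces. Nothing else ever sees an out-of-range slider value.
public readonly struct SliderValue
{
    public double Value { get; }

    public SliderValue(double value)
    {
        if (value < 0.0 || value > 100.0)
            throw new ArgumentOutOfRangeException(nameof(value),
                "Slider value must be within 0..100.");
        Value = value;
    }
}

public static class Mixer
{
    // Consumers accept a SliderValue rather than a raw double, so they can
    // rely on the guarantee instead of re-validating the range everywhere.
    public static void ApplyVolume(SliderValue v)
    {
        Console.WriteLine($"Applying volume {v.Value}");
    }
}
```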
That doesn't resolve the issue. If, at runtime, the physical memory backing your strongly typed data holds a negative value before the function is called, then either the value gets checked again and the check blows up, or it's assumed to still be valid and your function operates on faulty data.
Essentially what you’re describing is automating such checks.
Technically true, but in practice hardware is very unlikely to fail in a way that maintains self-consistency.
E.g., if you checksum all the data, what you can expect from bad RAM is either a detected mismatch, or a miscalculated checksum that something else will flag as a mismatch later.
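A rough sketch of the idea (hypothetical helper; SHA-256 via .NET 5+'s `SHA256.HashData` is used here just as a convenient checksum):

```csharp
using System.Linq;
using System.Security.Cryptography;

// Keep a checksum alongside the data and re-check it before use. With bad
// RAM, either the data or the stored checksum gets corrupted, so the
// comparison fails and you notice, instead of silently using bad bytes.
public static class IntegrityCheck
{
    public static byte[] Checksum(byte[] data) => SHA256.HashData(data);

    public static bool LooksIntact(byte[] data, byte[] storedChecksum) =>
        SHA256.HashData(data).SequenceEqual(storedChecksum);
}
```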
You certainly can't count on it, but enough of that sort of thing makes it a lot more likely you'll notice something is not quite right eventually.
Most hardware failure is extremely intermittent; otherwise it would almost instantly result in a crash. ECC etc. helps, but you still run into manufacturing defects and the like.
That said, I agree you can't guarantee everything is working via software, but detecting hardware failure via software is simply good practice.
100% agree. That's why I eventually want to learn F# (currently my main driver is C#). It features a more advanced type system, and I love static typing because it provides nice guarantees by itself; F# takes that further by letting you guarantee specific states. I may learn F#, but I don't think I'm going to use it in an existing codebase, as maintainability for fellow developers, or whoever works on it after me, is perhaps more important than F#...
This is turning into an argument for strong typing, though in light of your original claims, only for external inputs, not for internal interfaces - but if it is useful in the former case, why not the latter?
In some cases, say when you work very close to a business domain, this may very well be a good thing. But for larger applications, especially ones that do low-level stuff, the tradeoff is cluttering your namespace with single-use parameter types. You quickly run into scaling problems.
It's exactly the same reason blindly following "textbook" OOP design is a bad idea. Unless you're able to do some very prescient upfront design of a large part of your application, you're in for a big headache later.
I mean that you need to assign each type a distinct name, and in many programming languages, the namespace for types is effectively global.
If you add, say, a thousand types for all the combinations of things you may want to represent, then you've effectively added the need to come up with a thousand type names.
That is merely being explicit about what would otherwise be implicit. And when you are explicit, there is at least the possibility that these declarations could be scoped - something many languages let you do, as in the sketch below.
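For example, in C# a single-use parameter type can be nested inside the only class that uses it (the names here are made up):

```csharp
using System;

public class AudioMixer
{
    // Only visible as AudioMixer.Gain; the name doesn't occupy the
    // broader namespace, so it can't clutter the rest of the codebase.
    private readonly struct Gain
    {
        public double Value { get; }

        public Gain(double value)
        {
            if (value < 0.0 || value > 1.0)
                throw new ArgumentOutOfRangeException(nameof(value));
            Value = value;
        }
    }

    public void SetGain(double raw)
    {
        var gain = new Gain(raw);   // the range check happens once, here
        Console.WriteLine($"Gain set to {gain.Value}");
    }
}
```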
Silently trying to work with invalid inputs just results in bugs that people may not notice and that are much harder to track down.