> (namely that you can just deref' an std::optional and it's UB if the optional is empty).
If that were not the case, `optional` would get exactly zero usage. The point of those features is that you build in debug mode or with whatever your standard library's debug macro is to fuzz your code, but then don't inflict branches on every dereference for the release mode.
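For concreteness, a minimal sketch of that workflow, assuming libstdc++ (the macro name and whether it catches this particular case vary by standard library and version):

    #include <optional>

    int main() {
        std::optional<int> empty;
        // Built with -D_GLIBCXX_ASSERTIONS (libstdc++'s debug-assertions macro),
        // this dereference is expected to abort with an assertion failure;
        // in a plain release build it is undefined behaviour and no check is emitted.
        return *empty;
    }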
> The point of those features is that you build in debug mode or with whatever your standard library's debug macro is to fuzz your code, but then don't inflict branches on every dereference for the release mode.
That's completely insane. If there's always a value in your optional it has no reason to be an optional, if there may not be a value in your optional you must check for it.
Sure, but that's not the issue. You should be using a std::optional like e.g.:

    if (my_optional)
        do_stuff(*my_optional);

Here's one (explicit) conditional.

However, if the dereference, *my_optional, were to be safe, it too would need to perform a conditional check behind the scenes. But it doesn't, as C++ puts that in the programmer's hands so as not to sacrifice speed.
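For completeness: std::optional does also ship a checked accessor. `.value()` performs that branch for you and throws `std::bad_optional_access` on an empty optional, so you only pay it where you opt in. A minimal sketch:

    #include <iostream>
    #include <optional>

    void do_stuff(int v) { std::cout << v << '\n'; }  // stand-in for the do_stuff above

    void checked_use(const std::optional<int>& my_optional) {
        // .value() checks for emptiness and throws std::bad_optional_access
        // instead of invoking undefined behaviour the way operator* does.
        do_stuff(my_optional.value());
    }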
This is solved in Rust by letting you test and unwrap at the same time:
    if let Some(obj) = my_optional {
        do_stuff(obj);
    }
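A rough C++ counterpart of that shape, using a hypothetical `if_some` helper (not a standard facility), could look like:

    #include <optional>
    #include <utility>

    // Hypothetical helper: invoke `f` with the contained value only when
    // the optional actually holds one.
    template <typename T, typename F>
    void if_some(const std::optional<T>& opt, F&& f) {
        if (opt)
            std::forward<F>(f)(*opt);
    }

    // Usage, mirroring the Rust snippet:
    //   if_some(my_optional, do_stuff);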
> However, if the dereference, *my_optional, were to be safe, it too would need to perform a conditional check behind the scenes. But it doesn't, as C++ puts that in the programmer's hands so as not to sacrifice speed.
So basically that turns C++ optional types into fancy linter hints which won't actually improve the safety of the code much.
I understand C++'s philosophy of "you pay for what you use", but that's ridiculous: if you use an optional type, it means that you expect that type to be nullable. Having to pay for a check is "paying for what you use". If you don't like it then don't make the object nullable in the first place and save yourself the test. That's just optimizing the wrong part of the problem.
> Having to pay for a check is "paying for what you use". If you don't like it then don't make the object nullable in the first place and save yourself the test.
The point is that I can choose _when_ to pay that cost (e.g. I can eat the nullability check at this point but not at that one, and I can use more sophisticated tooling like a static analyser to reason that the null check is done correctly).
Is it more error-prone? Yes. Does it allow for things to go horribly wrong? Yes. Is "rewriting it in Rust" a solution? No. If I want to pay the cost of ref-counting, I can use shared/weak ptrs.
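To sketch what choosing when to pay can look like in practice (all names here are made up for illustration):

    #include <optional>
    #include <vector>

    struct Config { int level; };

    void process(int item, const Config& cfg);  // assume defined elsewhere

    void run(const std::vector<int>& items, const std::optional<Config>& config) {
        if (config) {                    // eat the nullability check once, here...
            for (int item : items)
                process(item, *config);  // ...rather than once per element, here
        }
    }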
Nah. In simple cases like that, the compiler would always be able to optimize away an extra check, if such a check were present. After inlining operator bool and operator *, it would look something like
    if (my_optional._has_value)
        if (my_optional._has_value)
            do_stuff(my_optional._value);
        else
            panic();
and the compiler knows that the second if statement will pass iff the first does.
On the other hand, if the test is further away from the dereference, and perhaps the optional is accessed through a pointer and the compiler can't prove it doesn't alias something else, it might not be able to optimize away the check. However, that probably doesn't account for too high a fraction of uses.
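A sketch of that harder case, with made-up function names, where a hypothetical always-checking dereference (or `.value()`) would likely keep its branch:

    #include <optional>

    void touch_something();  // defined in another translation unit, opaque to the optimizer
    void use(int);           // hypothetical consumer

    void f(std::optional<int>* opt) {
        if (opt->has_value()) {
            touch_something();   // might, for all the compiler knows, reach *opt and reset it
            use(opt->value());   // checked access: the seemingly redundant branch likely survives
        }
    }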
In terms of generated code, it is exactly the same. But that's not the point of optional types.
The point of optional types is to force you to write checks for undefined values, otherwise your code will not compile at all. In the old-fashioned style of your example, you might forget to check for the possibility of a null pointer/otherwise undefined value, and use it as if it were valid.
Only if you deliberately unwrap the optional, which means you either don’t know what you are doing (in which case no programming language feature will be able to save you), or that you’ve considered your options and decided you want to enter the block knowing some variable can be undefined.
IMO, that’s not the same as not having optionals at all, and writing unconditional blocks left and right that may or may not operate on undefined values. It’s super easy to just dereference any pointer you got back from some function call in C++, without paying attention. Optionals force you to either skip the blocks, or think about how to write them to handle undefineds. Also, it’s ‘code as documentation’ in some sense, which I’m a big proponent of.
"Deliberately unwrap the optional" is the exact same thing as "deliberately unwrap the pointer", you just deref' it and it's UB if the optional / pointer is empty.
C++'s std::optional is not a safety feature (it's no safer than using a unique_ptr or raw pointer), it's an optimisation device: if you put a value in an std::optional you don't have to heap allocate it.
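To make the storage point concrete (`Widget` here is just a made-up type):

    #include <memory>
    #include <optional>

    struct Widget { int x; };

    // Both model "maybe a Widget", but only the second one heap-allocates:
    std::optional<Widget>   opt_w = Widget{42};                           // Widget stored inline in opt_w
    std::unique_ptr<Widget> ptr_w = std::make_unique<Widget>(Widget{42}); // Widget allocated on the heap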
> It super easy to just dereference any pointer you got back from some function call in C++, without paying attention.
And optionals work the exact same way. There's no difference, they don't warn you and they don't require using a specific API like `value_unchecked()`. You just deref' the optional to get the internal value, with the same effects as doing so on any other pointer.