Only "obvious" if the things you are doing match simple ownership patterns for example. If you want to do performance-oriented things: for example compressing the size of certain data structures, shaving off a few pointers/integers of or even compress them in some way. Or applying various concurrency patterns, or otherwise speed up the code...
... then it's not all obvious anymore. In these situations you'd rather drop down to assembly than go up to sth like Rust.
I'm currently doing my 2nd take on a userspace allocator (fixed-size pages, but of different sizes, running on 32-bit too), as well as probably my 8th take or so on a GUI toolkit. I've experimented with lots of approaches, but it always seems to come back to removing abstractions, because those seem to kill my work output. A big reason is they blow up code size and make code (by and large) less understandable, as well as harder to change because of the sheer size.
I'm not saying that approach can't lead to more potential security issues. I mostly just turn up warnings; I'm not doing fuzzing etc. But it seems to be a more pragmatic way for me to achieve functioning (and robust-in-practice) software at all.
I think the center of the conversation is "general purpose safety" (a term I just made up), i.e. if you just want to get things done, how to write the code in a safe way. If your other needs like performance are critical, things need to be looked at differently.
Which is why the video I linked is not so relevant to HFT guys, for example. (Context: some people try to use HFT as "counter example" to that video.) Let's be realistic -- nobody cares about whether your code running trading algorithms inside your company is memory safe.