Unless the hot spot in your code is removing spaces from strings, these tricks are just academic and not something you would ever do yourself. You'd use a well tested, safe string handling library in real code.
I am not an academic. I have done this myself. Of course, we wrote a regex library...
These tricks are, in general, well worth learning if you write any performance-critical code, because the boundaries at which someone's library implements them often don't match your actual task (e.g. you may have a fast 'find a space' function, but it's no longer fast once you painfully iterate through each of its results, removing spaces one by one). Sadly, a lot of this sort of processing doesn't compose well unless you have access to an omniscient optimizer.
But yes, if this isn't your hot spot, don't do this particular thing. It's still possible you might learn something useful by reading about it.
It's mostly interesting because it's a subproblem of many interesting problems. "get a bunch of data, discard uninteresting/invalid data, do stuff with the rest" is something that easily becomes a hotspot. "Removing spaces" is the "discard uninteresting/invalid data" part, in the easy case that the data is a bunch of bytes and there is some arbitrary threshold that decides what to keep (not unusual with sensor data of all kinds).
So I'm not sure "academic" is quite the right word, but I agree that worrying about Unicode is not the point here.
Also, this space remover looks a lot like a CSV parser, and CSV is a fine high level format if your data is a table (and you care at all about performance).