I think that's a fair criticism of a specific failing of a specific complexity metric. A reasonable response, especially in languages where the compiler can check switch statements or pattern matches for exhaustiveness, might be to change the metric to reward switches that completely cover the input range. It also shows why no single metric should be treated as a target that admits no exceptions. I don't think it invalidates the concept of measuring complexity.
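To make that concrete, here's a minimal sketch (the enum, function, and scoring idea are illustrative, not from any real tool): in a language like Rust, the compiler rejects a match that doesn't handle every variant, so a metric could plausibly charge an exhaustive match less than the equivalent chain of independent ifs.

```rust
// Hypothetical example of code a metric could score more leniently.
// The types and the scoring rule are made up for illustration.

enum Command {
    Start,
    Stop,
    Pause,
}

fn describe(cmd: Command) -> &'static str {
    // The compiler refuses to build this match unless every Command
    // variant is handled, so the "missing case" class of bugs is ruled
    // out statically. A metric could count this as one decision point
    // rather than one per arm.
    match cmd {
        Command::Start => "starting",
        Command::Stop => "stopping",
        Command::Pause => "pausing",
    }
}

fn main() {
    println!("{}", describe(Command::Start));
}
```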
There are many things a complexity calculator could take into account: the specifics of the language, the libraries in use (which may start threads or contain hidden conditionals), the kind of project (implementing a spec vs. exploratory programming), and so on.
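As an example of what "hidden conditionals" means here (again a sketch, with made-up names): a function can look branch-free at the source level while the library calls inside it branch, or even spawn a thread, on its behalf.

```rust
// Illustrative only: no visible `if` or `match` appears in this function,
// yet it still branches at runtime through the library calls it makes.

use std::thread;

fn busy_work(input: &str) -> usize {
    // `unwrap_or` hides a branch: it inspects the Result and picks a path.
    let n: usize = input.trim().parse().unwrap_or(0);

    // `thread::spawn` introduces concurrency that a naive source-level
    // metric, counting only this function's syntax, would never see.
    let handle = thread::spawn(move || n * 2);
    handle.join().unwrap_or(0)
}

fn main() {
    println!("{}", busy_work("21")); // prints 42
}
```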
I think it's easy to come up with an algorithm that's better than a coin flip or counting LOC. But to be useful, the algorithm has to be on par with the judgment of the developers who check in the code and review it. Otherwise, its false positives will take time away from other bug-reducing activities like code reviews and writing tests.
Of course, I don't have data to prove it, but I'm convinced that every complexity checker I've encountered in the last 15 years has been a (small) net negative for its project. Maybe machine learning will improve the situation.