Do you mean that you're allowed to use types only where you want to, so the type checker can't check cases where you haven't hinted enough, or is there some problem with the type system itself?
The type system itself is unsound. For example, this code passes `mypy --strict`, but prints `<class 'list'>` even though `bar` is annotated to return an `int`:
    i: int | list[int] = 0

    def foo() -> None:
        global i
        i = []

    def bar() -> int:
        if isinstance(i, int):
            foo()
            return i
        return 0

    print(type(bar()))

The hole: mypy narrows `i` to `int` after the `isinstance` check and doesn't discard that narrowing when the call to `foo()` reassigns the global.
- Don't write unsound code? There's no way to know until you run the program and find out your `int` is actually a `list`.
- Don't assume type annotations are correct? Then what's the point of all the extra code to appease the type checker if it doesn't provide any guarantees?
You may as well argue that unit tests are pointless because you could cheat by making the implementations return just the hardcoded values from the test cases.
Agreed, but I've never found it to be especially problematic. The type checker still catches the vast majority of things you'd expect a type checker to catch.
If you want to be able to change the type of something at runtime, static analysis isn't going to have your back 100% of the time. Turns out that's a tradeoff many are willing to make.
Yes, 100%. I believe that any new Python codebase should embrace typing as much as possible[1], and any JavaScript library should be TypeScript instead. A type system with holes like this is better than no type system.
[1] Unfortunately, many important 3rd party libraries aren't typed. I try to wrap them in type-safe modules or localise their use, but if your codebase is deeply dependent on them, this isn't always feasible.
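One way to do that wrapping, as a sketch: the untyped call is confined to a single function that validates its result at runtime, so `Any` never leaks into the rest of the codebase. (`_untyped_parse` below is a hypothetical stand-in for a function imported from an untyped third-party package.)

```python
from typing import cast

def _untyped_parse(raw):
    # stand-in for an untyped 3rd-party function (hypothetical);
    # with no annotations, mypy treats its return value as Any
    return {"id": int(raw)}

def parse_record(raw: str) -> dict[str, int]:
    """Typed wrapper: the only place the untyped API is called."""
    result = _untyped_parse(raw)
    # runtime check turns the untyped value into one we can vouch for
    if not isinstance(result, dict):
        raise TypeError(f"expected dict, got {type(result).__name__}")
    return cast(dict[str, int], result)

print(parse_record("42"))  # {'id': 42}
```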