
So? `.decode(“hex”)` still makes sense on a string, and was >95% of the encode/decode calls I had.



Hex-encoding works on bytes, not characters. If a character has different byte representations in two encodings, you need to specify the encoding in both directions.

Even the Python 2 strategy for hex might lead to your text getting garbled if you’re using anything not representable in ASCII (roughly).


That’s not what decode(“hex”) does though. It takes strings in whatever encoding and turns 0-9,a-f into bytes. In Python 3 it would presumably return a bytes object instead, which you could then hypothetically call encode(“hex”) on. But the point is that it would have kept the syntax the same even if the underlying types changed.
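A sketch of the Python 2 idiom and its closest surviving Python 3 analogue: `str.decode` is gone, but the `"hex"` codec is still reachable through the `codecs` module (the literal values here are illustrative, not from the thread).

```python
import codecs

# Python 2:  "deadbeef".decode("hex")        ->  '\xde\xad\xbe\xef'
#            '\xde\xad\xbe\xef'.encode("hex") ->  'deadbeef'

# Python 3: the hex codec is bytes-to-bytes, accessed via codecs.decode/encode.
raw = codecs.decode(b"deadbeef", "hex")   # b'\xde\xad\xbe\xef'
hexed = codecs.encode(raw, "hex")         # b'deadbeef'
```

This preserves the codec-by-name style the comment describes, at the cost of routing through `codecs` rather than a method on the object.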

Python 3 unnecessarily broke compatibility in many places. Yes, I agree that using bytes.fromhex() and .hex() are better design choices. But requiring manual inspection and updating of all string manipulation code for the transition was an outrageous requirement, particularly for a dynamic language that doesn’t (or at least didn’t at that time) have compile-time type checking to catch such errors.
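For reference, a minimal sketch of the Python 3 replacements the comment concedes are the better design (example values are illustrative):

```python
# bytes.fromhex: hex text -> bytes; bytes.hex: bytes -> hex text.
raw = bytes.fromhex("deadbeef")   # b'\xde\xad\xbe\xef'
text = raw.hex()                  # 'deadbeef'
```

The direction of each conversion is now explicit in the types: you can only go from `str` to `bytes` with `fromhex`, and only from `bytes` to `str` with `.hex()`, which is exactly the type discipline the Python 2 `encode`/`decode` pair blurred.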



