Historically, when the computing world only knew about English, a character was represented by a byte, which made a string essentially a byte array. As a result, the concepts of a byte, a character, a buffer, and a string were often used interchangeably (in fact, the 8-bit type in C is called char). With Unicode and the need to handle international text, however, a character can no longer be represented by a single byte, and things have become messy.
Python 3 finally cleanly separates the concepts of a "byte" (8 bits) and a "character" (which is now an abstraction). A string is now a sequence of characters and no longer functionally equivalent to an array of bytes.
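A minimal sketch of the split (the specific strings are just illustrative):

```python
# Python 3: str and bytes are distinct types.
text = "café"                 # str: a sequence of abstract characters
data = text.encode("utf-8")   # bytes: the UTF-8 encoding of that text

print(type(text))   # <class 'str'>
print(type(data))   # <class 'bytes'>
print(len(text))    # 4 characters
print(len(data))    # 5 bytes ('é' takes two bytes in UTF-8)

# Going back to text requires an explicit decode with an encoding:
print(data.decode("utf-8"))   # café

# Mixing the two types is an error, not an implicit conversion:
# "a" + b"b"  -> TypeError: can only concatenate str (not "bytes") to str
```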
This change is generally welcomed by people who have to deal with multiple languages and encodings anyway, since making the distinction between a character and a byte explicit makes dealing with text much easier. However, if you were used to thinking of strings as byte arrays rather than as "the data type for text", you might have a hard time.
But it's basically Python 3 killing off the byte string and ignoring the fact that in some cases it actually makes sense to work with strings that are specifically not Unicode, rather than having to fall back on a bytes type instead.
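A small illustration of why bytes is not a drop-in replacement for the old byte string: in Python 3, bytes behaves first and foremost like a sequence of small integers, which can feel awkward for, say, protocols defined in terms of ASCII byte strings (the request line here is just an example):

```python
raw = b"GET /index.html"

print(raw[0])       # 71 (an int, the byte value of 'G'), not b'G'
print(raw[:3])      # b'GET' (slicing, unlike indexing, yields bytes)

# Some text-like operations exist, but they only make sense
# for ASCII-compatible data:
print(raw.upper())  # b'GET /INDEX.HTML'
```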