Just a reminder BTW that since version 2.0 (1996), Unicode is not an encoding scheme but a character set (I avoid the confusing word “charset” on purpose). Therefore, Unicode itself doesn't define any number of bytes per character: it only assigns code points (numbers) to characters; how those numbers are stored as bytes is the job of an encoding like UTF-8 or UTF-16.
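To see the distinction, here's a quick sketch in Python (my own illustration): `ord` gives you the code point Unicode assigns to a character, with no encoding involved at all.

```python
ch = "é"
print(f"U+{ord(ch):04X}")  # U+00E9 -> the code point Unicode assigns to "é"
print(chr(0x00E9))         # é -> the character back from its code point
```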
Windows used to use the UCS-2 encoding scheme, which indeed used 2 bytes for every character, but since Windows 2000 it has used UTF-16 instead, which, like UTF-8, uses a variable number of bytes per character.
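You can check that UTF-16 really is variable-length with a few lines of Python (again just my own sketch): characters inside the Basic Multilingual Plane take one 16-bit code unit, while anything beyond it (like an emoji) needs a surrogate pair, i.e. 4 bytes.

```python
for ch in ("A", "é", "😀"):
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-le")  # "-le" so no BOM is prepended
    print(f"U+{ord(ch):06X}: {len(utf8)} bytes in UTF-8, {len(utf16)} bytes in UTF-16")
```

This prints 1/2 bytes for "A", 2/2 for "é", and 4/4 for "😀", so neither encoding gives you a fixed byte count per character.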