Really? I thought the analogy was pretty good. Here "senses" refers to how the machines perceive text, i.e. as tokens that don't correspond 1:1 to letters. If you prefer a tighter comparison, suppose you ask an English speaker how many vowels are in the English transliteration of a passage of Chinese characters. You could probably figure it out, but it's not obvious, and it's not easy to do correctly without a few rounds of calculation.
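To make the token point concrete, here's a minimal sketch using OpenAI's tiktoken library; the encoding name and the example word are just illustrative assumptions, not tied to any specific model in this thread:

```
# Minimal sketch, assuming the tiktoken package is installed (pip install tiktoken).
# The encoding name and example word are illustrative choices.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)

# Decode each token id individually to see the chunks the model actually "sees".
chunks = [enc.decode([tid]) for tid in token_ids]

print(token_ids)                  # a few integer ids, not one per letter
print(chunks)                     # multi-character fragments, not ['s','t','r',...]
print(len(word), len(token_ids))  # letter count and token count differ
```

The letter-counting question forces the model to reason about units it never directly observes, which is exactly what makes it a misleading benchmark for general ability.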
The point being: this question asks the machine something that's intrinsically difficult for it because of its text-encoding scheme. There are many questions of roughly equivalent complexity that LLMs handle fine because they don't poke at this issue. For example:
```
how many of these numbers are even?
12 2 1 3 5 8
```