I think their argument was: the characters look the same (e.g. the first letter of the Russian alphabet and the English A) but have different meanings.
So in this example, if you searched for the English word "Eat", that is also a completely legal Russian word (E, A, and T all exist in both English and Russian), but it means nothing remotely similar.
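To make that concrete, here's a quick Python sketch (illustrative only): today the two strings are distinct at the code point level even though they can render identically, and that distinction is exactly what a Latin/Cyrillic unification would erase.

    # Latin "EAT" vs. the visually identical string of Cyrillic Е, А, Т
    latin = "EAT"                      # U+0045 U+0041 U+0054
    cyrillic = "\u0415\u0410\u0422"    # Cyrillic ЕАТ

    print(latin, cyrillic)                  # usually render identically
    print(latin == cyrillic)                # False: different code points
    print([hex(ord(c)) for c in latin])     # ['0x45', '0x41', '0x54']
    print([hex(ord(c)) for c in cyrillic])  # ['0x415', '0x410', '0x422']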
I don't know if they're right or wrong; I am just saying that might be the point they were trying to make. You could make a Greco-unified Unicode set and it would work fairly well, but you might wind up with some confusing edge cases where it isn't clear what language you're reading (literally).
This could be particularly problematic for automation (e.g. language detection), since in some situations any Greco-like language could look similar to any other, especially as the text gets shorter.
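To illustrate: with separate blocks, even a trivial detector can tell Latin from Cyrillic by code point range alone, no dictionary needed; under a unified encoding that shortcut would vanish. A rough sketch (the function is made up for illustration; the block ranges are the real Unicode ones):

    def dominant_script(text):
        """Crude script guess: count code points per Unicode block."""
        latin = sum(1 for c in text if "\u0041" <= c <= "\u024F")     # Latin + extensions
        cyrillic = sum(1 for c in text if "\u0400" <= c <= "\u04FF")  # Cyrillic block
        if cyrillic > latin:
            return "Cyrillic"
        if latin > cyrillic:
            return "Latin"
        return "unknown"

    print(dominant_script("EAT"))                 # Latin
    print(dominant_script("\u0415\u0410\u0422"))  # Cyrillic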
English, French, German, Italian, Spanish and several other European languages have mostly identical character sets and even large numbers of similar or identical words. Computers detect these languages just fine. I think we'll be okay.
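For what it's worth, here's a quick sketch with the third-party langdetect package (assuming it's installed; accuracy does drop on very short strings):

    # pip install langdetect
    from langdetect import DetectorFactory, detect

    DetectorFactory.seed = 0  # make the results deterministic

    samples = [
        "The weather is quite nice today.",          # expected: en
        "Le temps est très agréable aujourd'hui.",   # expected: fr
        "Das Wetter ist heute ziemlich schön.",      # expected: de
    ]
    for s in samples:
        print(detect(s), "-", s)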
Quite a few English and Russian Cyrillic letters unify just fine. E and A unify, and have identical lowercase forms, e and a. They don't really have different meanings, no more so than the letters E and A in English and French. T is more interesting: it has the same phonetic sound, but a different lowercase appearance: t in English, т in Russian. In this case, unification would be pretty terrible.
For simple alphabet-type languages, the basic rule should be: if the uppercase and lowercase look the same, then unify mercilessly. P (English) and Р (Russian) should unify even though they represent different consonants. But not V (English) and В (Russian): they sound the same, but have totally different graphemes. On the other hand, unifying B (English) and В (Russian) does not make sense: the lowercase forms look different: b (English) and в (Russian).
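Written down as data, that rule is just a case-pair lookup. A sketch using only the letters mentioned here (hypothetical, not something Unicode actually does):

    # Proposed rule: unify a Latin/Cyrillic pair only if BOTH the uppercase
    # and lowercase glyphs look the same.  Illustrative, not exhaustive.
    candidates = [
        # (Latin pair, Cyrillic pair, lowercase looks the same?)
        ("A/a", "\u0410/\u0430", True),   # А/а: unify
        ("E/e", "\u0415/\u0435", True),   # Е/е: unify
        ("P/p", "\u0420/\u0440", True),   # Р/р: unify despite different sounds
        ("T/t", "\u0422/\u0442", False),  # Т/т: lowercase differs, keep separate
        ("B/b", "\u0412/\u0432", False),  # В/в: lowercase differs, keep separate
    ]

    for latin, cyrillic, same in candidates:
        verdict = "unify" if same else "keep separate"
        print(f"{latin} vs {cyrillic}: {verdict}")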
Sounds like the major problem with Unicode (and the source of the author's complaints) was always where to draw the line. Han unification went too far and included too many characters that look different. With other languages, some common combinable characters were forced into diacritic representation rather than getting their own code points. To me, the first problem seems way more serious.
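The diacritic side of that line-drawing is easy to see with normalization: the same accented letter can be one precomposed code point or a base letter plus a combining mark, and whether a given combination gets its own precomposed code point at all was a judgment call. A small demo with Python's standard unicodedata module:

    import unicodedata

    precomposed = "\u00e9"   # é as a single code point (LATIN SMALL LETTER E WITH ACUTE)
    combining = "e\u0301"    # e followed by COMBINING ACUTE ACCENT

    print(precomposed == combining)                                # False: different sequences
    print(unicodedata.normalize("NFC", combining) == precomposed)  # True: composed form
    print(unicodedata.normalize("NFD", precomposed) == combining)  # True: decomposed form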
I think the real reason was to preserve round-trip compatibility (legacy char -> Unicode char -> legacy char) with the existing encodings for those alphabets.
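That's easy to demonstrate: because Cyrillic letters kept their own code points, text from a legacy encoding survives the trip through Unicode byte-for-byte. A minimal sketch using Python's built-in KOI8-R codec:

    # legacy bytes -> Unicode str -> legacy bytes, byte-for-byte identical
    legacy_bytes = "Привет, мир".encode("koi8_r")  # pretend this came from an old KOI8-R file

    as_unicode = legacy_bytes.decode("koi8_r")     # legacy -> Unicode
    back_again = as_unicode.encode("koi8_r")       # Unicode -> legacy

    assert back_again == legacy_bytes
    print(as_unicode)  # Привет, мир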