The NCMEC database is large and graded to distinguish different categories of imagery. There's evidence in the false-positive calculations that Apple is using only a subset, presumably the one where photos are graded as depicting active abuse.
It’s not reasonable to dispute the 1 in 1e12 false positive claim on mere speculation.
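To see why the per-account number can be so much smaller than any per-image error rate, here's a rough back-of-the-envelope sketch. The ~30-match threshold and the conservative 1-in-a-million per-image false-match rate are figures from Apple's published threat-model summary as I read it, but treat the exact constants as assumptions; the point is the shape of the math.

```python
from math import exp, lgamma, log, log1p

def log_binom_pmf(n: int, k: int, p: float) -> float:
    """Log of the binomial PMF, via lgamma to avoid overflow."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def account_flag_prob(n_photos: int, p_match: float, threshold: int) -> float:
    """P(an all-innocent library of n_photos crosses the match threshold),
    assuming independent per-image false matches at rate p_match.
    When p_match is tiny the tail is dominated by its first terms, so
    summing a short window past the threshold is plenty."""
    upper = min(threshold + 50, n_photos)
    return sum(exp(log_binom_pmf(n_photos, k, p_match))
               for k in range(threshold, upper + 1))

# Assumed figures: ~1e-6 per-image false-match rate, threshold of 30.
print(account_flag_prob(n_photos=10_000, p_match=1e-6, threshold=30))
# => on the order of 1e-93, far below the stated 1-in-1e12 bound.
```

Under those assumptions the 1-in-1e12 figure looks conservative by many orders of magnitude, which is consistent with it being an engineered bound rather than hand-waving.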
Collision attacks make for a fun tech demo, but I've yet to hear anyone suggest a plausible scenario in which one could be used against Apple's implementation. It would require absurdly elaborate, Ocean's Eleven-style espionage to achieve any outcome whatsoever, and it would be immediately apparent to everyone involved that a collision attack had been used.
It would be far easier (and far more effective) to just acquire child porn, break into your victim's house, stash physical prints under their mattress, and then contact the police.
Furthermore, the website includes numerous misleading statements about Apple's system, or omits critical details in its description of it. Whatever side you're on, misleading arguments should be dismissed for what they are.
The ease of adversarial collisions has no relationship to the probability of natural collisions.
It's entirely possible to make a cryptographic hash algorithm that has an exceptionally low probability of natural collisions but where adversarial collisions are trivial.
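Here's a toy construction with exactly that property (this is purely illustrative, not Apple's NeuralHash): apply a strong hash to a lossy canonical form of the input. Two independently produced inputs almost never share a canonical form, so natural collisions are about as rare as SHA-256 collisions, yet anyone who knows the canonicalization step can mint adversarial collisions at will.

```python
import hashlib

def toy_perceptual_hash(pixels: list[int]) -> str:
    """Toy stand-in for a perceptual hash: coarsely quantize the input
    (drop the low bits of every pixel), then hash the canonical form.
    Hypothetical construction, for illustration only."""
    canonical = bytes(p & 0xF0 for p in pixels)  # lossy canonicalization
    return hashlib.sha256(canonical).hexdigest()

# Natural collisions require two independent images whose quantized
# forms coincide exactly -- rare. An adversary just flips low bits:
a = [0x12, 0x34, 0x56, 0x78]
b = [0x1F, 0x3F, 0x5F, 0x7F]  # near-identical tweak of `a`
assert toy_perceptual_hash(a) == toy_perceptual_hash(b)  # trivial collision
```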
It's also possible to create a cryptographic hash algorithm where occasional natural collisions are expected, but adversarial collisions require brute force.
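The converse is just as easy to sketch: truncate a strong hash. At 32 bits of output, the birthday bound makes natural collisions routine across a large corpus, but colliding with a specific chosen target still means grinding through roughly 2^32 attempts, because SHA-256 itself gives the attacker no structural shortcut.

```python
import hashlib
import secrets

def short_hash(data: bytes) -> int:
    """First 32 bits of SHA-256. Natural collisions become likely once
    you hash ~2**16 inputs (birthday bound), yet matching a *chosen*
    target still takes ~2**32 brute-force attempts."""
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

# Birthday demo: with 500k random inputs into 2**32 buckets, at least
# one natural collision is near-certain (~29 expected).
seen: dict[int, bytes] = {}
for _ in range(500_000):
    x = secrets.token_bytes(8)
    h = short_hash(x)
    if h in seen and seen[h] != x:
        print("natural collision:", seen[h].hex(), x.hex())
        break
    seen[h] = x
```

Scale the truncation up (say, 80 bits) and the same asymmetry holds at real-world sizes: natural collisions expected across billions of items, targeted collisions still out of brute-force reach.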