Is it though? It could be interpreted as an acknowledgement. Five years from now, testing will be further improved, yet the same people will be able to take over your iPhone by sending you a text message that you don't have to read. It's like expecting AI to solve the spam email problem, only to learn that it does not.
It's possible to say "we take the security and privacy of our customers seriously" without knowing how the code works. That's the beauty of AI. It legitimizes and normalizes stupid humans without measurably changing the level of human stupidity or the quality or efficiency of the product.
Sold! Sold! Sold!