The whole point of TCP/IP was understanding that networks are unreliable, and introducing schemes to check for the receipt of packets, handle lost ones, and adjust connection parameters dynamically so fewer packets would be lost in the future.
In essence, TCP/IP is engineers trying their utmost to design a system where a single flaw doesn't result in the whole data stream being corrupted.
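To make that recovery idea concrete, here's a minimal stop-and-wait sketch in Python: send a packet, wait for an acknowledgement, retransmit on timeout. It runs over UDP purely to illustrate the mechanism; the host, port, framing, and the assumption of a matching receiver that echoes back the sequence number are all made up for this example, and real TCP layers sliding windows, congestion control, and dynamic timeout estimation on top of this.

```python
import socket

# Toy stop-and-wait reliability over UDP. HOST, PORT, and the 4-byte
# sequence-number framing are illustrative, not any real protocol.
# A receiver that echoes back the sequence number as its ACK is assumed.
HOST, PORT = "127.0.0.1", 9999
TIMEOUT = 0.5       # seconds to wait for an ACK before retransmitting
MAX_RETRIES = 5

def send_reliably(sock: socket.socket, seq: int, payload: bytes) -> None:
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(MAX_RETRIES):
        sock.sendto(packet, (HOST, PORT))
        try:
            ack, _addr = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return           # receiver confirmed this sequence number
        except socket.timeout:
            pass                 # lost packet or lost ACK: send it again
    raise ConnectionError(f"no ACK for seq {seq} after {MAX_RETRIES} tries")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(TIMEOUT)
send_reliably(sock, 1, b"hello")
```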
Add HTML browsers on top, which are extremely tolerant of input mistakes, and you have a total system that handles random losses pretty well.
That is, until somebody writes a simple script that uses the protocol to overload an end server with requests; a server not equipped to handle such a load will fail due to an exploit that wasn't immediately obvious (of course it is now, and defending against it is an assumed part of best practice for any large project).
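For example, the now-standard defense is rate limiting at the application edge. Here's a minimal token-bucket sketch of that idea; the rates, names, and per-IP keying are illustrative choices, not taken from any particular server.

```python
import time
from collections import defaultdict

# Token-bucket rate limiter: each client earns RATE tokens per second,
# capped at BURST; each request spends one token. Numbers are illustrative.
RATE = 10.0    # tokens refilled per second, per client
BURST = 20.0   # maximum bucket size

# Maps client IP -> (remaining tokens, timestamp of last check).
_buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (BURST, time.monotonic())
)

def allow_request(client_ip: str) -> bool:
    """Return True if the request may proceed, False if it should be rejected."""
    tokens, last = _buckets[client_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last check
    if tokens >= 1.0:
        _buckets[client_ip] = (tokens - 1.0, now)
        return True
    _buckets[client_ip] = (tokens, now)
    return False
```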
The protocol doesn't offer any recourse itself, and the end user who might suffer because of such an attack has no defense. They rely on the ability of the engineers involved and some technical voodoo they don't understand to keep everything safe and working for them. That's my point -- I wasn't comparing the technologies directly, but their roles in a larger system.