
Neither checks the checksum on every read as that would be performance-prohibitive. So "bad data on drive -> db does something with corrupted data and saves corrupted transformation back to disk" is very much possible, just extremely unlikely.

But they said nothing about it being a bad drive, just a corrupted data file, which could very well be a software bug or operator error.



This is wrong, both ZFS and btrfs verify the checksum on every read.

It's not typically a performance concern because computing checksums is fast on modern hardware. Besides, I/O has historically been much slower than the CPU anyway.
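To make the mechanism concrete, here is a minimal Python sketch of a checksum-on-read path. This is not actual ZFS or btrfs code; `zlib.crc32` merely stands in for the filesystem's real checksum (ZFS defaults to fletcher4, btrfs to crc32c), and the function names are invented for illustration:

```python
import zlib


def write_block(data: bytes) -> tuple[bytes, int]:
    """Store a block alongside its checksum, as ZFS does in the block pointer."""
    return data, zlib.crc32(data)


def read_block(data: bytes, stored_checksum: int) -> bytes:
    """Verify on every read; a mismatch means the block is corrupt."""
    if zlib.crc32(data) != stored_checksum:
        raise IOError("checksum mismatch: corrupted block")
    return data


block, cksum = write_block(b"hello" * 1000)
read_block(block, cksum)          # clean read: checksum matches, data returned

corrupted = bytearray(block)
corrupted[0] ^= 0x01              # flip a single bit
try:
    read_block(bytes(corrupted), cksum)
except IOError as e:
    print(e)                      # the corruption is caught before the caller sees it
```

Because verification happens before data is handed back, a corrupted block can never be silently transformed and rewritten; at worst the read fails (or, with redundancy, is repaired from a good copy).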


> Neither checks the checksum on every read as that would be performance-prohibitive.

It is expensive, and it might be prohibitive in a highly performance-sensitive environment, but that is hardly the case here. Safety first!



