You don't have access to "the whole disk". What the drive presents to the OS as an array of blocks is just an abstraction now. There are at least three commonly known ways that "filling" the disk with random bytes can fail to destroy the original data:
* If any part of a block is found to be corrupted for any reason, the block is transparently retired and never used again, but its contents may still be readable by someone willing to bypass the drive's firmware.
* To prevent uneven wear, SSD capacity is over-provisioned: the drive has more cells than it reports to the OS. In general, when you "overwrite" a block, the new data likely does not land in the cell where the old data lived. The drive picks some other unused cell, writes the data there, and does some bookkeeping so it knows that this new cell is what the OS means when it asks for that block again (see the toy sketch after this list). When the OS "fills the disk", it may never touch some of those stale cells, depending on the firmware's arbitrary wear-leveling algorithm.
* Many drives have a local write buffer that uses different, faster persistent storage to temporarily hold writes while the drive takes its time committing cells. This prevents data loss in the case of a power outage: if the drive hadn't fully written all the blocks to main storage, it finds the data in the write buffer on the next boot and finishes the job. The write buffer has both of the previous issues, except it's even more arbitrary and depends on the load profile, which your random-fill routine is not guaranteed to exercise.
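To make the remapping point concrete, here's a toy flash-translation-layer model. Everything here is hypothetical and grossly simplified (real firmware is vastly more complex), but it shows why an OS-level "overwrite" can leave the old bytes sitting in flash:

```python
# Toy FTL: illustrates why an "overwrite" from the OS's point of view
# may leave the old data intact in flash. Hypothetical and simplified.

class ToyFTL:
    def __init__(self, physical_cells=8):
        # Over-provisioned: more physical cells than blocks exposed to the OS.
        self.cells = [None] * physical_cells
        self.mapping = {}                    # OS block -> physical cell
        self.free = list(range(physical_cells))

    def write(self, block, data):
        # Wear leveling: always write to a fresh cell, never in place.
        new_cell = self.free.pop(0)
        self.cells[new_cell] = data
        old_cell = self.mapping.get(block)
        self.mapping[block] = new_cell
        if old_cell is not None:
            # The old cell is merely marked reusable; its contents survive
            # until garbage collection actually erases it, if ever.
            self.free.append(old_cell)

    def read(self, block):
        return self.cells[self.mapping[block]]

ftl = ToyFTL()
ftl.write(0, "SECRET")
ftl.write(0, "random")             # the OS believes block 0 is overwritten
print(ftl.read(0))                 # -> "random"
print("SECRET" in ftl.cells)       # -> True: old data still in flash
```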
There is no such thing as securely overwriting blocks anymore, because "blocks" are a fiction presented to your OS by the drive's firmware. There is only one way to be sure the data on the drive is filled with random data: make it random to begin with, and ensure that it stays random by destroying the decryption key.
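Here's the crypto-erase idea in miniature, using Python's `cryptography` package purely as an illustration (this is not a secure-deletion tool; real setups use full-disk encryption such as LUKS, configured before any data ever touches the drive):

```python
# Sketch of crypto-erase: data hits the drive as ciphertext, which is
# already indistinguishable from random bytes. "Erasing" the drive then
# reduces to destroying the key. Illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # kept off-drive, or in a TPM
nonce = os.urandom(12)

# Only this ciphertext ever reaches the flash cells.
ciphertext = AESGCM(key).encrypt(nonce, b"sensitive data", None)

key = None  # destroy the key: every stale copy in every remapped cell,
            # spare block, and write buffer is now just random noise
```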
>* If any part of a block is found to be corrupted for any reason, the block is transparently retired and never used again, but its contents may still be readable by someone willing to bypass the drive's firmware.
Yeah, I honestly have not cared about how hard drives work in the roughly 10 years since I built my last array. I have better things to do with my life than worry about this stuff. I just want to connect a drive, reformat/partition if necessary, and then go back to work. A single SSD now reads and writes faster than a 16-drive array (RAID 0 got close, but that's just dumb). Now, if only SSDs could get a decent volume size.
They've got what I consider a decent size; it's just that the price can be a bit steep compared to an HDD, at least at quantities of one. $700 will get you an 8TB SSD.
8TB at blazing speeds, in a single device to boot. Getting decent speeds from HDDs would require at least 8 drives striped together. So not only do you need 8+ HDDs, you also need the enclosure, which would easily double the $700 for the single SSD. Luckily, with USB-C/TB4, these enclosures come with the controllers built in instead of requiring a PCIe card.
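Rough numbers behind that comparison (the per-device figures are assumed ballparks, not measurements):

```python
# Back-of-envelope throughput comparison; figures are rough assumptions.
hdd_mb_s = 250                 # fast modern HDD, sequential
stripe_mb_s = 8 * hdd_mb_s     # 8-drive RAID 0, best case
nvme_mb_s = 7000               # PCIe 4.0 NVMe SSD, sequential

print(stripe_mb_s)             # -> 2000 MB/s from eight spindles
print(nvme_mb_s / stripe_mb_s) # -> 3.5: one SSD still comes out ahead
```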