I think the only "secure" way to erase the contents of a hard drive is to repeatedly overwrite the disk surface with a mix of random/patterned data (like Darik's Boot & Nuke does).
Also, for those wondering about /dev/random blocking: it limits how much you can copy with dd, but you won't notice unless you try to read more bits from the entropy pool than are available for random number generation. For more information, see this question on Super User:
http://superuser.com/questions/520601/why-does-dd-only-copy-...
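To make the blocking point concrete, here's a minimal sketch (assuming Linux and GNU dd): /dev/urandom never blocks, so dd always gets the full requested amount, while /dev/random may block or return short reads on older kernels when the entropy estimate runs low.

```shell
# Sketch: read 1 MiB of random data with dd. With /dev/urandom this
# always succeeds in full; substituting /dev/random could block or
# short-read on older Linux kernels.
out=$(mktemp)

dd if=/dev/urandom of="$out" bs=1M count=1 2>/dev/null

# Verify we actually got 1 MiB (1048576 bytes).
wc -c < "$out"
rm -f "$out"
```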
The ATA secure erase command is faster and should be better than overwriting: overwriting can miss sectors that have been remapped as bad, while secure erase covers those as well.
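For reference, the usual way to issue ATA secure erase on Linux is via hdparm. The sketch below only prints the command sequence (a dry run) rather than executing it, since running it against a real device is destructive; "/dev/sdX" and the temporary password "p" are placeholders.

```shell
# Dry-run sketch of the typical hdparm ATA secure erase sequence.
# Prints the commands instead of running them.
secure_erase_cmds() {
  dev="$1"
  # 1. Check the drive supports it and is "not frozen".
  echo "hdparm -I $dev"
  # 2. Set a temporary security password (required before erasing).
  echo "hdparm --user-master u --security-set-pass p $dev"
  # 3. Issue the erase itself.
  echo "hdparm --user-master u --security-erase p $dev"
}

secure_erase_cmds /dev/sdX
```

Note the "not frozen" check: many BIOSes freeze the security feature set at boot, in which case a suspend/resume cycle is the usual workaround before the command will be accepted.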
Multiple overwrites are pointless. There's the Gutmann method, but that's ancient: the 35 passes were meant to cover many different drive encoding schemes, for the case where you didn't know which one your drive used.
But then sometimes you have to do not what works, but what other people tell you. If you're working to a standard, it doesn't matter whether the DoD specification is actually more secure than a single secure erase; you do what the spec calls for. And if you have to persuade other people that the data is provably gone, it's easiest to just grind the drives.
Secure erase depends on the technology of the drive and the prowess of your attacker.
If you fear the NSA seizing your disks, consider the tradeoffs of explosive disposal.
If you fear a technically savvy reporter going through your trash bin, three or four pattern overwrites will be fine. But it's probably faster to take a drill and make a couple of holes. Make sure you hit the platters.
If you are selling your old hardware and just don't want your unencrypted stuff to be recovered by a sixteen year old with no budget but lots of time, overwrite the disk once.
If you're trashing an SSD, make sure any patterns you use for overwriting aren't compressed out of existence by the controller. Or pull off the flash chips and crunch them (crushing just the controller leaves the data intact on the NAND).
The whole "overwrite xx times" thing is a myth. Overwriting once with random data makes it unrecoverable in practice. Overwriting it more than that is a complete waste of time.
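A single-pass overwrite is just one dd invocation. The sketch below demonstrates it on a scratch file rather than a block device (for a real disk you'd point of= at e.g. /dev/sdX and drop the size logic); it assumes GNU dd.

```shell
# Sketch: single-pass random overwrite, demonstrated on a scratch file.
f=$(mktemp)
printf 'top secret data' > "$f"
size=$(wc -c < "$f")

# Overwrite in place with random bytes of the same length;
# conv=notrunc keeps the file at its original size.
dd if=/dev/urandom of="$f" bs="$size" count=1 conv=notrunc 2>/dev/null

# The original plaintext is gone after one pass.
grep -q 'top secret' "$f" && echo "still readable" || echo "overwritten"
rm -f "$f"
```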
Some issues:
- don't forget that /dev/random blocks
- It's easier to use dd_rescue to track progress than to signal dd
- Using dd to zero out a hard drive repeatedly doesn't increase security[1]. Using ATA secure erase does[2]
- an alternative for summing file sizes is
[1] http://en.wikipedia.org/wiki/Data_erasure#Number_of_overwrit...
[2] https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase
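On the progress-tracking point above: with reasonably recent GNU coreutils dd you don't need dd_rescue or signals at all, and on older GNU dd you can poke the running process with SIGUSR1. A quick sketch (BSD/macOS dd uses SIGINFO instead, so this assumes GNU dd):

```shell
# Modern GNU dd (>= coreutils 8.24): built-in progress reporting.
dd if=/dev/zero of=/dev/null bs=1M count=256 status=progress

# Older GNU dd: send SIGUSR1 to get a transfer summary on stderr.
dd if=/dev/zero of=/dev/null bs=1M count=4096 & pid=$!
kill -USR1 "$pid" 2>/dev/null  # may miss if dd already finished; harmless
wait "$pid"
```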