
So don't waste your money on it then!

Why in the world would you buy one "Enterprise" disk instead of three "consumer" disks?



Because the enterprise disks are rated for years of continuous service and have things like firmware that doesn't lie about whether data has been durably committed just to look better in benchmarks on review sites.
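(For context, a minimal sketch of what "committed durably" means from the application's side, in Python; the file path is invented. The point is that if the drive's firmware acknowledges a cache flush without actually writing to stable media, this code "succeeds" and the data can still vanish on power loss.)

    import os

    def durable_write(path: str, data: bytes) -> None:
        # Open, write, then ask the OS to flush through the page cache
        # and the drive's write cache before returning.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)  # only as honest as the firmware underneath
        finally:
            os.close(fd)

    durable_write("/tmp/example.dat", b"payload")  # invented path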

None of this means that you should trust any particular disk enough to skip redundancy, backups, etc. Companies can and do make trade-offs based on their needs and management competency, and people have been shifting software for a generation to rely less on the hardware – back when Sun announced ZFS, one of the major appeals was that you could drop expensive hardware RAID controller dependencies in favor of cheap boxes of disks – but there isn't a single global optimum point.

A lot of enterprise purchasing is driven by being able to satisfy your most demanding users with the same service as everyone else, so your admins don't need to be trained and experienced on dozens of different storage systems. That last part especially extends to testing: for example, does your rack of consumer drives with software redundancy come back up cleanly after a kernel panic or power outage, especially a nasty one like a fluctuating brownout?

Depending on your budget, needs, and technical bench depth you might reasonably conclude that the savings are worth the ops work, or that it's safer to pay an enterprise storage vendor who'll certify that they've done that testing and will have tech support on-site within an hour, or that you'll use AWS/Azure/GCP because they do even more of that tedious but important work. All of those can be right, but I've typically found that people in the first two categories think they're doing better than they are and would be paying less for better service in the cloud.
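(As a sketch of what that power-loss testing can look like in practice, with invented details: the harness below writes CRC-tagged records with an fsync after each; after yanking power mid-run, the verify pass counts how many acknowledged records actually survived. Any gap means some layer of the stack lied about durability.)

    import os, struct, zlib

    def append_record(fd: int, seq: int, payload: bytes) -> None:
        # Each record carries a sequence number and a CRC so the post-crash
        # pass can tell "never written" apart from "torn or corrupted".
        body = struct.pack(">Q", seq) + payload
        rec = struct.pack(">II", zlib.crc32(body), len(body)) + body
        os.write(fd, rec)
        os.fsync(fd)  # every record counted as written was claimed durable

    def count_survivors(path: str) -> int:
        # Run after an unclean shutdown; stops at the first bad record.
        good = 0
        with open(path, "rb") as f:
            while True:
                hdr = f.read(8)
                if len(hdr) < 8:
                    break
                crc, length = struct.unpack(">II", hdr)
                body = f.read(length)
                if len(body) < length or zlib.crc32(body) != crc:
                    break
                good += 1
        return good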


Well, "enterprise storage" can actually be multiple, redundant copies of the data behind the scenes.

So, internally the storage team might quote, say, $1000/TB (simple numbers for the sake of example) for a given quantity of storage. And behind the scenes they'll likely have at least redundant storage arrays, plus backups and 24/7 monitoring for all of that data, etc.
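(A back-of-the-envelope version of that quote, all numbers invented; the point is just that the internal price covers several times the raw capacity plus the operational overhead.)

    raw_cost_per_tb = 120      # assumed raw disk street price, $/TB
    replication_factor = 2     # mirrored arrays behind the scenes
    backup_copies = 1          # one full backup copy
    ops_multiplier = 3.0       # assumed monitoring, staff, power, support

    quoted = raw_cost_per_tb * (replication_factor + backup_copies) * ops_multiplier
    print(f"${quoted:.0f}/usable TB")  # -> $1080, the same ballpark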


Yes, but that refutes the point all the same.


Because if you're providing services to a large business, you're typically not managing servers with physical disks attached. You're using a SAN with Fibre Channel connections or NFS mounts. Most SANs require specific drives sold by the vendor with firmware they've tested, mounted on custom sleds, etc. You can't just connect a WD My Book drive.

Mom-and-pop businesses using single servers can do whatever they want with regard to drives, I don't care. I would argue they shouldn't have servers at all, but if you do have servers, you should at minimum use RAID, etc., not a drive plugged into your USB port.


If you are in that situation, then obviously you have to do what you have to do.

Absolutely none of that has to apply to the sort of situation we are talking about here.

The only real reason to spend more per disk is when you know all your disks are going to fail eventually, and extending the average lifespan per disk will definitely save you more than the enterprise markup costs. So you'd better have dozens or hundreds of disks in the first place.
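(A rough break-even sketch with invented numbers; only the shape of the comparison matters. At small fleet sizes the markup dominates, and expected failure costs don't come close to closing the gap.)

    n_disks = 100
    consumer_price, enterprise_price = 300, 600   # assumed $/disk
    consumer_afr, enterprise_afr = 0.015, 0.007   # assumed annual failure rates
    replace_cost = 150                            # assumed labor/logistics per swap
    years = 5

    def fleet_cost(price, afr):
        # Purchase cost plus expected replacements over the service life.
        return n_disks * price + n_disks * afr * years * (price + replace_cost)

    print(fleet_cost(consumer_price, consumer_afr))      # 33375.0
    print(fleet_cost(enterprise_price, enterprise_afr))  # 62625.0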

For any truly valuable data, you should at minimum have a backup in a different physical location. That backup should include at least one redundant disk. None of that is worth spending a dime on more expensive hardware.


You try telling a VP that their business unit can't function because you decided to purchase the cheaper drive.

All disks fail eventually. Outliers may run longer than the MTBF for that drive model, but they all fail eventually.
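(For scale: MTBF is a fleet statistic, not a per-drive lifespan. Assuming a constant failure rate, a datasheet MTBF converts to an annualized failure rate like this; the MTBF figure here is assumed.)

    import math

    mtbf_hours = 1_200_000     # assumed datasheet MTBF
    hours_per_year = 8766
    afr = 1 - math.exp(-hours_per_year / mtbf_hours)
    print(f"AFR ≈ {afr:.2%}")  # ≈ 0.73% of drives failing per year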

And backups are fine for restoring data, but they don't help provide access to that data in a timely fashion. That's why people use SANs, AWS etc.

The cost of a SAN storage array is a rounding error for a business making cars (like Toyota) or selling insurance (like my company).


It's very unlikely for all of your disks to fail at the same time. Even if they do, that's why you have an offsite backup. The name of the game is redundancy, not longevity.

In practically every use case, two consumer disks will be better than one enterprise disk. Once you have enough disks failing often enough, longevity can be worth the additional cost. Until then, it just isn't.
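(A sketch of that claim with assumed failure rates, treating failures as independent and ignoring rebuild windows; the same-lot point in the reply below is exactly where the independence assumption breaks.)

    consumer_afr = 0.02      # assumed 2% annual failure rate
    enterprise_afr = 0.005   # assumed 0.5%

    p_loss_single = enterprise_afr     # one copy: one failure is data loss
    p_loss_mirror = consumer_afr ** 2  # mirror: loss needs both copies gone

    print(p_loss_single)  # 0.005
    print(p_loss_mirror)  # 0.0004 -> roughly an order of magnitude safer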


Actually, it's not uncommon for a batch of disks from a vendor (the same lot #) to fail around the same time.

And again, a backup is fine for recovering deleted data, or after a fire or ransomware. But for day-to-day operations, no one is really willing to wait for you to restore from a local backup, much less an offsite one.


> no one is really willing to wait

And they are willing to wait for you to rebuild your raid array?

Either you lost your data from a disk failure, or you are waiting to lose your data from a disk failure. How are we not on the same page here?
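(Rough rebuild arithmetic for scale, all figures assumed: a resilver takes hours to days, but unlike a restore from backup the array stays online, degraded, while it runs.)

    capacity_tb = 16
    rebuild_mb_per_s = 150                  # assumed sustained rebuild rate
    seconds = capacity_tb * 1_000_000 / rebuild_mb_per_s
    print(f"≈ {seconds / 3600:.0f} hours")  # ≈ 30 hours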


I think if you had worked with or managed a SAN, you might realize that a single disk failure is a non-event. I'm not talking about a JBOD or a storage shelf using RAID5, I'm talking about a NetApp or similar system that can easily handle disk failures without interrupting service.


It’s because all consumer disks are garbage. The way it works is that disks are tested: those without failures are sold as “Enterprise”, and those that have failures are labeled and sold as “consumer” rather than thrown in the bin.



