My first experience with drive failure was a ~40MB HDD expansion card in a 386. The bearings got "sticky", so the spindle wouldn't start rotating. But there was an aluminum-tape-covered hole, so you could insert the eraser end of a pencil and nudge the spindle. So yes, very understandable.
Not too much later, I used Iomega ZIP drives, and experienced the "click of death". That was sudden, and irreversible, but also very understandable.
For the past couple decades, I've consistently used RAID arrays, mostly RAID1 or RAID10 (and RAID0 or RAID5-6 for ephemeral stuff). I've had several HDD failures, but they were usually progressive, and I just swapped out and rebuilt.
I recently had my first SSD failure. And it was also progressive. The first symptom was a system freeze requiring a hard reboot, after which I'd see that one of the SSDs had dropped out of the array. But I could add it back. At first, I thought it was a software problem, and that the array dropout was just fallout from the hard reboot.
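For anyone who hasn't done it, re-adding a dropped member is quick with Linux software RAID. A minimal sketch, assuming mdadm and hypothetical device names (/dev/md0, /dev/sdb1):

    # see which member fell out of the array
    cat /proc/mdstat
    mdadm --detail /dev/md0
    # put the dropped SSD partition back in; mdadm resyncs it
    mdadm /dev/md0 --re-add /dev/sdb1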
But eventually, the box wouldn't boot, so I had to replace the bad SSD and rebuild the array. It was complicated by having sd1 in a RAID10 for /boot, and sd5 in a RAID10 for LVM2 and LUKS, so I also had to partition the new drive with fdisk before device mapper would work.
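Roughly, the replacement goes: partition the new drive to match the survivor, then add each partition back into its array and let it resync. A sketch of that, assuming mdadm with hypothetical device names (/dev/sda surviving, /dev/sdb new) -- sfdisk here just copies the layout instead of recreating it by hand in fdisk:

    # clone the partition table from the surviving SSD onto the new one
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    # add the new partitions into their arrays
    mdadm /dev/md0 --add /dev/sdb1   # RAID10 for /boot
    mdadm /dev/md1 --add /dev/sdb5   # RAID10 under LVM2 and LUKS
    # watch the resync progress
    watch cat /proc/mdstat

Once md1 is healthy, the LUKS and LVM2 layers on top of it come back on their own; they only see the md device, not the individual SSDs.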