
Well, to be fair, there is an entire layer of abstraction at the SSD controller level that does tons of black-box magic. It allows the OS to treat the SSD as just another storage device without knowing what is actually going on underneath.

So the combination of non-moving parts (making it hard or impossible to debug via physical inspection) and a ton of wear leveling and other behind-the-scenes magic can definitely make SSDs seem magical.




Is there a good reason why we use separate SSD controllers instead of letting the primary CPU handle it? The obvious reason is backwards compatibility, but as more of computing moves to SSDs, is this still relevant?

ZFS has shown that removing layers of abstraction with regard to storage can be beneficial.


That adds a round of latency, and makes it pretty much impossible to boot off the drive. The blocks aren't in the same order in the Flash as they are presented by its interface, and one of the main jobs of the controller is to re-order them.

It would be an interesting product to have, a raw block API to a Flash device with all the temporary state stored on the host - but a hard one to sell, as it's not differentiated in any way.
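To illustrate the re-ordering point above: the controller's flash translation layer (FTL) maps the logical block addresses the host sees onto whatever physical pages are free, since flash pages cannot be overwritten in place. A minimal sketch (all names are illustrative, not any vendor's actual design):

```python
# Toy flash translation layer: logical block addresses are remapped to
# free physical pages, so on-flash order need not match logical order.

class TinyFTL:
    def __init__(self, num_pages):
        self.mapping = {}                    # logical block -> physical page
        self.free_pages = list(range(num_pages))

    def write(self, lba, data, flash):
        # Flash can't overwrite in place; every write takes a fresh page.
        old = self.mapping.get(lba)
        page = self.free_pages.pop(0)
        flash[page] = data
        self.mapping[lba] = page
        if old is not None:
            # The old page is now stale; it can be reused after an erase.
            self.free_pages.append(old)

    def read(self, lba, flash):
        return flash[self.mapping[lba]]

flash = [None] * 8
ftl = TinyFTL(num_pages=8)
ftl.write(0, "a", flash)
ftl.write(1, "b", flash)
ftl.write(0, "a2", flash)    # rewrite of LBA 0 lands on a new physical page
print(ftl.read(0, flash))    # -> a2
print(ftl.mapping[0])        # -> 2 (logical block 0 no longer at physical page 0)
```

A real controller also does garbage collection, wear leveling, and bad-block management on top of this mapping, which is exactly the temporary state a host-managed raw-flash device would have to keep itself.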


There are host-managed SMR drives, which share a number of characteristics with flash storage.


The controller makes it possible to get a standard bus and a pre-installed driver and use them to access any kind of memory from any manufacturer and any technology. It's the kind of convenience that makes people buy hardware - it's the kind of thing that made SATA and USB win. The alternative is that once in a while you plug a drive into your computer and it won't work.

Besides, I don't think manufacturers want to release the best practices for using their memory.


Apple uses their T2 security enclave as the SSD controller and it's caused quite a stir.


I wonder if you can read any extra state out of the T2 as a result, e.g. more information on wear leveling, temporary read failures and so on, more than the standard SMART counters?


Where can I read more about this?



The exact same thing happens with hard drives. There's a ton of magic going on.



Thanks for this youtube link. In the case of recovering deleted files from SSDs with the TRIM command enabled (a forensic write blocker was used), the following drives have a low probability of recoverability: Crucial, Intel, and Samsung (3-core controller). Whereas on Seagate, SuperTalent (parallel ATA to SATA bridge chip), OCZ, and Patriot, files could be recovered. If the drive is quick-formatted and TRIM is enabled, the data is completely gone on the following drives: Crucial, Intel, and Samsung. The TRIM state has the biggest impact on whether the data can be recovered. You can check whether TRIM is enabled on Windows with the command: fsutil behavior query DisableDeleteNotify. If the result is 0, TRIM is enabled (the default).
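If you want to check this programmatically rather than eyeball the fsutil output, a small sketch (Windows-only when actually invoking fsutil; the parsing assumes the usual "DisableDeleteNotify = N" output format):

```python
import subprocess

def trim_enabled(output=None):
    """Return True if TRIM is enabled, per the output of
    `fsutil behavior query DisableDeleteNotify` on Windows.
    DisableDeleteNotify = 0 means delete notifications (TRIM) are ON.
    Pass `output` directly for testing; otherwise run the command."""
    if output is None:
        output = subprocess.check_output(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"],
            text=True)
    # Typical line: "NTFS DisableDeleteNotify = 0"
    for line in output.splitlines():
        if "DisableDeleteNotify" in line and "=" in line:
            value = line.split("=")[1].split()[0]
            return value == "0"
    raise ValueError("unexpected fsutil output")

print(trim_enabled("NTFS DisableDeleteNotify = 0"))  # -> True
print(trim_enabled("NTFS DisableDeleteNotify = 1"))  # -> False
```

Note the inverted sense of the flag: 0 means the "disable" setting is off, i.e. TRIM is active, which is easy to misread.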


Yes, but most technical people seem to have a mental model that explains most failures pretty well. Not so for SSDs, apparently.





