Getting bigger caches closer to the CPUs seems to be the big issue mentioned. SRAM takes up far more die area per bit than DRAM, so on-chip DRAM is being considered, but it doesn't look likely. Further out from the CPU, there are technologies coming along that are faster than flash but slower than DRAM.

This article doesn't address the architectural issues of what to do with devices which look like huge, but slow, RAM. The history of multi-speed memory machines is disappointing. Devices faster than flash are too fast to handle via OS read and write, but memory-mapping the entire file system makes it too vulnerable to being clobbered. The Cell processor tried lots of memory-to-memory DMA, and was too hard to program.
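To make the read-versus-mmap contrast concrete, here's a minimal sketch (assuming a POSIX-style interface and a hypothetical /dev/pmem0 device): with read/pread, every access is a kernel round trip; with mmap, accesses are plain loads and stores, which is fast but means a stray store through a bad pointer lands directly in the persistent data.

    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/pmem0", O_RDWR);   /* hypothetical fast device */
        if (fd < 0) { perror("open"); return 1; }

        /* Path 1: OS read -- each access enters the kernel. */
        char buf[64];
        if (pread(fd, buf, sizeof buf, 0) < 0) perror("pread");

        /* Path 2: mmap -- plain loads/stores, no syscall per access,
           but any stray store through this pointer hits the device directly. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
        char first = p[0];                     /* direct load, no kernel entry */
        (void)first;

        munmap(p, 4096);
        close(fd);
        return 0;
    }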

Maybe hardware key/value stores, so you can put database indices in them and have instructions for accessing them.



Can you go into some more detail about the risk of clobbering if you mmap everything? What's clobbering what?


"Devices faster than flash are too fast to handle via OS read and write", interesting could you elaborate on why?


Going through the kernel on every read or write takes far longer than a plain load or store from memory.
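
A rough way to see it (just a sketch, assuming a POSIX system and a pre-existing file of at least 4 KiB named "testfile"): time a syscall per access against a plain load from an mmap'd region. Exact numbers vary by machine, but the syscall path is typically on the order of hundreds of nanoseconds per access versus a few nanoseconds for a cached load.

    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    static double now_sec(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void) {
        int fd = open("testfile", O_RDONLY);   /* assumes a file >= 4 KiB exists */
        if (fd < 0) { perror("open"); return 1; }
        enum { N = 1000000 };
        char buf;
        volatile char sink = 0;

        double t0 = now_sec();
        for (int i = 0; i < N; i++)
            pread(fd, &buf, 1, 0);             /* one kernel entry per byte */
        double t1 = now_sec();

        char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
        double t2 = now_sec();
        for (int i = 0; i < N; i++)
            sink = p[i & 4095];                /* plain load, no kernel entry */
        double t3 = now_sec();

        printf("pread: %.1f ns/access   mmap load: %.1f ns/access\n",
               (t1 - t0) / N * 1e9, (t3 - t2) / N * 1e9);
        (void)sink;
        munmap(p, 4096);
        close(fd);
        return 0;
    }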



