
Yeah, I wrote this as a fun little experiment to learn more about io_uring. The practical savings of using this are tiny, maybe 5 seconds over your entire life. That wasn't the point haha
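
For anyone curious what "io_uring for ls" roughly looks like: the idea is that `ls -l` spends most of its time doing one synchronous stat per directory entry, and io_uring lets you batch those. Below is a minimal sketch of that pattern, not the actual project code; it assumes liburing (link with -luring), and BATCH and the size-only output are just illustrative.

    /* Sketch: batch statx() calls for directory entries through io_uring
     * instead of one blocking stat per entry. Error handling trimmed. */
    #define _GNU_SOURCE
    #include <dirent.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <sys/stat.h>   /* struct statx, STATX_BASIC_STATS (glibc >= 2.28) */

    #define BATCH 64        /* arbitrary batch size for this sketch */

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : ".";
        DIR *dir = opendir(path);
        if (!dir) { perror("opendir"); return 1; }
        int dfd = dirfd(dir);

        struct io_uring ring;
        if (io_uring_queue_init(BATCH, &ring, 0) < 0) { perror("queue_init"); return 1; }

        char names[BATCH][256];
        struct statx stx[BATCH];
        struct dirent *de;

        for (;;) {
            /* Queue one statx SQE per directory entry, up to BATCH at a time. */
            int n = 0;
            while (n < BATCH && (de = readdir(dir)) != NULL) {
                snprintf(names[n], sizeof(names[n]), "%s", de->d_name);
                struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                io_uring_prep_statx(sqe, dfd, names[n], 0, STATX_BASIC_STATS, &stx[n]);
                sqe->user_data = n;
                n++;
            }
            if (n == 0)
                break;

            /* One submission syscall for the whole batch, then reap completions. */
            io_uring_submit(&ring);
            for (int i = 0; i < n; i++) {
                struct io_uring_cqe *cqe;
                io_uring_wait_cqe(&ring, &cqe);
                int idx = (int)cqe->user_data;
                if (cqe->res == 0)
                    printf("%10llu  %s\n",
                           (unsigned long long)stx[idx].stx_size, names[idx]);
                io_uring_cqe_seen(&ring, cqe);
            }
        }

        io_uring_queue_exit(&ring);
        closedir(dir);
        return 0;
    }

The directory enumeration itself is still synchronous here; only the per-entry metadata lookups are batched, which is where the wins (such as they are) come from.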


I'd be curious to know if this helps on supercomputers, which are notorious for frequently hanging for a few seconds on an ls -l.


It could, but it's important to keep in mind that the filesystem architecture there is also very different: a parallel filesystem with disaggregated data and metadata.

When you run `ls -l` you could potentially be enumerating a directory with one file per rank, or worse, one file per particle or something. You could try making the read fast, but I also think it makes no sense to have that many files in the first place: there are things you can do to reduce the number of files on disk. Many are also pushing for distributed object stores instead of parallel filesystems... fun space.


It's a very cool experiment. I just wanted to steer the conversation towards those things rather than whether or not this is a good ls replacement, because, like you say, that would be missing the point.



