
I have several hundred million files stored on XFS, not sure if that's a lot or a little by HN standards, but maybe it counts for something.

We will never again run XFS on a filesystem where deletions are to be expected. XFS works fine as long as you are only adding files; deletions are pathologically slow.

We've tried just about every trick in the book and have finally decided to switch back to ext3, which, given the number of files, will take a while.

So, that's nothing against XFS from a reliability point of view, but it definitely is from a performance point of view.

Interesting. I was in a meeting where XFS got high praise from a very knowledgeable person. The catch is that in that solution, files are added and appended but very seldom deleted.


Our usage is pretty specific; it took us a while to narrow this down to XFS, mainly because it was hard to believe how slow it was.

Deletion can be much slower than file creation if the files are small.

If you google 'xfs delete speed' you'll find plenty of information; the best fix for us was the 'nobarrier' mount option.
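For reference, write barriers are controlled at mount time, so this is an fstab change. A sketch of what that looks like (the device and mount point here are placeholders, not from the original post), with the usual caveat that 'nobarrier' trades crash durability for metadata performance, and that newer kernels have deprecated the option:

```
# /etc/fstab -- hypothetical device and mount point.
# 'nobarrier' skips flushing the drive's write cache on journal commits,
# which speeds up metadata-heavy workloads (like mass deletes) at the
# cost of filesystem consistency if power is lost.
/dev/sdb1  /data  xfs  noatime,nobarrier  0  2
```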

Deleting files at random is especially bad.


Deletion can be much slower than file creation when the files are hundreds of MB, too!

Worst of all, "rm -R" runs right into XFS's most pathological case -- it's doing tons of metadata reads while it unlinks files, and XFS shoves a lock in every orifice.
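A rough way to reproduce the effect described above (directory layout and file counts are illustrative; scale DIRS/FILES up and run this on an actual XFS mount to see the pathological behavior):

```shell
# Create many small files spread across subdirectories -- the scattered
# metadata is what makes recursive unlinks seek-heavy on XFS.
DIRS=50
FILES=20
TESTDIR=$(mktemp -d)

for d in $(seq 1 "$DIRS"); do
  mkdir "$TESTDIR/$d"
  for f in $(seq 1 "$FILES"); do
    echo data > "$TESTDIR/$d/$f"
  done
done
sync

# Time the recursive delete; each unlink triggers metadata reads,
# and on XFS these serialize behind the filesystem's locks.
time rm -R "$TESTDIR"
```

Comparing the same run against an ext3 mount of the same disk makes the gap obvious.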


