I have several hundred million files stored on XFS, not sure if that's a lot or a little by HN standards, but maybe it counts for something.
We will never, ever have another XFS setup on a filesystem where deletions are to be expected. XFS works fine as long as you are just adding files to the filesystem; deletions are pathologically slow.
We've tried just about every trick in the book and have finally made the decision to switch back to ext3, which, given the number of files, will take a while.
So, that's nothing against XFS from a reliability point of view, but definitely from a performance point of view.
Interesting. I had a meeting where XFS got high praise from a very knowledgeable person. The catch is that in that solution, files are added and appended and very seldom deleted.
Deletion can be much slower than file creation when the files are hundreds of MB each, too!
Worst of all, "rm -R" runs right into XFS's most pathological case -- it's doing tons of metadata reads while it unlinks files, and XFS shoves a lock in every orifice.
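To make that access pattern concrete, here's a rough Python sketch of what a recursive delete boils down to (the path is just a hypothetical example): every unlink is preceded by directory reads and inode lookups, and that per-file metadata traffic is exactly what XFS ends up serializing on.

    import os

    def rm_tree(path):
        # Reading the directory is itself metadata I/O (getdents under the hood).
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    rm_tree(entry.path)       # recurse first, so we delete bottom-up
                else:
                    os.unlink(entry.path)     # one unlink per file, each hitting the journal
        os.rmdir(path)                        # the now-empty directory goes last

    rm_tree("/data/old-bucket")               # hypothetical path, for illustration only

It's the same work "rm -R" does, just spelled out: nothing about the deletion itself is expensive in user space, it's the per-file metadata round trips underneath that add up.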