I think you'd have to incur some additional copying to do batch processing of integers in JSON documents in the general case. Last I checked, simdjson included the typical scalar code for parsing base-10 integers and a fairly expensive procedure for parsing base-10 doubles (where most of the runtime goes toward getting the final bit of the mantissa right, which wasn't worthwhile for our use case but is reasonable for a general-purpose library).
That said, it's not clear to me that the scalar integer parsing code should win even when parsing integers individually. For inputs where the lengths of the numbers vary unpredictably, it pays a significant cost in branch misses, while the vector code can replace those branches with a data dependency.
Edit: After writing the above, it occurred to me that most documents probably have a regular pattern of number lengths. I don't know how well branch predictors cope when the pattern of branches is fairly long (in terms of the sum of the lengths), but the branches probably cost ~nothing for a lot of real-world inputs.