Zig has weird limits in a few places (1k compile-time branches without extra configuration, integer type widths must be less than 2^16, ...). Array/vector lengths aren't the issue though; you can happily work with 64-bit lengths on a 64-bit system.
Skimming the source, there are places where the author explicitly chooses to represent lengths with 32-bit types (e.g., schema.zig/readByteArray()). I bet you could massage the code to work with larger data without many issues.
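To illustrate the failure mode (a minimal C sketch, not the project's actual code; `store_len32` is a hypothetical stand-in for a 32-bit length field like the one in readByteArray()):

```c
#include <stdint.h>

// Hypothetical helper mirroring a 32-bit length field: any length at or
// above 2^32 bytes silently wraps around on the cast.
uint32_t store_len32(uint64_t real_len) {
    return (uint32_t)real_len;
}
```

Feed it `(1ULL << 32) + 100` and you get 100 back, i.e. an entry just over 4 GiB masquerades as a 100-byte one. Hence the s/32/64 patch below.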
So yes, it is that straightforward for a proof of concept (downgrade to an old version of Zig compatible with the project, patch an undefined-variable bug the author introduced yesterday, s/32/64, add 4 bytes to main.zig->Header and its accesses).
Doing so makes the program slower though, which might be a non-starter for a performance-focused project. Plus you'd need a little more work to properly handle large archives on 32-bit and smaller systems (at least the ones whose filesystems support >4 GB files).
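For the 32-bit case, the usual trick (sketched here in C under POSIX assumptions; `read_at` is a made-up helper, not anything from the project) is to seek with `off_t` via `fseeko` and build 32-bit targets with `-D_FILE_OFFSET_BITS=64` so offsets past 4 GiB stay representable:

```c
#include <stdio.h>
#include <sys/types.h>

// Read n bytes at an absolute 64-bit-capable offset.
// On 32-bit glibc builds, compile with -D_FILE_OFFSET_BITS=64
// so off_t is 64 bits wide; fseeko/ftello then handle >4 GiB files.
int read_at(FILE *f, off_t pos, void *buf, size_t n) {
    if (fseeko(f, pos, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, n, f) == n ? 0 : -1;
}
```

A 64-bit Zig build gets this for free, which is part of why the s/32/64 patch alone works there.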
How does that even happen in 2021?