The index is just a cache of the metadata of files in the working directory; changing how the objects in the repo are stored won't speed up a 'git status'.
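For a concrete sense of what that cache holds, here's a minimal sketch of a version-2 `.git/index` reader (extensions and v3/v4 features ignored, long paths not handled): each entry is basically a saved `stat()` result plus a blob hash, which is exactly what `git status` compares against the working tree.

```python
import struct

def list_index_entries(index_path=".git/index"):
    """Walk a version-2 .git/index: each entry is a cached stat() result
    plus a blob hash -- object storage never enters into it."""
    with open(index_path, "rb") as f:
        data = f.read()
    sig, version, count = struct.unpack(">4sII", data[:12])
    assert sig == b"DIRC" and version == 2
    pos = 12
    for _ in range(count):
        (ctime_s, ctime_ns, mtime_s, mtime_ns,
         dev, ino, mode, uid, gid, size) = struct.unpack(">10I", data[pos:pos + 40])
        sha = data[pos + 40:pos + 60].hex()
        (flags,) = struct.unpack(">H", data[pos + 60:pos + 62])
        name_len = flags & 0x0FFF                 # paths >= 0xFFF bytes not handled here
        path = data[pos + 62:pos + 62 + name_len].decode("utf-8", "replace")
        yield path, oct(mode), size, mtime_s, sha
        pos += (62 + name_len + 8) // 8 * 8       # entries are NUL-padded to 8-byte boundaries
```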
When you say 'read the repo', it makes me think you're more interested in the behaviour of cloning from a remote.
Loose objects would avoid the need to inspect packfiles, but… that code's all written in C and mmaps the contents & does fast seeks. Most likely the slow parts are reconstituting objects from packs (also mmapped C with fast seeks) or delta-compressing objects for git-upload-pack to send to clients. Going to loose objects doesn't help if the remote still burns CPU creating a pack: try using a dumb remote instead of a smart one? You're trading CPU for network now.
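For what it's worth, a dumb remote is just the repository directory served as static files; the client walks `info/refs` and fetches objects and packs itself, so the server does no pack generation. A rough sketch using Python's stdlib server (the repo path is made up; run `git update-server-info` in the repo first so `info/refs` and `objects/info/packs` exist):

```python
import functools
import http.server

# Serve a bare repo as a dumb HTTP remote. /srv/git/project.git is a
# hypothetical path; any bare repo that has had `git update-server-info`
# run in it will do.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory="/srv/git/project.git")
http.server.HTTPServer(("", 8000), handler).serve_forever()
# Clients clone with: git clone http://<host>:8000/ project
```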
If you're more interested in improving performance in a clone: loose objects avoid the (fast, mmapped-C) read of a packfile. The index still has to track whether you've changed any checked-out file in the working directory, and if there are a lot of files, it's going to be big.
I was thinking about this from the perspective of a server like Gitea. What I meant by 'read the repo' is retrieve objects like commits to display them to the user.
So on the server you should only ever have packfiles, and to read packfiles efficiently you read the index (.idx) file. I'm not positive, but I think this file needs to be read in its entirety to access an object. Even if you don't have to read the whole file, it's probably worth it, because you generally read more than one object at a time (e.g. to display the list of files in HEAD you read the commit HEAD points to, read its tree, and read all of the blobs in that tree).
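That multi-object read pattern, sketched via git plumbing (shelling out rather than parsing packs directly; `files_in_head` is a made-up helper name):

```python
import subprocess

def files_in_head(repo="."):
    """Resolve HEAD to a commit, walk its tree recursively, then read
    each blob -- the read pattern a repo viewer does per page."""
    commit = subprocess.check_output(
        ["git", "-C", repo, "rev-parse", "HEAD"], text=True).strip()
    listing = subprocess.check_output(
        ["git", "-C", repo, "ls-tree", "-r", commit], text=True)
    for line in listing.splitlines():
        meta, path = line.split("\t", 1)
        mode, otype, sha = meta.split()
        if otype != "blob":                       # skip submodule entries
            continue
        blob = subprocess.check_output(["git", "-C", repo, "cat-file", "blob", sha])
        yield path, blob
```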
My thought with using loose files rather than packfiles is that you wouldn't suffer the memory overhead of the lookup: you just open the file at `objects/some/object` and parse it.
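Reading a loose object really is that simple: one `open()`, one zlib inflate, and a tiny header parse. A sketch, assuming the standard SHA-1 layout of `objects/<first 2 hex chars>/<remaining 38>`:

```python
import os
import zlib

def read_loose_object(git_dir, sha_hex):
    """Read a loose object directly: no pack index involved."""
    path = os.path.join(git_dir, "objects", sha_hex[:2], sha_hex[2:])
    with open(path, "rb") as f:
        raw = zlib.decompress(f.read())
    header, _, body = raw.partition(b"\x00")      # header is e.g. b"commit 241"
    obj_type, size = header.split()
    assert int(size) == len(body)
    return obj_type.decode(), body
```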
The real solution here is probably to get a server with more RAM and cache repositories. I'd be interested to hear what GitHub does.
GitHub uses DGit to (effectively) get loose objects on demand and cache them locally.
Parsing the packfile indexes is ridiculously fast; even in a memory-constrained environment the OS will manage loading data from disk so you only use a few pages. Inflating objects from packs is slower & will trash your memory; rendering to HTML will be even worse.
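To make the "few pages" point concrete: a v2 `.idx` starts with a 256-entry fanout table, so a lookup is a binary search within a single first-byte bucket, and with `mmap` only the pages holding the fanout and that bucket get faulted in. A rough sketch (v2, SHA-1; the 64-bit large-offset table is not handled):

```python
import mmap
import struct

def pack_idx_lookup(idx_path, sha_hex):
    """Find an object's pack offset via the v2 .idx fanout + binary search."""
    want = bytes.fromhex(sha_hex)
    with open(idx_path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            assert m[:4] == b"\xfftOc", "not a v2 pack index"
            assert struct.unpack(">I", m[4:8])[0] == 2
            fanout = struct.unpack(">256I", m[8:8 + 1024])
            total = fanout[255]
            lo = fanout[want[0] - 1] if want[0] else 0
            hi = fanout[want[0]]
            names = 8 + 1024                      # sorted 20-byte object names start here
            while lo < hi:                        # binary search one fanout bucket
                mid = (lo + hi) // 2
                name = m[names + 20 * mid:names + 20 * (mid + 1)]
                if name < want:
                    lo = mid + 1
                elif name > want:
                    hi = mid
                else:
                    off_at = names + total * 24 + 4 * mid   # skip name + CRC32 tables
                    (off,) = struct.unpack(">I", m[off_at:off_at + 4])
                    assert not off & 0x80000000, "64-bit offsets not handled in this sketch"
                    return off
            return None
```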
Perhaps 1 GB is not enough RAM to host a web viewer of the Firefox repo? Maybe if you generate a static-site version of it…