Nobody said it is simple. Sure, the algorithmic complexity of the models is high, filters over filters, and likewise the resulting dump file is not editable (not without unbalancing the rest of the data, i.e. tracking and modifying the bytes used by each token of the model; it is vectored data at the bit level, and in my case I don't see it exactly as part of a graph).
Nevertheless, the above does not exclude what one can plainly see: a lossy compressed database (data is discarded), where the indexes are blended into the format of the data generated by the model weights. That is the main reason the model weights are needed again to read the database as expected: they are used by the predictive algorithm that reconstructs the data from the query, and the query itself forms the range of indexes that triggers the prediction direction (or directions).
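To make the analogy concrete, here is a minimal sketch, assuming a toy character-level bigram predictor stands in for the model weights. None of this is the actual mechanism of any real LLM; it only illustrates the claim above: the raw data is discarded (lossy), the "weights" are needed to read anything back, and the query acts as the index that triggers the reconstruction.

```python
# Hypothetical toy illustration of the "lossy compressed database" analogy.
from collections import defaultdict, Counter

def compress(corpus: str) -> dict:
    """'Compression': keep only next-character counts; the raw text is discarded."""
    weights = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        weights[prev][nxt] += 1
    return weights

def reconstruct(weights: dict, query: str, length: int = 20) -> str:
    """'Decompression': the query selects a starting index; prediction fills in the rest."""
    out = query
    for _ in range(length):
        options = weights.get(out[-1])
        if not options:
            break
        out += options.most_common(1)[0][0]  # greedy: most likely next character
    return out

corpus = "the cat sat on the mat. the cat ate the rat."
weights = compress(corpus)            # the original corpus can now be thrown away
print(reconstruct(weights, "the c"))  # approximate, lossy recall driven by the query
```

The reconstruction is approximate by construction: the counts cannot distinguish "sat on the mat" from "ate the rat", which is exactly the lossy part of the analogy.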
Oh, my apologies, executive chef, now I understand: you appear to insinuate that the data is not stored, that it is just a handful of unaligned logic gates spontaneously generating data.