The supply bottlenecks have been around commercializing the ChatGPT product at scale.
But pretraining the underlying model, I don't think, was on the same order of magnitude, right?