Historically, compute, memory, and storage costs have fallen as demand has increased. AI demand will drive the cost curve down and essentially democratise training models.
This assumes that commercial models won't continue to grow in scope, consuming resources that remain beyond the reach of mere mortals. 3D rendering is a useful analogy - today you could easily render Toy Story on a desktop PC, but the goalposts have shifted, and rendering a current Pixar film on a shoestring budget is just as infeasible as it was in 1995.
It's always been the case that corporates have more resources, but that hasn't stopped mere mortals from outcompeting them. All that's required is that the basic tools be within reach. In the narrow case of AI right now, the corporates have the advantage.
But the current model of huge, generic, pre-trained models that others can run inference on, or possibly just query, is fundamentally broken and unsuitable. I also believe that copyright issues will sink them, either by failing to qualify as fair use or through legislation. If there is a huge LLM in our future, it will be regulated and in the public domain, and will serve as an input for others' work.
The future consists not only of a multitude of smaller or refined models, but also of machines that are always learning. People won't accept being stuck in a (corporate) inference ghetto.
Or the other way around - large, general-purpose models might sink copyright itself, since good luck enforcing it... Even if those models are somehow prohibited, they'll still be widely available.