Hopefully we see enough efficiency gains over time for this to hold true. The models I can run on my (expensive) local hardware are pretty terrible compared to the free models provided by Big LLM. I'd hate to be chained forever to hardware I can't afford.