Much of this already exists. But if you're expecting performance identical to the giant models, well, that's a moving goalpost.
The paper I linked explicitly mentions that Falcon 180B is outperformed by Llama-3 8B. You can find plenty of similar cases all over the LMArena leaderboard. This year's small model is better than last year's big model, but the Overton window keeps shifting. GPT-3 was going to replace everyone. Then 3.5 came out and GPT-3 was shit. Then o1 came out and 3.5 was garbage.
What is "good accuracy" is not a fixed metric. If you want to move this to the domain of classification, detection, and segmentation, the same applies. I've had multiple papers rejected where our model with <10% of the parameters of a large model matches performance (obviously this is much faster too).
But yeah, there are diminishing returns with scale, and I suspect you're right that these small models will become more popular when those limits hit harder. But I think one of the critical things preventing us from progressing faster is that we evaluate research as if it were a product. Methods that work for classification very likely work for detection, segmentation, and even generation. But this won't always be tested, because frankly, the people working on model efficiency usually have far fewer computational resources themselves, which forces them to run fewer experiments. That's fine if you're not evaluating a product, but when you are, you end up reinventing techniques.