Yes and no. Hype over ‘API wrapper’ projects and startups will crash a bit, I think.
On the other hand, we are nowhere near approaching hard limits on LLMs. When LLMs start to be trained for smaller subject areas with massive, hand-curated examples for solving problems, they will reach expert performance in those narrow technical areas. These specialized models will then be combined in general-purpose MoEs.
Then new approaches beyond LLMs, RL, etc. will be discovered, perfected, and made more efficient.
Seriously, any hard limits are far into the future.
I agree. We're seeing great results for very narrow use cases using smaller LLMs too. It's no different from classical ML and emerging AI over the past 10 years: if you don't have a well-scoped use case, you're not going to succeed.
Now, the API wrapper projects that I do love are my meeting transcription and summarization apps. You can pry those from my cold, dead hands.
Yeah, I think there's an empty, marketing-driven hype that will die off, but AI is going to keep integrating into people's real-life workflows, and competition is going to heat up around delivering more consistently reliable results.
With regard to AI art, I think the debates are going to die off, and the artists and people making stuff are going to just keep doing that. Some of them will use AI in ways that challenge people, the way good art often does.