
> but the rate of progression feels like they will soon

The rate of progression seems to be logarithmic: we got to "something looks plausible" quickly, but getting that last 10% is probably going to cost more in hardware than just using humans, unless there are some breakthroughs. Just like self-driving cars.

My impression, at least looking at the developments from a technical perspective, is that they are hitting all kinds of scaling problems: available data, runtime complexity, available hardware, etc.

nVidia raking it in is a perfect example of how inefficient the whole thing is. The models are doing fairly simple math (nowhere near the complexity a general-purpose GPU core is built for); the real limits are memory bandwidth and memory capacity. I'm sure hardware designed specifically for transformer inference could make it cheaper and faster, but anything like that seems years out for the general market, so nVidia keeps raking it in selling repurposed GPU architectures, and nobody else can even compete with that because of the software stack alone.
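
To put rough numbers on the bandwidth point: during decoding, every weight has to be read from memory for each generated token, so a roofline-style estimate follows from model size and memory bandwidth alone. The figures below (a 70B fp16 model, ~3.35 TB/s of HBM, roughly an H100-class part) are assumptions for illustration, not measurements:

    # Back-of-envelope: decode speed when bound by memory bandwidth, not FLOPs.
    model_params = 70e9        # assumed 70B-parameter model
    bytes_per_param = 2        # fp16/bf16 weights
    hbm_bandwidth = 3.35e12    # assumed ~3.35 TB/s (roughly H100-class HBM)

    bytes_per_token = model_params * bytes_per_param   # all weights read once per generated token
    tokens_per_sec = hbm_bandwidth / bytes_per_token

    print(f"~{tokens_per_sec:.0f} tokens/s per GPU")   # ~24 tokens/s, ignoring KV cache and batching

The arithmetic done per byte moved is tiny, which is why the compute units sit mostly idle and why what you're really paying for is memory bandwidth and capacity.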

The people selling us on AI replacing software developers are getting fleeced for billions because they can't even port their own software stack to similar hardware from a different vendor...



