It's not really like that? A small plane with one passenger is a concept you can extrapolate to a bigger, faster plane. Afaik an often-wrong generative AI is _not_ a concept that extrapolates to a never-wrong generative AI: that's just not how it works at a fundamental level.
Although presumably very smart folks are working on it.
For me, it’s already a jetliner in the context of coding assistance. It’s correct more often than the top hits from a top search engine (or any coworker), and it’s a very enjoyable user experience (no ads or SEO garbage to filter out). I’d say the Wright brothers version was something like BERT or the earlier GPTs.
I see the analogy is having the unintended side effect of seeming predicated on some supposed utility of ChatGPT’s direct descendants. The point I want to get across is how difficult anything like it seemed beforehand, regardless of whether it begets anything markedly better afterwards.