I'd say there's a true bubble only when progress decouples from investment. But so far, progress has continued to soar.
Nobody thinks GPT4-o1 or Sonnet 3.5 is going to change the world. People are looking at the past 2-3 years though and extrapolating forward, and right now they don't see any evidence that progress will slow down. Quite the opposite actually.
As for where that value will manifest itself the most in the stock market? That's probably where this guy is getting hung up.
The risk is that the combined market caps of AI firms now exceed $1 trillion. This market cap is likely supported by several billion dollars in revenue, but margins are debatable outside of NVIDIA due to the capex/opex of inference, training, and data acquisition.
These valuations only make sense if the future free cash flow of these AI firms reaches the ~$100 billion mark within the next 3-5 years. It's somewhat unclear what the path there is if one takes the position that
a) 1 billion people are not going to spend $100/year on ChatGPT-like subscriptions.
b) White-collar productivity is not going to increase by 80%.
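As a rough sanity check on point (a) and the implied multiple, the arithmetic can be sketched; all figures below are illustrative assumptions taken from the comment, not reported numbers:

```python
# Back-of-envelope check on the valuation math above.
# All inputs are illustrative assumptions, not reported figures.

market_cap = 1_000_000_000_000   # ~$1T combined AI market caps
target_fcf = 100_000_000_000     # ~$100B free cash flow needed in 3-5 years

# Scenario (a): consumer subscriptions
subscribers = 1_000_000_000      # 1 billion people (assumed)
price_per_year = 100             # $100/year ChatGPT-like subscription (assumed)
subscription_revenue = subscribers * price_per_year

print(f"Subscription revenue: ${subscription_revenue:,}")
print(f"Implied multiple: {market_cap / target_fcf:.0f}x future FCF")
```

So scenario (a) alone would exactly hit the $100B mark, which is why rejecting both (a) and (b) makes the path to justifying the valuations unclear.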
> Nobody thinks GPT4-o1 or Sonnet 3.5 is going to change the world.
I do. AI progress could magically hit a brick wall right now and never advance any further, and o1 would still change the world more drastically than you can imagine. But AI progress will also not magically hit a brick wall.
I suppose for that reason o1 specifically will not change the world, because it will be superseded too soon to do so. But it would if it weren't.
Agreed. o1 is a step in the right direction. There are half a dozen improvements that could be made along the same lines without introducing any new technology.
Uh, a lot of people did, and continue to, think AI is going to change the world - including ChatGPT.
Hell, read any of the comments from the last year or two on any HN thread about it. Plenty of people claiming it’s the new god, it changes everything, replaces all knowledge workers, etc.
It is freaking amazing that this works almost perfectly. Seriously, it's mind-blowing. The problem is, when you keep in mind how the technology works, you realize that the "almost" can never be removed. That's fine for some use cases but not for others. I understand that human translators make mistakes, but they have a conception of truth and correctness; that matters.
We have some people that read Mandarin and double check the output once in a while. If it didn't work well the story would quickly become incoherent and make no logical sense because chapters are translated on their own.
The common failure mode is names and genders; for some reason it likes to swap characters' names and genders.
My point with regard to (2) is that it would have to maintain consistency across translation runs. Entire novels don't fit in the context window, so it can't make up a logically consistent novel across prompts.
When I do the translations, I actually don't even include previous chapters in the context.
So the last novel I completed is a long one, though not unheard of: I think it had 6 million characters. I don't know how many tokens that would be, but I doubt most models can support that large a context.
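For scale, the token count can be roughly estimated. The tokens-per-character ratios below are assumptions for illustration (modern tokenizers often map one Chinese character to somewhere around 1-2 tokens, but the actual ratio depends on the tokenizer):

```python
# Rough token-count estimate for a 6-million-character Chinese novel.
# The tokens-per-character ratios are illustrative assumptions; actual
# values depend on the specific tokenizer used by each model.

chars = 6_000_000

for tokens_per_char in (1.0, 1.5, 2.0):  # assumed plausible range
    tokens = int(chars * tokens_per_char)
    print(f"{tokens_per_char} tokens/char -> ~{tokens:,} tokens")
```

Even at the low end (~6 million tokens), the novel far exceeds typical context windows (on the order of 100K to 1M tokens), which is why whole-novel context is impractical and chapters get translated on their own.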
And really, consistent editing, and consistent choices about what gets translated versus what is left romanized instead, matter a great deal with many Chinese novels.
You can get something you can figure out, but I doubt you get something you'd really enjoy.
I don't think you should be getting downvoted for this, because you're pretty much correct. People's sense of what's normal has shifted so much in the last two years that LLMs now seem quaint, but the impact they've already had on things like the quality of machine translation is huge.
If you go to plain ChatGPT, a system not specifically designed to translate languages, and tell it to translate "以后你再使用我照片请使用这两张任何一张都可以这是我们结婚的照片" to English, you get a better result than any machine translation from just a few years ago. For example, it gets from context that "这是" has to be translated to a plural phrase in English. Even right now, Google Translate still gets this wrong.
I'm worried that a lot of the impact these technologies have will eventually turn out to be overwhelmingly bad. Google Photos is already partially broken by the amount of shitty AI images it returns. But the fact that they do have a huge impact can't be denied.
I don't know what exactly qualifies something as "changing the world", but if LLMs don't qualify, then not a lot of things do.
1) Most people have no need for translation software
2) Before LLMs we already had decent free translation in the form of the free Google Translate, using pre-transformer NN models
Personally, I still use Google Translate as my go-to for translation, rather than Sonnet 3.5. Maybe Google now uses an LLM under the hood, but I haven't noticed any increase in quality in the last few years.
You can test it by comparing human translations with LLM translations. The results are pretty close. Like I said in another comment, the common failure mode with Mandarin is around names and genders.
> I'm in a few communities that like to read novels from China / Korea. Claude Sonnet is able to translate Mandarin to English almost perfectly.
What novels are you reading?
This is fascinating to me, because the world is quickly becoming a place where we have to choose which information from the unlimited information stream to consume. It feels like unlimited opportunity cost. I, for one, don't think I'll ever have enough time to watch every Academy Award nominated film (let alone all of the winners). And that's just one type of information.
You're going after some obscure (?) stuff. What brought about the interest?
Xianxia, I expect. Distinctly Chinese fantasy webnovels, set around cultivators seeking immortality, that go for 6,000 chapters and start with the main character as the weakest guy in the weakest part of a world and end with him as a godlike being who pinches galaxies between his fingertips.
As for why people read it? Well... there's lots of it, it's free, and it's inherently progression fantasy most of the time, which can often be addictive.
One must simply be careful not to read forbidden scriptures... and develop the Dao of Brainrot; it's sadly an ever-present danger.
I like xianxia because there is actual power progression, compared to many of the new manga where characters go to max power in about half a chapter...
Also, it is often quite a different kind of fantasy, and sometimes the worldbuilding can be truly imaginative and different, whereas a lot of others are rather too formulaic.
Yeah, I'm one of those readers who adore worldbuilding; I can honestly live with cardboard-cutout characters so long as the worldbuilding is great. Couple that with good progression and honestly I could read for a month solid. I have read for a month solid; it was glorious!
The novels I like the most right now are "Mysteries of the Immortal Puppet Master" and "Eternal Tale", which are both just fun Chinese fantasy novels.
> What brought about the interest?
They are very distinctive, coming from the perspective of an American who has mostly read books published by Western authors. There are all these unique fantasy tropes based on Chinese history that are like a parallel branch to Tolkien-based fantasy. Also, you can clearly see that they have completely different value systems, and ironically you can tell they are comparatively less censored.
A bit of a tangent, but regarding the translation, can you compare it to the work of a human translator? I often find translated works unsatisfying. While the fault may well be with me, I thought The Three-Body Problem was a pretty poor piece of fiction (yes, I know, HN loves it, mea culpa etc.), but I wonder if I dislike the original work, or the rendering in English.
I thought the translation of the first book of the trilogy was stilted and flat; I could appreciate and enjoy the underlying story, but the prose felt like a mechanical translation. The latter two books, though, I thought read much more naturally.