In the late '90s I got a Master's degree specializing in neural networks. That's where I learned about the hype cycle in AI, and I've watched it continue ever since. Remember Watson, IBM's AI technology that was going to drive its growth? IBM got lots of press by building computers that could play chess and Jeopardy, which seemed impressive at the time, but it never found a way to make money from them. "AI winter" is the term for the trough of the AI hype cycle.
Note that every "AI summer" prior to this one has produced something useful, just never anything as world-changing as people expected. Most people think that if it can BS (excuse me, generate convincing text), then it can do lots of other jobs. Well, in previous AI summers, people thought that if it could play chess or answer Jeopardy questions, it could do many other things that, it turned out, it could not do (or could not do well enough).
For that matter, the ability to do math was at one time thought of as a sign of great intelligence. But it turned out that computers could do math long before they could do anything else. Our intuition about "if it can do this, soon it will be able to do that" is not very good.
I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful (it can be very useful), but it's not going to change the world, and the jobs it will automate are neither numerous nor very well paid even now.
If you're trying to pump the VC hype machine for $$, though, cryptocurrency isn't going to work anymore, so you need something new.
I think there has been so much hyped-up stuff in the tech world that it's trained a lot of people to point at everything and call it part of a hype cycle.
But this causes people to fail to notice that there are many things in tech that aren't just hype. Think about it: the internet wasn't just hype, the smartphone wasn't just hype. There are plenty of things that weren't just hype.
>I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful (it can be very useful), but it's not going to change the world, and the jobs it will automate are neither numerous nor very well paid even now.
This is a poor characterization. ChatGPT can answer extremely complex questions with novel answers that are indistinguishable from human answers. And remember, many of these answers are novel, meaning it isn't just copying an existing answer from somewhere.
There are of course huge problems with our ability to control ChatGPT so that it gives us correct answers consistently, but the fact that it can even do the above 50% of the time is a feat that moves the needle far beyond a mere "autosuggest". All you need to do is increase that 50% rate and suddenly it can autosuggest you out of an entire career. Cross your fingers and hope token prediction is just a technological dead end and that we can't raise the correctness rate past 50%. In many projects, getting to 50% is the hard part and getting to 100% can be the easier part.
Science went from something where you needed a physical library of journals to something much easier, where scientists the world over can parse these repositories online. This massive increase in efficiency did not result in fewer scientists; it resulted in a lot more scientists, as it became easier for more people to do research thanks to the internet.
Farming went from something where you had to plant and tend crops by hand to something where machines handle most of the work. In feudal society basically everyone farmed; now few people do.
So you gave me an example where technology led to the employment of more people. I gave you an example where technology led to the employment of fewer people. Does either example have anything to do with what ChatGPT will do to employment? Likely not.
If you're going to use it for anything where accuracy is important, you're going to need a human in the loop, verifying each thing ChatGPT comes up with. That means it isn't going to replace nearly as many people as you seem to believe (or, unfortunately, that it will replace them, but not for very long), because it isn't going to meaningfully increase productivity in any application that requires accuracy.
>If you're going to use it for anything where accuracy is important, you're going to need a human in the loop,
Having a human in the loop doesn't mean nobody is replaced.
If I have a job that requires 20 people to respond to emails all day, I can have AI do the job with 1 person in the loop. That's 19 people replaced.
The other thing you need to think about is the trendline. Sure, the AI requires humans in the loop now, but will it in the future? Just a year ago a tool like ChatGPT didn't exist. Now it exists. What will next year bring? Most likely a tool better than ChatGPT. If "better" keeps happening every year, inevitably there will be a point where the AI doesn't need a human in the loop.
It's funny you say that, because ChatGPT knows how to play chess even though it wasn't explicitly trained for that [0]. The "if it can do this, soon it will be able to do that" intuition is actually becoming real.
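(For anyone who wants to poke at this themselves, here's a rough sketch of how you might probe a chat model for chess moves through the OpenAI API. The model name, prompt wording, and position are illustrative assumptions on my part, not taken from the linked post.)

    # Rough sketch (assumptions, not the linked experiment): ask a chat model
    # for the next move in a game given the moves so far. Requires the `openai`
    # Python package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Game so far, fed as plain PGN-style move text.
    moves_so_far = "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4"

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You are playing Black in a chess game. Reply with "
                           "only your next move in standard algebraic notation.",
            },
            {"role": "user", "content": moves_so_far},
        ],
    )

    # Prints whatever move the model proposes, e.g. "Nf6".
    print(response.choices[0].message.content)

Whether the moves stay legal deep into a game is exactly the open question, but the point stands that nothing chess-specific was wired in.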