On the contrary -- the opposite will happen. There's a decent body of research showing that just by training foundation models on their own outputs, you amplify their capabilities (a toy sketch of the idea follows below).
Less common opinion: this is also how you end up with models that understand the concept of themselves, which has high economic value.
Even less common opinion: that's really dangerous.
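For intuition, here's a toy analogue from classical semi-supervised learning: self-training (pseudo-labeling) with a small sklearn classifier. The dataset, confidence threshold, and split sizes are all invented for illustration, and this is nowhere near foundation-model scale, but it shows the mechanism of a model improving by training on its own confident outputs.

    # Toy sketch: a model retrained on its own confident predictions can improve.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # Pretend only 2% of the training pool is labeled; the rest is "unlabeled".
    X_lab, X_unlab, y_lab, _ = train_test_split(
        X_train, y_train, train_size=0.02, random_state=0)

    model = LogisticRegression().fit(X_lab, y_lab)
    print("before self-training:", model.score(X_test, y_test))

    # Generate outputs on the unlabeled pool, keep only the confident ones, retrain.
    proba = model.predict_proba(X_unlab)
    keep = proba.max(axis=1) > 0.95
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, proba[keep].argmax(axis=1)])

    model = LogisticRegression().fit(X_new, y_new)
    print("after self-training:", model.score(X_test, y_test))

Whether the second score beats the first depends on the threshold and how scarce labels are, but with few labels it usually does -- a small-scale version of the amplification effect.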
This was done so that the IRL robot manipulation tasks could run fast enough. In the future, we may always need small models mixed with large models for some tasks (e.g., slow long-term planning paired with fast short-term planning), though compute does have a tendency to improve exponentially...
There's a neat argument against these models doing interpolation: the data manifold is so sparse in high dimensions that it's vanishingly unlikely for a good predictor to be interpolating between existing points on the manifold.
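You can put rough numbers on that: sample a training set from a Gaussian, then check how often a fresh sample from the same distribution even lands inside its convex hull (the precondition for interpolation). A minimal sketch with numpy/scipy; all sizes and dimensions below are arbitrary choices for illustration.

    # As dimension d grows, new points from the SAME distribution essentially
    # never fall inside the convex hull of the training set, so a predictor
    # there is extrapolating, not interpolating.
    import numpy as np
    from scipy.optimize import linprog

    def in_hull(points, x):
        """Feasibility LP: can x be written as a convex combination of points?"""
        n = len(points)
        A_eq = np.vstack([points.T, np.ones(n)])  # convex weights sum to 1
        b_eq = np.append(x, 1.0)
        res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        return res.success

    rng = np.random.default_rng(0)
    for d in (2, 5, 10, 20):
        train = rng.standard_normal((1000, d))
        tests = rng.standard_normal((200, d))
        frac = np.mean([in_hull(train, x) for x in tests])
        print(f"d={d:2d}: fraction of new points inside the hull = {frac:.2f}")

The fraction drops toward zero quickly; past a few dozen dimensions you'd need a training set exponential in d for interpolation to even be possible.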
I wonder what the net effect of writing like this is. The problem is that these abstract, contextless statements only make sense if they prompt the reader to reflect on some experience of their own, which at best mildly reinforces currently held beliefs. Otherwise, I can't see how the statements would stick for most people (not even as cached memory).
What would add significantly to this is a bunch of Gwern-style links embedded within each of these quips. The author is clearly speaking from a vantage point not many others have attained, and he'd be able to provide a story or other context for each.
Advice can be useful if you try to make it so. Number three on the list is "Don't ever work for someone you don't want to become." I was given this same advice as I was starting out in my career. It made me reevaluate my approach to work and led to some profound changes in my life. Years later, I can confidently say those choices were the best ones I've ever made. I will forever be grateful for receiving this advice. It's important to try to look forward at how following the advice may impact your life, rather than looking backward for confirmation of your currently held beliefs.
I think they'd make excellent content for a loading screen in a game. Pick one at random and give the player something to think about while they're waiting. If this happens enough times, you'll get some repeats, which will help reinforce the message or at least re-trigger the thinking process.
It's still littering. Used toilet-paper is biodegradable too. Would you care if I threw some on your lawn?
Anyway, throwing away food is appalling. It costs a titanic amount of resources to grow, manufacture, transport, package, and sell. There are significant environmental costs to producing anything, and most of that cost isn't captured in what you pay. To throw almost anything away is bad; throwing away food is criminal.
And it's irresponsible behavior in front of children, who typically take on board what you do, and not what you say.
Don't do it. It's nothing to do with Freedom. It's to do with being a responsible adult.
The problem with software-controlled permissions is that nation-state actors (who have unbounded resources) can snoop on your private matters with significantly greater ease.
At least with a hardware switch, someone would have to physically intercept the radio signals in the room you're in. In software, the surface for OS-level vulnerabilities is massive, and state-sponsored mass surveillance just gets easier.
Sadly, this is a trade-off we have made as a society for "ergonomics".
This line of argument is bikeshedding at its finest.
If Mossad is out to get you, they are going to get you, no matter what you do. The threat model for 99.999% of the population doesn't include bespoke attacks from three letter agencies.
While others here have touched on the idea that Codex has changed their coding habits, what I find interesting is that Codex has changed how I write code altogether. For example, I had to connect a database to an API a little while ago. Obviously I had the option to use an ORM, as one normally would. But instead, I just wrote out all the SQL commands and wrapper functions in one big file. Since it was all tedious and predictable, Codex helped me write it in just a few minutes, and I didn't need to muck around with a complex ORM. These are the trade-offs I'm personally excited about.
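To make that concrete, here's roughly the shape of the file I mean, sketched in Python with the stdlib sqlite3 module (the table and function names are made up for illustration):

    # One big file of plain SQL plus thin wrapper functions, instead of an ORM.
    # Tedious and predictable to write by hand -- exactly what Codex is good at.
    import sqlite3

    conn = sqlite3.connect("app.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id    INTEGER PRIMARY KEY,
            name  TEXT NOT NULL,
            email TEXT UNIQUE NOT NULL
        )
    """)

    def create_user(name: str, email: str) -> int:
        cur = conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                           (name, email))
        conn.commit()
        return cur.lastrowid

    def get_user_by_email(email: str):
        return conn.execute("SELECT id, name, email FROM users WHERE email = ?",
                            (email,)).fetchone()

    def rename_user(user_id: int, name: str) -> None:
        conn.execute("UPDATE users SET name = ? WHERE id = ?", (name, user_id))
        conn.commit()

    # ...and dozens more wrappers in exactly this shape, which is the point.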
As someone else already mentioned, the scaling laws tell a different story empirically: we haven't hit diminishing returns at all, and there's no end in sight.
But more anecdotally, LeCun's 1989 paper, arguably the first applied neural network paper, has pretty much the same format as the GPT paper: a large neural network trained on a large dataset (all relative to the era). https://karpathy.github.io/2022/03/14/lecun1989/
It really does seem that you need a certain number of FLOPs before certain capabilities can emerge.
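For reference, the compute scaling law in Kaplan et al. (2020) has this power-law form (the exponent is from memory, so treat it as approximate):

    L(C) \approx \left( \frac{C_c}{C} \right)^{\alpha_C}, \qquad \alpha_C \approx 0.05

That is, test loss falls smoothly but slowly as compute C grows, with no knee in the curve so far -- consistent with capabilities continuing to emerge as the FLOPs pile up.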
While progress has been made in computer vision, that progress has been relatively narrow up until now, and I think the activation energy required to produce this level of quality would be more than it's worth. As others have mentioned, new footage comes out all the time.
However, I agree with the sentiment. Someday, we will have a massive foundation model capable of producing any video with a little conditioning on text. But we don't currently have such a model. In some sense, we're still in the era of easily verifiable video, and this era might end someday soon.