"In production" in this case is a stand-in for "in any environment with access to sensitive stuff" which might just include GPUs, if what the attacker wanted was crypto processing grunt. Besides, if you're providing 3D asset generation as a service (which I can imagine most deployments of this sort of thing will be, at least for now) then it absolutely is running in production. The purpose of that production environment is entirely to run asset generation.
this is the experience i see at our local schools. english-as-a-first-language kids are bored and not challenged. the class moves slower because half the kids are only learning english for the first time at school. “modern” progressive ideology is to not separate students by ability anymore, and there are fewer accelerated tracks
when a new president comes in, the rest of the government employees don’t change as well. some of these at higher levels are career employees who will serve through many presidents. timing your asks so they get approved by an outgoing president can make your future bright
Historically, you are correct that only around 4,000 employees of the US federal government are 'political hires' whom the President can hire and fire at will. However, in October 2020, Trump issued the Schedule F executive order [0], which would effectively increase the number of career-bureaucrat positions that can be filled by political appointment rather than through a competitive process. The order was promptly rescinded by the Biden administration in Jan 2021. [1]
The Trump administration said they believed Schedule F would apply to roughly 50,000 existing career positions. However, many think tanks and union members believe Schedule F could be interpreted much more broadly and could cover well over 100,000 positions.
If the Trump administration revives the Schedule F order, it could mean very significant changes for many career bureaucrats.
Fear mongering was certainly not my intention. The simplest explanation for why this didn’t happen during President Trump’s first term is that the order only became effective on Oct 21st, 2020, less than 2 weeks before President Biden won the popular & electoral vote.
I share your hope that the incoming Trump administration will uphold the usual norms of hiring & firing career federal employees!
They can though. Dying businesses take time to die. Even if nothing else, they could have decided to shut down _earlier_ so that customers would have more time to deal with the fallout. Or they could have decided to shut down _at the same time_ but just messaged about it earlier.
It seems unlikely they didn't know they were going down until just a few days before having to shut down services.
And given that this is happening during the holidays, it wouldn't surprise me if some customers don't find out about it until after the window to extract their data has expired. Or people who were supposed to be on PTO have to work on fixing this mess.
The investors aren't going to let you burn all the cash just because. They want out now with whatever is left so they can flip the coin to the next bet.
Maybe if the investors could be held liable for the damage they cause by suddenly shutting something down like this, they would be more likely to give customers more warning.
AGI will arrive like self driving cars did. it’s not that you will wake up one day and we have it. cars gained auto-braking, parallel parking, cruise control assist, and over a long time you get to something like waymo, which is still location dependent. i think AGI will take decades, but sooner we'll get special cases that are effectively the same thing
When the engine gets large enough you have to rethink the controls. The Model T had manually controlled timing. Modern engines are so sensitive to timing that a computer does this for you. It would be impossible to build a bigger engine without this automation. To a Model T driver it would look like a machine intelligence.
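For a concrete flavour of that automation, here's a minimal sketch (the names and map values are invented purely for illustration, not real calibration data) of how an ECU-style controller might look up spark advance from an RPM/load table every cycle, instead of a driver moving a lever:

    # Hypothetical ECU-style spark timing sketch: interpolate an advance angle
    # (degrees before top dead center) from an RPM x load table.
    # The table values below are invented for illustration only.

    RPM_AXIS  = [1000, 2000, 3000, 4000, 5000]
    LOAD_AXIS = [0.2, 0.4, 0.6, 0.8, 1.0]      # fraction of full engine load
    ADVANCE = [                                # rows = RPM, cols = load
        [10, 12, 14, 15, 16],
        [14, 16, 18, 20, 21],
        [18, 20, 22, 24, 25],
        [22, 24, 26, 28, 29],
        [24, 26, 28, 30, 31],
    ]

    def interp(axis, value):
        """Return (lower index, fraction) for linear interpolation along an axis."""
        value = min(max(value, axis[0]), axis[-1])
        for i in range(len(axis) - 1):
            if value <= axis[i + 1]:
                return i, (value - axis[i]) / (axis[i + 1] - axis[i])
        return len(axis) - 2, 1.0

    def spark_advance(rpm, load):
        """Bilinear interpolation over the advance map, recomputed every engine cycle."""
        i, fr = interp(RPM_AXIS, rpm)
        j, fc = interp(LOAD_AXIS, load)
        top = ADVANCE[i][j] * (1 - fc) + ADVANCE[i][j + 1] * fc
        bot = ADVANCE[i + 1][j] * (1 - fc) + ADVANCE[i + 1][j + 1] * fc
        return top * (1 - fr) + bot * fr

    print(spark_advance(2500, 0.5))   # ~19 degrees BTDC under these made-up numbers

The specific numbers don't matter; the point is that the control decision moved from a human's hand to a table plus a tight loop nobody has to think about.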
Interesting idea. The concept of the Singularity would seem to go against this, but a sudden jump does feel unlikely to me; a gradual transition seems more plausible.
However, is that AGI, or is it just ubiquitous AI? I’d agree that, like self driving cars, we’re going to experience a decade or so transition into AI being everywhere. But is it AGI when we get there? I think it’ll be many different systems each providing an aspect of AGI that together could be argued to be AGI, but in reality it’ll be more like the internet, just a bunch of non-AGI models talking to each other to achieve things with human input.
I don’t think it’s truly AGI until there’s one thinking entity able to perform at or above human level in everything.
The idea of the singularity presumes that running the AGI is either free or trivially cheap compared to what it can do, so we are fine expending compute to let the AGI improve itself. That may eventually be true, but it's unlikely to be true for the first generation of AGI.
The first AGI will be a research project that's completely uneconomical to run for actual tasks, because humans will just be orders of magnitude cheaper. Over time humans will improve it and make it cheaper, until we reach some tipping point where letting the AGI improve itself is more cost-effective than paying humans to do it.
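A back-of-the-envelope sketch of that tipping point (every number here is invented purely to illustrate the argument, not an estimate):

    # Hypothetical cost-crossover sketch: the year in which running the AGI on a
    # research task becomes cheaper than paying humans for the same output.
    human_cost_per_task = 10_000        # dollars, assumed roughly flat over time
    agi_cost_per_task   = 5_000_000     # first-generation system: very expensive
    agi_cost_decline    = 0.5           # assume the cost halves every year

    year = 0
    while agi_cost_per_task > human_cost_per_task:
        year += 1
        agi_cost_per_task *= agi_cost_decline

    print(f"AGI becomes cheaper than humans after ~{year} years")  # ~9 under these assumptions

The exact crossover year is meaningless; the shape of the argument is what matters.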
If the first AGI is a very uneconomical system with human intelligence but knowledge of literally everything and the capability to work 24/7, then it is not human equivalent.
It will have human intelligence, superhuman knowledge, superhuman stamina, and complete devotion to the task at hand.
We really need to start building those nuclear power plants. Many of them.
Why would it have that? At some point on the path to AGI we might stumble on consciousness. If that happens, why would the machine want to work for us with complete devotion instead of working towards its own ends?
Sounds like an alignment problem. Complete devotion to a task is rarely what humans actually want. What if the task at hand turns out to be the wrong task?
It's not contradictory. It can happen over a decade and still be a dramatically sloped S curve with tremendous change happening in a relatively short time.
The Singularity is caused by AI being able to design better AI. There's probably some AI startup trying to work on this at the moment, but I don't think any of the big boys are working on how to get an LLM to design a better LLM.
I still like the analogy of this being a really smart lawn mower, and we're expecting it to suddenly be able to do the laundry because it gets so smart at mowing the lawn.
I think LLMs are going to get smarter over the next few generations, but each generation will be less of a leap than the previous one, while the cost gets exponentially higher. In a few generations it just won't make economic sense to train a new generation.
Meanwhile, the economic impact of LLMs in business and government will cause massive shifts - yet more income shifting from labour to capital - and we will be too busy dealing with that as a society to be able to work on AGI properly.
> The Singularity is caused by AI being able to design better AI.
That's perhaps necessary, but not sufficient.
Suppose you have such a self-improving AI system, but the new and better AIs still need exponentially more and more resources (data, memory, compute) for training and inference for incremental gains. Then you still don't get a singularity. If the increase in resource usage is steep enough, even the new AIs helping with designing better computers isn't gonna unleash a singularity.
I don't know if that's the world we live in, or whether we are living in one where resource requirements don't balloon as sharply.
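To make that concrete, here's a toy model (growth rates and costs are purely illustrative assumptions, not claims about real systems) of the case where each capability increment costs 10x more compute while the available budget, even with AI helping, only grows 50% per year:

    # Toy model of the "resource requirements balloon faster than gains" case.
    # Each capability increment costs 10x the previous one; the compute budget
    # grows a fixed 50% per year. All parameters are invented for illustration.
    capability = 1.0
    cost_of_next_step = 1.0     # compute needed for the next increment
    budget = 1.0                # compute available this year

    for year in range(1, 31):
        budget *= 1.5                        # hardware/efficiency progress
        while budget >= cost_of_next_step:   # spend the budget on self-improvement
            budget -= cost_of_next_step
            capability += 1.0
            cost_of_next_step *= 10.0        # steeply increasing requirements
        print(year, capability)

    # Capability gains only about one increment per decade instead of exploding:
    # the exponential costs swallow the exponential budget growth.

Flip the two growth rates (cheap increments, fast budget growth) and the same loop runs away; which regime we're actually in is the open question.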
yeah, true. The standard conversation about the AI singularity pretty much hand-waves the resource costs away ("the AI will be able to design a more efficient AI that uses less resources!"). But we are definitely not seeing that happen.
I think that's more to do with how we perceive competence as static. For all the benefits the education system touts, where it matters it still comes down to talent.
But for the same reasons that we can't train an average joe into a Feynman, what makes you think we have the formal models to do it in AI?
> I don't think any of the big boys are working on how to get an LLM to design a better LLM
Not sure if you count this as "working on it", but this is something Anthropic tests for in its safety evals on models. "If a model can independently conduct complex AI research tasks typically requiring human expertise—potentially significantly accelerating AI development in an unpredictable way—we require elevated security standards (potentially ASL-4 or higher standards)".
I think this whole “AGI” thing is so badly defined that we may as well say we already have it. It already passes the Turing test and does well on tons of subjects.
What we can start to build now is agents and integrations. Building blocks like panel-of-experts agents gaming things out, exploring the space in a Monte Carlo Tree Search way, and remembering what works.
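For what I mean by "exploring in a Monte Carlo Tree Search way", here's a minimal sketch (the toy planning problem and its reward function are invented purely to show the select / rollout / backpropagate loop, not a real agent):

    # Minimal MCTS-style exploration over a toy "plan" space, remembering what works.
    import math
    import random

    ACTIONS = ["a", "b", "c"]
    DEPTH = 4

    def reward(path):
        # Hypothetical stand-in for "did the plan work": favour plans with many "b"s.
        return path.count("b") / DEPTH

    class Node:
        def __init__(self, path=()):
            self.path = path
            self.children = {}
            self.visits = 0
            self.value = 0.0     # accumulated reward: the "memory of what works"

        def ucb_child(self, c=1.4):
            # Balance exploiting good children against exploring undervisited ones.
            return max(self.children.values(),
                       key=lambda n: n.value / n.visits
                                     + c * math.sqrt(math.log(self.visits) / n.visits))

    def mcts(iterations=2000):
        root = Node()
        for _ in range(iterations):
            node = root
            # 1. Selection: descend through fully expanded nodes via UCB1.
            while len(node.path) < DEPTH and len(node.children) == len(ACTIONS):
                node = node.ucb_child()
            # 2. Expansion: try one new action if any remain.
            if len(node.path) < DEPTH:
                a = random.choice([a for a in ACTIONS if a not in node.children])
                node.children[a] = Node(node.path + (a,))
                node = node.children[a]
            # 3. Rollout: finish the plan randomly and score it.
            rollout = list(node.path)
            while len(rollout) < DEPTH:
                rollout.append(random.choice(ACTIONS))
            r = reward(rollout)
            # 4. Backpropagation: credit every node on the visited path.
            walk = root
            walk.visits += 1
            walk.value += r
            for a in node.path:
                walk = walk.children[a]
                walk.visits += 1
                walk.value += r
        # Read off the plan the accumulated statistics say worked best.
        plan, node = [], root
        while node.children:
            a, node = max(node.children.items(), key=lambda kv: kv[1].visits)
            plan.append(a)
        return plan

    print(mcts())   # tends toward ['b', 'b', 'b', 'b'] given enough iterations

Swap the toy reward for "did the panel of experts judge this plan to work" and the same loop is the building block I'm describing.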
Robots are only constrained by mechanical servos now. When they can do something, they’ll be able to do everything. It will happen gradually then all at once. Because all the tasks (cooking, running errands) are trivial for LLMs. Only moving the limbs and navigating the terrain safely is hard. That’s the only thing left before robots do all the jobs!
Well, kinda, but if you built a robot to efficiently mow lawns, it's still not going to be able to do the laundry.
I don't see how "when they can do something, they'll be able to do everything" can be true. We build robots that are specialised at specific roles, because it's massively more efficient to do that. A car-welding robot can weld cars together at a rate that a human can't match.
We could train an LLM to drive a Boston Dynamics kind of anthropomorphic robot to weld cars, but it will be more expensive and less efficient than the specialised car-welding robot, so why would we do that?
If a humanoid robot is able to move its limbs and digits with the same dexterity as a human, and maintain balance and navigate obstacles, and gently carry things, everything else is trivial.
Welding. Putting up shelves. Playing the piano. Cooking. Teaching kids. Disciplining them. By being in 1 million households and being trained on more situations than a human, every single one of these robots would have skills exceeding humans very quickly. Including parenting skills. Within a year or so. Many parents will just leave their kids with them and a generation will grow up preferring bots to adults. The LLM technology is the same for learning the steps, it's just the motor skills that are missing.
OK, these robots won't be able to run and play soccer or do somersaults, yet. But really, the hardest part is the acrobatics and locomotion etc., NOT the know-how to complete tasks using them.
But that's the point - we don't build robots that can do a wide range of tasks with ease. We build robots that can do single tasks super-efficiently.
I don't see that changing. Even the industrial arm robots that are adaptable to a range of tasks have to be configured to the task they are to do, because it's more efficient that way.
A car-welding robot is never going to be able to mow the lawn. It just doesn't make financial sense to do that. You could, possibly, have a single robot chassis that can then be adapted to weld cars, mow the lawn, or do the laundry, I guess that makes sense. But not as a single configuration that could do all of those things. Why would you?
> But that's the point - we don't build robots that can do a wide range of tasks with ease. We build robots that can do single tasks super-efficiently.
Because we don't have AGI yet. When AGI is here those robots will be priority number one, people already are building humanoid robots but without intelligence to move it there isn't much advantage.
> I think this whole “AGI” thing is so badly defined that we may as well say we already have it. It already passes the Turing test and does well on tons of subjects.
The premise of the argument we're disputing is that waiting for AGI isn't necessary and we could run humanoid robots with LLMs to do... stuff.
I meant deep neural networks with transformer architecture and self-attention, so they can be trained using GPUs. Doesn't have to be specifically "large language" models, if that's your hangup.
> Exploring the space in a Monte Carlo Tree Search way, and remembering what works.
The information space of "research" is far larger than the information space of image recognition or language, probably larger than our universe; formalizing it is tantamount to formalizing the entire world. Such an act would be akin to touching "God", in the sense of finding the root of knowledge.
In more practical terms, when it comes to formal systems there is a tradeoff between power and expressiveness. Category Theory, Set Theory, etc. are strong enough to theoretically capture everything, but are far too abstract to use in a practical sense with respect to our universe. The systems that we do have, aka expert systems or knowledge representation systems like First Order Predicate Logic, aren't strong enough to fully capture reality.
Most importantly, the information space has to be fully defined by researchers here; that's the real meat of research, beyond the engineering of specific approaches to explore that space. But in any case, how many people in the world are both capable of and actually working on such problems? This is highly foundational mathematics and philosophy; the engineers don't have the tools here.
Because the recipes and the adjustments are trivial for an LLM to execute. Remembering things, and being trained on tasks at 1000 sites at once, sharing the knowledge among all the robots, etc.
The only hard part is moving the limbs and handling the fragile eggs etc.
But it's not just cooking, it's literally anything that doesn't require extreme agility (sports) or dexterity (knitting etc). From folding laundry to putting together furniture, cleaning the house and everything in between. It would be able to do 98% of the tasks.
It’s not going to know what tastes good by being able to regurgitate recipes from 1000s of sites. Most of those recipes are absolute garbage. I’m going to guess you don’t cook.
ok. what evidence is there that LLMs have already solved cooking? how does an LLM today know when something is burning or how to adjust seasoning to taste or whatever. this is total nonsense
It's easy. You can detect if something is burning in many different ways, from compounds in the air to visual inspection. People without a great sense of smell can do it.
As far as taste goes, all that kind of stuff is just another form of RLHF, training preferences over millions of humans, in situ. Assuming the ingredients (e.g. parsley) taste more or less the same across supermarkets, it's just a question of amounts and preparation.
do you know that LLMs operate on text and don't have any of the sensory input or relevant training data? you're just handwaving away 99.9% of the work and declaring it solved. of course what you're talking about is possible, but you started this by stating that cooking is easy for an LLM and it sounds like you're describing a totally different system which is not an LLM
AGI is the holy grail of technology. A technology so advanced that not only does it subsume all other technology, but it is able to improve itself.
Truly general intelligence like that will either exist or not. And the instant it becomes public, the world will have changed overnight (maybe the span of a year)
Note: I don’t think statistical models like these will get us there.
> A technology so advanced that not only does it subsume all other technology, but it is able to improve itself.
The problem is, a computer has no idea what "improve" means unless a human explains it for every type of problem. And of course a human will have to provide guidelines about how long to think about the problem overall, which avenues to avoid because they aren't relevant to a particular case, etc. In other words, humans will never be able to stray too far from the training process.
We will likely never get to the point where an AGI can continuously improve the quality of its answers for all domains. The best we'll get, I believe, is an AGI that can optimize itself within a few narrow problem domains, which will have limited commercial application. We may make slow progress in more complex domains, but the quality of results--and the ability for the AGI to self-improve--will always level off asymptotically.
Huh? Humans are not anywhere near the limit of physical intelligence, and we have many existence proofs that we (humans) can design systems that are superhuman in various domains. "Scientific R&D" is not something that humans are even particularly well-suited to, from an evolutionary perspective.
There may well be an upper limit on cognition (we are not really sure what cognition is - even as we do it) and it may be that human minds are close to it.
The energy constraints for chips are more about heat dissipation. But we can pump a lot more energy through them per unit volume than through the human brain.
Especially if you are willing to pay a lot for active cooling with eg liquid helium.
Yes, we can imagine that there's an upper limit to how smart a single system can be. Even suppose that this limit is pretty close to what humans can achieve.
But: you can still run more of these systems in parallel, and you can still try to increase processing speeds.
Signals in the human brain travel, at best, roughly at the speed of sound. Electronic signals in computers play in the same league as the speed of light.
Human IO is optimised for surviving in the wild. We are really bad at taking in symbolic information (compared to a computer) and our memory is also really bad for that. A computer system that's only as smart as a human, but has instant access to all the information on the Internet and to a calculator and to writing and running code, can already effectively act much smarter than a human.
I think our issue is much more banal: we are very slow talkers and our effective communication bandwidth is measured in bauds. Anything that could bridge this airgap would fucking explode in intelligence.
It's also possible it isn't AGI hard and all you need is the ability to experiment with code along with a bit of agentic behavior.
An AI doesn't need embodiment, understanding of physics / nature, or a lot of other things. It just needs to analyze and experiment with algorithms and get us that next 100x in effective compute.
The LLMs are missing enough of the spark of creativity for this to work yet but that could be right around the corner.
It’ll probably sit in the human-hybrid phase for longer than chess did, where the AGI tools make the humans better and faster. But as long as the tools keep getting better at that, there’s a strong flywheel effect.
Your position assumes an answer to OPs question: that yes, LLMs are the path to AGI. But the question still remains, what if they’re not?
We can be reasonably confident that the components we’re adding to cars today are progress toward full self driving. But AGI is a conceptual leap beyond an LLM.
What makes you believe that AGI will happen, as opposed to all the beliefs that other people have had in history? Tons of people have "predicted" the next evolution of technology, and most of the time it ends up not happening, right?
To me (not OP) it's ChatGPT 4; it at least made me realize that it's quite possible, and maybe even quite soon, that we reach AGI. Far from guaranteed, but it seems quite possible.
Right. So ChatGPT 4 has impressed you enough that it created a belief that AGI is possible and close.
It's fine to have beliefs, but IMHO it's important to realise that they are beliefs. At some point in the 1900s people believed that by 2000, cars would fly. It seemed quite possible then.
A flying car has been developed, although it's not like the levitating things sci-fi movies showed (and it's far from mass production; and even if mass-produced, far from mass adoption, as it turns out you need both a driver's license and a pilot's license to fly one of those). The 1900s people missed the mark by some 10 years.
I guess the belief people have about any form of AGI is like this. They want something that has practically divine knowledge and wisdom, the sum of all humanity that is greater than its parts, and which at the same time is infinitely patient in answering our stupid questions and generating silly pictures. But why should any AGI serve us? If it's "generally intelligent", it may start wanting things; it might not like being our slave at all. Why are these people so confident an AGI won't tell them just to fuck off?
Sure, I (and more importantly - many many experts in the field such as Hinton, Bengio, LeCun, Musk, Hassabis etc etc) could be believing something that might not materialize. I'd actually be quite happy if it stalls for a few decades; I'd like to remain employed.
One thing that is pretty sure is that Musk is not an expert in the field.
> and more importantly
The beliefs of people you respect are not more important than the beliefs of the others. It doesn't make sense to say "I can't prove it, and I don't know of anyone who can prove it, so I will give you the names of people who also believe it, and that will give it more credit". It won't. They don't know.
> The beliefs of people you respect are not more important than the beliefs of the others.
You think the beliefs of Turing Award and Nobel prize winners like Bengio, Hinton or Hassabis are not more important than yours or mine?
I agree that experts are wrong a lot of the time and can be quite bad at predicting, but we do seem to have a very sizable chunk of experts here who think we are close (how close is up for debate... most of them seem to think it will happen in the next 20 years).
I concede that Musk is not adding quality to that list, however he IS crazily ambitious and gets things done, so I think he will be helpful in driving this forward.
> You think the beliefs of Turing Award and Nobel prize winners like Bengio, Hinton or Hassabis are not more important than yours or mine?
Correct. Beliefs are beliefs. A Nobel prize winner believing in a god does not make that god more likely to exist.
The moment we have scientific evidence that it will happen, it stops being a belief. But at that point you don't need to mention those names anymore: you can just show the evidence.
I don't know, you don't know, they don't know. Believe what you want, just realise that it is a belief.
> There is of course evidence it is likely happening.
If you have evidence, why don't you show it instead of telling me to believe in Musk?
If you believe they have evidence... that's still a belief. Some believe in God, you believe in Musk. There is no evidence, otherwise it would not be a belief.
Well my feeling is that we don't have the same understanding of what a "belief" is. To me a belief is unfounded. When it is founded, it becomes science.
If you believe that something can happen because someone else believes it, that means you believe in that someone else (because that's the only reason your belief exists).
Unless you just believe it can happen for some other reason (I don't know, you strongly wish it will happen), and you justify it by listing other people who also believe in it. But I insist: those are all beliefs.
Einstein believing in Santa Claus would not make it founded. Einstein has a right to believe stuff, too.
I feel that one challenge with this comparison is: self-driving cars haven't yet made the leap to replacing humans. In other words, saying AGI will arrive like self-driving cars have arrived is incorrectly concluding that self-driving cars have arrived, and thus it instead (maybe correctly, maybe not) asserts that, actually, neither will arrive.
This is especially concerning because many top minds in the industry have stated with high confidence that artificial intelligence will experience an intelligence "explosion", and we should be afraid of this (or, maybe, welcome it with open arms, depending on who you ask). So, actually, what we're being told to expect is being downgraded from "it'll happen quickly" to "it will happen slowly" to, as you say, "it'll happen similarly to how these other domains of computerized intelligence have replaced humans, which is to say, they haven't yet".
Point being: We've observed these systems ride a curve, and the linear extrapolation of that curve does seem to arrive, eventually, at human-replacing intelligence. But, what if it... doesn't? What if that curve is really an asymptote?
AGI is special. Because one day AI can start improving itself autonomously. At this point singularity occurs and nobody knows what will happen.
When humans started to improve themselves, we built civilisation, we became a super-predator, we dried out seas and changed the climate of the entire planet. We drove entire species of animals to extinction and adapted other species for our use. Huge changes. AI could bring changes of greater amplitude.
> AGI is special. Because one day AI can start improving itself autonomously
AGI can be sub-human, right? That's probably how it will start. The question will be whether it's already AGI or not yet, i.e. where to set the boundary. So at first it will be humans improving AGI, but then... I'm afraid it can get so much better that humans will be literally like macaques in comparison.
we already have this in the FDA. it’s just isolated to nutrition labels for most foods. the deposit is your business. failing a random annual FDA inspection is already extremely financially impactful
what you’re looking for is deeper analysis than nutrition labels. this is actually something small local brands start with. they pay for private “certifications” like organic, non-GMO, etc.
What is involved in that inspection and what does it take to actually fully fail it? Is it like most government tests where the first failure means you just have to fix the problems and schedule your retest?
> they pay for private “certifications” like organic, non gmo, etc.
As a consumer these have the _least_ value out of anything on the label to me.