How would this compare to something like the Godox LA 200D, which claims an output of 100,000 lumens? I use two of these lights on tall stands pointed at my ceiling, which seems to work very well
They claim 101,000 lux, not lumens. Lux is light per square meter (roughly), while lumens measure total light output. The closer you get to the source, the higher the lux reading, so lux figures aren't directly comparable. Based on their 230W power draw and typical COB efficiency, I'd guess it's 20,000-35,000 lumens.
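For what it's worth, here's the back-of-the-envelope math behind that guess as a rough sketch; the 90-150 lm/W efficacy range is my assumption for typical COB LEDs, not a Godox spec:

    # Rough sanity check of the lumen estimate above.
    # Assumption: typical COB LED luminous efficacy of roughly 90-150 lm/W.
    power_draw_w = 230  # claimed power draw

    for efficacy_lm_per_w in (90, 120, 150):
        lumens = power_draw_w * efficacy_lm_per_w
        print(f"{efficacy_lm_per_w} lm/W -> ~{lumens:,} lumens")

    # Prints ~20,700 / 27,600 / 34,500 lumens, i.e. the 20,000-35,000 range above.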
TL;DR: we're brighter, fanless, have adjustable color temperature, are smart-home compatible, and more aesthetically pleasing.
* Signal that Intel is in it for the long term; specifically, kill the dividend.
* Be more customer-centric and rebuild customer trust; specifically, publicly acknowledge the 10nm slip.
Though it seems customer trust in Intel has eroded to the point where only actions will change customers' perception of Intel — I'm not sure what Intel could have communicated to Tofino/Oxide that they hadn't already.
Does anyone have a set of queries that google returns poor results for?
I spent a few minutes looking at my search history (filtering Chrome history by "google search"), and the vast majority of my queries are quite simple (e.g. people's names), which Google does well on (in fact I sometimes find Google search for people better than LinkedIn).
I also tried a few complex queries and compared them to Kagi:
"How much bitcoin does microstrategy own" -> Google returns the correct snippet from here[0] while Kagi only linked to articles about how much it acquired in the last few days.
"how to pronounce stratchery" -> Google returns the correct snippet from the Stratechery website[1] while Kagi's first result is a spam entry[2] with the wrong pronunciation (the second result is a tweet with the correct pronunciation).
I'd be curious to see more comparisons!
Edit: I just remembered Dan Luu's post (https://danluu.com/seo-spam/) but after looking through my search history, the queries he uses are not at all representative of my day to day searches.
You’ve subconsciously altered your search behavior to avoid categories that Google is horrible at.
Any product reviews will be SEO garbage (blogspam top-10 lists). Anything travel will be a page full of ads before organic results, if any. You just know not to even bother, so you're left with the queries that still work.
I was wondering the same thing. I see all these complaints that Google is awful and broken but it generally seems to work fine for me, apart from stuff that all the search engines struggle with.
Any examples of something that's hard to find with Google but easy with something else?
If Google is so bad, why don't people, myself included, click on one of the other ones?
I sympathise with Giant Freakin Robot not getting clicks - I'd never heard of them. But that's different from Google being bad from a user point of view.
I just tried clicking on them all - they all work. Baidu is kind of funny as it's all in Chinese; searching for The Sound of Music came back with a Chinese title which Google translated to "The Nun and the Seven Naughty Children!"
> If Google is so bad why don't people, myself included click on one of the other ones?
I believe it's because stickiness is a strong force. Google gets worse, but you know it and you know how it fails and what to do to make it better (i.e. how to word your query differently, what parameters to add etc. when you get terrible results).
You don't know this for any other search engine, so you need to make a) an active decision to try another search engine, and b) learn how to use it so it delivers what you want.
And I assume that, for most people, Google isn't bad enough to make that kind of investment.
Earlier today I tried to find out what's currently happening in Sudan - whether there's any shift in the civil war, etc. Google was pretty useless: news articles from 6 weeks ago. The best results were probably Wikipedia, but they ranked the Timeline higher than the actual article. I tried with "sudan civil war", "sudan civil war maps", etc.
I tried just now and Yandex actually provided much better results for what I wanted. They had sudan.liveuamap.com, they showed polgeonow.com, which looks very interesting (can't say if accurate or trustworthy, but definitely topically relevant), sudanwarmonitor.org, etc. Compared to Google, they show far fewer "top 1000 media sites" and more of what look to be topic experts.
Bing found sudanshahid.org (again, don't know how accurate), multiple arcgis.com-hosted articles, and also included sudanwarmonitor.org. I'd say more Big News Corps than Yandex, but less than Google.
On this search, I'd say Yandex was best, Bing second, Google last. Yet still I use Google as my daily driver, I think mostly because I don't know how far I can trust Yandex, and I have a general bias against MS products. It's certainly not because they don't deliver better results -- but I know Google and it's "the devil you know", until Google becomes too hard to extract results from, at which point I'll be forced to switch.
Yeah, 2023's profit was $235k,[0] which is good, but it doesn't leave a ton of room to hire some sort of executive to replace me. I don't even know if I'd attract someone qualified if I gave them the entire profit, and at that point it makes more financial sense to sell than to own a company that's just breaking even with someone else in charge.
A 15 year old can reason about how to move their body through a complex obstacle course. They could reason about the nonverbal social cues in a complex interpersonal situation between multiple people, estimate the mood of each person even if there are very few words being exchanged, and determine how different possible actions would affect the situation. They could learn with brief instruction how to control their muscles to climb up a rope. They could learn how to learn so that they become better at a task of their choosing. They can receive new information that permanently changes their understanding of the world. They can learn new tasks for which no massive data set of training data exists. They can perform hierarchical reasoning, like “if I want to fly from San Francisco to New York I first need to buy a plane ticket, then pack my bags, tell my family where I will be going, make sure my phone is charged, walk to the train station, etc etc.”
Also if you ask them a question they can provide you one answer with very little thinking, and then if that’s not good enough they can devote more time to thinking about the answer before they answer again. They can devote arbitrary levels of thinking to any problem depending on what is needed. They can continuously take in new data and continually update their world view throughout their entire existence based on this new information.
There’s actually a huge list of things current autoregressive approaches to AI cannot do, but they can be hard to describe and people don’t like to talk about them so many people actually don’t understand how limited the current systems are.
Here's a great video where Yann LeCun talks about the limits of autoregressive approaches to AI with many examples:
That’s fair. In the interview LeCun uses the example of flying from San Francisco to New York and he asserts that these systems are not good at hierarchical reasoning. I’m no expert in this field so I take him at his word but maybe it warrants further explanation.
He also says that such a system wouldn’t be familiar with how to actually move through the world because we don’t have good datasets for how to do so. The rest of what I said still stands. These systems aren’t good at things for which we don’t have massive datasets, and they’re not able to devote different amounts of thinking time to different problems.
What isn’t abstract about looking at an obstacle course and then imagining how you will move your body? Or looking at someone’s face and imagining how they feel. Isn’t that abstract?
These "it's like a young/stupid person" arguments are wretched. LLMs are interesting but it should be obvious their development is not comparable to the development of human beings.
It's obvious to everyone who isn't willfully blind that LLMs aren't truly intelligent, and all the mental gymnastics that people go through to try to portray LLMs as genuinely intelligent is just so tedious.
More specifically, something like "what's the best brand of phone". The LLM just summarizes common knowledge. But even a child will grasp some of the differences and have opinions drawn from experience.
Note that this isn’t just an anthro-good argument. AI systems could have experiences and be trained on long duration tasks with memory of what worked and why.
Good question, I'm working on exactly this, I suppose you could call it the replacement of RAG.
It's actually not very easy to achieve this. I could give a very long-winded answer (don't tempt me), but suffice it to say it's a resolution problem.
All AI have a fixed resolution at creation. Long-running tasks focus on an ever-narrower space with each step, and the resolution required for an infinite task is infinite.
No 9s of error will ever fix this.
Funny enough, small animals do this with ease, so I strongly disagree with the idea that our AIs outcompete even small mammals in every way.
I agree. Whenever people complain about LLM hallucinations, they act like they've never seen one in humans.
Not only do humans hallucinate all the time, they also have persistent hallucinations, as is evident from the presence of opposing beliefs in various slices of society.
Current LLMs have a number of limitations that human reasoning doesn't. Whether these are intrinsic to the technology or can be overcome with larger and better datasets is an open question.
It's extremely ironic you picked a megawatt-hour, because that is approximately the amount of energy a human needs to get good at anything according to the popular proverb (10,000 hours at roughly 100 W is about 1 MWh).
But don't worry just yet, GPT-4o could not detect the irony on its own either.
I wouldn't say humans are so different. You could argue we've been trained on about one quadrillion bytes of visual data by the time we're 4 years old: https://x.com/ylecun/status/1750614681209983231
As a counterpoint, I would say: a child gets pseudo-random training from parents and its environment. Not sure what price tag to put on that, but in comparison, how many billions have LLMs cost, and to reach what level of competency exactly?
Because that tells us how you approach novel problems. If you need tons of data to solve a novel problem that makes you bad at solving novel problems, while humans can get up to speed in a new domain with much less training and thus solve problems the LLM can't.
Thus AGI needs to be able to learn something new with similar amounts of data as a human, or else it isn't an AGI, as it won't be even close to as good as a human at novel tasks.
Part of me wonders if these people are intentionally framing the debate around ethics and potential risks as longer term extinction level problems to distract from the nearer term damage caused by them accelerating the economic inequality of the AI have-nots while they make themselves even richer.
I believe you may be alluding to longtermism[0]. At face value, longtermism seems like a good thing, but I've heard many criticisms against it - mainly levied against the billionaire class.
And the criticisms mostly center on what you're saying here - how many billionaires are focusing on fixing problems that are very far off in the future, while ignoring how their actions affect those of the very near future.
This is really less of a criticism of longtermism, and more of a criticism of how billionaires utilize longtermism.
Is it important that we find another planet to live on? Sure, but many will argue that we should be taking steps now to save our current planet.
The more I look at AI, the more I get the feeling that this is true. Spinning an intriguing sci-fi tale of apocalypse and extinction is relatively easy and serves to obfuscate any nearer-term concerns about AI behind a hypothetical that sucks the air out of the room.
That said, I don’t think that it’s necessarily disingenuous so much as it is myopic - to them of course AI is exciting, world-changing, and profitable, but they (willfully or not) fail to see the downsides or upsides for anyone else but them. Perhaps in the minds of the ultra-rich AI proponents, solutions to nearer-term effects of their tech are someone else’s problem, but the “existential risks” are “everyone’s” problem.
The short-term effect is a harbinger of the long-term risk, since capitalism doesn’t inherently care for people who don’t provide economic value. Once superintelligent AI arises, none of us will have value within this system. Even the largest current capital holders will have a hard time holding on to it with an enormous intelligence disadvantage. The logical endpoint is the subjugation or elimination of our species, unless we find a new economic system with human value at its core.
There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.
The other assumption is that wealth and power are distributed according to intelligence. This is obviously false, wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.
> There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.
This is a perfectly reasonable response if nobody is trying to build it.
Given people are trying to build it, what's the expected value from ignoring the problem? E($Damage_i) = P(BadOutcome_i) * $Damage_i.
$Damage can be huge (there are many possible bad outcomes of varying severity and probability, hence the subscript), which means that at the very least we should try to get a good estimate for P(…) so we know which problems are most important. In addition to it being bad to ignore real problems, it is also bad to do a Pascal's Mugging on ourselves just because we accidentally slipped a few decimal points in our initial best-guess, especially as we have finite capacity ourselves to solve problems.
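To make that framing concrete, here's a toy sketch; every probability and damage figure in it is a made-up placeholder, not an estimate of anything:

    # Toy illustration of E(Damage_i) = P(BadOutcome_i) * Damage_i.
    # All probabilities and damages below are made-up placeholders.
    scenarios = {
        "mundane harm":      (0.30, 1e9),    # fairly likely, modest damage
        "catastrophic harm": (0.001, 1e14),  # unlikely, enormous damage
    }

    for name, (p, damage) in scenarios.items():
        print(f"{name}: expected damage ~ {p * damage:.3g}")

    # Slip the catastrophic probability from 1e-3 to 1e-5 and its expected
    # damage drops from 1e11 to 1e9 -- which is why getting P(...) roughly
    # right matters before acting on it (the Pascal's Mugging point above).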
Finally, let's assume you're right, that we're centuries off at least, and that all the superintelligent narrow AI we've already got some examples of involves things that can't be replicated in any areas that pose a threat. How long would it take to solve alignment? Is that also centuries off? We've been trying to align each other since laws were written like 𒌷𒅗𒄀𒈾 at least, and the only reason I'm not giving an even older example is that this is the oldest known written form to have survived, not that we weren't doing it before then.
> The other assumption is that wealth and power are distributed according to intelligence. This is obviously false, wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.
Nepotism helps, but… huh, TIL that nobody knows who was the grandfather of one of the world's most famous dictators.
Cronyism is a viable alternative for a lot of power-seekers.
So I propose the Musk supremacy criterion to be the following.
Suppose that a wealthy and powerful human (such as Elon Musk) were to suddenly obtain the exact same sinister goals as the hypothetical superintelligent AI in question. Suppose further that this human was able to convince/coerce/bribe another N (say 1000) humans to follow his bidding.
A BadOutcome is said to be MuskSupreme if it could be accomplished by the superintelligent AI, but not by the suddenly-evil Musk and his accomplices.
Obviously[citation needed] it is only the MuskSupreme BadOutcomes we care about. Do there exist any?
For example 1000 people — but only if you get to choose who — is sufficient to take absolute control of both the US congress and the Russian State Duma (or a supermajority of those two plus the Russian Federation Council), which gives them the freedom to pass arbitrary constitutional amendments… so your scenario includes "gets crowned King of the USA and Russia, 90% of the global nuclear arsenal is now their personal property" as something we don't care about.
> As long as AIs don't play golf and don't have fathers, we are quite safe.
Until it becomes "who you exchange bytes most efficiently with", and all humans are at a disadvantage against a swarm of even below-average-intelligence AGI agents.
Because, as unlikely as it is, if we're discussing risk scenarios for AI getting out of hand, then a monolithic superintelligence is just one of the possibilities. What about a swarm of dumb AIs that are nonetheless capable of reasoning and decision-making, and that become a threat?
That's pretty much what we did. There's no super intelligent monkey in charge. As much as some have tried to pretend, material or otherwise. There's just billions of average intelligence monkeys and we overran all Earth's ecosystems in a matter of centuries. Which is neither trivial nor fully explained yet.
The difference is that we have 100% complete control of these AIs. We can just go into the power grid substation next to the data center and throw the big breaker, and the AI ceases to exist.
When humans developed, we did not displace an external entity that had created us and that had complete power to kill us all in an instant.
Look at the measures that were implemented during covid. Many of them were a lot more extreme than shutting down datacentres, yet they were aimed at mitigating a risk far less than "existential".
That data is in fact orthogonal to my point, for two reasons:
1. When we are talking about wealth and power that actually can influence the quality of the lives of many other people, we are talking about way less than 0.01% of the population. Those people aren't covered in this survey, and even if they were, it would be impossible to identify them on an axis spanning 0-100%.
2. Your linked article talks about income. People with significant wealth and power frequently have ordinary or below-ordinary income, for tax reasons.
Actually, it will have the opposite effect, at least in the short term.
People who own high value assets (everything from land to the AI) will continue to own them and there will be no opportunities for people to earn their way up (because they can be replaced by AI).
"The logical endpoint is the subjugation or elimination of our species"
Possibly, but it would be by our species (those who own and control the AI) rather than by the AI.
I would venture to say that transhumanism will be the path and goal of the capital class, as that will be a tangible advantage potentially within their grasp.
I suppose then that they would become "homo sapiens sapiens sapiens" or some other similarly hubris-laden label, and go on to abandon, dominate, or subjugate the filthy hordes of mere Homo sapiens sapiens.
No, they are not. Pretty much everyone in the x-risk community also recognizes the existence of short-term mundane harms as well. The community has been making these predictions for over a decade, long before it was anything other than crazy talk to most people.
Google has a big investment in reducing AI bias (remember Gemini got slammed for being “too woke”). Altman is a big proponent of UBI. Etc.
This; Gates too. It's becoming an obvious attempt to garner support for the government restricting the use of AI to large players. None of the entrenched interests want any disruption that AI might cause whatsoever.
Replace "AI" in all the doomsaying with "the internet," and it will become clearer.
I'd like to remind people that these people have no more knowledge about AGI than anyone else on this planet, since there is no knowledge yet, and everything they say about this topic is as relevant as anything any other random person could say.
Yes, let's go with the random layperson knowledge of an HN commenter compared to the people smart enough to actually build all the AGI tech. 50/50 coin toss, I'm sure.
Dario Amodei (Anthropic CEO, builder of Claude 3 Opus): "My chance that something goes, you know, really catastrophically wrong on the scale of human civilization, might be between 10 - 25%"
They have not built an AGI yet, so they have built as many AGIs as anyone else, and therefore they are also laypersons regarding the effects of AGI on humanity.
Is your position that the only people who will ever be qualified to have an opinion that AGI is a threat that might destroy humanity, are the people who have already successfully built this thing? If that is the case, by what means might any credible warning be provided before a possibly-humanity-destroying AGI gets made? That's rather like saying "the only credible way to tell if there's a gas leak is to strike a match and see if we explode or not". I would suggest lowering the bar a little.
No, my position is that you cannot extrapolate about something that does not exist and whose category has never existed.
We don't know of any significant intelligence other than humans, not even aliens, so we have no trace of a direction that is rooted in reality rather than fantasy.
This is especially obvious since opinions differ so much about the effects of AGI on humanity, and since you cannot prove them right or wrong, every opinion is as realistic as any other.
Like I could say that every AGI that becomes self aware will kill itself instantly because of boringness, and you cannot prove me wrong.
It still seems to me that you're saying that we won't possibly be able to declare AGI an existential threat to humanity until after it has already been built. At that point, we can presumably settle the question by seeing if humanity goes extinct. This poses something of a paradox to those of us who prefer existential threats to humanity not get built in the first place.
We don't even know if AGI is possible. Let's not mince words here: nothing, and I do mean nothing, not a single, solitary model on offer by anyone, anywhere, right now has a prayer of becoming AGI. That's just... not how it's going to be done. What we have right now are fantastically powerful, interesting, and neato to play with pattern recognition programs, that can take their understanding of patterns, and reproduce more of them given prompts. That's it. That is not general intelligence of any sort, it's not even really creative, it's analogous to creative output but it isn't outputting to say something, it's simply taking samples of all the things it's seen previously and making something new with as few "errors" as possible, whatever that means in context. This is not intelligence of any sort, period, paragraph.
I don't know to what extent OpenAI and their compatriots are actually trying to bring forth artificial life (or at least, consciousness), versus how much they're just banking on how cool AI is as a term in order to funnel even more money to themselves chasing the pipe dream of building it, and at this point, I don't care. Even the products they have made do not hold a candle to the things they claim to be trying to make, but they're more than happy to talk them up like there's a chance. And I suppose there is a chance, but I really struggle to see ChatGPT turning into skynet.
I agree. I think there are some critical ingredients missing. Obviously the weights need to be able to update to new data in a semi-online fashion, for example. But I think there are fewer ingredients missing than there were a decade ago, by a much larger factor than I expected at the time, and a decade from now, the number of missing ingredients might be zero.
My uncertainty on this is not because I think GPT is more than it seems, but because it's unclear how much of the amazing cognitive capability that humans have is, under the hood, much simpler than it seems.
I really want humanity to exist even ten decades from now; I want my kids to have grandkids. So I care about this even if I don't expect AGI from a 2025 Q1 product launch. And I don't think it's too early to worry about it, just like I don't think 1980 was too early to worry about climate change.
> Reading this is like hearing "there is no evidence that heavier-than-air flight is even possible" being spoken, by a bird.
This is a vivid bit of rhetoric to underscore this point, but if you think about it for any length of time it starts to fall apart really, really quickly. The Wright brothers and the dozens if not hundreds of inventors forgotten who came before them drew upon the physics of what they observed in heavier than air flight to create the winged shape we know today that reliably causes lift, and then set about constructing it. That's not what OpenAI is doing. We still do not have a very solid understanding of where our own intelligence emerges from, apart from having particularly large brains relative to our body's size. So, to borrow your metaphor, it is indeed like a bird saying that there's no evidence to say that heavier than air flight is possible, because the bird lives in a world without atmosphere upon which to glide.
Maybe by "we don't know if it's possible" you meant "we don't presently have a step-by-step plan to implement AGI using the tools we already have in our toolbox, and no new ones"? If so then I certainly agree.
But when you look at, say, Sora's videos... How certain are you that there's no path between that and the human visual cortex? How certain are you that they aren't solving at least some of the same problems in structurally analogous ways? Given that nature built a visual cortex (more than once!) by applying survival pressure and turning a crank, just how hard can it be? When we apply a bunch of optimization pressure and billions of dollars turning cranks and something eerily similar pops out, that tells me that there's just less magic to our own brain than we thought. Like when we spent centuries wondering whether our solar system was unique, and then we finally put Kepler up there and it turned out that planets are everywhere you look.
To reject that this necessarily shows that AGI is possible, you would have to demonstrate human intelligence is tied to an immaterial soul granted by a supernatural being.
Some people do in fact believe this. I think that if we can't build AGI even with a perfect copy of a brain, that would instead be a surprising proof of the existence of souls.
> This is not intelligence of any sort, period, paragraph.
Let's run with that. Just using your definition of intelligence: given AI can already beat us in Chess, in Go, in Mathematical Olympiad puzzles, in protein folding predictions, in poker, at real world stock market analysis, in the game of Diplomacy, … — does it matter that they're not what you call intelligent?
Do submarines swim?
> bring forth artificial life (or at least, consciousness)
Why does it matter if it's "life" or "consciousness"?
“Deep Blue was intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.” ― Garry Kasparov
> To reject that this necessarily shows that AGI is possible...
Incorrect. For something to be possible in an absolute sense, it has to be possible in both the physical and the metaphysical realm, and a lack of knowledge of blockers in the metaphysical does not mean that there are none (in a comprehensive sense); it only means it appears that way, due to (at least) our cultural beliefs ("truths") and conditioning.
> ...you would have to demonstrate human intelligence is tied to an immaterial soul granted by a supernatural being.
Could demonstrating that intelligence rests upon a virtual reality generation machine (that cloaks itself from being realized as such) or something else not also work, at least maybe?
Given that you don't think the CEO of Anthropic is qualified to comment on this matter, what kind of "data that is relevant" do you imagine we might be able to have, prior to an AGI existing?
I am indeed uncomfortable with someone saying that we can't worry about a gas leak until after someone strikes a match and notices an explosion or the lack thereof. If the sad reality is that we can't detect the gas leak any other way, I would argue that the precautionary principle suggests we ban the striking of matches until such sensors have been developed. But here we are in a room of people grinding up saltpetre with their mortars and pestles.
I really wonder why there is such a fixation with these CEOs.
I have proven to you logically that they cannot have more knowledge than anyone else, but you continue to present their authority as knowledge.
Please prove to me that they know more about the effects of AGI on humanity, or that they have some magical knowledge we normal people are unable to comprehend.
If you cannot do this, then why are you bringing up these people?
> I have proven to you logically that they cannot have more knowledge than anyone else, but you continue to present their authority as knowledge.
You've asserted it, not proven it.
Given their entire research field is "how to make the thing", it's a fair bet that they know more than me — even if what they know is, to paraphrase Thomas Edison, "1000 ways to not make an AGI", that's still more than me, and I follow some of the research and read some of the papers.
Correct, and this (your counterpart's error, unrealized because it is the normal, proper way to think) is what worries me... I think humans may once again (as with affordable vehicles powered by fossil fuels) be walking blindly and confidently into a situation where we've over-invested in certain competencies to the detriment of others, increasing the already dangerous imbalance between power and wisdom.
This is not about "having knowledge", but about being able to anticipate future problems.
Considering how rapidly the technology is developing, and considering how clear a threat a superior intelligence would be to the human race, it is absolutely time to have this discussion NOW.
I wasn't the one who brought that person up. I only kept with that example because you didn't answer my question when I asked you what would qualify someone to have an opinion, or what evidence you imagined could exist, and I didn't want to put words in your mouth by assuming that you also think that eg. Yoshua Bengio or Geoff Hinton or Max Tegmark (or take your pick of anybody) don't have informed opinions either. Or that they have well-reasoned arguments, rather than authoritative opinions, if you think that reason is applicable.
I'm only asking for a standard that falls slightly short of "already having built Schrodinger's doomsday weapon and observed whether humanity dies as a result". You can set the bar wherever else you like, please!
The hour before Thomas Newcomen invented the first practical fuel-burning engine in 1712, the most informed person in the world to do so was an ironmonger by the name Thomas Newcomen.
The hour before the Kitty Hawk flight, the two most informed people in the world on the topic of heavier than air flight were a pair of bicycle retailers and manufacturers named Wilbur and Orville.
These people sure don't spot everything, but they're still way closer than the average human.
>The hour before the Kitty Hawk flight, the two most informed people in the world on the topic of heavier than air flight were a pair of bicycle retailers and manufacturers named Wilbur and Orville.
Actually, this is quite false. There were many people working on flight with massive credentials and massive funding. That the Wright brothers beat them is part of what makes the story of the Wright brothers so interesting.
That the other teams were better funded and yet failed, says to me the Wright Brothers were better informed.
That society didn't know this at the time is indeed relevant to AI development, and the analogy would be that the next big thing might come out of some minor research team that most people dismiss instead of the biggest and most well known groups… a bit like how Google's LaMDA was making headlines before ChatGPT surprised everyone.
It's certainly still possible today that some random individual has a crucial insight that gives them an edge over the big names, yet even those big names know far more than me.
Which is rather annoying, because I'd be interested in taking the opportunity of having been laid off to switch from iOS to AI. (If anyone reading this is hiring people in Berlin, contact details on my website. Plus side of hiring me: I'm relatively cheap in that field, and enthusiastic about the possibilities).
Now, imagine if the people who were beaten by the Wright brothers had set up a regulatory system 20 years before their first flight, which instituted a bunch of rules and requirements that made it impossible for the Wright brothers (or anyone) to actually achieve flight.
What do you call something humans built (artificial) that can solve all topics of AP tests (general) with most being 5/5, better than almost all humans (intelligence)?
- Ask ChatGPT to produce some output meeting certain criteria
- ChatGPT responds, but the output does not meet the criteria in one or more obvious ways
- Point this out and ask for a corrected version
- ChatGPT says "I apologize, here's the corrected version:" and spits out a nearly identical answer with the same obvious mistake.
- Repeat steps 3 and 4 ad nauseam.
I experience this frequently when I ask it to do anything somewhat outside the norms of its training. It cannot correct itself or even see its own mistake. It cannot even conclude that it doesn't have the ability to do what I've asked and tell me so.
Whatever sort of intelligence it may have, I would categorize it as not "general" enough to fall under AGI. Being able to answer AP test questions cannot be a sufficient measure of generality if a model can do it with flying colors but still fail at far more basic tasks.
I'd say that LLMs' inability to handle DSLs in particular is an interesting question on the topic of knowledge and language. Should a generally intelligent entity be able to quickly figure out a new dialect with little background? If I were to air-drop you into an extremely foreign nation, how long would it take you to organically decode the local language and be able to meaningfully craft expressions in it? I'm not sure if humans are "generally intelligent", but a common bar for AGI is for it to beat or meet average humans at normal tasks. I'm not sure if something like writing perfect k8s yaml should be a requirement, though I do agree that LLMs' inability to do word puzzles or relatively straightforward math should disqualify them.
I agree with your points in general, though I wasn't talking about DSL generation or any specific task. I was talking about ChatGPT's general tendency to cheerfully apologize for mistakes, explain exactly what the mistake was, and then present the same mistake while claiming that it's been corrected.
You can ask ChatGPT what it means to be asked to correct a mistake, and it will give you a perfectly thorough and eloquent answer. But it is often unable to apply this concept to its own behavior. If asked, it can explain back to you exactly what correction you want it to make. You can even make it pledge to correct the mistake in exactly the manner that it just described. And then it will completely fail to do it. It reminds me of that "repeat after me" meme from Friends [1].
This makes me lean in the direction of "stochastic parrot" when I think about what LLMs are. As impressive as it is, ChatGPT demonstrably lacks a sense of self. It talks as if it understands that it's an agent in control of its behavior, but then fails to control or even recognize its own actions.
Google (and library indexes) can solve exactly and only the questions in their databases.
All AI, even narrow ones, necessarily have the ability to interpret novel questions in some fashion — how novel and how well they interpret being the "G" and the "I" in AGI.
We don't give AP tests to humans to determine whether or not they are in fact an intelligent being. We give them to humans to show that, over the course of about a decade and a half, that human went from not realizing it had hands to ingesting a bunch of knowledge and being able to recall it to answer questions.
ChatGPT was trained on hundreds of terabytes of information. I would hope that with all that effort it could answer AP test questions. That doesn't make it intelligent.
What's the definition of intelligence that you're using here? There are various takes on what it means, so I'd like to know your take on it.
That it's able to spit out essays makes it something; intelligence is too loosely defined to describe it for everybody, so we need to develop new words, meanings, and language for what it does. It doesn't think, but it applies dot products to tensors to create its output.
Intelligence in animal species, roughly, is how well the animal deals with novel situations. Corvid tool use is a common example of animal intelligence.
It is interesting to consider AIs that way, given that almost every AI will fail catastrophically when confronted with an area of expertise that is outside of its training corpus, although I guess corvid tool use isn't exactly outside the animals' range of expertise.
Fairly boring - "The answer is known and I want to see if you know the answer" is good for judging students' abilities, but no real-world problem exists like that. I will be much more hopeful for AGI when AI starts producing new and interesting knowledge, rather than regurgitating already known facts.
Your comment (+ username) reads like what I would have written once upon a time when I was fully in the EA bubble.
Truly no offense meant, as I was deeply into the EA movement myself, and still consider myself one (in the original "donate money effectively" sense), but the movement has now morphed into a death cult obsessed by things like:
* OMG we're all going to die any time now (repeated every year since circa 2018)
* What is your pDoom? What are your timelines? (aka: what is your totally made up number that makes you feel like you're doing something rational/scientific)
I'm deep in the weeds w/ LLMs, e.g. I probably finetune an average of 1 model a day and work with bleeding-edge models... and AI safety just sounds so silly. Wanting to take drastic measures today to prevent an upcoming apocalypse makes as much sense as taking the same drastic measures when gradient descent was invented.
My username was created before I knew anything about EA or anything adjacent. I'm not in any EA movement, though I am sympathetic. I've spent 100x more time on HN, with people mostly in denial, than in EA or adjacent forums, nor have I met any of them.
It's sadly twisted how mentioning that -- the majority of leaders doing the cutting-edge research on AGI think there is a significant chance it kills humanity -- is considered being part of a "cult" movement.
Your analogy is the same as early Intel engineers completely unaware that those chips would bring on the ramifications of social media. "In the weeds" and yet unable to foresee the trajectory and wider impact. Same with the physics that led to nuclear weapons.
> Wanting to take drastic measures today to prevent an upcoming apocalypse makes as much sense as taking the same drastic measures when [nuclear fission] was invented.
> Your analogy is the same as early Intel engineers completely unaware that those chips would bring on the ramifications of social media
Exactly! As they should be. (for both Intel engineers developing chips, and physicists developing nuclear research)
There were a billion more potential dangers from those technologies that never materialized, and never will.
I'm glad we didn't stop them in their track because a poll of 10 leaders in the field thought they were too dangerous and progress should stop. (note that no one is against regulating dangerous uses of AI, e.g. autonomous weapons, chemical warfare; the problem is regulating AI research and development in the first place)
> We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. - Sam Altman (https://blog.samaltman.com/the-merge)
I love how, per the quote, he thinks we're anywhere close to being able to merge with AI.
I'm a neuroscientist and, man alive, we're nowhere close to being able to merge with machines. Like, do you have any idea how many diseases we could eradicate if we could modify neurons like that? Like, for real, 'curing death' would be step 9 or 10 on that 1000-step journey.
I hope you see how terribly uninformed such a take is then.
> I hope you see how terribly uninformed such a take is then.
Yeah, I only quoted it to show how that other statement up-thread can't be taken seriously.
The only way I can square it is if it's a lie, or they're reassuring themselves whatever destruction comes will be fine as long as "there will be peace and security in [their] lifetime."
You're assuming people have coherent beliefs, but they don't. It's possible to intellectually believe AI has high extinction risk and emotionally be convinced to work on it anyway, without reconciling the two.
Even worse, some factions literally advocate for killing all humans in the pursuit of a synthetic intelligence, and YC's Garry Tan is advocating for these people!
Beff Jezos (e/acc founder): "e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism
Parts of e/acc (e.g. Beff) consider ourselves post-humanists; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates"
Is this particularly controversial? Isn't this just saying "sending non-biological bodies to the stars would be a heck of a lot easier"? Which, it would. The hard disk containing me is going to be a lot easier to keep intact than the biology.
I'm sure he'd be happy to become the first multi-instantiated, immortal billionaire. The pesky laws about death and personhood would need to be changed first though.
This is the definition of strawman. "Advocate for killing all humans" sounds like someone advocating for a genocide, but instead it's just the same transhumanist thinking (which Yudkowsky also believes in, FYI)
Commenter: "If AI replaced us, it's fine because they're a worthy descendants?"
Beff Jezos / Verdon: "Personally, yes."
Yudkowsky is transhumanist in that he is hopeful people could voluntarily extend their biological self, but isn't advocating for the elimination of all biological selves in pursuit of other artificial intelligences.
Given who the people are that signed it, it really just comes off more like an attempt at creating a regulatory moat around the territory they got to first.
"This is extinction level important, so all you people who aren't us need to be careful meddling with the stuff we're meddling with for profit."
Mitigating the risk of unnecessary global death due to the curious suboptimal manner in which humans have "decided" ("democratically", dontcha know) to distribute wealth on the other hand, nothing to see here!
I tried both Mint and YNAB but they both had syncing issues that infuriated me. My friend recommended Copilot[0] and it's been a joy to use. It's designed so well that it somehow got me to take the time to label my transactions & set my budget, where with Mint/YNAB I would just add my accounts & not do anything else because of the laggy interfaces.
[0]https://getbrighter.com