Not necessarily. We also need to stop calling it AI, because it is not. It works on existing knowledge; anything new or unfamiliar, it will not be able to help you with.
How does that square with scientific discoveries and paradigm shifts? Knowledge is scaffolded upon itself - that doesn't mean it is not reaching new heights. The author of that quote wouldn't recognize germ theory or understand powered flight and interstellar probes.
"There will be things that people will look at and say: 'they are new!', but these were here before us. Since forever."
So, IMO the author did realize that there would be "new" things; the premise is that these things are already part of our world, and the process and mechanism of discovery repeats. In human perception (and an LLM's?) there is only "rehashed" knowledge, in a way; nothing is really new.
That doesn’t take into account emergent properties, though.
I don't see how "general purpose computers" have always been here before us. Many things can compute (one proper, but not very useful, definition of a computer is something that represents a computable function: given an input in a specific format, it produces an output that can be interpreted as a result; e.g. a sundial is a computer), and none of the primitives are unique (logic gates can be made from many things), but their unique composition into a programmable computer is a novel invention.
Sure, but how much of that existing knowledge has been written down and published somewhere so it can be scraped and fed into an AI?
Ultimately existing AIs can only help you with stuff that’s publicly documented, preferably documented multiple times. Lots of the world is either undocumented, or the documentation only exists in private repositories. Even AI can’t know what it doesn’t know.
Discovery and curiosity are essential for knowledge; questions are the mother of all knowledge. Coming up with answers is easy, but knowing what questions to ask is what will broaden our minds. If we only kept working on existing knowledge we wouldn't have evolved.
I can add X + Y for any values of X and Y, not because I have a gigantic table in my head, but because I know the rules for adding two numbers. AI is to some extent doing the same.
The fact that it does increasingly weird, inhuman things to me indicates it is in fact intelligent. It isn't just parroting answers; it is using its flawed understanding to attempt to reach an answer. Humans are also flawed and come out with stupid answers, but we are used to that and can understand it.
Knowledge != intelligence. So many people tend to conveniently forget this fact these days. You know how addition works; this fact alone does not make you intelligent.
Intelligence = ability to solve novel problems. Requires out-of-the-box thinking. ChatGPT cannot learn and solve problems it has never encountered before, thus it is not intelligent. Also, training != learning.
Knowledge would be memorising a big table of additions.
Intelligence is knowing the rules of addition and applying them to two numbers you've never been shown how to add.
I can memorise 1+1. I've never been shown the answer to 47459592271638494 + 3745802297337747488. That for me is a novel problem that would need to be solved.
So the fact that I have knowledge of mathematics and can apply it shows I have intelligence.
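To make the rules-versus-table point concrete, here is a minimal C sketch of the schoolbook algorithm (my own illustration, not a claim about how an LLM works internally): it adds two arbitrarily long decimal strings digit by digit, so it handles numbers it has never "seen" without any lookup table of sums.

```c
#include <stdio.h>
#include <string.h>

/* Schoolbook addition: apply the digit-plus-carry rule from right to left.
   No table of precomputed answers, just the rule itself. */
static void add_decimal(const char *a, const char *b, char *out) {
    int la = (int)strlen(a), lb = (int)strlen(b);
    int n = (la > lb ? la : lb);
    char buf[256];
    int carry = 0, i;
    for (i = 0; i < n; i++) {
        int da = (i < la) ? a[la - 1 - i] - '0' : 0;  /* i-th digit from the right */
        int db = (i < lb) ? b[lb - 1 - i] - '0' : 0;
        int s = da + db + carry;
        buf[i] = (char)('0' + s % 10);
        carry = s / 10;
    }
    if (carry) buf[i++] = (char)('0' + carry);
    /* Digits were produced least-significant first; reverse into out. */
    for (int j = 0; j < i; j++) out[j] = buf[i - 1 - j];
    out[i] = '\0';
}

int main(void) {
    char result[256];
    /* The "novel" sum from the comment above. */
    add_decimal("47459592271638494", "3745802297337747488", result);
    printf("%s\n", result);  /* prints 3793261889609385982 */
    return 0;
}
```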
Out of the box thinking isn't a prerequisite of intelligence. It's a special case that humans are good at and computers aren't.
If a mouse bumbles around a maze, eventually finds the cheese, and from then on goes straight to the cheese, that would be a sign of intelligence. If a robot does the same, why is that any less intelligent?
The thing is, this is basically instinct. At the transistor level a computer knows how to add two numbers together. But then again, any kind of AI is going to come down to binary digits, so unless we say that, definitionally, a computer can never be intelligent, we have to allow that your example is some kind of intelligence.
There is nothing new or unfamiliar required to generate an explanation of what a dozen assembly instructions do; of all the tasks in the world, this is one that definitely should be doable by a glorified pattern-recognizer.
Sorry, but you can get emergent behavior that LOOKS intelligent anywhere, and the only reason you can't tell the difference is if all you get is a single way to observe it.
As long as people weren't ready to look at human anatomy, they could only observe symptoms and make super wild guesses.
That is what this is. You're observing a thing in a black box with a screen, and you're reading too much into it, if anything in the past is to be believed.
Show a person a cellular automaton and don't explain it to them; leave them with it in a room for a month and see what their theories are. You bet it won't be "there are three rules and a random number generator", even if that's all there is.
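For anyone who hasn't played with one, here is a minimal sketch of that kind of toy system: a one-dimensional cellular automaton (Rule 30 is my choice for illustration, not something from the comment) whose only ingredients are a random initial row and a one-line neighbourhood rule, yet whose output looks far more elaborate than the mechanism behind it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define WIDTH 64
#define STEPS 24

int main(void) {
    int row[WIDTH], next[WIDTH];
    srand((unsigned)time(NULL));

    /* Random initial row: this is the only randomness in the whole system. */
    for (int i = 0; i < WIDTH; i++) row[i] = rand() % 2;

    for (int t = 0; t < STEPS; t++) {
        for (int i = 0; i < WIDTH; i++) putchar(row[i] ? '#' : ' ');
        putchar('\n');

        /* Rule 30: each new cell depends only on its three-cell neighbourhood. */
        for (int i = 0; i < WIDTH; i++) {
            int l = row[(i + WIDTH - 1) % WIDTH];
            int c = row[i];
            int r = row[(i + 1) % WIDTH];
            next[i] = l ^ (c | r);   /* the entire "mechanism" */
        }
        for (int i = 0; i < WIDTH; i++) row[i] = next[i];
    }
    return 0;
}
```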
If there is such emergent behavior that looks intelligent, how can you tell it apart from human intelligence? Devising a test that can answer this is an area of active research. Let's say for instance that it can come up with a novel mathematical proof that was not part of its training data - would you accept that as evidence towards AGI?
I'm pretty sure engineers at OpenAI working on GPT-4 could very much demystify this entire discussion, but they choose not to. They didn't find out how to do AGI; they found a way to get a lot of people very, very excited by employing a veil of secrecy and mystery. It's a program. You can see what it does, and there is a possibility to see what it was trained on; "Open"AI just chooses to hide that. If it were really that smart, OpenAI would have no issue publishing some more information on it.
Our conception of 'intelligence' is based on us rather than some objective metric.
There's absolutely no reason for an intelligence to be anything like us. We are flawed in a great many respects. We'll parrot things we have no reason to believe in, etc., etc., etc.
I'm not even sure there's a distinction between intelligence and emergent behaviour. Did we evolve to speak and write complex language, or is that emergent behaviour?
I would accept that as proof of AI, not AGI, provided it can do it repeatably and reliably. Hallucinating a proof that's wrong 90% of the time is just a parrot.
For AGI, I would expect it to be teachable during the conversation. That is, to be able to form abstract models of reality and utilize them on the fly. Repeatably, reliably. Like a human with a pencil and a piece of paper can.
If I showed an AI a picture of a duck, it could remember that.
We already have models that can be shown multiple pictures of ducks, learn what the essential characteristics of a duck are and identify novel pictures of ducks.
So the AI has been taught. It has formed an abstract model of a duck and can reliably identify ducks.
When does that become AGI? It can't purely be writing proofs because that's niche even for humans.
Further, what is intelligence? A professor of mathematics may be able to rattle off a proof; a 3-year-old probably not. The 3-year-old has amazing learning potential, though. We accept that both the professor and the 3-year-old display intelligence, but we don't seem to apply the same rules to AI.
Another analogy is not to a child but to a part of the brain.
Can a part of the human brain be intelligent? Are humans who are missing a part of their brain, such as through brain damage, intelligent? Such humans may not be as fully capable as humans with an uninjured brain, but (depending on the extent of the damage) we would still consider them intelligent and conscious.
We may think of current AIs like that: as partially functional intelligences.
But in a way they are more, because some of their functions exceed human capacity.
So they are really something new: in some ways less than humans, in some ways more.
Equally, there's evidence that GPT-4 only excels on tests that happened to appear in its training data (presumably via accidental contamination), as demonstrated by its strong ability to solve tests written before its training data cutoff (September 2021) while struggling with equivalent tests written after that date.
> we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction
Sounds very much like it depends on the definition, and the vendor themselves agrees that the current modeling options might not go much further...
Not all of last year's internet of original ideas have had Monte Carlo complexity paths fuzzed and rebased for something ummm-imaginably different using A.I.
Don’t assert your personal philosophical opinion of something as nebulous as “intelligence” as absolute fact in an attempt to nerd snipe someone on Hacker News. Utterly cringey and a weird thing to be pedantic about given that you very obviously knew what OP was talking about.
I asked it for Hello World and it got that wrong. It left out the exit syscall required at the end, which meant the CPU would continue executing past the end of the program until it hit a segfault.
A few weeks later I found the exact blog post that ChatGPT copied from, right down to the article explaining that the exit syscall was required and was intentionally missing from the first step as a pedagogical exercise. But of course ChatGPT wouldn't know that.
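For context on why the missing exit matters: with no runtime to return to, execution simply falls off the end of the entry point. Here is a hedged C sketch of the same idea using `_start` and raw syscalls rather than the original assembly (the compile flags and exact crash behaviour are my assumptions, not from the thread).

```c
/* Hypothetical illustration, built with something like:
 *   gcc -static -nostartfiles hello.c -o hello
 * so there is no C runtime to clean up after us (assumption, not from the thread). */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

void _start(void) {
    const char msg[] = "Hello, world\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);

    /* Without this line the program has nowhere sensible to return to:
     * control falls off the end of _start and the process crashes,
     * which is essentially the bug described above in the generated assembly. */
    syscall(SYS_exit, 0);
}
```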
If you know assembler, it's much easier and safer to write it yourself. If you don't know assembler, you're doing stupid and dangerous things and should immediately stop.
This is fantastic, congrats. Have you ever thought of building a SaaS around this, getting users to pay $5 a month for each specific service? It could be even more intuitive...
Yes, I have thought deeply about this. There is a $19/month option for those who can't afford the lifetime price as it is now, and that would include all the tools too when I add more paid tools.
The "SaaS" I would do is make paid versions of the google sheets add-ons and more work on Asa, the AI inside of Google Sheets I made. Right now it's free to use with your own API KEY but I'm working on a paid version.