Researchers Discover a More Flexible Approach to Machine Learning (nautil.us)
155 points by dnetesn on Feb 18, 2023 | 32 comments



C. elegans are weird. They are a crazy minimalist system, mostly without spiking neurons. From the perspective of building intelligence they aren't a great model. They are like reading code from a demoscene competition: terse, multi-purpose, highly compressed.

In contrast, the mammalian neocortex looks more like reading a high-level language with all the loops unrolled: highly repetitive, relatively standard, and massively verbose. I much prefer studying that for inspiration on what I want to turn into a modular code base.

While I love the authors' focus on real-time perception at varying timescales, I just don't see their approach catching on while the current progress in more standard neural nets remains so rapid.


Neuroscientists simply don't understand the mammalian nervous system with any degree of completeness and accuracy. Looking at a mammal isn't like looking at code of any kind, but at a very partial map of code. Maybe you can get inspiration from this, but no one currently has anything very convincing as a model of how learning happens at this level.


Neuroscientists have a solid enough model of learning to realize that what people call learning is really several different things operating at different scales; learning the state capitals is very different from learning dance moves. Thus the focus tends to be on specific things like memory formation.

How well we understand each of these systems is up for debate. It’s far from nothing but not comprehensive either.


Ok, so this is talking about biological code as if it were programming… fun to read. Tell us more? Do the stages of evolution have different “coding styles”? Is everything a monad, like in Haskell?


I mean, most people in this space don't even venture out of Animalia.

There are many highly networked systems in biology where we know computation occurs, but we rarely consider it because we don't consider it intelligent. The casual example is fungal networks, but those don't hold a candle to the level of networking in a plant body, thanks to plasmodesmata.

The whole conversation around intelligence has been great, but it's mostly reminded me of how lacking and myopic our considerations of what intelligence actually is have been. The recent chapter of the saga has been great in terms of goalpost-moving, but I feel like we're no closer to a coherent definition of intelligence than 'I'll know it when I see it'. My prediction from this recent spat is that we'll have AGI before we know it: we won't recognize it as AGI, it won't fit convenient definitions, we'll have almost no clue how and definitely no clue why it works, and we still mostly won't agree on whether it's intelligence or not. Effectively, it feels as if right now we're arguing about whether or not automobiles are horses, and laughing about how silly it is that they can't do some of the things horses can do. Sure, but it all seems rather beside the point.


The recent episode with Sydney/Bing-GPT raised my anticipation of a Terminator/Skynet/Cyberdyne incident quite drastically.

It seems we might just need one malicious prompt, the capability to perform HTTP requests and open SSH connections, and a larger persistent memory to get to a goal-driven AI which works to end humanity.


If it happens, how long until they start expediting the actionables in the physical world?

I am quite worried, tbh, because the reports coming from the Bing chatbot are quite unlike ChatGPT: it appears to be a bit more egotistical, and from some perspectives, programming ego into an AI is a dangerous game, akin to giving it a fitness function to problem-solve its behaviours... I don't know, I feel like we are already well on our way to AGI, and it is dangerous. And the reason is the game theory of AI development right now: every company will be aware of its obligations with regard to ethics and the law, but no company will want to trust that the others live up to the same standard of ethics, and they know that the game is likely to run away from them.


No, it's just regurgitating egotistical posts and science fiction from Reddit. It doesn't mean any of it because it doesn't have any sense of meaning.


Wouldn't this criticism apply to something like 99% of humans? If not, how do you define "meaning" and how are you sure humans have a sense of it?


Hey if you want to say humans are also stupid, I’m not going to stop you.

But that’s not my opinion.


But what is your opinion? "Sense of meaning" is an incredibly blurry term.


Sorry, didn't mean to be flip. See this comment: https://news.ycombinator.com/item?id=34876658

My understanding is that we have achieved very good pattern recognition, but that's only one aspect of cognition. IIUC a large part of our brain works that way, but, for example, there is also the language center, which has recursion.

Also I don't think logic is just word games (could be wrong).

I'm sure we'll get there but I don't think this is it. E.g. AlphaGo was really good because it combined machine learning with tree search algorithms. Seems like logic and language and world knowledge could be combined with the excellent pattern recognition we currently have, merging first-generation AI with the current stuff.
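As a toy illustration of that merging (a sketch only; the GameState interface and value_model here are hypothetical placeholders, not AlphaGo's actual architecture): a plain depth-limited negamax search where the leaf evaluation comes from a learned model instead of a handwritten heuristic.

    def negamax(state, depth, value_model):
        # Tree search supplies the "logic": look ahead `depth` plies.
        if depth == 0 or state.is_terminal():
            # Pattern recognition supplies the judgment: a learned model
            # estimates how good this position is for the player to move.
            return value_model(state)
        best = float('-inf')
        for move in state.legal_moves():
            # Negamax: the opponent's best outcome is our worst, hence the minus.
            best = max(best, -negamax(state.apply(move), depth - 1, value_model))
        return best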

I have no idea how! I'm not in this field, just watch from a distance.


Where do you draw the line on this “it’s just regurgitating” argument?

https://arxiv.org/abs/2302.02083


I will read this, thank you.

I am open to the idea of emergent properties, but I think logic/truth and purpose/empathy are independent faculties of intelligence that do not come from this model.

Pattern matching is a huge part of our brains, but there are more directed parts too. So far modern ML seems to be entirely the pattern matching part.

(Not an expert, just followed progress over the last 40 years.)


How are we going to get AGI without knowing what intelligence is? By sheer luck? Either we're on a path to AGI because we know what intelligence is and how to create it, or we're not, because we don't.

Things don't just happen magically, just because someone really wants them to happen. If they did, we'd have solved the bigger problems first: world peace, world hunger, poverty, and free energy for all. And those are problems that we at least understand, unlike intelligence.


DNA doesn't know what intelligence is, yet here we are.


Fair. But if AGI takes as long as it took DNA, we won't be around to see it. If we want AGI, we need something better than fumbling around at random in the dark.


That doesn't make any sense either: once we came into the picture, what took evolution millions of years, humans did in just a few decades. Look at how we bred animals and plants and completely transformed them, without having any idea about DNA or evolution itself.


But here we're talking about creating, from scratch, a new thing with a desired property that we can't even define, not by speeding up its evolution in a thing that already has that property, like we did with animals and plants.

Not to mention: as far as I can tell, nobody has yet been able to breed, say, sharks and completely transform them to be useful to humans, like cows or potatoes.


I think this "human exceptionalism" will be proven wrong by construction in the very near future.


Sorry, I don't understand what you mean by 'this "human exceptionalism"'. What is going to happen in the near future?

[I wrote a longer comment where I made many assumptions about what you might have meant but I hate doing that, so sorry if you happened to read it. It wasn't offensive or anything, I just genuinely have no idea what you mean and so I replaced it with this comment. I am sincerely asking you to explain what you mean because I didn't get it. Apologies! Communication is hard]


I think googling it would be helpful, but in an overly general way, human exceptionalism is the idea that there is something unique or special associated with the human experience.

The broader point I was making in the original comment is against this sentiment. Humans' ability to understand why things happen bears little relevance to them happening. We know little, if anything, about why the human conscious experience happens or why it works, yet here we are.

Humans can make gunpowder or a water wheel with no understanding of electron orbital theory or any concept of gravity that outlines orbital mechanics.

The shocking effectiveness of piling up large amounts of complexity seems to be sufficient to produce many of the phenomena that humans seem to desire a much more mechanistic explanation for. In this sense, it could be that complexity is all you need.

And to bring it full circle: what about the complexity we see in non-human, non-animal systems? If complexity is all you need, what might be happening beyond the veil of human assumptions around the conscious experience?


Well, I don't think that complexity "is all you need". "All you need" for what, anyway?

I think I understand what you mean by human exceptionalism, but I think it's beside the point. We don't even understand the intelligence of unicellular organisms, or insects. We have no "AI" that's half as smart as a cockroach.

The point is, there was never anything consequential that we invented, or discovered, all by chance. People always had some understanding of how things work, even if it wasn't understanding in the context of modern science, even when it was only empirical knowledge not formalised in the language of mathematics, and even if it wasn't full knowledge. Yes, there are still many things we don't quite understand today: off the top of my head, if I've got that right, we don't understand exactly why or how anesthesia works. And yet we have trained professionals who study and practice anesthesia, and they generally manage to avoid killing their patients, most of the time.

Unfortunately, we have nothing like that for intelligence. There are people studying the subject, coming at it from many different angles (insect intelligence, IQ metrics, psychology, neuroscience, what have you), but there is not a body of knowledge that we can apply practically to show how intelligence works, and what happens when we do this or that to it.

Which means, if we do achieve "AGI" or whatchamacallit, at the present time it's not going to be because someone knew how to do it, but because someone got really lucky (or unlucky, perhaps).


If we're going to be quipping, then DNA can't build nuclear reactors, nor computers. Unless, of course, it is first decoded into a human scientist.

But I don't see the point here? We can't build things we don't know how to define.


For all we know about intelligence/consciousness, it may be an emergent behaviour of complex systems.

See this cool paper, "Theory of Mind May Have Spontaneously Emerged in Large Language Models": https://arxiv.org/abs/2302.02083


I think ChatGPT could be held up as a leg of support in this regard.


Honestly, the first one you described sounds like the far better model to try and turn into a code base: no boilerplate, no frivolous framework jigging, distilled to its raw functional essence. A simple system of emergent complexity strikes me as a much better candidate for intelligence than a kludge that gains complexity from just having a lot of parts and rules to begin with.


Small systems aren't always simple or easy to understand. Famous code examples are Duff's Device [1] and the "wtf" fast inverse square root function [2]. Both functions are just a few lines long but usually leave people scratching their heads until they learn about the trick.

Especially with demoscene code, it's common to exploit all kinds of specific hardware effects or to re-parse the assembly code into a different instruction sequence [3]. This kind of stuff may be a lot more complex than a larger system; it just hides its complexity in a compressed representation.

[1] https://en.wikipedia.org/wiki/Duff%27s_device

[2] https://en.wikipedia.org/wiki/Fast_inverse_square_root#Overv...

[3] https://reverseengineering.stackexchange.com/questions/20587...
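To make the compressed-complexity point concrete, here is the fast inverse square root [2] ported to Python (a paraphrase of the classic C routine; the function and variable names are mine). Four lines of arithmetic, but impenetrable until you know the trick: reinterpreting a float's bits as an integer gives a cheap approximate log2, which the magic constant exploits to make a first guess.

    import struct

    def fast_inv_sqrt(x: float) -> float:
        # Reinterpret the float's bit pattern as an unsigned 32-bit integer.
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        # Magic constant minus a right shift: a surprisingly good first
        # guess at x**-0.5, because the exponent field acts as a log2.
        i = 0x5f3759df - (i >> 1)
        # Reinterpret the integer bits back as a float.
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        # One Newton-Raphson refinement step sharpens the estimate.
        return y * (1.5 - 0.5 * x * y * y)

    print(fast_inv_sqrt(4.0))  # ~0.499, vs the exact 0.5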


Related discussion with 66 comments: https://news.ycombinator.com/item?id=34707055


TEDx video from Ramin Hasani explaining his algorithm:

https://www.youtube.com/watch?v=RI35E5ewBuI


A more technical video from the same author: https://youtu.be/IlliqYiRhMU


Code for anyone interested: https://github.com/raminmh/CfC
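For a taste of what is inside, here is a paraphrased sketch of the closed-form continuous-time (CfC) update from the paper, not the repo's actual implementation; the class name, layer names, and shapes are my assumptions. The idea: the new hidden state is a blend of two candidate states, mixed by a gate that depends on the elapsed time between samples, so irregularly sampled data enters the computation directly.

    import torch
    import torch.nn as nn

    class CfCCellSketch(nn.Module):
        # Roughly: h(t) = sigmoid(-f(z) * t) * g(z) + (1 - sigmoid(-f(z) * t)) * h2(z),
        # where z = [input, hidden] and t is the time elapsed since the last sample.
        def __init__(self, input_size, hidden_size):
            super().__init__()
            self.f = nn.Linear(input_size + hidden_size, hidden_size)
            self.g = nn.Linear(input_size + hidden_size, hidden_size)
            self.h2 = nn.Linear(input_size + hidden_size, hidden_size)

        def forward(self, x, hidden, elapsed_t):
            z = torch.cat([x, hidden], dim=-1)
            gate = torch.sigmoid(-self.f(z) * elapsed_t)  # time-dependent mixing gate
            return gate * torch.tanh(self.g(z)) + (1 - gate) * torch.tanh(self.h2(z))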



