Hacker News | b450's comments

The authors here are claiming, as your quote states, that biological evolution is just one instance of a more general phenomenon. I'm not sure that's contrary to the views you're expressing. You wrote:

> The expectation that life is somehow special is wrong. There is, as far as we can see, no difference in the quarks in a dog and those in a rock

But the authors' examples do include the "speciation" of minerals! As I read it, the authors describe:

- some initial set of physical states (organisms, minerals, whatever)

- these states create conditions for new states to emerge, which in turn open up new possibilities or "phase spaces", and so on

- these new phase spaces produce new ad hoc "functions", which are (inevitably, with time and the flow of energy) searched and acted upon by selective processes, driving this increase of "functional information".

I don't think it's saying that living things are more complex or information dense per se, but rather, that this cycle of search, selection, and bootstrapping of new functions is a law-like generality that can be observed outside of living systems.

I'm not endorsing this view! There do seem to be clear problems with it as a testable scientific hypothesis. But to my naive ear, all of this seems to play rather nicely with this fundamentally statistical (vs deterministic) picture of reality that Prigogine described, with the "arrow of time" manifesting not just in thermodynamics and these irreversible processes, but also in this diversification of functions.


I clicked through to this article about the chairman of the EPA's Clean Air Scientific Advisory Committee:

https://revealnews.org/article/trumps-air-pollution-adviser-...

Making a career out of making the case for air pollution. I hope the money is worth it. This guy should have to live and raise his kids next to a coal plant.


This is a great demonstration of the fact that people coming from very different perspectives can, through good-faith inquiry, find much to agree on. I think there are a lot of thoughtful arguments and conclusions in here, even though I generally find the Catholic Church's metaphysical pyrotechnics to be fairly ridiculous. It goes to show that E.O. Wilson's concept of "consilience" can apply even outside of the sciences - just as different lines of scientific inquiry converge on a common reality, so can very disparate forms of moral inquiry converge, because they both proceed from a shared human experience of what's good and bad in life.


Yeah! Perhaps a bit naively, as a Highly Opinionated Person (HOP) on this topic I was ready for this to have something controversial to say about the nature of intelligence.

It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.

I suppose I would challenge the following, which I've also sometimes heard from philosophers:

>However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.

I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector space that machines already utilize? People constantly rotate through essentialist concepts that supposedly reflect an intangible "human element" and shift the conversation onto non-computational grounds, and these turn out to simply reproduce the errors of every previous variation of intelligence essentialism.

My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g. tighten the strike zone if it's 0-2 in a big situation, widen the strike zone if it's a 3-0 count), completely forgetting that you could have machines call those more accurately too, if you really wanted to.

Anyway, I have my usual bones to pick but overall I think a very thoughtful treatment that I wouldn't say is borne of layperson confusions that frequently dog these convos.


Yep I think that is an interesting point! I definitely think there are important ways in which human intelligence is embodied, but yeah - if we are modeling intelligence as a function, there's no obvious reason to think that whatever influence embodiment has on the output can't be "compressed" in the same way – after all, it doesn't matter generally how ANY of the reasoning that AI is learning to reproduce is _actually_ done. I suppose, though, that that gets at the later emphasis:

> Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform

One might concede that AI can produce a good enough simulation of an embodied intelligence, while emphasizing that the value of human intelligence per se is not reducible to its effectiveness as an input-output function. But I agree the Vatican's statement seems to go beyond that.


As an aside, and more out of curiosity, I want to mention a tiny niche corner of CogSci I once came across on YouTube. There was a conference on a fringe branch of consciousness studies where a group of philosophers hold a claim that there is a qualitative difference of experience based on material substrate.

That is to say, one view of consciousness suggests that if you froze a snapshot of a human brain in the process of experiencing and then transferred every single observable physical quantity into a simulation running on completely different material (e.g. from carbon to silicon) then the re-produced consciousness would be unaware of the swap and would continue completely unaffected. This would be a consequence of substrate independence, which is the predominant view as far as I can tell in both science and philosophy of mind.

I was fascinated that there was an entire conference dedicated to the opposite view. They contend that there would be a discernible and qualitative difference to the experience of the consciousness. That is, the new mind running in the simulation might "feel" the difference.

Of course, there is no experiment we can perform as of now so it is all conjecture. And this opposing view is a fringe of a fringe. It's just something I wanted to share. It's nice to realize that there are many ways to challenge our assumptions about consciousness. Consider how strongly you may feel about substrate independence and then realize: we don't actually have any proof and reasonable people hold conferences challenging this assumption.


It's going to sound rather hubristic, being that I'm just a random internet commenter and not a conference of philosophers, but this seems... nonsensical? I don't understand how it isn't obvious that the new consciousness instance would be unaware of the swap, or that nevertheless the perspective of the original instance would be completely disconnected from that of the new one.

It seems to be a question that many apparently smart people discuss endlessly for some reason, so I guess I'm not surprised by this proposal in particular, but it's really mystifying to me that anybody other than soulists thinks there's any room for doubt about it whatsoever.


Completely agree. I'm interested in the detour, perhaps as fascinated by the human psychology that prompts people to invest in these debates as by anything about the question itself. We have the psychology of science and political psychology, so a version of that which tries to predict how philosophers come to their dispositions seems like a worthy venture as well.


And then Marvin Minsky asked: what if you substitute one cell at a time with an exactly functioning electronic duplicate? At what point does this shift occur?


Related to that are Searle's "Chinese Room" argument and the question of "Mind uploading" (can you up/download mental states): https://plato.stanford.edu/entries/chinese-room/#ChinRoomArg...

https://en.wikipedia.org/wiki/Mind_uploading and Chapter 8 about Mind Uploading in https://www.researchgate.net/profile/Alfredo-Pereira-Junior/...

The related Reddit conversation https://www.reddit.com/r/Futurology/comments/2ew9i2/would_it...


Sounds like an experimental question. Maybe 99%, maybe 1%, maybe never.

Can you suggest another way to answer your question other than performing an experiment? Can you describe how to perform an experiment to answer your question?

Would you agree to be the subject of such an experiment?


>I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally?

Well, Searle argued against it when presenting the case for the Chinese Room argument, but I disagree with their take.

I personally believe in the virtual mind argument with an internal simulated experience that is then acted upon externally.

What's more, if this is the key to human-like intelligence and learning in the real world, I do believe that AI would very quickly pass by our limitations. Humans are not only embodied, but we are prisoners to our embodiment, and we only get one body. I don't see any particular reason why a model would be confined to one body when it could 'hivemind' or control a massive number of bodies/sensors to sense and interact with the environment. The end product would be an experience far different from what a human experiences and would likely be a superorganism in itself.


Experience is biological, analog, computers are digital; that's the core of the problem. It doesn't matter how many samples you take, it's still not the full experience. Witness Vinyl.


This is a just-so story more than an actual argument, and I would say it's exactly the kind of essentialism I was talking about previously. In fact, the versions of the argument typically put forward by Anglosphere philosophers, and in this case by the Vatican, are actually more nuanced. The reference to the "embodied" nature of cognition at least introduces a concept that supports a meaningful argument that can be engaged with or falsified.

It could be at the end of the day that there is something important about the biological basis of the experience and the role it plays in supporting cognition. But simply stipulating that it works that way doesn't represent forward motion in the conversation.


That's not a very good answer, imho.


>people coming from very different perspectives

Care to elaborate? Which people and which perspectives? It's a bit unclear to me.


I believe the parent is referring to the HN crowd, which is interestingly diverse in its reactions to this post (though I could be wrong and they could be referring to the document and its sources).

Either way, I must admit that, as a Catholic I appreciate the great discussion here. There are of course the usual snarky comments you would expect regarding the Church and religion (which is fine by me) but overall it's a well grounded discussion.

I'm personally enjoying reading the thoughtful perspectives of everyone.


Given the scale and variety of transformations in the 20th century (technological revolution, mass urbanization, the integration of billions of new workers into global markets, nuclear weapons, mass media, environmental change, and unprecedented population growth), it would be very surprising if all the graphs just maintained linear trends the whole time. Many of these graphs appear to show continuations, though perhaps at inflection points of exponential growth, of trends already taking place.


My guess is we're supposed to read this sentence as:

∀h G(h)

(for all my hats, the hat is green)

or whatever similar formulation:

∀x ((H(x) ∧ M(x)) → G(x))

(for all x, if x is a hat and x is mine, then x is green)

Either way, the general idea will be that negating the statement (making it a lie) will make it a negative existential quantifier:

∃h ¬G(h)

(there exists one of my hats such that it is not green)

Or in the case of the alternate formulation:

∃x ((H(x) ∧ M(x)) ∧ ¬G(x))

(there exists an x, such that x is a hat and x is mine, and x is not green)

So I think we answer (A) The liar has at least one hat.

All that said, I think other commenters are rightly pointing out that this relies on a very questionable distinction between semantics - which is what we've formalized above - and pragmatics. In conversational pragmatics, "All my hats are green" means that I have at least one hat (probably at least 3, even, since the sentence didn't say "My only hat" or "Both my hats"). One might explain this by way of an implicit pragmatic conversational principle that all statements should be relevant and informative in some way, which vacuously true statements (like, "all grass growing on the moon is purple") are not (see the "Gricean maxims").

If we don't make this implausible distinction between semantics and pragmatics (implausible to me because it assumes that sentences in general are usefully analyzed as having "propositional" meanings which can be evaluated outside of any conversational context), we might cash out the statement as:

∀h G(h) ∧ ∃h G(h)

so we can conclude, since this is a lie, that:

∃h ¬G(h) ∨ ∀h ¬G(h)

Which is consistent with the liar owning no hats, as in:

> "All my hats are green"

> "Liar! You don't own any hats"
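For what it's worth, the difference between the two readings can be checked mechanically. Here's a small Python sketch (a toy model of my own, not anything from the puzzle itself) that enumerates every hat collection of up to two hats and confirms that the pure semantic reading forces answer (A), while the reading with existential import also lets the liar be hatless:

```python
from itertools import product

# Toy model: a "world" is the tuple of hat colors the speaker owns (possibly
# empty). The two-color universe is an arbitrary choice for illustration.
colors = ["green", "red"]
worlds = [w for n in range(3) for w in product(colors, repeat=n)]

def all_green(hats):
    # Pure semantic reading: vacuously true when there are no hats.
    return all(c == "green" for c in hats)

def all_green_pragmatic(hats):
    # Reading with existential import: requires at least one hat.
    return len(hats) > 0 and all(c == "green" for c in hats)

lies_semantic = [w for w in worlds if not all_green(w)]
lies_pragmatic = [w for w in worlds if not all_green_pragmatic(w)]

# Under the pure reading, every world where the statement is a lie
# contains a non-green hat, so the liar owns at least one hat (answer A):
assert all(len(w) >= 1 for w in lies_semantic)
# Under the pragmatic reading, the hatless world () also makes it false:
assert () in lies_pragmatic
```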


Seeing the image come into focus is pretty satisfying, and the half-solved puzzle looks pretty cool. Nice job


Thank you :)


It's hard to take too much issue with the general argument. Seems like there are loads of examples of theories developed out of "pure" intellectual pursuits that found practical applications later on.

But I quibble with the broad inclusion of "jazz" in the list. I don't really like the idea that jazz is this never-ending process of avant-garde musical boundary pushing. There are these cults of personality around artists like Miles Davis and Coltrane, and at some point people decided that "easy-listening", "smooth jazz", and "elevator music" were the nadir of "cool", but those particular cultural trends don't necessarily define jazz as a whole. It's also reasonable to regard jazz as having a matured musical vocabulary that we can construct accessible tunes out of without pushing boundaries all the time. I suspect a lot more people do enjoy "lounge" jazz than would admit it.


I guess this is just a silly little thought experiment, but the final estimate (6.97s 100m) is quite ridiculous on its face.

The heavy lifting seems to be done by the study linked in the "anatomical studies suggest peak speeds up to 15.6-17.9 m/s (35-40 mph) are achievable" line. I'm not sure where those exact numbers were pulled from - I can't find them with a cmd+f. One line in the study uses some nearby numbers:

> If, for simplicity, we assume no change in contact lengths or the minimum aerial times needed to reposition the swing limbs at top speed, the average and greatest individual top speed hopping forces (Favg) of 2.71 and 3.35 Wb would allow top running speeds of 14.0 and 19.3 m/s and of 50 and 69 km/h, respectively

But the study concludes that, even though our leg extensor muscles can produce much higher maximum forces than those generated during sprinting, the "contact length" imposes a constraint on these "hopping forces":

> Because humans have limbs of moderate length and cannot gallop, they lack similar options for prolonging periods of foot-ground force application to attain faster sprinting speeds at existing contact time minimums. Consequently, human running speeds in excess of 50 km/h are likely to be limited to the realms of science fiction and, not inconceivably, gene doping.

So the craziness of the original estimate seems to follow from a misreading of that study.
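To see just how aggressive 6.97 s is, a quick back-of-envelope comparison helps. The only hard datum below is Bolt's 9.58 s world record (his mid-race peak is ~12.3 m/s as commonly reported); the rest is arithmetic:

```python
# Sanity check on the 6.97 s estimate. Bolt's 9.58 s world record is a real
# datum; his ~12.3 m/s mid-race peak is as commonly reported.
distance_m = 100.0
claimed_time_s = 6.97

claimed_avg = distance_m / claimed_time_s  # average speed a 6.97 s run implies
bolt_avg = distance_m / 9.58               # Bolt's average over his record run

print(f"implied average: {claimed_avg:.2f} m/s")  # → implied average: 14.35 m/s
print(f"Bolt's average:  {bolt_avg:.2f} m/s")     # → Bolt's average:  10.44 m/s
```

So a 6.97 s race would require an *average* speed well above any peak sprinting speed ever measured, and right at the study's 14.0 m/s speculative ceiling, the same territory the study itself dismisses as "the realms of science fiction".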



Saying we don’t do it efficiently or effectively would be better than saying we cannot do it.

It’s only useful for going downhill quickly, which we’ve all done, subconsciously or perhaps even mindfully.

From your link:

> Gallopers exerted that effort unevenly, with the front leg doing more work than the back leg. And the galloping stride, researchers saw, demanded more from the hips than running did.

> This tired people out quickly. Out of 12 treadmill gallopers in the study, 4 gave up before the end of their 4-minute session, complaining of fatigue and stress in their hips and thighs.

> (An intended 13th galloper couldn't figure out how to gallop on the treadmill belt in the first place.)

> When researchers calculated their subjects' metabolic rates, they found that galloping was about 24% more costly than running at the same speed. In other words, galloping burns up more energy, takes more effort, and is less comfortable than running.

It's no wonder we don't usually opt for it.


> Because humans have limbs of moderate length and cannot gallop

Yes, we don't because it's not great, but that's not the same as not being able to. (This is very pedantic, sorry; it's just a stronger claim than it needs to be, which bothers me.)

Also, I wonder if that changes if you have very uneven leg lengths?


I mean, this is a bipedal gallop, standing upright. What if someone were to train for running on both their hands and feet, the way a horse, dog, or cheetah does? I’ve seen a video of a young woman doing this, and it looked very uncomfortable/unnatural and it was frankly terrifying to imagine a human running at you in this way.

Mechanically it seems like the advantage would be using more muscles and being able to take advantage of your core and upper body when pushing off in addition to the legs. Landing seems like it would be a challenge, as fingers aren’t really made for that.


In 2048, the fastest human on the planet will be a quadrupedal galloping man

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4928019/


“The winning time was fitted to a rational fraction curve for the quadruped records (r2 = 0.823, adjusted r2 = 0.787, F = 26.9, P < 0.05) and to a linear curve for the biped records (r2 = 0.952, adjusted r2 = 0.949, F = 336.1, P < 0.05; Figure 1).”

Unfortunately, a linear extrapolation implies that at some time, the bipedal 100m will take negative time…
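The absurdity of the linear extrapolation is easy to make concrete. A tiny sketch (the coefficients below are invented for illustration, not the paper's fitted values):

```python
# Illustrative only: the coefficients are made up, not taken from the paper.
# The point is structural: any strictly decreasing linear fit
# t(year) = a - b * year must eventually predict a zero, then negative,
# winning time.
a, b = 132.5, 0.0625          # hypothetical intercept (s) and slope (s/year)

def predicted_time(year):
    """Winning 100 m time predicted by the linear fit."""
    return a - b * year

zero_year = a / b             # the year the fitted time crosses zero
print(zero_year)              # → 2120.0 with these made-up numbers
assert predicted_time(zero_year) == 0.0
```

Any conclusion drawn from where two such extrapolated curves cross inherits the same fragility.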



I saw a video about that which claimed that 4-legged running is in general faster than 2-legged running. The video concluded that it might be possible for humans to "run" faster with 4 limbs rather than 2 limbs, if trained properly. Btw they also mentioned the record in 100m 4-limb running is something over 15s.


I've genuinely wondered whether humans could "run" faster if they employed full body flex and did a sort of springing cartwheel, using all the muscles in their body to propel them forward.

Imagine a slinky, but more elastic and with better roll.


Skipping world championship when?


> This week’s edition is a guest post about spooky Kindle A.I. slop from Leah Beckmann, an L.A.-based screenwriter and journalist and Chief Kindle Bullshit Correspondent for Read Max.

I had to reread this a couple times before understanding that the guest post - not the A.I. slop - was from Leah Beckmann.


>This week’s edition is a guest post from Leah Beckmann about spooky Kindle A.I. slop, an L.A.-


Agreed. First couple paragraphs of the post had me like, wtf am I reading right now? Threw the article into Claude and had it succinctly summarize the key ideas the author was trying to convey. Thankful for the clarity that provided and time it saved me.


It's not just the input. The playback is pretty screwy. Try filling every slot with hi-hats. At least in my browser, the playback is quite stuttery and I can't really entrain to it.

It sure is neat though! I love how the icons for each instrument can combine together, and the dynamic page title is fun.


Yeah, that's my experience too. It's a fun diversion, but given the output is ostensibly precisely quantised, it's irritatingly glitchy.

