Eigenmorality (scottaaronson.com)
439 points by bdr on June 21, 2014 | 78 comments



As Aaronson points out, PageRank has a few edge cases when used to do this analysis, basically because it treats its graph as a closed, internally solipsistic system: it has no definition of morality other than what its nodes prefer of one another. This works if you have a diverse spectrum of preference functions distributed among the nodes (the result tends toward a "live and let live" meta-ethics), but if your analysis is aimed at a preferentially homogeneous group (e.g. Nazi Germany), PageRank won't give you the solution of "move the 'evil' majority toward the tenets of the good minority." It'll instead suggest that the optimal system would have the 'good' minority give up and become 'evil'.
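
To make that failure mode concrete, here's a minimal sketch in Python of the PageRank-style recursion (my own toy example, not code from the article or from Aaronson): a node's "morality" score is the normalized sum of the scores of the nodes it cooperates with, found by power iteration. The 6-node cooperation matrix is hypothetical: nodes 0-4 are a homogeneous majority that cooperates internally and defects against node 5, which in turn refuses to cooperate with them.

  import numpy as np

  cooperates = np.array([
      [1, 1, 1, 1, 1, 0],
      [1, 1, 1, 1, 1, 0],
      [1, 1, 1, 1, 1, 0],
      [1, 1, 1, 1, 1, 0],
      [1, 1, 1, 1, 1, 0],
      [0, 0, 0, 0, 0, 1],  # the lone dissenter cooperates only with itself
  ], dtype=float)

  scores = np.ones(6) / 6
  for _ in range(50):                  # power iteration toward the top eigenvector
      scores = cooperates @ scores
      scores /= scores.sum()

  print(scores.round(3))               # -> roughly [0.2 0.2 0.2 0.2 0.2 0.]

The dissenting node's score is driven to zero purely because it is outnumbered, which is exactly the "give up and become 'evil'" verdict described above.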

Scott Alexander suggests (http://slatestarcodex.com/2014/06/20/ground-morality-in-part...) you could instead use DW-NOMINATE, the tool that does meta-cluster-analysis to mathematically detect "party lines" in Congress (which are basically just clusters in human-utility-function-space anyway), to find which preference sub-functions (e.g. helping old ladies cross the street, returning a wallet you find lying on the ground) correlate together into a cluster (which might be called 'goodness') -- and then grounding/normalizing the PageRank analysis with that, so that you can tell whether the system as a whole is in a 'good' or 'evil' state.
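
And here's a hypothetical sketch of that clustering step (my own illustration with made-up data, not Scott Alexander's method, and far cruder than DW-NOMINATE): rows are people, columns are preference sub-functions, and the leading eigenvector of the behaviour correlation matrix gives one candidate 'goodness' axis that could then be used to ground or normalize the PageRank scores.

  import numpy as np

  rng = np.random.default_rng(0)
  goodness = rng.random(200) < 0.5                       # hidden trait, made up
  helps_old_ladies = goodness ^ (rng.random(200) < 0.1)  # noisy observations of it
  returns_wallets  = goodness ^ (rng.random(200) < 0.1)
  kicks_puppies    = ~goodness ^ (rng.random(200) < 0.1)

  behaviours = np.column_stack(
      [helps_old_ladies, returns_wallets, kicks_puppies]).astype(float)

  corr = np.corrcoef(behaviours, rowvar=False)   # how the behaviours correlate
  eigvals, eigvecs = np.linalg.eigh(corr)
  goodness_axis = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue

  print(corr.round(2))
  print(goodness_axis.round(2))  # helping and wallet-returning load together,
                                 # puppy-kicking loads the opposite way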


Agreed. Once a network becomes too homogeneous, its collective problem-solving abilities go way down. Effective networks can adapt to and benefit from a large number of different, fluid participants. It is a cheap way to get complexity/variety from simple individual nodes, and to be more robust, safeguarded against variance and overfitting.

The author aims to converge upon a group with a good morality (even though its members may have been in the minority). But who is to say that an all-good morality is good for a group's decision making as a whole? Would it be sustainable from an energy viewpoint? Or would it collapse and destroy everything? Won't we need a supply of villains for our moral heroes [1]? Could it be that, to converge on an optimal answer, a network needs wildly opposing views? That to be stable, an ecosystem needs variety, decay and destruction?

More philosophically: Is a program like this, a moral cooperation plan based on a pay-it-forward currency, even moral in itself (as it clearly discriminates)?

Attempts at a supermorality have stumped philosophers and logicians for ages. If a rich benefactor offered each of 1,000 people in a room $1,000, with the rule that asking for more gets you $1 extra but deducts $2 from each of the other 999, people would leave that room with some of them having just enough to buy a cup of coffee: a million dollars wasted by the greedy, individualistic game theory that seems to be in place in animals ("I want energy for me and my family first, forget the network"). Contests that award a dollar amount equal to the lowest unique number sent in would have perfectly rational players roll a die with as many sides as there are contestants, all submit a trillion, and the one person who rolls a 1 submit a billion. Instead, such contests receive bids as low as single cents within a matter of days.

[1] http://www.sciencedirect.com/science/article/pii/S0022519311... "The joker effect: Cooperation driven by destructive agents"


This is very long but worth reading.

The modeling exercise herein basically uses a game-theoretic setup to test some really dumb/simplified models of cooperation, and to see whether the observed behaviors approximate anything our intuitions might call moral behavior, up to and including an 'eigenjesus' and 'eigenmoses' playing against tit_for_tat bots and the like.
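
For a feel of the setup, here's a rough sketch of that kind of tournament (my own simplification, not the article's actual code): a few dumb strategies play iterated Prisoner's Dilemma round-robin, and the fraction of rounds in which each player cooperated with each other player is the sort of matrix an eigenjesus/eigenmoses scheme would then feed into its eigenvector calculation.

  import itertools

  def tit_for_tat(my_hist, their_hist):
      return their_hist[-1] if their_hist else "C"

  def always_defect(my_hist, their_hist):
      return "D"

  def always_cooperate(my_hist, their_hist):
      return "C"

  players = {"tit_for_tat": tit_for_tat,
             "always_defect": always_defect,
             "always_cooperate": always_cooperate}

  ROUNDS = 200
  coop_fraction = {}   # how often the first-named player cooperated with the second

  for (na, a), (nb, b) in itertools.combinations(players.items(), 2):
      ha, hb = [], []
      for _ in range(ROUNDS):
          ma, mb = a(ha, hb), b(hb, ha)
          ha.append(ma); hb.append(mb)
      coop_fraction[(na, nb)] = ha.count("C") / ROUNDS
      coop_fraction[(nb, na)] = hb.count("C") / ROUNDS

  for pair, frac in sorted(coop_fraction.items()):
      print(pair, round(frac, 2))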


It's very long but written in a light, readable style.

What I took away from it was the point made late on that, if we had a PageRank-type system for establishing trust in issues that drive policy (such as climate change), it would show where a group had left the consensus and formed a separate talking-shop that was trying to shout down the greater consensus.

I think this actually applies back to the Iterated Prisoner's Dilemma games. The 'morality' calculation might not be able to perfectly establish what is moral and immoral behaviour, for the reasons given in the article, but it ought to be able to establish where groups are not working to the moral codes of the majority, perhaps because they are discriminating or discriminated against, but also possibly because they are co-conspiring.


Yep. My gut reaction to the phrase "eigenmorality" is to immediately turn around and ask "have you ever seen an eigenface?" (see: https://www.google.com/search?q=eigenface&tbm=isch )

They are certainly a mathematically accurate way to represent a distribution of human faces, but no one would ever confuse one with an actual human face.

And this is one way I view political/policy compromises (which is ultimately where the rubber hits the road regarding morality/ethics). One example is middle-ground positions on immigration reform: http://www.vox.com/2014/6/12/5803912/americans-either-want-u...

There are circumstances where opinions fall into a multimodal distribution, and in those situations, taking a policy position that is the global average will not only piss everyone off, but also won't necessarily fix the policy problem at hand.


Except that eigenvectors play a different role in facial recognition than in PageRank.

In the former, the eigenvectors form an orthogonal basis for representing the set of human faces: faces vary most significantly in this respect, then next most significantly in that respect, and so on.

In the latter, the primary eigenvector tells you which nodes are the most significant. The eigenvector not looking like a good site (or typical face) has little to do with whether it's informative about the goodness of a site.

Also, the point of using morality eigenvectors is to quantify relative morality of positions, not to find a compromise between currently-popular positions, so the bimodality issue is not a problem in this respect.


Oh, yes, you're right. Orthogonality isn't at issue here; the decomposition is just being used for sorting/ranking/re-weighting.

Thanks for pointing out the muddle I was in there.


I found your post interesting, because I think there is always a way for a group to make smarter decisions than any of its individuals. Let's say a middle-ground compromise is chosen on immigration reform: Deport new illegals, offer stricter official ways to become an American. Naturalize people who have worked or studied in the US for a certain time period.

Even if the individuals all agree the collective made a bad decision, that is not to say that the group decision itself was bad: It may have very well been the perfect lesser of all evils. A system can converge to an optimal solution, while individual participants do not realize this. Likewise, a participant may spot a problem in need of a solution, where in reality there is no such problem or manageable solution.

I do agree that, alongside making more optimal decisions, a committee can make poor collective decisions that no individual member would ever make. If the crowd is too frantically opposed, not willing to give in, then that crowd or system itself may be broken and dysfunctional, and will always produce inferior solutions. It has a bigger problem than any single circumstance.


To each his own. I actually think it's overly long, wordy, rambling, poorly conceived, and short on any novel or important ideas.


He is exploring how the Internet might be used to save civilization. What constitutes an "important idea" in your book?


Every tyranny is predicated on saving civilization.

Also, every idea that actually helps civilisation is incubated in a tiny minority (perhaps in just one mind). Since that minority is engaged in creative work, it is almost certainly an out-group. Adopting the morality of the ruling class and building connections with it are the surest way to power. But these are a full-time job.

I think the idea of quantifying morality might be improved by basing it not on cooperation but simply upon communication, e.g. how well do you know the opinions of those you disagree with? Note that this is almost the opposite approach of the path to power.


I don't think "tyranny of the majority" applies here. The proposed system makes minority opinions more visible, if anything. There would even be an incentive to have a minority opinion, if you truly believed the majority was incorrect about something. In response to your last point: that sounds like an interesting modification: letting every bot see every other bot's (possibly evolving) code. But perhaps to avoid Skynet, bots should use the other bots' published APIs (which could opt to include a "getCode" method), and judge each other by their actions.


Exactly. Imagine his anger if someone had written something amateurish about quantum computation or complexity without acknowledging any work done since the beginning of these fields. I seriously hope most of the article is in jest.

"Hey all, I have a new way of measuring how much resources computer programs take..."

If I remember correctly, he banned a professor from commenting on his blog over what seemed to be a routine academic debate.


> If I remember correctly, he banned a professor from commenting on his blog over what seemed to be a routine academic debate.

Not really: for the record, Scott "sentenced" John Sidles repeatedly to a 3-month ban (later "commuted" to two weeks) due to his increasingly derailing and nearly trollish behavior [1], but soon he "dismissed" it for "time served" [2].

[1] http://www.scottaaronson.com/blog/?p=1478#comment-84734

[2] http://blog.computationalcomplexity.org/2013/09/tldr.html?sh...


Perhaps you have studied morality simulations in the past, whereas I have not. It was written simply. (In my opinion the occasional wordiness was a stylistic choice to mimic casual conversation rather than a strictly rigorous academic paper -- fitting for a blog post.) I found it a thought-provoking read, and I have no qualms if his ideas were not "novel". Digesting old ideas and writing about them is not a crime. If anything, it is beneficial to those who did not read the original work (like me). As you said... to each his own.


That is the problem here. For anyone who has studied ethics and morality, this is hair-pullingly bad. It is not easy to communicate these ideas.

An analogy: Would you trust a physicist who does not communicate often, or a creationist who writes popular science essays?


Well, I never knew this field of study existed before, and now I'm interested in it. I'm not going to blindly trust everything in this article, but now I have some starting points on things to look up. The article mostly discussed the interesting, big picture ideas. That aroused my interest far more than if I just started on some technical math article.


You didn't know about the field of Ethics? It's (IMVHO) one of the most important fields in Philosophy, and (again, IMHO) with the most practical applications in real life.

Before you rush off to study it (which I highly recommend, for anyone who has the chance/ability): it's not so much about learning better systems of ethics, or about the many types of ethical systems that have been thought up in the past, but about learning the arguments for why they are/were wrong, how they break down in certain situations, and how to carry those arguments yourself. That's how a lot of fields in Philosophy work, and why it can seem such a dry exercise in "arguing for argument's sake". The point is that by doing this practice, you learn a lot of useful things along the way and sharpen your mind.

I do agree with the other poster that the featured article (while interesting to read), does seem to lack a bit in how it connects with actual philosophy of ethics.


I didn't know about morality simulations.


Are there any resources you would recommend to the uninitiated? I am willing to put in the effort needed to understand even very technical texts. Thank you.


Please tell us what sources you follow that provide you with (a) comparably lucid ideas and (b) not only that, but also in a shorter, clearer form.

I for one want to follow them too if they really exist.


One does not get their sense of novelty, or of the content-to-length relation, solely from single newsfeed-style sources. There are more than a few books, and even more Wikipedia entries, that provide a wealth of understanding for a relative pittance in word count, not to mention all the non-classical forms that do the same. The article was okay, but the standard you implicitly set forth is unfair to the commenter, to the original story, and, truly, to one-off forms of valuable information that don't provide a serialized stream of continued information.

One of my personal favorites: http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspond...


Books and Wikipedia articles are sources too. I didn't say anything about single newsfeed style sources.

I find the linked-to article to be a gem of lucidity amidst a barrage of mostly noise, busywork, and lottery-playing. Not only that, but the topic of the foundations of morality is going to become a central issue soon, with multiple strong global trends taking us towards that difficult issue.

The article you linked to is nice, but Curry-Howard isomorphism is neither novel nor as important as the topic covered by the original article.

As fragile agents inhabiting a ball of mass and fire organized in billions of partly-autonomous somewhat-intelligent resource-sharing systems flying through space and time, just recently coming up with scarily and growingly powerful new ways to rearrange control which may destroy existing controlling systems, including us, we should care very, very much about this.

And the article provides powerful new tools to tackle this.


"foundations of morality is going to become a central issue soon" -- for many people it has been the central issue for three thousand years.

For anyone who has studied philosophy or ethics -- even casually -- this article was terribly naive and hopelessly unaware of its own ignorance. Beyond that, the applied linear algebra was not interesting, no real game or decision theory to speak of and the whole 'eigenmoses' and 'eigenjesus' was too cute by half.

Iterated prisoner's dilemma? Not my idea of a powerful tool.

Was there more? I may have missed it.


"Follow" is not a verb used in the context of books or Wikipedia articles. Any neither characteristically provides ideas in a "shorter, clearer form". Your request was, to any reasonable observer, explicitly a request for a content-driven site or blog (i.e., a "single newsfeed style source"). Denying it when it's so clearly a falsehood only undermines the rest of your assertions, and casts into doubt your commitment to the (accurate) assertion that we should care about methods of disrupting existing controlling systems... perhaps to the point where it would not be unreasonable to conclude that you're explicitly attempting to undermine good-faith attempts to produce new stable systems.

So who are you working for?


"Follow" is used in the context of authors of classical works in academia and has been used prior to any use in the forms you claim are its exclusive domain.


Just myself.

EDIT: The answer to the rest is not generally interesting.


> I actually think its overly long, wordy, rambling, poorly conceived and short any novel or important ideas.

Yes, it's written like the draft of an academic research paper that might or might not make it to peer review. Unfortunately, that description also matches the bulk of published papers.


Please note that what follows can be interpreted as criticism, but it's not intended as such. I found this article quite interesting, and for me, it was the starting point for a lot of different thoughts about game-theoretical approximations of morality. So what follows is a somewhat tangential addition to the article, and not a critique of it.

My problem is not with the "eigenmorality" concept, nor with the various takes on playing it out across consecutive Prisoner's Dilemma sessions. That aspect is extremely interesting. Rather, my problem is with the Prisoner's Dilemma as a valid ground on which to test something like morality.

The Prisoner's Dilemma is a foundational, theoretical framework for evaluating human behavior. And it's a wonderful, elegant framework. But it treats humans as emotionless agents, and the "punishment" as an abstract, theoretical, rationally navigable scenario. Place real human beings into the Prisoner's Dilemma, with real-world consequences, and you get all sorts of unexpected results. The Prisoner's Dilemma is notorious for holding up perfectly fine in vitro, but less so in situ. Cultural conditioning plays a huge role in how real people act in the game. So do emotions, and irrational heuristics like overemphasizing loss aversion. (Tversky and Kahneman's work has a lot to say about the latter.)

Using the Prisoner's Dilemma as a proving ground, I think you'd arrive at an abstract model of morality -- but you wouldn't capture how morality actually plays out with quasi-rational, emotional, circumstantially driven, human agents. And, philosophically speaking, that's where morality actually counts the most.


> But it treats humans as emotionless agents, and the "punishment" as an abstract, theoretical, rationally navigable scenario. Place real human beings into the Prisoner's Dilemma, with real-world consequences, and you get all sorts of unexpected results.

No, that's actually the entire point of the Prisoner's Dilemma. It's not a framework for evaluation; the tension between the rational decision and actual human action is exactly why the Prisoner's Dilemma is a prized example in game theory.


The Prisoner's Dilemma (PD) is a good abstraction. The power of game theory comes from abstraction. If you want to understand the implications, you must understand how this abstraction works.

If you can make humans really play PD and they are rational, nothing unexpected happens. The problem with these human experiments is not that PD can't model decision making; it's in the leaky implementation, where the payoff does not quantify the actual utility for the players.


This is an interesting idea. Aaronson may be joking when he says he's "solv[ed] a 2400-year-old open problem in philosophy," but in case he's not, this doesn't come anywhere close to solving ethics. Philosophically speaking, it's still necessary to show why his definition of "moral" holds up. All he's done is assess a certain quality and then call it "morality." I think it could better be called "meta-cooperativeness" or something like that.

I think Aaronson realizes this, because he does talk about how Eigenjesus and Eigenmoses don't accord with our moral intuitions in some cases. He also addresses this somewhat in the section "Scooped by Plato." His major point--that something like Eigenjesus can be useful, even if it cannot deduce terminal values--still holds.


Yep. I think what he's really trying to do is find a definition of morality that's useful, not necessarily complete. As models go, it was useful enough to change my thinking a bit.


I think the definition of morality in the article is far too simplistic. In my (Christian) view, it's an important aspect of moral maturity to be able to be nice to immoral people without cooperating with their goals. Besides that dichotomy, the article already mentions that the model lacks critical information, specifically, the actors don't know whether the other actors they're [not ]cooperating with are "good" or "bad".

That said, I find this approach to defining morality fascinating. Maybe if the definitions are refined it will manage to tell us something we already know (not entirely sarcastic; that would be legitimately impressive for a mathematical construct regarding morality).


Occasionally "being nice" (cooperating when tit-for-tat suggests defection) dampens avalanche effects caused by defection-happy actors; it prevents the "an eye for an eye makes everyone blind" ending. In that sense, it has value.


Thus forgiveness acts much like error correction in a data stream: by preventing the propagation of defects, it limits damage.


Forgiveness also has great psychological benefits for those who practice it. In some cases people whose lives have been focused for years on some great wrong that was done to them, have only been able to reach their own personal goals after forgiving the offending party.

Of course vengeance can have psychological benefits too.


Tit-for-two-tats is very good in that regard.
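
A small sketch of the echo effect under discussion (my own toy example, nothing from the article): two tit-for-tat players, one of whom defects once by accident, end up retaliating back and forth forever, while tit-for-two-tats (defect only after two consecutive defections) absorbs the slip and cooperation resumes.

  def tit_for_tat(their_hist):
      return their_hist[-1] if their_hist else "C"

  def tit_for_two_tats(their_hist):
      return "D" if their_hist[-2:] == ["D", "D"] else "C"

  def play(strat_a, strat_b, rounds=12, noise_round=3):
      ha, hb = [], []
      for r in range(rounds):
          ma, mb = strat_a(hb), strat_b(ha)
          if r == noise_round:      # player A defects once by accident
              ma = "D"
          ha.append(ma); hb.append(mb)
      return "".join(ha), "".join(hb)

  print(play(tit_for_tat, tit_for_tat))       # ('CCCDCDCDCDCD', 'CCCCDCDCDCDC')
  print(play(tit_for_tat, tit_for_two_tats))  # ('CCCDCCCCCCCC', 'CCCCCCCCCCCC')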


My friends and I had started on this already. I had a hard time explaining to people why it was valuable; looks like Scott has done it for us.

Please help us!

https://github.com/neyer/dewDrop

Right now all we have is a way to state which Facebook users a person trusts. There's a Chrome extension to help with this. It's extremely basic.

I have a server running at https://dewdrop.neyer.me - we need a lot more help!

I'm just putting it on GitHub now - so I'll update the readme in a few minutes.


I'm happy to find other people that share my vision.

Drop me a line: username @ gmail


Cool... After reading that, I realised that Eigenmorality is to social networks what PageRank is to search engines - great to see somebody already working on this!


You would think someone at Facebook is already looking at this question...


This is good stuff, bookmarked!


Considering a majority of people who agree with each other to be "moral" is highly problematic. Even if everyone in the system is morally equal, this system would automatically create and amplify differences between groups.

The author uses the example of climate-change deniers to express the opinion that a minority group has "withdrawn itself from the main conversation and retreated into a different discourse."

Is this true of other minority groups - feminists? Homosexuals? Minority ethnic groups? It seems highly awkward to claim the same thing.

A better system would be one which considers how to cater for individuals rather than declaring a populist majority to be a special, protected ingroup. There's enough of the latter already.


The article does acknowledge this point, and offers a possible solution. In the edit at the end, it suggests a system for distinguishing those who initiate defection from those who respond to it with defection, and in principle that could make defecting against a group that didn't do anything "wrong" a wrong thing.


This seems related to the idea of coherent extrapolated volition (https://intelligence.org/files/CEV.pdf). Both have some of the same problems--in particular, setting up the system requires making moral judgments about how to do so, so it's not actually value-neutral.

(Aside: If I have two completely different thoughts about an article, should I post them in two separate comments or in the same comment?)


(My preference is two separate comments. Better threads, better ordered.)


Wow: "The mathematical verdict of both eigenmoses and eigenjesus is unequivocal: the 98% are almost perfectly good, while the 2% are almost perfectly evil." The author says this diverge violently from most people’s moral intuitions, but actually this result is PRECISELY what moral relativism predicts. See, there are 2 school of thoughts attempting to explain where morality comes from:

- either morality is an absolute concept (things are inherently good or evil, theists might say this good/evil is defined by a god or gods). This is http://en.wikipedia.org/wiki/Moral_absolutism

- or morality is relative, defined by people, defined by cultures (what one culture might consider immoral, another culture will consider it moral, and nobody is inherently right or wrong). This is http://en.wikipedia.org/wiki/Moral_relativism

If moral relativism is right, it would be absolutely expected that the 98% are "almost perfectly good", since they do things that the majority consider good. What a fantastic essay...


That's... kinda not a very good description of the major contemporary schools of thought on metaethics. I don't know if any respected analytic philosophers take moral relativism seriously as philosophy; you can't ground the foundational meaning of the word 'good' as 'different cultures think different things are good', since there's no base case for the recursion. Well-known mainstream positions in metaethics hold that moral language is not meant to express statements which are either true or false, i.e., it is not semantic or truth-apt; but I have no idea what it would mean for 'good' to be defined as 'different cultures think different things are good'. What's the difference between 'good' and 'fzoom', then?

This appears both well-written and standard: http://cdn.preterhuman.net/texts/thought_and_writing/philoso...

I'd refer you to my own writings on the subject but I don't think they've been very productive in practice of understanding, so I'll leave you with a reference to the standard literature, and remark that the correct analysis (using standard nomenclature, which is somewhat misleading) is obviously moral cognitivism::strong cognitivism::moral realism::naturalist reductionism.


I don't know if any respected analytic philosophers take moral relativism seriously (...)

Well, many contemporary metaethicists argue that forms of moral relativism undergird / best justify non-cognitivism [1,2], for one. Also, Gilbert Harman [3] and David Wong [4] have proposed that forms of moral relativism are associated with naturalism (!), and their work overall is an excellent reference I strongly recommend you check out.

[1] http://stanford.library.usyd.edu.au/entries/moral-relativism...

[2] http://stanford.library.usyd.edu.au/entries/moral-cognitivis...

[3] http://en.wikipedia.org/wiki/Gilbert_Harman

[4] http://en.wikipedia.org/wiki/David_Wong_(philosopher)


> I don't know if any respected analytic philosophers take moral relativism seriously as philosophy; you can't ground the foundational meaning of the word 'good' as 'different cultures think different things are good', since there's no base case for the recursion.

Wouldn't the meaning of "good" be "considered to be good by the given culture, group, or individual"?

> Well-known mainstream positions in metaethics hold that moral language is not meant to express statements which are either true or false, i.e., it is not semantic or truth-apt;

Do you have any data on the percentage of philosophers who subscribe to various beliefs? It sounds like you're describing non-cognitivism, which I'm fairly familiar with, although I didn't think it was a widely accepted view.


Take a look at PhilPapers Surveys, maybe: http://philpapers.org/surveys/

the rough gist (from http://philpapers.org/archive/BOUWDP.pdf) seems to be

  14. Meta-ethics: moral realism 56.4%; moral anti-realism
  17. Moral judgment: cognitivism 65.7%; non-cognitivism 17.0%; other 17.3%.

The results are also here: http://philpapers.org/surveys/results.pl (set response grain to "fine" as the note at the top suggests)

correlation results should also be interesting.


There are probably more fundamental classification dichotomies than absolutism vs. relativism. One example is cognitivism, which is the belief that moral statements are propositions and can thus be true or false (absolutism and relativism are both cognitivist theories), vs. non-cognitivism.


Don't take this negatively. Your comment is absolutely muddled up.

Just because there is an absolute morality does not mean everyone has to agree on it. Why? Ask yourself this question. Physics is absolute. Do physicists have 100% agreement?

Nobody takes relativism seriously.


"The deniers and their think-tanks would be exposed to the sun; they’d lose their thin cover of legitimacy."

Don't we have the ability to do this now by visualizing or analyzing citations? A set of "fake" think-tanks which promote bogus ideas should be identifiable as a mostly-disconnected component of a graph today. We don't need to get each think tank's explicit opinions about the others. Aaronson points out this single-purpose inquiry would encourage gaming, but analyzing a graph built for other incentives may give more "honest" results (at least for a while).
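
A toy sketch of that idea (hypothetical data and my own code, not any existing tool): if a cluster of sources only cites itself, it shows up as a separate connected component of the (undirected) citation graph.

  from collections import defaultdict

  citations = [                    # (citing source, cited source), made up
      ("ipcc", "nature"), ("nature", "science"), ("science", "ipcc"),
      ("thinktank_a", "thinktank_b"), ("thinktank_b", "thinktank_a"),
  ]

  graph = defaultdict(set)
  for a, b in citations:
      graph[a].add(b)
      graph[b].add(a)

  def components(graph):
      seen, comps = set(), []
      for start in graph:
          if start in seen:
              continue
          stack, comp = [start], set()
          while stack:             # simple depth-first search
              node = stack.pop()
              if node in comp:
                  continue
              comp.add(node)
              stack.extend(graph[node] - comp)
          seen |= comp
          comps.append(comp)
      return comps

  print(components(graph))
  # -> two components: the mainstream cluster and the self-citing think-tank cluster

Since real think-tank clusters are only "mostly" disconnected, an actual analysis would look for sparsely linked clusters (community detection) rather than literal components.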

And people have done this, at least as far back as five years ago: http://arstechnica.com/science/2009/01/using-pagerank-to-ass... . You can follow links from there to a project called EigenFactor, academic research about shortcomings of PageRank in this application, and more.

Results of such analyses should be used as input to human thought processes and not some sort of legislative robot.


I found the addendum about the time-sequence of bad acts to be the most interesting, in that how you approach the problem leads to another wide spray of outcomes.

Scott mentions the "forget the past" and "address root causes" sides, but how do you deal with things in the middle?

Even being able to provide a model that allows for injustices from centuries ago would be impressive, but how should such things decay? Again, the same pressures come into play, based on the interests of the judged parties.


It's probably no coincidence that the repeated prisoner's dilemma models another phenomenon: willpower.

George Ainslie argues in "Breakdown of Will" that will is actually the result of negotiations between past and future selves.

http://www.picoeconomics.org/HTarticles/Bkdn_Precis/Breakdow...


This Tolkien quote builds a similar circular definition of "worth", which might be amenable to the same kind of analysis. https://twitter.com/JRRTolkien/status/480127254857400320


“All have their worth and each contributes to the worth of the others.”


I now realize I should have copied the text into the comment in the first place.


It's not too late to edit your post :)


It seems to me that this would only be of interest if it can be shown that an immoral person is not someone that cooperates with other immoral people but not with moral people.


Do you mean that "moral" should mean "someone who cooperates with others" and "immoral" should mean "someone who does not cooperate with others"? Then "moral" and "immoral" would just mean "cooperative" and "uncooperative," which we already have words for. Plus, morality shouldn't require you to cooperate with immoral people (although whether you should actively punish them or not is the eigenJesus-eigenMoses question).


Needy babies are moral monsters according to many of these models.


Needy babies are also stupid according to many notions of intelligence, ugly according to many notions of beauty, etc. That doesn't mean there's anything wrong with those notions; it means they aren't designed for assessing babies.


Maybe this is me being pedantic, but I'd disagree that they aren't designed for assessing babies, and say there's a more elegant explanation: a second layer of filtering that informs on the validity of a first layer of assessment. "Stupid" does describe a baby, but the description carries contextually different connotations, and this context is what makes there be "nothing wrong with" those "traditionally bad" assessments.

Not sure where this (my rambling) came from off of the parent article, but it spawned some interesting thoughts at least :)


And cripples and elderly.


His definition is much closer to "popularity" than to anything I would recognize as "morality".

It's strange to exclude intent from your model when it's an important factor in almost all systems of morality.


Good read, but I can't help ending up thinking that by the time all of this gets figured out, our civilization will be long gone.


There is no right or wrong, just acts with inescapable consequences and your freedom to learn something from your choices.


The concepts of right and wrong certainly exist, or we couldn't be talking about them. But not everyone agrees what is right and what is wrong.

It sounds like you are saying that there is no absolute right or wrong, that right and wrong are human inventions prone to variation, not some fixed celestial law. That is exactly the stance which Aaronson took in his essay so I believe you two agree on that point.


You can't "solve" this problem in the same sense that you cannot develop a universally consistent foundation for mathematics. Goedel is there preventing you from EVER proving that one set of axioms is better than another.

I again wrote a longer response but have shortened it, because the author seems to have committed a rather grave error, which is to assume that human moral 'intuition' is in any way consistent. There are heaps of evidence (cue the trolley car) that human moral judgements really should not be considered a guide for anything. The fact that we can capture the disasters of collective morality observed under various regimes during the 20th century ought to tell us that following those models as a universal foundation for human relations is a terrible idea.

Might also be worth paying a visit to eigennicolo and not adhering to such rigid systems.


Well, I would like to read your "longer" response. But I thought I would just back you up on the Godel connection: the key to that theorem is also self-reference.

I would also throw in that financial systems in general suffer from this same problem: we assign value to items that get assigned value. Where is the objectivity? There is none.

It is quite ironic that I found your comment at the bottom of the HN comment queue, and it is also by far the most penetrating, IMNSOO.


I was following Scott's posts for a while. The most notable feature of those posts: everything he says is predictable. The blog is designed to appeal to the liberal academic establishment, which knows the answers to all important questions and is never in doubt. I don't remember a single opinion of Scott's that could be deemed controversial in any sense. "Eigenconformism" would be a better name for his blog.


I don't think the blog is consciously designed to appeal to liberal academics. As an MIT professor, Aaronson IS the liberal academic establishment, so it is no mystery that his writing appeals to his peers.

I don't know about you, but I'm willing to admit Aaronson knows more answers to important questions than I do.


> Aaronson IS the liberal academic establishment, so it is no mystery that his writing appeals to his peers

I'm afraid you got cause and effect in reverse.


Happiness is the only intrinsic value for a human being, and thus a moral person is a person who pursues happiness effectively. (How to do that is another story.) However, Aaronson's proposed definition of a moral person is not the effective way to pursue happiness. Thus, it is immoral.

It's also immoral to call for all of us to sacrifice industrial output for future generations to solve the supposed climate change problem. There is no reason to presume that future generations are more important than the present generation (in fact, it is demonstrably the case that they are not). Thus, this position is profoundly immoral.

However, the implicit assumption that sacrifice is moral is common to most world religions and also to altruism, which is probably where he imported it from. All of them are morally bankrupt. A scientist should be able to be skeptical and see such logical flaws, even if he is not able to propose the correct solution.


Are you trying to make some baroque point that I'm missing, or am I giving you too much credit, i.e. you're actually just ranting incoherently about climate change conspiracy and Randian selfishness-as-virtue?

Or, option 3, are you just trolling?



