Larry Page's view that the last best hope for humanity is some white guy with the hubris to tell the residents of Camden that his not being allowed to sell $60,000 luxury cars is the most pressing issue they face didn't surprise me. The American notion of Manifest Destiny isn't dead. It's not even past.
The mistake that many people of the developed world seem to make is to think that the Pages of the world see us as a class distinct from the billion global poor who live on less than $1 a day. It's a straight line from their suffering being just the business cost of sending rockets to Mars, to our $60,000-a-year lives being the cost of doing it at scale.
Right now, we can be comfortable in sharing Page's and Elop's abstractions. People we know aren't affected. When our time comes, the best we can hope for is to be the Sixth Army on the Volga and get some lip service that our suffering was honorable and our death glorious and will go down in history a thousand years hence.
Look up your street to the left, then down it to the right. If you don't see a Native American, it's probably you.
I do feel for people dying of polio in third-world countries. But it is just silly to think that Gates-style philanthropy of saving those people is going to change the world. It just means a few more people in third-world countries will live who might not have, those countries might perform a little better GDP-wise and take care of their people a little better, and a microscopic few of those people may someday change the world by inventing something or creating some huge company.
Meanwhile, getting everyone in the US to use electric vehicles instead of gas would make a huge difference in the world. A society where transportation costs are much lower will be a very different society. Throw in Google self-driving cars and we may not be mostly jammed into crime-ridden cities any more, for example, since commuting would be a few pennies of electricity spent sitting in a cabin sleeping or working on a laptop instead of driving.
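To put rough numbers on that (a back-of-envelope sketch with assumed figures, not from any source; the true cost lands closer to dimes than pennies for a long commute, but the point stands):

    # Back-of-envelope commute cost; all numbers are illustrative assumptions
    KWH_PER_MILE = 0.3      # typical EV efficiency, assumed
    USD_PER_KWH = 0.12      # rough US residential electricity rate, assumed
    commute_miles = 20

    cost = commute_miles * KWH_PER_MILE * USD_PER_KWH
    print(f"One-way commute: ${cost:.2f}")   # -> One-way commute: $0.72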
Polio eradication is only one of many health initiatives funded by the Gates Foundation. It is an outlier in terms of total lives saved and was selected because the funding could leverage several generations of polio-mortality-reduction initiatives (iron lungs, the Salk vaccine, Rotary's PolioPlus) into complete eradication.
Sure, you probably don't know any of those people. They don't have laptops or cars or spare cabins in the woods. And they certainly don't hang out at libertarian circle jerks.
Gates-style philanthropy is on a course to eradicate polio. That's a potentially huge impact. Sure, the effects of polio are limited to a few poor people right now (mostly because herculean effort was already exerted to get it there), but if we can eradicate it, it means that not only is the current suffering eased, but we will never again have to spend any resources on polio, nor have people suffer from it.
Gates-style philanthropy is like compound interest: it is small in the beginning but has outsized returns over decades. Having a few more percent of the population survive childhood each year does not produce "a little better GDP" results; it changes a country from China 1964 to China 2014.
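To make the compounding concrete (a toy calculation with an assumed rate, not real demographic data):

    # Toy illustration of compounding gains; the 1% rate is an assumption
    rate = 0.01    # a few more percent surviving childhood each year, assumed
    years = 50

    factor = (1 + rate) ** years
    print(f"Cumulative factor after {years} years: {factor:.2f}")   # ~1.64

Even a steady 1% annual gain compounds to roughly 64% over fifty years, which is the China-1964-to-China-2014 point in miniature.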
In truth, there's a lot of injustice in this world, and that's not something that can be legislated or marketed away or even corrected except in some small way by charitable people doing charitable things on their own or in small groups. We shouldn't feel guilty for good fortune, but we can still help lift the lost, weak, innocent, dying.
Corporations, charities, and governments cannot, and will not, do that: they are an expensive middleman; more than that, they are an abstraction layer that insulates us from the pain of others.
Chattel slavery was eradicated solely through government action and radical philanthropy. Political equality for women has likewise advanced by those same means.
There's a cost and sometimes we display a weakness of character for not paying it.
Minié balls, Spencer rifles, trenches, revetments, and mechanized transport by rail and steamship contributed to the 1,000,000 deaths in the US Civil War. Slavery didn't go down without a fight, and that fight was industrial.
For the sake of argument, let's assume OP is correct: the finance industry and corporations in general are indeed blood-sucking vampire squids jamming their funnels into everything that smells of money and are antithetical to human values. (I strongly disagree with it, but others are criticizing that aspect, so I'll move on.)
So what? How does this justify his conclusion:
> But they have chosen a fantasy version of the problem, when human interests are being fucked over by actual existing systems right now. All that brain-power is being wasted on silly hypotheticals, because those are fun to think about, whereas trying to fix industrial capitalism so it doesn’t wreck the human life-support system is hard, frustrating, and almost certainly doomed to failure.
Even if the finance industry / corporations are a parasite, so what? That doesn't wish away AI: AI will still remain a threat (or not a threat) regardless of whether Goldman Sachs is dissolved. Threats are not a zero-sum game: if you show that finance is a parasite, that doesn't suddenly alter the universe to guarantee AI is safe and easy.
Even if we interpret him charitably as making a nuanced cost-effectiveness argument that 'we face many threats, have very limited resources, and corporations are more cost-effective to fight than working on AI', that still seems pretty darn unlikely. Corporations are a major political issue which attract the attention and effort of hundreds of thousands of people and inspire real world action like Occupy Wall Street; why does this fight need a few more people in it? Diminishing returns set in a very long time ago. And fundamentally, financial types have (or have not) been parasites throughout history. Their impact is bounded. They are not anything new under the sun.
Whereas with AI, we don't know whether AI will ever exist; whether the impact will be huge or close to zero; hardly anyone takes it remotely seriously (quick question: have you ever read a mainstream article which did not make either a _Terminator_ or 'Rapture of the Nerds' allusion?); and the sum total of all efforts on thinking about the topic is, what, maybe $7m a year? (Google's safety board + MIRI + FHI + the new existential risk org whose name escapes me). You have to assume some pretty extreme values to make the cost-effectiveness criterion spit out 'nobody should be thinking about AI! Everyone should be swotting up their Marx!'
Conflating socially-harmful corporations with 'hostile AI' does both topics a disservice.
OP here, nonplussed that his obscure blog suddenly is near the top of Hacker News.
First, I don't think that corporations are nothing but parasites, and said as much in the article. I did say that finance was parasitical. That was probably too strong. Allocation of capital is a useful function; the problem is that the allocators are helping themselves to a far larger cut of the proceeds than the value they produce (and I don't particularly want to argue that here, just wanted to clarify what I meant).
Second, the more central point: whether the Singularity Institute types are wasting their time and should refocus their thinking. It is true that not many people are thinking about the effects of superhuman AIs, and there's no harm and possibly some good in a few smart weirdos focusing on that, especially since corporations have been around and critiqued for so long.
However, we are so far away from producing anything remotely close to that level of AI, that I fear their work is ungrounded. It strikes me as fun quasi-SF rather than serious engineering.
But I do believe they are seriously interested in world improvement, so my suggestion to them in their work on "benevolent goal architectures" is to study how existing goal architectures work; that is, to look at existing economic and social systems where disparate individual goals are coordinated into cooperative or conflicting action, and how the results are related to human goals and human flourishing.
For my part, I am in the process of trying to understand their friendly AI work better so I can critique it from a more informed standpoint.
You have not demonstrated that corporations have artificial intelligence. At most we can say they are an artificial life form that thrives and takes over when the environment is right for them. So by changing the contents of the Petri dish (laws, regulations, democratic power, competing grassroots organisms, etc.) we can still control the outcome.
Sure, the corporations will fight back through lobbying, etc., but at the end of the day we are still talking about the human intelligence of the corporate masters versus the collective intelligence of the citizenry. So the strategy of the corporation is not a form of artificial intelligence. Most importantly, the corporate masters fully realize that they live in symbiosis with the rest of society, so they will not pursue paths that threaten human civilization; for example, they will not engage in nuclear war.
Unfriendly AI, on the other hand, is an artificial life form capable of superhuman intelligence that has no necessary symbiosis with humans. It will view humans in the same way we look at apes: an inferior life form from our evolutionary past that can be preserved in reservations, studied, and certainly destroyed if it grows to the point of competing for our resources. As a bonus, they will likely be immune to high doses of radiation and have a fault-tolerant distributed architecture specifically designed to survive nuclear war.
> Allocation of capital is a useful function; the problem is that the allocators are helping themselves to a far larger cut of the proceeds than the value they produce
Yeah, I find this (that financial actors are acting in a hostile manner) more tenable than the impression I took from your blog post, which was that the financial system itself was acting with intentionality.
> look at existing economic and social systems where disparate individual goals are coordinated into cooperative or conflicting action
I hope you can bring that perspective to the table; I've noticed that a lot of the public work is about preserving goals while modifying structure, while actually mediating between conflicting goals/values is a bit handwavey.
And yeah, I hope we're far away from human level AI, but we're talking about our uncertainty on conceptual distance to an undiscovered theoretical framework, here. It's not quite as bad as trying to guess when the Riemann Hypothesis will be resolved, but it has that quality.
I really like your articulation of this. It has struck me for a while now that the people working on friendly AI could have more impact if they invested their energy and intellectual prowess on the real threats to humanity we currently face rather than an amorphous future hypothetical we are ill prepared to understand.
I also like the alternative problem that you posit: what can we do about existing semi-autonomous systems whose functioning is harmful to humanity? However, I don't feel very hopeful about any sort of solutions we might have to these problems. Semi-autonomous systems at the societal level have existed for probably nearly as long as humanity itself. Are there some good examples of such systems that were initially harmful but have been carefully cultivated by humanity to make their effects largely beneficial?
> However, we are so far away from producing anything remotely close to that level of AI, that I fear their work is ungrounded. It strikes me as fun quasi-SF rather than serious engineering.
At what point would you allow research to go forth? And you know, there's a lot of work that can be done which is not 'directly coding an AI'.
> But I do believe they are seriously interested in world improvement, so my suggestion to them in their work on "benevolent goal architectures" is to study how existing goal architectures work; that is, to look at existing economic and social systems where disparate individual goals are coordinated into cooperative or conflicting action, and how the results are related to human goals and human flourishing.
But why would one expect any of that to transfer to AI? What do the internal dynamics of a Board of Directors have to do with, say, the Code Red worm? What lessons can we extract from 501(c)3 nonprofits which will tell us anything about deep-learning-based architectures? Do CEO salaries really inform our understanding of Moore's Law? Or can study of Congressional lobbying seriously help us better understand progress on brain scanning and connectomes? Do Marxian dialectics truly help us improve forecasts for when feasible investments in neuromorphic chips will match human brains?
The closer I look at corporations and modern economies, the more worthless they seem for understanding the possibilities of AI, much less engineering safe or powerful ones. Modern economies are based on large assemblages of human brains, acculturated in very specific ways (remember Adam Smith's other major work: _The Theory of Moral Sentiments_), which are limited in many ways, and which are fragile and nonrobust: consider psychopathy, or consider economics' focus on 'institutions'. Why do some economies explode like South Korea, and others go nowhere at all? Even with millennia of human history and almost identical genomes and brains, outcomes are staggeringly different. (You complain about corporations; well, how 'friendly' is North Korea?)
And this is supposed to be so useful for understanding the issues that we should be focused on your favored political goals instead of directly tackling the issues?
> But why would one expect any of that to transfer to AI? What do the internal dynamics of a Board of Directors have to do with, say, the Code Red worm? What lessons can we extract from 501(c)3 nonprofits which will tell us anything about deep-learning-based architectures? Do CEO salaries really inform our understanding of Moore's Law? Or can study of Congressional lobbying seriously help us better understand progress on brain scanning and connectomes? Do Marxian dialectics truly help us improve forecasts for when feasible investments in neuromorphic chips will match human brains?
Studying human systems is one of the best ways of studying complex systems and systems engineering, which are already crucial for complex engineering projects, like developing a complex AI. Before we can even talk about our future binary overlords being hostile or friendly, we will have to study how basic, but constantly developing, AI integrates and plays off of human social systems. We have to gather quantified data about how two distinct forms of intelligence interact and what, if any, conclusions can be generalized to a future where humans are no longer the species with the highest intelligence.
You have no data about how real AIs would behave in our society except for fiction, which contains no more guidance now than the Bible did for 16th-century astrophysics. We have no consistent models that explain our own intelligence, let alone an artificial one that has yet to exist. You can pontificate about Plato's ideal Terminator, but it won't make a bit of difference until we get our telescope.
> Studying human systems is one of the best ways of studying complex systems and systems engineering, which are already crucial for complex engineering projects, like developing a complex AI.
What does it mean to study a generic 'complex system' and 'systems engineering' and what does this have to do with estimating the potential risks and dangers?
> we will have to study how basic, but constantly developing, AI integrates and plays off of human social systems.
This presumes you already know all about what the AI will be, and puts the cart before the horse.
> We have to gather quantified data about how two distinct forms of intelligence interact and what, if any, conclusions can be generalized to a future where humans are no longer the species with the highest intelligence.
Consider an aborigine making this argument: 'we have observed their firearms and firewater, and know there are many unknowns about these white men in their large canoes; if we look at their capabilities, our best analyses and research and extrapolations certainly suggest they could be a serious threat to us, but we must reserve judgement and quantify data about how our forms of intelligence will interact with theirs'.
> You have no data about how real AIs would behave in our society except for fiction, which contains no more guidance now than the Bible did for 16th-century astrophysics
Really? We know nothing about AI and our best guesses are literally as good as random tribal superstitions?
> We have no consistent models that explain our own intelligence, let alone an artificial one that has yet to exist.
Someone tell the psychologists and the field of AI they have learned nothing at all which could possibly inform our attempts to understand these issues.
I think you are homing in on a key philosophical difference between me and LessWrongsters. Don't really have time to get into it now, except to say that it is kind of arrogant to think you can design or think about superintelligences without reference to the best existing intelligent systems we have. Especially if you want to keep them goal-compatible. The pathologies of such systems are especially instructive.
> it is kind of arrogant to think you can design or think about superintelligences without reference to the best existing intelligent systems we have.
I don't think it's any more arrogant than, say, Dijkstra pointing out that submarines move in a completely different manner than a human swimmer. How arrogant were the Wright brothers in looking at birds and deciding to try to achieve some of the same goals by a completely different mechanism?
If computers thought the same way humans did, if they came with built-in moral sentiments, the product of a very unusual evolutionary history and social structure, if they had access to no novel capabilities, then humans and forms of cooperation developed over the last few eyeblinks (centuries) might be relevant. But then, no one would care about the issue in the first place...
> However, we are so far away from producing anything remotely close to that level of AI, that I fear their work is ungrounded. It strikes me as fun quasi-SF rather than serious engineering.
Why do you think this? The idea behind the Singularity is that even an AI that starts far below human-level, but is capable of recursive self-modification, can bootstrap itself to superintelligence. We don't need to understand the human brain to create such an AI; we need only to:
A. create the simplest of "digital animals", and
B. give it the capability to push itself into self-directed Lamarckian evolution to satisfy its preferences;
C. have this all happen on electronic time-scales (e.g. successive "generations" being created in milliseconds or less),
and we'll likely see a Singularity.
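As a toy illustration of why the electronic time-scale in (C) matters (a minimal mutate-and-select sketch, nothing like a real self-modifying AI; the objective and parameters are made up):

    import random, time

    # Minimal mutate-and-select loop: each "generation" takes microseconds,
    # so a million of them fit into a couple of seconds of wall-clock time.
    def fitness(x):
        return -(x - 42.0) ** 2   # toy objective: climb toward 42

    best = 0.0
    start = time.time()
    for generation in range(1_000_000):
        candidate = best + random.gauss(0, 0.1)   # "mutation"
        if fitness(candidate) > fitness(best):    # "selection"
            best = candidate
    print(best, time.time() - start)   # converges near 42 in seconds

A biological lineage would need geological time for that many generations; that asymmetry is the whole argument.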
> look at existing economic and social systems where disparate individual goals are coordinated into cooperative or conflicting action, and how the results are related to human goals and human flourishing.
"Existing economic and social systems" allow humans to cooperate with humans. The whole point of FAI research is to figure out how to make an AI that is "human"-enough that it would have cause to participate in such systems with us.
A paperclip-maximizer has nothing to gain from participating in a stock market, or an election. People do not want to be turned into paperclips, and it does not want paperclip-resources turned to human ends. Its goals are not disparate, but rather incompatible with our own.
A non-Friendly AI should not be conceptualized by your intuition as a certain kind of human that we can possibly learn to "live-and-let-live." It can, at best, be conceptualized as something like a Baby-Eating Alien[1]. Usually, though, it would seem to us something more akin to a supermassive black hole: an unlimited force that consumes many things we want and outputs nothing useful to us, that we must keep away from, and especially must avoid creating in the laboratory.
The fact that it hasn't happened yet is good evidence that it isn't as easy as all that.
Also, evolution optimizes for survival and reproduction, not intelligence. While it may be possible to design an artificial evolutionary system that optimizes for superintelligence, I don't think doing so is trivial (or it would have been done).
Let's posit that your image of post-singularity non-friendly AIs is accurate. They are an immense and irresistible force, like a black hole. I can imagine something like that, but I have a great deal of difficulty imagining something like that that is also constrained by initial design decisions made by humans before the singularity, which is what the friendly AI people hope to achieve. To be fair I haven't read all their stuff yet and I'm sure they address this point.
There is a good rule of thumb[1]: "It has never happened yet" is an insufficient safety argument when considering scenarios which are so bad that it would be unacceptable for them to happen even once. When talking about existential risks that certainly applies!
[1] I think I read it in Trevor Kletz' book "What Went Wrong?: Case Histories of Process Plant Disasters".
That's the tough nut to crack. Everything is predicated on that capability. It's not just about AI overcoming current challenges; it's also about it overcoming future ones - and evolving fast enough that it's smart enough at the time it meets those challenges.
I see only two possible problems with that line of reasoning that would justify thinking directly about computer intelligence, bypassing institutional intelligence:
1 - The researchers can't program those institutions;
2 - The researchers intend to use AI as a tool for that goal.
>However, we are so far away from producing anything remotely close to that level of AI, that I fear their work is ungrounded. It strikes me as fun quasi-SF rather than serious engineering.
Really, we have no idea. True, nothing around today is remotely close to human intelligence, but the question is: will AI be a slow, gradual process, or will someone make a breakthrough and suddenly make a lot of progress very quickly? In any case, it won't happen at all if people aren't working on it.
The question is: is evolution's 'target' for us to build machines, or to leverage machines to increase its own pace? If pure AI were the target, evolution might have 'targeted' it more directly. Singularity 'theory' forgets about evolution.
I'm honestly not sure if I misunderstood OP's argument or if you did. I understood his argument as such:
1) Our current economic system is already a form of AI. Using the Wikipedia definition, it is a system that perceives its environment and takes actions that maximize its chances of success. Humans in this system are inputs in the environment. They may have individual agency, but cannot direct the system as a whole.
2) The "success" that this AI is maximizing is not currently in line with optimizing human interest.
3) Instead of theorizing about how to make sure a future AI system is in line with human interest, we should look at the current dominant AI and attempt to improve that.
All three of those premises can be argued, and I've come to reasonable arguments on both sides just on my own with a few minutes of thought. But I think those are the premises to argue.
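For what it's worth, the agent definition in (1) is easy to caricature in code (a purely illustrative stub; the function names and actions are mine, not OP's). The corporation analogy just replaces the stub bodies with thousands of employees:

    # Toy "rational agent" loop matching the perceive/act definition in (1)
    def perceive(environment):
        return environment                          # stub: sensors

    def actions(state):
        return ["expand", "cut costs", "lobby"]     # stub: available moves

    def expected_success(state, action):
        return len(action) % 3                      # stub: arbitrary scoring

    def step(environment):
        state = perceive(environment)
        return max(actions(state), key=lambda a: expected_success(state, a))

    print(step("market conditions"))                # picks the top-scoring move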
An even more charitable interpretation might be: We can gain insight on how to manage fully autonomous rule-based systems (AI) by looking at existing semi-autonomous rule-based systems (Corporations, Governments).
Usually in these discussions we use the term "Unfriendly AI" instead of "Hostile AI". That gets at an important distinction: these AIs don't want humanity to die; it's just that they don't want humanity to live, either, and they're presumed to have the ability to steal all our resources for themselves. Humanity still dies, but only incidentally. The author discusses this point a little, but I think it's important enough to put it front and center, in our terminology.
It's interesting to think of corporations as being Unfriendly in this sense. The analogy isn't perfect, though: humans make up a corporation's computing substrate, so they're forced to value humans more than the classic Unfriendly AI would.
Is there a third category of AI which doesn't necessarily care to live or not, but winds up destroying humanity because of its Lennie-like power (possibly even in the service of humanity)?
The point here is that the generation of wealth, which is the conversion of labour and marginal efficiency into a number in a ledger, is not in and of itself aligned with human interests - it turns valuable labour and intelligence into a store of labour-time exchange, which can be used to purchase more labour, rather than any absolute benefit. The problem arises when the store of capital grossly exceeds available useful labour, which it does now in the hands of those who hold it. This causes inflationary forces, the results of which are very visible, and in this way the excess labour-time is simply destroyed, as it becomes devalued.
Money is a highly inefficient way of improving the world.
>Money is a highly inefficient way of improving the world.
True vis-à-vis itself, as money in and of itself does not have an ethical "direction." So I would say you are throwing the bullion out with the bathwater.
Money is a very efficient way of focusing effort - as your example indicates. How it is focused then is up to what we collectively value and how much we want to invest in determining where it will find value.
"Corporations are at least somewhat constrained by the need to actually provide some service that is useful to people. Exxon provides energy, McDonald’s provides food, etc. The exception to this seems to be the financial industry. These institutions consume vast amounts of wealth and intelligence to essentially no human end."
Well, no. Banks provide liquidity, investment opportunity, and loans. Just like McDonald's, there is a cost to this service, and the negatives of this cost may or may not be worth the "positive constraint", but to say there is no positive constraint is utterly ridiculous.
"Of all human institutions, these seem the most parasitical and dangerous."
What of those institutions that wage wars and actually get people killed?
> What of those institutions that wage wars and actually get people killed?
keep in mind that these are the same institutions that at some point saved the rest of the European Jews, the rest of the European gays, the rest of the European gypsies, the rest of the European communists, the mentally or physically disabled, etc.
It's a valid point on its own, but it's not a valid counterargument against the Singularity Institute. Human gatherings are still created from humans, which constrains the space of what they are, what they can do, and what they can think. Arguably they've evolved along with us, and what we have today are still logical extensions of the political structures we've been evolving for the last few million years.
AIs have no such constraints. However alien a financial corporation may be, it might as well be a single human compared to what AIs could be. If the problem is real, it's a different problem, and the problem of large human corporations is only the smallest, smallest taste of the problem.
The jab at the financial industry is unnecessary, but the actual point is very interesting.
I first heard of this as the 'Santa Claus' argument on a forum thread. Short version: just because you can't point at a single person and say 'he is Santa', doesn't mean that Santa doesn't exist. Despite living solely as a meme in people's heads, he has desires (make kids happy on Christmas, make kids behave), and can act on them (put presents under the tree).
Those who supported this position used Microsoft as an example. 'Microsoft', like every company, is at its core an idea or a meme that everyone just accepts exists. Parts of it get written down, of course, and various people hold different parts of it in their head. Nevertheless, the company is its own entity, with its own goals and means of achieving them, yet exists only in people's heads.
I'd say the financial industry is a perfect example. There's a nifty result that maintaining market efficiency is an NP-complete problem, and the standard assumption is that markets are solving this continuously and more or less autonomously, in a way that doesn't depend on any particular actor.
Krugman's model of currency crises, for instance, ends up with "markets" as a kind of Cthulhu-esque omnipotent, amoral, and inscrutable actor.
Long story short: People hear about an event or idea, which did not happen or does not exist, and aspects of that thing become a part of reality, purely as an extension of the collective psychological reaction of those who become aware of the rumor or myth.
But since Santa only exists as a meme in people's heads, Santa can't do anything without the people in whose heads the meme exists doing something.
Similarly, corporations can't do anything without the people that own them and work for them doing something. So viewing corporations as quasi-autonomous agents is misleading; they can be viewed as collective agents in some respects, but in the end everything they do has to come down to some human or humans taking some actions.
Basically, the author of this article is punting: he's saying the problem is corporations being "effectively independent of human control", instead of actually looking at the actual humans whose decisions and actions constitute what the corporations do.
I don't think he's punting. He's saying that actually looking at the actual humans is misguided, because they can and will be replaced or exchanged with no effect on the system as a whole. At the micro level, a department or even a company may change course in a noticeable way with one person being replaced. But industry wide, and society-wide, the "actual people" are mostly irrelevant to the mechanics of the system.
> they can and will be replaced or exchanged with no effect on the system as a whole
Which just punts on the next question: why? Who set up the corporate structure that way? Answer: other humans.
Corporations and other large organizations don't come into existence by magic, and they don't develop structures that make people, even at the CEO level, into replaceable parts, by magic. Human beings have to make choices for those things to happen. Human beings can also make choices to change those structures, if that is what needs to be done. But to do that you have to stop ignoring the fact that it is human agency, not some amorphous "corporate" agency, that is causing the problems in the first place.
But corporations don't behave like human beings; they are not run by moral human motives. You keep your job at a corporation by promoting the corporate interest. That means profits, efficiency, etc. If you fail to make your numbers, you'll be replaced by someone who will.
So corporations are definitely NOT run by humans behaving sensibly, but instead by a strange machinery specific to the corporate ecosystem.
Corporations don't "behave" at all. What you really mean is that human beings behave differently when they work for corporations:
> You keep your job at a corporation by promoting the corporate interest.
That's true, but that by itself doesn't make the behavior immoral. If the corporate interest is best served by making products that people actually want or need, and selling them at a fair price, then keeping your job by promoting the corporate interest is perfectly moral.
If, on the other hand, the corporate interest is best served by something else, then we, as humans, need to revisit the whole issue of corporate governance: what is it that puts corporate interests out of alignment with our interests as human beings and members of society? And you can't just say "get rid of corporations" because corporations are necessary: without them we wouldn't have enough food, wouldn't have houses, wouldn't have cars, wouldn't have computers or the Internet, etc., etc.
None of us can survive purely by our own efforts; we need to be able to specialize and trade, and we need to engage in collective projects that require the coordinated efforts of many people. Corporations are a necessary part of doing that. The fact that the legal powers we give corporations have been abused does not mean all corporations abuse them. And the abuse doesn't happen because corporations magically do things without humans doing things; the abuse happens because some humans use corporations as tools to prey on other humans instead of providing value.
Imagine a solitary actor in a large corporation who is making a choice between x and y. She is presented with a precise set of inputs (emails, voice calls, water-cooler discussions) which lead her to choose y. She is not forced to make the decision, but it comes naturally, quickly. Now imagine the result of that decision y being fed back into the corporate decision-making structure. Thousands of people all "naturally" coming to their respective conclusions y', y'', y''', ... etc. This is corporate culture and history.
Now imagine some external driver of change appears on the horizon. How will the company respond? How will individuals within that company respond? I don't think it is so clear how or why people make the choices that they do.
In terms of right and wrong, the law identifies the act, but justice identifies with larger patterns; it is elusive yet tangible.
> I don't think it is so clear how or why people make the choices that they do.
That's often true; but that doesn't mean people aren't making choices. It just means they're making uninformed, and often bad, choices. And to fix the problem, they need to make better choices. Pretending that it's not human choice at all, but some "corporate agency", that is causing the problems is not the way to fix them.
The author is certainly misguided in imagining that finance produces no value, but the difference is that McDonalds can't grow itself and broaden its influence by eating its own Quarter Pounders, while Goldman Sachs can do that by leveraging its assets through its own financial instruments.
If corporations are so evil, why has their pay fed my family, paid for my house, and created so many amazing products we surround ourselves with? It's ridiculous. Big organizations have problems, but to pronounce them inherently evil ignores all the good they do. Maybe you have a different idea of good in mind, in which case we'll probably never agree on anything. When people organize they can create amazing things, and that's what corporations are: an organization of people (employees, customers, and investors). And they do amazing things. Go to a country that doesn't do so well at creating functioning organizations and see the difference.
The article states that the financial industry* is the "hostile AI", not all corporations (in fact, it clearly states that even the hated "energy" corporations actually provide a product).
Having recently re-read Dan Simmons' Hyperion series (a sci-fi humans-vs.-AI masterpiece), I could almost see the analogy perfectly: finance is key to almost every part of our individual lives, business, and even political machinations. And in each, the big finance outfits are impervious and indifferent, and have their own goals outside of their stated purpose.
* I'd expand slightly to include all of FIRE: finance, insurance, and real estate
> If corporations are so evil, why has their pay fed my family, paid for my house, and created so many amazing products we surround ourselves with?
"If farmers are so evil why have they fed my family, paid for my house and created so many amazing products we surround ourselves with" - said the cow.
I think you and the other replies are missing the point. This is by far the most prosperous time in human history. It's not like we are living in some dystopia caused by corporations. It's certainly not anything compared to unfriendly AI.
> This is by far the most prosperous time in human history. It's not like we are living in some dystopia
Believe me, I don't disagree with that. Maybe I should emphasize that, even though I live in Silicon Valley now, I grew up under a communist dictatorship in Eastern Europe. So I think I know a thing or two, from a practical standpoint, about dystopia: the indoctrination, the forced labor, the rationing of electricity, fuel, and food, the permanent fear of the secret police. That I have not forgotten, for how could I.
All I'm saying is - this golden age could be even better, far better, if it weren't for the constant skimming of the cream that goes on all the time at the top.
If anything, it's the relative prosperity that makes it unlikely that the situation will ever be fixed. People don't rise as one, in anger, unless they are literally hungry and cold, in a physical sense. That, too, I know from first-hand experience. I've lived the incandescent emotions of the rioting mob while being one of them, out on the streets, 25 years ago, back home, in the dead of winter.
But when they are more or less sated, and provided with adequate (if not highbrow) entertainment, people will allow astonishing amounts of corruption, bribery, and downright theft to keep going at the top of the pyramid of money.
Is my belly full now? Yes. Am I more or less free to do most of the things I want? Yes. But do I still see corruption and injustice at the top, to the benefit of Big Money? Yes.
I guess I'm one of those people who are strongly motivated by principle. Injustice remains injustice no matter how pretty the makeup.
This is 'the logic of the larder'. There could never exist as many cows in a state of nature as are supported right now, for similar reasons to why the earth could not support 7 billion human hunter-gatherers. Is the cow suicidal, and would it prefer to have never been? If not, then the cow is right to praise the farmers.
That's horrible logic. We can make far more humans than 7 billion if we are willing to accept a bare minimum standard of living. Should we? Should the new humans praise us for doing this?
That doesn't answer the question. Given that cows could not exist in their current numbers unless they were going to be eaten, why shouldn't they be grateful?
I find it hard to believe that cows care much about the average, and there's good reason that average utilitarianism is not very popular in population ethics.
I think there are other forms of hostile AI too, like the algorithms that determine what advertisements to show in order to pollute your mind with intentionally injected memes.
Naah, corporations aren't independent profit-seeking AIs. They don't bother making money except insofar as their owners drive them to. Even in that state, they're subject to principal/agent conflicts in which people seek to turn the company's revenue into their own overpriced salaries / stock options.
Now, government bureaucracies may be another story, especially once they grow lobbying arms ;)
There is a link to a "book-length pdf" at "http://singularity.org/files/CFAI.pdf", but the link shows up as a 404. I am curious about what the "book-length pdf" discusses, but my best efforts at finding it have failed.
Anybody have an alternate link, or more information about where it might be found?
Fixed in post, thanks (and insert some juvenile snark about how-are-you-gonna-make-a-superintelligence-when-you-can't-even-keep-your-links-from-rotting? here).
Which logical fallacy / form of argument is being raised here? It's a match for political issues (say, Scottish independence): if we have AI / independence, then we can "improve the human race" / "create a fairer society". But then the argument goes: why not just do the "fairer society" thing now, as best you can?
Perhaps it is the "What are you really waiting for?" argument (probably a favourable shift in power, to answer my own question).
Financial institutions provide value. Not sure how you buy a house without a mortgage. Not sure how you come back from your house burning down without insurance.
>All that brain-power is being wasted on silly hypotheticals, because those are fun to think about, whereas trying to fix industrial capitalism so it doesn’t wreck the human life-support system is hard, frustrating, and almost certainly doomed to failure.
This is one of my big problems with LessWrong (and I've been reading it for four years): for all their claims of relevance, a startling unwillingness to question the social structure in a manner that would decrease the privileges of Bay Area techies like themselves. Politics is the mind-killer, don't you know.
The offshoot Effective Altruist movement has the same problem: throw money at the symptoms (and denigrate anyone who doesn't do the same) while noticeably never, ever questioning the system the problems are in the context of.
I'm glad that all of those intelligent idiots are stuck writing HFT algorithms for a middling salary on Wall Street so that the rest of us who are playing a better game can get on with life.
So McDonalds is ok because it is at least somewhat restrained by the need to offer a useful good (food) but the financial industry is parasitical and evil and offers nothing of use? Seriously?
Uhm, whatever his other points may be, I can't take him seriously when he lumps all financial institutions together into one group that offers nothing of use.
Sure, you can argue all day against the usefulness of speculative speed-trading firms (and even then you'd still have some decent arguments against you), but to argue that financial institutions in general offer nothing of use is ludicrous.
Some relevant quotes:
> Corporations are at least somewhat constrained by the need to actually provide some service that is useful to people. Exxon provides energy, McDonald’s provides food, etc. The exception to this seems to be the financial industry. These institutions consume vast amounts of wealth and intelligence to essentially no human end. Of all human institutions, these seem the most parasitical and dangerous.
> it [the financial system] has interests that are distant or hostile to human goals
This is the textbook definition of utilitarianism:
"Utilitarianism is a theory in normative ethics holding that the proper course of action is the one that maximizes utility, usually defined as maximizing happiness and reducing suffering."
He sees financial institutions as a waste, and therefore consuming vast amounts of resources to no useful or positive effect.
Yeah... it's easy to criticize the financial network. So would you mind if we tacked on an extra N% to your mortgage payment, like you'd expect with less efficient financing? :)
McDonald's isn't "ok", but it's limited in how evil it can be because it has to give some benefit to other people. A hostile AI could be even worse, because it doesn't have that constraint.
The financial system is what provides farmers with the necessary capital to be able to grow the food McDonalds needs to be in business. That doesn't necessarily make them okay either, but isn't allowing people to eat, without having to worry about producing their own food, at least of some benefit as well?
Doesn't 'financial institution' cover a pretty wide gamut? There are small institutions that exist in only a small town, to government owned entities, to giant businesses that span the entire world. Are they all categorically bad? What can be done to make them less bad?
Why not? A 2012 report estimated over 19 trillion dollars in lost household wealth, and contribution toward the largest number of people living in poverty ever recorded by the US Census bureau. http://www.pbs.org/wgbh/pages/frontline/business-economy-fin...
Uhm, well, the financial industry isn't parasitical and evil nor does it have interests that are distant or hostile to human goals. It's just another profit oriented industry that offers services which plenty of entities find valuable enough to pay for.
The financial industry is as evil as any profit oriented industry can be and to single it out as something completely wasteful and parasitic is unreasonable.
Well, it's wrong in no small part because the financial industry does, of course, produce something of value. I don't really get how the hard core "finance is fucked up" people imagine that financial firms got started. Do they imagine they were just created by evil aliens or something?
Finance firms provide lots of very sophisticated ways for money to get from people who have lots of money that they don't currently have a use for, into the hands of people who have a need for money they don't currently have.
Note this is a weak claim. I'm not saying anything about the efficiency of this process or whether the financial industry is good or evil or anything. But you can't criticize the finance industry without at least understanding and engaging its outputs.
Computers, per se, are not bad. Computers infected with malware are bad. Infected computers can still do useful things for you - and in the background they'll be screwing you over also.
The argument I would like to make is not that finance is bad intrinsically - as you've said above. The real problem is that the current instantiation of this concept has grown a series of goals of its own, and to fulfill those goals it is doing things that use the common, shared, limited pool of resources of the global economy, siphoning off stuff that the rest of us could put to better use.
In other words, they could perform useful services for us, and they do. It's all the extra layer on top of that that's the problem.
In a strict, mathematical sense, the OP probably exaggerated when saying they are entirely parasitical.