"We need a pony. And the moon on a stick. By next Thursday." (antipope.org)
208 points by cstross on June 4, 2014 | 196 comments



I think in the future people will be perplexed by how sci-fi during this period was absolutely obsessed by the so-called singularity. It's a current fad, accepted by many but not backed up by anything substantial, like a lot of fads from the past. It's a little embarrassing to read some sharp thinkers from the 1970s who fell into cheap counter-culture concepts like astrological signs, sham Indian gurus, or shoddy California-style Buddhism that often has little or nothing to do with the Buddhavacana source materials.

It really surprises me that we think there's a realistic path to the singularity and that we're so obsessed with it. Human-like AI is a dead field, or at least dead as far as we can tell. We're not even sure how to build it because conceptually we don't even have a grasp on consciousness or cognition.

Then there's the false assumption that we can get to this level before doing things like, say, solving all mental health issues, and that it can magically be scaled to super-human speeds. For all we know, cognition happens at a certain speed and any attempt to push past it is troublesome. Look at how easily we suffer from mania and other issues when our natural limiters go haywire.

Singularity prediction is almost a cargo cult. It's weird that so many take it seriously.


Singularity prediction isn't even taken seriously by most formerly-intrigued SF writers these days. (Go google Ramez Naam on the subject. Or me. Or Cory Doctorow.) Best we can probably hope for is better intelligence augmentation/amplification than we've got now.

The reason it's popular is that it plays to some deep eschatological anxieties shared by many people -- and it strikes resonant echoes with Christian millennialism, making it an easy memetic poisoned chalice for anyone who has consciously rejected knee-jerk Christian doctrine but not actually re-evaluated all their composite assumptions and beliefs to drink from.


>Singularity prediction isn't even taken seriously by most formerly-intrigued SF writers these days.

Becoming a science fiction cliche does not reduce the probability of an event occurring.

Edit: retracting the following:

>The reason it's popular is that it plays to some deep eschatological anxieties shared by many people.

This is probably true, but that's not an argument against the feasibility of AI or the extent of its implications. You're saying, "These projections 'rhyme' with those of a low-status group, therefore they can't be true."


> You're saying, "These projections 'rhyme' with those of a low-status group, therefore they can't be true."

No, that's not his argument for why it's not true, it's his argument for why the singularity concept persists when it seems that otherwise non-religious people would reject it.

Some of his arguments against an imminent singularity:

http://www.antipope.org/charlie/blog-static/2011/06/reality-...

http://www.antipope.org/charlie/blog-static/2014/02/the-sing...


You're right. It still feels like an unproductive comparison.


And may increase it.

The entire low-level, ongoing effort to build a flying car (Terrafugia, Moller, etc.) is driven by basically no market and no common sense, but a whole lot of Jetsons.


Well yeah, but welcome to humanity. We're at our best when we do what we dream of, as what's "realistic" and "feasible" is almost always just code for "the same crappy thing that happened yesterday happening again tomorrow because nobody made the effort to change it". Except then someone puts in the effort, fulfills their dreams, and the real world winds up totally different from the expected world in which those people never acted.


At least Terrafugia has a functioning prototype (i.e. it actually flies, rather than a 36-inch tethered hover), as well as orders.

A separate effort which actually makes a bit more sense and has more of a market is the iTec Maverick [1] - more of a flying dune buggy using a parasail as the "wing", but targeting a wholly different market (the Transition is aimed at wealthy pilots in wealthy countries with streets, airports, etc, whereas the Maverick is - in part - aimed at villagers living in remote jungles, far removed from civilization).

--------------------

[1] http://mavericklsa.com/


"Becoming a science fiction cliche does not reduce the probability of an event occurring."

I think I'd like to see some statistics on that.

Edit: No really, I'm serious. Actual data on the commonality of some SF idea and the measured likelihood of it would be very cool.


Kinda tough to find what's applicable from this list: http://tvtropes.org/pmwiki/pmwiki.php/Main/OverusedSciFiSill...

But there are a few clear standouts. 6. The patently obvious design flaws in a vehicle or weapon system go uncorrected during the entire life cycle of the system in question.

The original Humvees had issues with passenger armor back in 2004 - they could have been purchased with the armor but weren't.

13. Computers, when shot, explode as if they had been stuffed full of Roman candles.

The videos of laptop batteries catching fire are pretty sparkly. Not sure how that would actually interact with a gunshot.

25. On-board computers always know exactly how long it will take for the malfunction to blow up the ship.

The Tesla roadster battery stories seem to give warning and force drivers to stop and get out.

31. A robot that can't climb stairs is deployed in an area where stairs are common.

Roomba?

At the very least, we can make an existence argument about sci-fi cliches actually being accurate. I couldn't think of a good way to get historical science fiction cliches. But the thing about the future is that it's unknown. Some of Jules Verne's stuff worked out, like submarines and travel to the moon, and other stuff didn't, like time travel.

I guess the rational response to "the singularity" is to stick a number on the likelihood of it actually happening - and then get on with your life.

I think you're thinking about the future in a funny way; you should give Nassim Nicholas Taleb's The Black Swan a read.


> Singularity prediction isn't even taken seriously by most formerly-intrigued SF writers these days.

Which is probably more a statement about its lack of novelty than anything else.


Meh, I always thought your "Why There Will Be No Singularity" blog post was a pretty accurate prediction. AI will work, but be so piecemeal that the chances of total domination by a superintelligent singleton get quite low. WBE will eventually work (after AI works, most likely), but will trigger lots of social strife and so come into use quite piecemeal itself. Life gets a lot better and a lot weirder but basically just goes on.

Which has the major upside of meaning we'll never have to deal with the Vile Offspring.

By the way, the final component of Rapture of the Nerds didn't seem to have a very good "exam": I can't imagine the Galactic Authority wouldn't have come up with some way to analyze minds - say, a sample of human minds - and thus infer the chance that we can coexist as a species, without having to run trillions of parallel copies in very expensive simulations. Hilarious book other than that, though; it really reminded me how much of a total nerd I am that I understood every single damn word without having to google.


Is this the blog post you mentioned?

http://www.antipope.org/charlie/blog-static/2011/06/reality-...

It is titled "Three arguments against the singularity", from June 22, 2011.


Yeah, that one. And I didn't bother addressing the "Simulation Argument", since it's unfalsifiable nonsense.


Unfalsifiable, okay. But why nonsense? (I haven't read it yet; genuinely interested in your non-hand-waving thoughts.)


"Nonsense" in that there's literally no way to ever tell the difference between a "simulation" being run on a computer in some exterior universe and a very real universe whose laws of physics are fundamentally discrete. "Real" is what you can causally interact with, which means that until that exterior universe decides to deliberately talk to us, they are not actually real with respect to us.


Maybe "pointless" is more correct then. "Nonsense" suggests you have a strong opinion on whether we live in a simulation, which as you said, we can't know.


This bit: "What we're going to see is increasingly solicitous machines defining our environment"

reminded me of Jack Williamson's With Folded Hands scenario (from 1947, http://en.wikipedia.org/wiki/With_Folded_Hands )


> Meh, I always thought your "Why There Will Be No Singularity" blog post was a pretty accurate prediction. AI will work, but be so piecemeal that the chances of total domination by a superintelligent singleton get quite low.

Oh god - domination by the mediocre AI bureaucrat sounds worse than the AI overlord.


In the future, cyborg personality emulations embodying immortal human minds will still behave like arseholes towards each other.


> Best we can probably hope for is better intelligence augmentation/amplification than we've got now.

That seems as if it can get pretty close to the singularity, and answers a lot of questions about what a super intelligent being's motivations would be.


> Best we can probably hope for is better intelligence augmentation/amplification than we've got now.

Sounds like we can get the "Captain America" singularity - everyone gets a genius-level IQ, but nothing further.


And as a side note, Ramez's Nexus and Crux are decent fiction books in which augmentation takes on a serious role...


I assume it was unintentional. Your last paragraph appears rather opaque to my tired mind.


As far as I can tell, the claims of the Singularity are as follows:

Human-level AI is possible.

Human-level AI will likely be practical this century.

Once AI is sufficiently advanced, it will be capable of AI research.

Most terminal goals would lead to intelligence enhancement as a subgoal, as any system with a goal would be better able to pursue it if more intelligent. Therefore, most AIs will devote resources to increasing their intelligence.

Assuming humans are not near the maximum possible intellectual level, super-human AI will likely be developed soon after an AI is created that is as smart as the smartest human.

This means the future will not be controlled by humans, as we will no longer be the smartest things in town.

Unless we can predict the goals of the AI, we should consider the future unpredictable after the point of its creation, as our models of the future rely on human-shaped agents with human goals and capabilities. With these assumptions broken, we have a sort of singularity, a breakdown of our models of the future.

It's quite conjunctive, but this all seems pretty reasonable to me - and it's worth spending resources to study.


That assumes AIs can enhance their intelligence given fixed computational resources. We don't actually know the shape of the curve for optimization power on the X-axis and necessary FLOPs on the Y-axis. If it's anything larger than linear, AIs will not self-improve in a "singularity" type fashion, but in fact hit a point of severely diminished returns relatively quickly. Judging by the computational complexity of machine-learning algorithms, it's actually entirely possible that curve is exponential (meaning that AGI is DEXPTIME-complete), in which case the only question is how far we are towards the diminishing-returns point.
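To make the shape question concrete, here's a toy numerical sketch (the cost curves are entirely made up, so this is purely illustrative, not a model of any real system): under a linear FLOPs-per-unit-of-optimization-power curve a bigger hardware budget buys proportionally more capability, while under an exponential curve the same budget increases buy only a few extra units before the diminishing-returns wall.

    # Toy sketch, not a model of anything real: compare how much "optimization
    # power" a fixed FLOPs budget buys under a hypothetical linear cost curve
    # versus a hypothetical exponential one.
    import math

    def power_linear(flops, c=1e12):
        # cost(p) = c * p  =>  p = flops / c
        return flops / c

    def power_exponential(flops, c=1e12):
        # cost(p) = c * 2**p  =>  p = log2(flops / c)
        return max(0.0, math.log2(flops / c))

    for budget in (1e12, 1e15, 1e18, 1e21):
        print(f"budget {budget:.0e} FLOPs: linear {power_linear(budget):.3g}, "
              f"exponential {power_exponential(budget):.1f}")

    # A million-fold budget increase buys a million-fold capability gain on the
    # linear curve, but only ~20 extra "units" on the exponential one -- the
    # diminishing-returns point described above.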


This objection further assumes that the end-point after consuming all low-hanging fruit is a pre-singularity level AI. If a self-improving AI runs into algorithmic limits only after figuring out how to turn the solar system into computronium, that's still worth calling a "singularity".


Does the singularity concept actually require some sort of quickly growing or even asymptotic curve for intelligence improvement? I don't see why that would be a requirement. Even if AIs are only slightly more intelligent than humans, it seems like many of the effects associated with the singularity could occur. If AIs were more intelligent than humans to the same extent humans are more intelligent than the next most intelligent species on Earth, surely we would consider that to resemble the proposed singularity.


But that assumes running AI on some architecture where FLOPs have any real meaning. If we grow an AI that's externally accessible, is that less of an achievement than simulating the same on a general purpose computer? Still assuming it could achieve singularity that is...


Except that we already have examples of semi-autonomous non-AGI software that successfully acquires additional computational resources via parasitism.

Further, beyond the task of software improvement, the task of improving hardware is itself already heavily software-mediated when performed by humans, and will likely be a prime candidate for applying pre-AGI AI.

There are likely other ways to attack the problem, such as not focusing on hardware improvements per se, but instead on the cycle time for getting the improvements deployed (e.g. somehow eliminating or cutting the size/cost/construction time of new fabs).


This idea of the singularity relies on the assumption that intelligence scales easily. Suppose we have a machine that is more intelligent than the average human, or even more intelligent than the most intelligent human. Will it be more intelligent than a group of intelligent people? How many? If you have an AI running on a supercomputer that is more intelligent than the most intelligent human, you might still need a few billion supercomputers to outmatch the human race.


What's messy about this is figuring out exactly how to define intelligence, and what it really means to be more intelligent than someone. If somebody said that an AI was more intelligent than the most intelligent human, how would you figure out whether that was true? If it was true, what exactly would this AI be able to do that the human couldn't?

How does a difference in the level of intelligence of a particular entity compare to the effective intelligence of having lots of less intelligent entities? Having lots of people means that you can apply intelligence towards a lot of tasks, but perhaps all of them together will never come up with something that a greater intelligence could come up with.

Thinking about it, it seems to me that you have to apply it against an actual difficult task. Take a task like "build a power-producing fusion reactor" that nobody knows how to do yet. Would we be able to figure out how to do that faster if some superior intelligence were applied to it? Or does the huge number of engineering problems associated with building such a thing respond better to a large number of more ordinary intelligences?


This assumes that a single greater-than-human AI doesn't figure out how to get humans onto its side. Given that it will not be a cartoon supervillain, that task will actually be incredibly easy: find ways to give humans what they want in exchange for doing what the AI wants.


Sure, then it will become just another productive person/citizen. We work together because we benefit from it. Working against another intelligent being is significantly harder in the long term.

edit: And the collective is significantly more intelligent than the individual (for some but definitely not all reasonable definitions of intelligent and collective).


>Sure, then it will become just another productive person/citizen.

Only in the sense that Stalin or Pinochet was a productive citizen. Powerful people can mobilize others to their own ends without those ends being at all good or moral for everyone who is not so mobilized.


You can disagree with his methods, but saying Stalin wasn't productive is just plain wrong.

Going from an agrarian society to victors of WW2 in ~20 years is pretty impressive by any measure.


It's only one data point, but observe how much more intelligent humans are than the next most intelligent species on Earth.


We are much more intelligent because we have efficient ways to exchange information. That allows us to use our collective intelligence. Our individual intelligence isn't that much more evolved. ("Intelligence is in our language")

Sorry for reviving an old thread.


And observe how large our brains have to be to achieve that, and how many calories it requires.


I don't understand how this is relevant to the conversation at hand. I had assumed that we were discussing this topic with the assumption that we can make full use of first world resources, so we should have a capable, (mostly) reliable power source, food, materials, etc. readily available.


> Human-level AI is possible.

> Human-level AI will likely be practical this century.

Both are unknown.


>Human-level AI is possible.

Well we exist, so we know human-level intelligence can be implemented in physics. So we certainly know AI is possible.


I agree to that. But the question of whether human-level I is capable of designing human-level AI remains open still.


Well, design is a slippery word. We are capable of producing children that grow up to be intelligent.

In fact, I think it is quite clear that we are capable of designing systems that are more intelligent than an unaided human (or even a group of humans). One of the first such systems was writing. I also tentatively claim that new means of recording and arranging knowledge (which we do today by means of computers) are another step towards superhuman intellect, in the strictest sense of "more than mere human".

I think it is an open question what will come out of our playing with genes directly. I certainly expect we'll be able to produce some change in the human condition, and I hope some of that change will be for the "objectively" better. I have no idea if it will directly impact our intelligence.


Even if we never figure out a formal computational design for human-level AI, I find it hard to believe that we wouldn't be able to achieve the same ends with genetic engineering and computer interfaces to biological computers (i.e. brains).


> I agree to that. But the question of whether human-level I is capable of designing human-level AI remains open still.

"Designing" isn't necessary for creating. (OTOH, whether we can artificially select for it and avoid selecting for dead ends may be an issue.)


You are correct. But it's not 100% certain whether we would be capable of setting up a world simulation that would create intelligent agents of the human level. And it is much less certain (1) whether we are able to set up such a simulation in the next couple of centuries, and (2) whether the simulation would produce these intelligent agents after running for only a couple of centuries (wall-clock time).

Btw, Crystal Nights by Greg Egan is a fascinating short story: http://ttapress.com/553/crystal-nights-by-greg-egan/


Playing devil's advocate a little bit here ;)

We have no means of knowing. Maybe subjective experience is an inherent part of our intelligence, and not just a side effect. And I do not see any way of applying the scientific method to subjective experience.

Nevertheless, I really hope that the future will prove me wrong.


Please correct if I'm wrong, but sounds like you think human intelligence requires something outside of known physics, something "magical", to work?


I'm agnostic about this question. We do not have the necessary tools to answer it right now. One possible way I can see to do that would be to map a human brain neuron by neuron and simulate it. We are not there yet.


> Human-level AI will likely be practical this century.

Given how far away from that goal we are now, after 85 previous years of research, I would not hold my breath for reaching that goal in the next 85 years.


While I'm in the camp of "people who say 'X will happen in the next Y years' are really saying 'X will happen before I die'", I think it is unreasonable to treat the last 50-85 years as a failure for AI.

There are two things holding back AI (which from a singularity-standpoint just means "a machine capable of general problem-solving", and not "a machine possessing the ineffable characteristics of human consciousness"). First, we don't know how. Second, we might not have the computational capacity.

While we still don't know how, the bar is getting lower and lower as time goes on. Increasing computational capacity means that we can start with worse and worse initial versions of a general intelligence. Maybe a really special genius could write a general problem solver which ran on a 386 (even if it lacked the self awareness to feel the pain of running on a 386). Maybe we're too dumb to ever write a general problem solver even if we had an infinitely fast computer with unlimited memory.

But even if programmers have made no progress since the 1960s, we are definitely not in the same position.


"While I'm in the camp of "people who say 'X will happen in the next Y years' are really saying 'X will happen before I die'"..."

That depends on who is saying it. I've noticed that when actual researchers say, "We hope this will provide a treatment for reticularly homicidal brain-worms within 5 years," they mean they have no idea if or when any such practical material benefits will appear. If they say, "We will be able to get effectively unlimited amounts of energy from russet potatoes within 20 years," they mean they don't believe it's ever going to happen.

If the difficulty is exponential, as I rather expect it to be (in my own completely unfounded opinion), then lowering the bar a given amount doesn't help much. And as far as increasing computational capacity goes, we're already deep into the realm where communication costs outweigh actual computational costs, which makes it a whole different ball game going forward.

If you want a further speculation, I suspect that modern statistical methods like those behind th' Goog, self-driving cars, and what-not will not get too much closer to a super intelligence than conventional heuristic search. They're capable of doing a lot of things easily, that only humans could do heretofore, but they're not actually more powerful. So, "a machine capable of general problem-solving" is easily conceivable in the same sense that you are capable of general problem solving, but I can't see a singularity on the other side of that.


Relevant xkcd: http://xkcd.com/678/.


> (even if it lacked the self awareness to feel the pain of running on a 386)

Now we know the real motivation behind pretty much every sci-fi storyline involving AI revolts.


2014 - 2*85 = 1844 -- so I take it your statement means that discovering AI in the next 85 years would be a bigger change/achievement than the changes we see in the world today from the world of 1929, which was founded on work started around 1844?

So, having a look at Wikipedia, we find: "Inspired by Babbage's difference engine plans, Per Georg Scheutz built several difference engines from 1855 onwards, one of which was sold to the British government in 1859." -- that's very convenient. We can cheat a little, and say that in 1929, there'd been about 86 years worth of research into computing -- and so by extension, achieving AI by 2100 would be a greater feat than the NSA's new data center would have seemed from 1929?

Perhaps. Makes for some interesting thought experiments.


How do you even quantify "how far away from that goal we are now"? Do you have a percentage?


You are right that you won't see many researchers publicly talking about human-level AI, or framing their research in terms of the search for general AI. But that doesn't mean the field is "dead".

For example, I work in aging and, to a lesser extent, neuroscience. I have had many private conversations with researchers who will candidly say that the long-term goal of their work is lifespan extension or cognitive enhancement. Yet, for the purposes of grant and paper-writing, you'd never write such a thing; it would sound too speculative.

I'm not in AI, but I strongly suspect the same thing is going on there. There are probably a lot of AI researchers who want to see human-level or higher general AI happen, and are working on it, but simply can't say so in public. We know it's possible, since we have an existence proof in the human brain.


>For example, I work in aging and, to a lesser extent, neuroscience. I have had many private conversations with researchers who will candidly say that the long-term goal of their work is lifespan extension or cognitive enhancement. Yet, for the purposes of grant and paper-writing, you'd never write such a thing; it would sound too speculative.

I think it was Scientific American or another such magazine that ran a story, maybe a month and a half ago, in which they interviewed a whole bunch of real theoretical and computational neuroscientists about mind uploading. A not-that-small portion of these admitted that when they are in private, have had a drink or two, and are ready to admit things they would never say in a grant application, they are definitely trying to achieve whole-brain emulation.

Hell, it's basically the implicit point of the Human Brain Project, or whatever those initiatives are called by the US and EU governments this past year.

> You are right that you won't see many researchers publicly talking about human-level AI, or framing their research in terms of the search for general AI. But that doesn't mean the field is "dead".

Indeed. What happened instead is that actual humanlike cognition has been steadily broken down into subproblems, and each subproblem is being formalized and worked on separately. There are a few papers on optimal utility-maximizing agents for general environments, but those tend to require "until the stars burn out" levels of FLOPs to work anyway.

So actually, AI is going to work, but by the time it does, it won't be called AI. In fact, the researchers who'll create it would look at you funny for calling it AI. They do and will have lots of academic jargon used for specifying exactly what their algorithms do and how they do it, so much so that only when their algorithm kills all humans will anyone actually admit it was artificial intelligence ;-).


  I think it was Scientific American or another such 
  magazine that ran a story, [...] real theoretical and 
  computational neuroscientists [...] admit things they 
  would never say in a grant application, they are 
  definitely trying to achieve whole-brain emulation.
Another possibility is they're actually working on the things their grant applications say, but when pushed and offered drinks by journalists looking for a story, they'll extrapolate their research far enough to reach something cool-sounding.

For example, if I'm researching image correlation techniques, which have applications in machine vision, which has applications in obstacle detection and tracking, which has applications in self driving cars, I could tell a journalist that I'm working on image processing techniques with potential applications in self driving cars.

Is that me admitting things I would never say in a grant application, that I'm trying to achieve self-driving cars? Or am I working on exactly what my grant application says, but I've simplified it and added context because I know Scientific American isn't going to be publishing articles about efficient convolving and fast fourier transforms any time soon?
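For reference, the kind of image-correlation building block being alluded to -- correlation done efficiently via the FFT -- looks roughly like the sketch below. It's the standard textbook trick, not anyone's actual research code; the array sizes and patch location are invented for illustration.

    # Minimal sketch of FFT-based cross-correlation, the standard O(N log N)
    # trick behind a lot of template matching. Illustrative only.
    import numpy as np

    def cross_correlate(image, template):
        # Zero-pad the template to the image size, then use the identity
        # corr = IFFT( FFT(image) * conj(FFT(template)) ).
        padded = np.zeros_like(image, dtype=float)
        padded[:template.shape[0], :template.shape[1]] = template
        spectrum = np.fft.fft2(image) * np.conj(np.fft.fft2(padded))
        return np.real(np.fft.ifft2(spectrum))

    # Usage: find where a patch cut from the image best matches.
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))
    template = image[40:72, 60:92]
    peak = np.unravel_index(np.argmax(cross_correlate(image, template)), image.shape)
    print(peak)  # -> the patch location: row 40, column 60

    # Real systems normalize the correlation and window the inputs, but the
    # core "efficient convolving with FFTs" step is just this.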


Frankly, I think the real answer is "both", but that might just be my optimism about the... let's call it soulfulness of most scientists. To me it seems as though one would be nuts to stay in academic science just for the sake of the boring, tiny, insignificant bullcrap one writes in grant applications to sound impressive; it seems as if a scientist must have some actual dream burning in him that keeps him going.

On the other hand, my real-world observations say that for many scientists, the burning dream heating their blood and getting them up in the morning to deal with all the bullshit is... careerist ego-stroking.

So on further reflection, ugh.


> they are definitely trying to achieve whole-brain emulation

Of course they are trying.

They were also trying in 1956: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

http://en.wikipedia.org/wiki/Dartmouth_Conference

The history of AI research is full of totally unfounded optimism that has failed every time. (I mean the kind of research that promises significant advances towards human level AI. Of course AI research has produced methods that have useful applications.)


I find it strange that you think AI and WBE are the same thing. Putting your claims about machine cognition (which can then be separated into various cognitive faculties) aside, you really think that emulating human cognition on a separate substrate is outright impossible? Then what's the seemingly "magical" property of our meat substrate that forbids outright, or forbids via excessive expense, using any other substrate ever?


> you really think that emulating human cognition on a separate substrate is outright impossible?

Absolutely not.

But I think it's so far in the future that it is totally pointless for us, in 2014, to worry about it.

I view the singularity crowd as a bunch of medieval alchemists who, having seen gunpowder, start worrying about mutually assured destruction of the whole world once someone scales their firecrackers up to a size capable of destroying whole countries.

Yes, by the 1950s we did reach nuclear weapons with end-the-world capability.

But it would have been quite pointless for (al)chemists, or philosophers, in the 1400s, or even in the 1700s, to start worrying about this threat.


Ah, so you think scientific progress proceeds not merely at a constant rate, but at a constantly slowing rate, thus causing the rate of new discoveries to be roughly constant over the centuries, despite our constantly adding more scientists, and publishing more papers, based on increasingly solid foundations.


> so you think scientific progress proceeds not merely at a constant rate, but at a constantly slowing rate

No, I don't think so.

I just think the road ahead looks very long, even given our current rate of acceleration. Plus, this field has a very strong track record of blatant overestimation.


> I find it strange that you think AI and WBE are the same thing.

I apologize for being ambiguous. I don't think they are the same. But I see the same totally unfounded optimism both in the people who expect AGI to happen relatively soon and in the people who expect WBE to happen relatively soon.

Like, in the next 50–100 years.


And that they are frequently the same people.


How long ago would realtime auditory language translation have been considered obviously strong AI?


That's basically where my theory about all this comes from. We keep finding that more and more tasks that "clearly" require a whole human-level mind... can actually be done quite well with Narrow AI/ML techniques and crap-tons of training data. We consistently overestimate the Kolmogorov complexity of human cognitive tasks in order to flatter ourselves, thinking that surely no computer can do XYZ unless we're within six months of a capital-S Singularity.

Gaaah, I'm trying to remember the link to some short story that Someone (we all know who) wrote about a man fretting over his JRPG party members seeming to be conscious. The author's afterword said that he expected seemingly-conscious video game characters to show up "six months before the Singularity". I currently expect seemingly conscious video-game characters - which are actually just very stylish fakes with really good NLP - many decades before anyone manages to produce a self-improving agent.


You could say the same thing about "nanotechnology" - which in practice is materials scientists getting money for chemistry when they know damn well that using the term is implicitly promising magical tiny robots to the gullible.


"...but by the time it does, it won't be called AI."

That is the historical nature of artificial intelligence: it's only AI until you know how to do it reasonably well.

Edit: See https://news.ycombinator.com/item?id=7845803


Point them at SENS Research Foundation, if they haven't already heard of it.


Funny you mention that. I just came back from this year's AGE conference in San Antonio, which is very much a mainstream research conference, and not only was it partially sponsored by SENS, but de Grey himself was there.

So yes, they are very much aware of it. I think many aging researchers have a love/hate relationship with SENS, in that they share its goals in theory, but are afraid its ambitious public claims will draw disrepute (and associated loss of funding) upon the field.

You have to remember that we are mostly funded by the NIH, whose mandate is to treat disease, and aging is not widely considered a disease. So we have to frame our discourse accordingly.


De Grey himself seems to take the approach: "we are working on regenerative medicine that will cure each particular, nameable thing that is wrong with old people, and when you have done all of that, there is nothing left to call aging." Thus the NIH problem is sidestepped.


He has definitely moderated his tone in the past few years, and has implicitly acknowledged that the specific program he outlined in his book may have been an oversimplification.

But still, you have to be a bit more subtle with grant reviewers than that. "Age-associated disease" is a common catchphrase. For best results, the emphasis has to be on the disease rather than on the aging (even if you happen to believe, as I do, that aging is a common root of many diseases).


> Human-like AI is a dead field, or at least dead as far as we can tell. We're not even sure how to build it because conceptually we don't even have a grasp on consciousness or cognition.

The current optimism around A.I. is the result not of real advances in the field, but of the very useful outcomes that arise from doing pattern matching and machine learning on big data sets. When people can type "weather today" into Google and get a little card that shows the weather in Philadelphia today, it's hard not to be impressed - without realizing that these tricks don't bring us any closer to "real A.I."

As a counter-example, take something very simple like search-and-replace. Word processors have had this function for decades, and it hasn't gotten any better. How long will it be before I can tell my word processor to replace all occurrences of {singular proper noun phrase} with {plural proper noun} and shoot it off to my boss without double-checking each replacement?
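To make that concrete, here's a toy sketch of why the dumb version needs double-checking (the sentence and the noun are invented for illustration): a literal replace swaps the word but leaves number agreement all over the sentence broken, and fixing that requires exactly the kind of language understanding the statistical tricks don't give you.

    # Toy illustration: literal search-and-replace (what word processors do
    # today) swaps the noun but not the grammar around it. Example text is
    # made up.
    import re

    text = ("The committee has approved the budget. "
            "A committee like this one meets weekly, and the committee is elected.")

    naive = re.sub(r"\bcommittee\b", "committees", text, flags=re.IGNORECASE)
    print(naive)
    # -> "The committees has approved the budget. A committees like this one
    #     meets weekly, and the committees is elected."
    # "has", "A", "meets" and "is" all need to change too, and knowing that
    # requires tracking number agreement across the whole sentence -- which is
    # the part that still needs a human (or "real A.I.").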


I think it's a little deeper than that. During the '70s and '80s heyday of AI, the focus was on mechanisms that would mimic high-level aspects of human cognition: A* search, Prolog, and so on. The overall approach at the time was top-down and focused on deterministic algorithms.

I think what we have learned in the interim is not just that "machine-learning" style approaches are successful at specific tasks, but also that a statistical, data-driven, "bottom-up" approach is going to need to be an integral part of any eventual general AI solution. Whether we will need to explicitly design the high-level cognitive elements, or whether they will emerge from properly constructed low-level elements, remains an open question.

Another very important advance since the 70s/80s is the widespread realization that every major aspect of cognition is essentially probabilistic. People have realized that a search for "exact" or "optimal" solutions to problems with the complexity seen in the real world is a futile task because of the high dimensionality of these problems.


> Human-like AI is a dead field, or at least dead as far as we can tell. We're not even sure how to build it because conceptually we don't even have a grasp on consciousness or cognition.

> Then there's the false assumption that we can get to this level before doing things like, say, solving all mental health issues, and that it can magically be scaled to super-human speeds. For all we know, cognition happens at a certain speed and any attempt to push past it is troublesome.

On a purely physical level, this is hard to believe. We don't have precise models of consciousness and reasoning, but the general ones (Bayesian reasoning, decision theories, etc.) are speed-agnostic. It's hard to imagine one that isn't.

> Look at how easily we suffer from mania and other issues when our natural limiters go haywire.

Sure, our hardware sucks. Plus, that's all the more reason to worry about the singularity - if a very fast AI can think and plan, but suffers from mania, that's not a very good outcome. UFAI (unfriendly AI) is an area of concern for many.


I'm pretty sure that the whole Singularity bit in this piece is sarcastic. That's kind of the whole theme of the article. I mean, this should tip you off pretty hard: "Well now, here's the thing: automating sarcasm detection is easy. It's so easy they teach it in first year computer science courses; it's an obvious application of AI."


Human-like AI isn't necessary for singularity events, especially the ones that end up killing all of us.

We don't need to understand consciousness or cognition to create optimization processes that end up hurting (or killing) humanity.

It surprises me how many people seem ignorant of this fact.


What makes me so frustrated about this argument is that we already live in a world with thousands of optimization processes that end up hurting or killing humanity. Everything from the flu to tigers. Plus, all those 'processes' are evolving through selective pressure.


Corporations, governments, political systems, and other organizations are also optimization processes. Some of which are certainly harmful and also already hurt many people.

And these are evolving through selective pressures as well. And not generally in the pro-humanity direction, more in the "preserve self at all costs" direction.


>What makes me so frustrated about this argument is that we already live in a world with thousands of optimization processes that end up hurting or killing humanity.

Yes, we've noticed. Those are evil, too.


I think you're misunderstanding the use of the term "singularity". It does not mean "X will happen, after which we won't be in charge", or "...after which we'll be dead".

Instead, it means that something will become so intelligent that it will be different in kind, in somewhat the same way that we are so different from gravel that gravel cannot hope to understand our behavior.

An optimization process that ends up killing all of us is bad, but it's not a singularity.


Singularity talk by fictional-ish writers is just a cheap search and replace job done by people who want to talk about their theories of a monotheistic God Version 2.0 and don't want historical baggage and automatic knee jerk reactions from the version 1.0 crowd.

An even better theistic analogy for the singularity is the return of Jesus. Just wait till he comes and rights all the wrongs and banishes death and knows everything about everyone who ever lived and makes a paradise on earth for the true believers ... wait was I talking about the singularity or Jesus again?

(note that I'm not (intentionally) making fun of either subject, but trying to focus on the crude search and replace aspect of the analogy)


I've long thought that the singularity is the tech version of the rapture.


You may enjoy reading this then: http://craphound.com/rotn/download/


Atheist who can't quite get rid of religious fear? Why reinvent Jesus when you can reinvent Yahweh! http://rationalwiki.org/wiki/Roko%27s_basilisk


Regardless of whether or not you believe in a singularity theory, it's only mentioned in passing in the article and not really relevant to this discussion.


Whatever it is we're grasping toward, it may be "artificial intelligence" only in the sense that a car is an "iron horse".


Trying to make a really good model bird held back the development of flight for centuries.

Certainly, Julius Caesar or Crassus could have afforded a Roman era hot air balloon or hang glider or glider aircraft if this false analogy hadn't stood in the way of development. Performance would have been somewhat lower than modern material craft, but probably not as bad as you think. There were several decades of modest performance aircraft before titanium, synthetic fabrics, and carbon fiber.

AI workers might be more successful in the long run if they focus on making a useful Martian brain rather than cloning humans in silicon.


> AI workers might be more successful in the long run if they focus on making a useful Martian brain rather than cloning humans in silicon.

AI has been a diverse field with many different paths for a long time -- approaches both with direct biological inspiration (often not human; insect-inspired approaches, for example) and without are common. It's not an exclusive "human or not human" modelling choice.


We are currently in the midst of a singularity of tools.

Stone > Bronze > Steel > Silicon > Software >

It's the "geek rapture" version that's more fantastic and extremely unlikely.


This is a very interesting perspective. What do you think is the next step? I am asking for an educated guess, or even interested speculation; people in the middle of the bronze age likely didn't know that steel was the next step until after iron was extracted for the first time.

If I had to guess, I would put the next phase at "integration" or "control." In that once we have functional, reliable libraries and virtual instruments, the logical next step is to integrate the high-level control of such into our physiologies. Alternatively, I could see "space" being an age we expand into, drawing our resources increasingly from space and broadening the scope of our potential that way. I'd like to see these happen in parallel if possible.


I'd say integration and control are N+1 optimization, not 0+1 invention.

Things like new forms of quantum logic/algorithms, artificial atoms, metamaterials, even subvocal tools (moving us from graphical user interfaces towards thought-based user interfaces), are more 0+1 to me.


There is nothing wrong with studying and preparing for emergent complexity in distributed autonomous networks.

Not studying that would be the real cargo-cult behavior.


> It really surprises me that we think there's a realistic path to the singularity and that we're so obsessed with it. Human-like AI is a dead field

Even if the latter is true, it's largely irrelevant to the former. Human-like AI might be one route to a singularity, but it is not required for it; most of the singularity predictions that I've seen are centered more around humans rapidly (and at an ever-accelerating pace) becoming proportionately less of the "processing power" involved in the systems where they are the central intelligence, rather than around human-like AI per se.


I think you're attacking a very narrow strawman by making it sound like a singularity event is something we'd actively plan or work for.

Why do you think "human-like" AI is a requirement for a singularity to occur?

Isn't it much more likely to emerge simply as a side-effect from continued exponential technological progress and the ever increasing network density?

Maybe try to look at it with less anthropocentrism, perhaps consider how we have evolved from shouting to the internet in a mere few thousand years?

Do you think we'll still be using our eyes and hands to interface with the network, in another thousand years from now?


I think future intelligent beings will be perplexed that so few humans were absolutely obsessed with the singularity, and artificial intelligence.

But only those beings who have the capacity of experiencing surprise.

See, I can make up predictions too.


Human-level AI is not such a worthy goal. In a few years, when self-driving cars are more common, we will be thankful that they aren't using human-like AI; otherwise they would get into wrecks for all kinds of preventable reasons. The human mind is highly irrational, though the general public scoffs at that statement. We are very dependent on our chemical state.


In the future, Roko's Femilisk will simulate you from first principles to see if you thought anything nasty about Tumblr social justice. http://newstechnica.com/2014/05/24/future-advanced-cyborg-hu...


But... but... the exponential growth!


You are all fools! THIS IS THE SINGULARITY. WE ARE INSIDE OF IT RIGHT NOW!!!


Almost 40 years ago, some kid jumped up in the front row of an audience and squirted Ronald Reagan with a squirt gun--water only, no injury inflicted. After the Secret Service tackled him, cuffed him, and took him off for psychiatric evaluation, they started to inquire among persons who had previously known him.

One was an acquaintance of mine, who had lived down the hall from the kid in a college dorm. As he recounted it, the conversation went in part like this:

Secret Service guys: Did he seem like the sort of person who would do something like this?

Acquaintance: No, really pretty serious. Now, I might think it was funny to do something like that. [Smiles, chuckles.]

SSg: [Silence, no smiles, no chuckles.]

Let's face it, they aren't paid to have a sense of humor.


Ford survived two assassination attempts. In both cases, the assailants were able to get close, but no bullets were fired. Reagan survived an actual attack - shots fired and one bullet stopped. The assailant had been seen among the waiting guests looking visibly nervous, abrupt, and aggressive. The 'special agent' (the kind with guns who travel with the president) got in the way to block further shots.

I suspect it is a bit edgy being the one just behind the president...


And lest we forget, reliably identifying sarcasm on the internet is pretty hard even for humans who have all the requisite cultural knowledge. Unless the secret service can turn on the twitterer's webcam and observe their facial expression while they type, this really is a no go.

And we all know that could never happen.


I think what will happen is that, for any given target, a weighting will be inferred giving the likelihood of that person being sarcastic or joking.

As more and more data about us is collected, about what we say/type vs. what we actually do, this has the potential to become more accurate over time.

Facial and voice-tone analysis, analysis of our peers' responses to what we say/write, analysis of our actual movements/purchases compared to what we say/write ("I'm so going along to that Nickelback concert at the weekend...!") etc., combined with leading brains in stats and ML lead me to think this problem could be tractable to some extent in the future. Scary :-/

If something like this comes to be, and is deployed 'wholesale', it has the potential to influence our behaviour, and over generations, alter our definition of 'sarcasm' by what we are able to get away with saying. Insidious!

That said, I hope Charlie is right and that humans can game the system significantly enough that it's a non-starter.
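For what it's worth, the per-target weighting idea above can be sketched as nothing fancier than Bayes' rule: a per-person prior (how sarcastic that person's flagged posts have historically turned out to be) combined with whatever message-level signal the detector produces. Every number below is invented purely for illustration.

    # Rough sketch of per-target sarcasm weighting via Bayes' rule.
    # All rates and priors here are made up.
    def p_sarcastic(prior, p_signal_given_sarcastic, p_signal_given_sincere):
        # P(sarcastic | detector fired)
        hit = prior * p_signal_given_sarcastic
        miss = (1.0 - prior) * p_signal_given_sincere
        return hit / (hit + miss)

    # Suppose some lexical detector fires on 70% of sarcastic posts and 20%
    # of sincere ones (invented rates), and the same message comes from two
    # different people.
    for who, prior in (("habitually sarcastic poster", 0.60),
                       ("mostly literal poster", 0.05)):
        print(who, round(p_sarcastic(prior, 0.7, 0.2), 2))
    # -> 0.84 for the first poster, 0.16 for the second: the accumulated
    #    per-person history does most of the work, which is why more data
    #    about each of us makes this more "accurate" -- and more insidious.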


I'm upvoting this because of the extent to which it illustrates my point. Or, conversely, the extent to which it plays along with my point. My head hurts.


Sarcasm doesn't work well on the Internet for the same reasons it doesn't work well in person when conversing with a stranger ... but the stranger has the advantage of reading your body language.

I was brought up in a family that used sarcasm heavily and "turning it down" on the Internet has been a real challenge. Sometimes I still fail (something I believe is obviously sarcastic is misread).


Being European is the worst when it comes to sarcasm. I can't even express my excitement/happiness over something without it coming off as sarcastic.


I come from a family with a dry sort of sarcasm that we deploy often. My sister and I were having a conversation the other day when it slipped into sarcasm and we proceeded to go back and forth till one of us said something serious. Or so the other thought. Eventually neither of us could tell.

My daughter (13) observing this suddenly chimed in: "I think you're caught in a sarcasm trap."

She claims to have invented the term on the spot.

My 13-year-old knows it's sometimes not possible for humans to detect sarcasm, much less a computer program. It's a shame the Secret Service doesn't know that too.

(I would venture to say that more people struggle with sarcasm than not. It often requires shared culture/experience to detect if it's not delivered in an obvious tone.)


The ability to reliably identify sarcasm, with a low false-positive rate but even with a very high false-negative rate, would likely save the Secret Service a significant amount of money and a very significant amount of bad press.

Also, your daughter is much more perceptive than the vast majority of the people I know.


My favorite NLP/Computer Language problem:

  Moe: Hey is Curly back from Vacation yet?

  Larry: I saw a Red Lamborghini in the parking lot.

  Moe: Cool
We all assume Curly now drives a red Lamborghini; meanwhile, most computer language systems would be lost at the inference here. Children learn this trick early; computers are very challenged by these sorts of language problems.


I didn't make that inference, I figured Larry was avoiding the question.

NLP is hard


I actually made the inference, but then rejected the hypothesis. It seemed unrealistic to me that Curly would own a Red Lamborghini.

An AI could form _a number of_ inferences and then use a knowledge base to choose a likely candidate. In fact, I feel like IBM's Watson gave us a little preview of what an AI's "thought process" might be like.


I figured there was some kind of temporal, timey-wimey thing going on, because the Three Stooges pre-date red Lamborghinis.


I bet it'd make more sense with vocal inflection included ;)


I don't think the main challenge is to teach this "trick" to computers. The problem is probably to give them the proper context.

If the system knows Curly drives a red lambo, and that it is a rare car in this area, and that Curly usually only parks in the parking lot when he's around, etc. etc.


> "If the system knows [...]"

He was outlining that the difference is: humans do not need the context.

I do not know who Curly is, and I still got what car he drives just by reading the three-line dialog.


Except that you already know what a red lambo is, and that it's safe to assume it's a rare car almost anywhere, and so on. You know this from experience, as do I and most other people. I think that was the parent's point.


>I still got what car he drives just by reading the three-line dialog.

I inferred that too, but it doesn't mean that it is correct.


Are you by any chance channelling Doug Lenat?


The point is that humans don't need the context to understand the dialogue.


There's still a rule here though, and one that can be learnt: when someone references an object when asked about a person, the inference is the object has something to do with the person. It doesn't directly answer the question, but is a clue that can be used to predict the answer.


Yeah, it doesn't seem all that hard to form an association between the car mentioned and Curly. From that association and the association between the car and the parking lot (space/time locality), it seems reasonable to add some probability to an association between "Curly" and "here/now" - and from there be able to answer some questions about Curly (Does Curly have a car? Which car? Is Curly back from vacation?) -- all from pretty straightforward parsing based on nouns and proximity.

Not saying that parsing natural language is easy, just not sure this is such a terribly hard example (for a system that's prepared to cheat and/or appear stupid/gullible).

Eg, parse both the above and the below correctly with the same parser:

    Ann: Have you seen my dragon?
    Dad: I think he is playing with the bear in your room.
    Ann: Ok.
(Personally I'd be happy if a system thought Ann had a dragon, but CIA analysts might be less than enthusiastic)
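A toy version of the "nouns and proximity" heuristic sketched above might look like the following (no real parsing, an invented stopword list, and it only even attempts the Curly dialogue; it would miss the lowercase "dragon" entirely, which is rather the point about cheating and appearing gullible).

    # Toy sketch of "association by noun proximity": link capitalized tokens
    # in a question to capitalized tokens in the reply. Deliberately crude.
    import re
    from collections import defaultdict

    SKIP = {"Hey", "I", "Ok", "Cool", "Moe", "Larry"}  # invented stopword list

    def tokens(line):
        return [w for w in re.findall(r"[A-Za-z]+", line)
                if w[0].isupper() and w not in SKIP]

    def associate(dialogue):
        links = defaultdict(set)
        for question, reply in zip(dialogue, dialogue[1:]):
            things = tokens(reply)
            if not things:
                continue
            for person in tokens(question):
                links[person].update(things)
        return dict(links)

    print(associate(["Moe: Hey is Curly back from Vacation yet?",
                     "Larry: I saw a Red Lamborghini in the parking lot.",
                     "Moe: Cool"]))
    # -> 'Curly' (and, as junk, 'Vacation') both get linked to
    #    {'Red', 'Lamborghini'}. It "gets" the Curly/Lamborghini link while
    #    also producing nonsense -- gullible, exactly as conceded above.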


According to this [1] BBC News article from last year, a French company called Spotter [2] offers an analytics tool that they claim can identify sarcasm in web and social media comments. Supports multiple languages to 80% accuracy. I've no idea if it works and their site is rather light on detail.

[1] http://www.bbc.co.uk/news/technology-23160583

[2] http://spotter.com/


It would be interesting to know how this algorithm behaved on fundamentalist texts. (see Poe's law, https://en.wikipedia.org/wiki/Poe%27s_law)


This paper appears to claim up to 91% accuracy (precision) [1] by looking at hashtags, with a description of the approach they use (and a link to the software). The approaches probably aren't directly comparable since one depends on hashtags while the other doesn't though.

[1] : https://gate.ac.uk/sale/lrec2014/arcomem/sarcasm.pdf
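The "looking at hashtags" part is essentially distant supervision: tweets tagged #sarcasm supply (noisy) positive labels, the tag is stripped, and an ordinary text classifier is trained. The sketch below shows only that general shape, not the linked paper's actual pipeline; the example tweets are invented and the scikit-learn components are just one convenient choice.

    # Minimal sketch of hashtag "distant supervision" for sarcasm detection.
    # Illustrative only; tweets are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    raw_tweets = [
        "Oh great, another Monday. I just love meetings. #sarcasm",
        "Wow, stuck in traffic for two hours, best day ever #sarcasm",
        "Really enjoyed the concert last night, thanks for the tickets!",
        "The new release fixes the login bug, updating now.",
    ]

    # The hashtag supplies the (noisy) label, then gets removed from the text.
    labels = [1 if "#sarcasm" in t else 0 for t in raw_tweets]
    texts = [t.replace("#sarcasm", "").strip() for t in raw_tweets]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["I just love being stuck in traffic, best day ever"]))
    # With a real corpus of millions of tweets this gets surprisingly far;
    # with four made-up examples it is, of course, only a sketch.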


Yeah, really. It's creating a whole human mind that's hard. Particular, isolated tasks or features about particular things in particular cultures are not actually that hard to learn/classify in themselves, as your data corpus will be more uniform than if you were trying to perform some more general cognition across a wider variety of contexts. More uniformity makes it easier to learn.


This video suggests their early prototypes required some further calibration: https://www.youtube.com/watch?v=mSy5mEcmgwU



HN (and similar tech-oriented forums) often host outraged discussions of government over-reactions.

They also host excited discussions of linguistic technologies like Siri or Watson.

Here we have a government agency attempting to push the envelope of technology in hopes of reducing their over-reactions. Seems like the sort of thing that would be greeted warmly but, nope, apparently not.


Ah, but therein lies the (too often hidden) assumption behind the angst: we don't want law enforcement to scale. It should take a human to destroy a human life. It should take a human to violate another human's privacy. Government, with its monopoly on force, has the capability of doing these things at its whim, but we are fundamentally opposed to treating these actions as "productivity" and the use of computers as "productivity enhancements" in those contexts.

So, while the minor impulse (reduce overreach) is right, the major impulse (automated justice) is still very wrong.


Origin of the phrase: Moon on a stick

"phrase used widely by teenagers and young adults in the mid 90s as it was made popular by the comedy geniuses Stew Lee & Richard Herring. they used it mostly in the second series of their hit tv & radio show 'Fist of Fun'.[1] It means basically to want everything, if you want the moon on a stick then you want everything, including things you can't have."[2]

[1] https://www.youtube.com/watch?v=ERDUbAv8Qz0

[2] http://www.urbandictionary.com/define.php?term=moon%20on%20a...


If you like this sort of thing, you might want to try out Stewart Lee's Comedy Vehicle:

http://www.youtube.com/watch?v=-yUDh_IErT4


On the other hand, handing out huge wads of cash in the hope that the recipient will do the impossible may not be likely to be directly successful, but it often has interesting results. I wonder what they will discover. I just hope that it won't be classified.


Think of the spinoff technologies!

(That was, btw, sarcasm. Normally I don't like it, but it seemed appropriate here.)


I can certainly imagine automated disinformation techniques - or as Neal Stephenson described it in Anathem: "Artificial Inanity".


What book was it that posited an automated system for the retroactive logical justification of predetermined conclusions back to reasonable-sounding premises? That was Adams's _Dirk Gently's Holistic Detective Agency_, right? A program called Anthem?

That would also be a plausible spinoff.


My daughter has been identifying sarcasm since she was 5 or 6 years old. At 8 she's really good at it. While non-trivial for machines, this problem is not really as advanced for people as it may seem.


Kids are amazingly bad at detecting sarcasm, due to the lack of shared context. Instead, they rely on vocal cues to identify it[1]. My sarcasm is very dry, and my 15 year old daughter still has trouble recognizing it.

But, with the NSA databases, any government solution will have that shared context, right? (I can't tell if I'm being sarcastic...)

1. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8624.1990....


Not sure whether or not this is sarcasm... Too bad we don't have that detector yet!


Ambiguity implies guilt!

I'm pretty sure the secret service could be convinced to proceed on that maxim.


Her world is probably extremely small. So sarcastic commentary about adult political matters is highly likely to fly over her head.

This might be an interesting way to detect / work around / operationally neutralize the theorized sarcasm detector, as it should be easy to make a google-verifiable sarcastic comment that's ridiculously out of line for most mouth-breathing humans WRT CS or quantum mechanics or math in general.

What does this mean to the hypothetical algo: "If P=NP then I'm killing Heisenberg's cat". (edited to add: I hope the algo tries to prove P=NP, or crashes when it thinks about it)

Even more fun way to hack with the hypothetical AI, tag something obscure in wikipedia with your site as an external source, then use that obscure reference in a theoretical threat against the prez and log the accesses to your site and research your site accesses accordingly.

Perhaps by careful, fast, automated wikipedia edits, if you can make changes faster than the generic spider can figure out what to look at, you could theoretically SWAT people using the sarcasm detector. So if you posted in 2011, "Obama's going to get Wisconsin in 2012", as in electoral college votes, then by proper automated wikipedia editing you could turn that into a SWATting incident by editing "Wisconsin" to be a military-grade nerve gas before the sarcasm detector accesses it. Who cares if your automated edit gets reverted in 5 minutes, if the men in black are already carrying M16s in the van on the way to the SWATting victim's house by the time it's reverted...


A 5 or 6 year old can also speak one or several natural languages very fluently, ride a bicycle, and do lots of other stuff that computers (and robots) are very bad at.

So that's actually a pretty high bar in those areas where humans have traditionally been much better than computers.


Non-trivial for machines? That's a massive understatement.

(Or were you being sarcastic?)


I think he's Aussie so that's normal.


Suure he was...


We also have clues about the tone it was said in, which is more difficult in text.


Anyone care to guesstimate how many assassination plots could have been prevented if someone had just read the assailant's public twitter feed, in which he or she outlined, in plain English, his/her intention to perform said act?

Somehow I feel like the people that are serious about this level of criminal activity, and capable of actually accomplishing it, aren't first announcing their intentions on twitter.


I'm not sure about that. People who actually want to assassinate the president are not criminal geniuses, or even rational people at all. They're often mentally ill, or at least stupid enough to think that killing the president is a good idea and they can get away with it.

More than one mass murderer has talked publicly about what they're planning online. And two guys that planned to assassinate Obama in 2008 were caught because they bragged to a friend about what they were doing: http://en.wikipedia.org/wiki/Barack_Obama_assassination_plot...

Obviously you won't catch everybody, but I don't think the idea of getting tipped off about a potential assassination plan based on posts online is totally far fetched.


Thought experiment:

As a culture we've been pretty effective lately at electing miserable disappointing failures to public office. This seems a relatively uncontroversial observation, and note I'm careful not to name any names so as to not appear biased toward one side or the other of the one coin we've been dealt.

Precondition 2 is everyone has social media accounts where they say stuff no one cares about, constantly. The CB radio of this decade.

Precondition 3 is anyone planning an assassination is smart enough to self censor and say nothing about the entire topic.

Given the above preconditions, the only rational conclusion is 99.9% of the population will continually be whining and complaining, and you can focus your attentions on the 0.1% who are at least considering doing something bad enough to self censor.

So the thought experiment is Obama comes to visit a city and the guy who gets a SS visit is the only guy in the whole city who's not rambling on in social media about "Kenya" or "Can't believe I wasted my vote on him" or writing racial slurs or "he broke every campaign promise" or whatever.
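(The thought experiment as code, with everyone and everything invented:)

    # The "signal" is the absence of the usual background grumbling.
    chatter = {
        "alice": ["can't believe I wasted my vote on him", "broke every promise"],
        "bob":   ["worst president ever", "thanks a lot, Washington"],
        "carol": [],   # says nothing about the topic at all
    }
    watch_list = [user for user, posts in chatter.items() if not posts]
    print(watch_list)  # ['carol'] -- the only one self-censoring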


Disagree. As a culture we have developed public offices that are structured to prevent anyone who might be elected with a mandate to effect change from doing so. The iron law of bureaucracy applies: after the first generation most of the staff of any organization see their job not as pursuing the mission statement of the org but as preserving their own jobs. Attempts to change things generate resistance from within because, hey, jobs might be threatened. Hell, political candidates who might challenge their party's ability to win future elections by accomplishing change (which might be unwelcome to some elements of the voting -- or election-buying -- public or oligarchs) are weeded out before they get a chance to run for office.

The "disappointing" incumbent is merely a printed-paper face on the front of a machine. Which some loons choose to use for target practice. Resulting in the existence of a vast reactionary bureaucracy dedicated to extirpating threats to printed-paper faces.


Do you by any chance know any elected officials or career civil servants?


  Somehow I feel like the people that are serious about this
  level of criminal activity, and capable of actually 
  accomplishing it, aren't first announcing their intentions 
  on twitter.
If you're in the secret service and you let the president get assassinated, it would be much more embarrassing if the assassin had announced their intentions in public.

Once the assassin has been identified, if they had a tweet from a month ago saying "I am going to assassinate the president in a month" you can bet that will be on the front page of every newspaper, making you look like an idiot.

And if your anti-embarrassment system should prevent a real assassination, so much the better.


Though I completely agree with the intention of your point, the same sort of logic can be used to justify all sorts of dubious "three letter" activity.

By that same logic, are you still an idiot if instead of a tweet, it was a facebook public post? How about a facebook post marked only visible to your friends? Or in an email to your brother's wife's third cousin?

I'm not a big fan of slippery slopes, but this one looks a bit lubricated.


The criterion for it getting on the front page is "anything an investigative journalist could get their hands on with a few days of dedicated searching, assuming they knew the person's name".

So tweets are in and public facebook messages are in, but private facebook messages and e-mails are out.


Funnily enough, I once sponsored a Booth at a convention for presidential assassins.


This article rubbed me the wrong way. The Secret Service is trying to improve their methods in an open, competitive way. Maybe they don't have to perfectly solve sarcasm recognition to reduce their false positive rates.

And let me add that anyone that's stupid enough to try to troll/game this system deserves all the consequences of their actions.

edit: Remembering Bayes: Given a threat on someone the SS is charged w/ protecting, what is the probability it is not actionable?
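To put toy numbers on that Bayes question (every figure below is invented, purely for illustration):

    # P(not actionable | flagged as a threat), with made-up rates.
    base_rate = 1e-5       # assumed fraction of threat-like posts that are actionable
    hit_rate = 0.99        # assumed chance the system flags a real threat
    false_alarm = 0.05     # assumed chance it flags sarcasm/venting as a threat

    p_flagged = hit_rate * base_rate + false_alarm * (1 - base_rate)
    p_actionable = hit_rate * base_rate / p_flagged
    print(1 - p_actionable)   # ~0.9998 -- nearly every flag is noise

Which is exactly why cutting the false positive rate even modestly would matter to them.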


The original SOW, if anyone's interested: https://www.fbo.gov/utils/view?id=fb2ca11b7d9ca8c61e5ee6d8ae...


Thanks for that. It's always great to see more of the original concept than any columnist's hyperfocused version of it.

It seems "Ability to sarcasm and false positives" is just one of the large list of 22 reqs. The list also includes:

• Compatibility with Internet Explorer 8;


> Compatibility with Internet Explorer 8

That was sarcasm.


Winner!


Compatibility with Internet Explorer 8 is an anti-terror requirement. Clearly no terrorist is dumb enough to use it still so it must be safe.


Reminds me of the Twitter Joke Trial incident:

http://en.wikipedia.org/wiki/Twitter_Joke_Trial

Also, self-linkingly, the problem is that people don't tend to broadcast their violent intentions very reliably:

http://zarkonnen.com/terrorists_on_twitter/


For use of language to circumvent government filtering, see http://en.wikipedia.org/wiki/Baidu_10_Mythical_Creatures_%28...

(I suppose that's timely with the Tiananmen Square anniversary)


Poe's Law says this is pretty much impossible. It is tough enough for humans to tell sarcasm apart from sincerity. How are we going to get machines to do it well?

http://en.wikipedia.org/wiki/Poe's_law


I would just like to point out that there are many things that are hard for humans to do but easy for machines (cars drive much faster than humans can run, for example). So that argument alone is not persuasive.

Better would be to argue that not only is it hard for humans to distinguish between sarcasm and genuine speech, we have no idea how it could be done well. It is not a problem dominated by our limited abilities; it is dominated by imperfect knowledge (the inability to know the internal thoughts of another person). The same limitation seems to apply to any automated solution that only has access to the public speech (text, tweet, etc.). Therein lies the intractable difficulty.

However, consider if we had a device that could read minds. Maybe some model is built that can accurately translate brain waves, or the input of subdermal probes into thoughts and intentions. Then it would be much simpler to write a program that could detect sarcasm. It might still need the physical presence of the author, but again there may be a creative solution to that.

Just because it is hard for humans doesn't mean we can't make it easy using our tools. That is one of the prime characteristics of our species.


My point is, for any given computer program, a human, given enough time, can sit down with pencil+paper and figure out the output given an input.

If humans can't yet detect sarcasm accurately, that is evidence that it will be near impossible to do it programmatically.

Edit: I'm not saying to not try. Clarke's first law says we should always try things, even the crazy stuff. (As you can tell, I like my eponymous laws.)


> My point is, for any given computer program, a human, given enough time, can sit down with pencil+paper and figure out the output given an input.

No human is ever given enough time. If you're not done in a hundred years (tops) -- you'll never get done.

It's an interesting thought that a program that has access to a "Total Intelligence Awareness" feed, might actually have more context than any one human has (and can possibly have) -- and by extension be better at detecting sarcasm.


You have missed the point, it seems. The actual amount of time is irrelevant. If a program can get an answer given some input, so can a human (given infinite time). If a human cannot reliably get an accurate answer to something (given infinite time), no machine can. That isn't a guess or a hypothesis; that is just how programming works.


    >> No human is ever given enough time.
    >
    > You have missed the point it seems. The actual amount of time is
    > irrelevant. If a program can get an answer given some input, so
    > can a human (given infinite time).
I'm not entirely sure what you mean by "a human (given infinite time)" -- what does that mean in terms of contrast to a (Turing) machine?

We do absolutely know that humans are fallible, so even if a human has enough time, and enough paper -- there is no guarantee that a human can duplicate the effort of a machine in following an algorithm.

You are assuming both the human and machine are given the same input (perhaps you think of only looking at the tweet in isolation). This might be how a human would intuitively try to determine if a given tweet is sarcastic. The machine might be able to look at and review the author's entire corpus of written statements, the author's movements over the past 20 years, and analyse all TV programs and other writings the author has ever referenced, along with an exhaustive study of all events in the author's life --- in short --- a human might not have "enough time" to "figure it out with pencil and paper" in the same way the machine can.

So, I agree with your initial point: if we cannot define what it means to tell if something is sarcasm or not (do we, for instance, care what the author meant, or only how the author was perceived? Can we be certain that the author knows if he/she is being sarcastic?) --- then how can we program a machine to tell the difference?

On the other hand, if we can agree on some rules for what we might more likely consider to be sarcastic, it isn't entirely impossible that such an algorithm could use so much data to draw its conclusions that it would in effect be impossible for a human to verify -- because a human doesn't have infinite time. Or in other words: it's not inconceivable that we could create a system that ends up being better at telling sarcasm from honest opinion than humans are. But it is likely that such a system would come to its decision in a very different way than a human would.
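A crude sketch of what that extra context might buy, with a made-up helper and made-up numbers:

    # Hypothetical: blend a tweet-only score with a prior from the author's history.
    def sarcasm_score(text_score, author_prior, weight=0.5):
        # text_score: what a text-only classifier says about this tweet (0..1)
        # author_prior: fraction of the author's past posts judged sarcastic
        return weight * text_score + (1 - weight) * author_prior

    # A tweet that looks only mildly sarcastic on its own (0.4) crosses a 0.5
    # threshold once you know the author is sarcastic 80% of the time:
    print(sarcasm_score(0.4, 0.8))  # 0.6

No human reviewer is going to recompute that prior over twenty years of someone's output by hand.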


You are getting hung up on the practical. The rule that "if a program can compute something, then so can a human" is theoretical. It has nothing to do with how long it would actually take a human to do something. Obviously humans make mistakes and won't live long enough to completely walk through even some of the more basic sorting algorithms given a large enough list.

The rule is one method to determine if a program is theoretically possible or not.

For example, writing a program to determine if a given piece of code contains an infinite loop is impossible.

Read up on Computability and Decidability for more examples of things that are impossible to write programs for.
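The classic argument, sketched as code: halts() below is the hypothetical detector we assume, for contradiction, someone has written.

    # Assume, for contradiction, a correct infinite-loop detector exists:
    def halts(program, data):
        ...  # hypothetically returns True iff program(data) eventually stops

    def paradox(program):
        if halts(program, program):   # "it halts", says the detector...
            while True:               # ...so loop forever instead
                pass
        return                        # "it loops", says the detector, so stop at once

    # Does paradox(paradox) halt? If halts() answers True, paradox(paradox)
    # loops forever; if it answers False, it returns immediately. Either
    # answer makes halts() wrong, so no correct halts() can be written.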


Yes, I'm familiar with the theory of computation, I'm just not sure it applies to the ability to determine if a tweet is sarcasm or not. Or, to put it another way, if a human had enough time and the ability to access the amount of data a computer system can, perhaps the human would be better able to determine if a given tweet is sarcasm than the same human without such help.

I do agree (with you and Poe) that sarcasm might be indeterminable in the general case. Now, if a computer system that has superhuman domain knowledge of the author can tell that a tweet is sarcastic, but a human cannot -- is the tweet then, in fact, sarcastic? I'd say that's an open question.


That's not necessarily true. We have already written programs which are able to come to conclusions which are not obvious from a human perspective. This is fairly common with big-data algorithms where a human just can't see the same pattern because we have different limitations in our ability to process mass amounts of data. My point was that it is human nature to build tools which allow us to exceed our own limitations. Sure, from my vantage point I see little reason to be optimistic about an algorithmic solution to sarcasm detection, but "if man were meant to fly he would have been given wings."

Also, the example I gave is a simple solution: mind reading is a trivial solution to sarcasm detection, and it is not inconceivable from our current point of technology.


> well.

Well enough to protect the president, or well enough to perpetuate the taxpayer->government->university financial cycle by making something that kinda sorta works but could get better with uint8_t more years and uint32_t more dollars‽


Shhhhh! The correct answer is "We can have a working prototype ready in 10 years."


This relevant Monty Python skit is worth hoisting out of Stross' comments and posting here:

http://www.youtube.com/watch?v=-fNvi6xG-5Y

from about 1:00


What I want to know is how Charlie knows the subject line of all my client emails ever.


Summary: "No system to monitor or control the population will work because the entire population would obsess over gaming the system." Except no one cares enough to game it, or ever would. Sigh Another programmer who thinks that his buddies who obsess over gaming systems represent more than .001% of the population in question. Next.


> Or they could just ban sarcasm on the internet.

I think what we're all half-expecting is that they'll define the output of the program to be true; thereby a) ensuring the program is 100% correct, and b) in the event of real threats (as identified by the program), having actionable intelligence.

Almost, but not quite, the same as banning sarcasm on the Internet.


We Homo sapiens were intelligently designed by an omnitrollistic god as the ultimate act of sarcasm. It purposely designed an entity guaranteed to destroy itself by soiling its own world and creating ravenous hegemonic super optimizing AIs. Then it planted the fossil record as an elaborate prank.


This wonderful comic comes to mind every time the discussion of sarcasm comes up http://www.qwantz.com/index.php?comic=168


Favourite comment:

"I went to a presentation from a sentiment analysis business last year. They were discussing how they were using it on the firehose feed of everything from twitter. Rather than assassinations, they were more interested in marketing, branding, and the impact on stock prices (there are apparently some iiiiiiinteresting correlations).

The thing that entertained me was they said they had to special-case content coming from the UK — because our levels of sarcasm/irony were skewing their results.

I think it was the most patriotic moment of my life <sniff> ;-)"


This comment thread seems like a good place to test detectors on.


Or... force a public ID (or Facebook account) on every Internet user and arrest/scare the hell out of a few teens being "sarcastic". Send them the bill for the SWAT team, or something like that. That will stop most people from being "sarcastic" on social media. Did you ever try sarcasm during a conversation with the police/TSA? It doesn't end well.


Are you being sarcastic?


Ha ha! No, I was just trying to see the world through their eyes... In their view, wanting these functions isn't the problem. We are the problem.


I think the most salient point was further down in a comment:

> ...the real target is the autonomous exercise of professional judgement.


shamelessly promoting work by a colleague of mine that detects sarcasm in online product reviews:

http://www.cs.huji.ac.il/~oren/papers/sarcasmAmazonICWSM10.p...


In case nobody mentioned The Simpsons yet: "Oh, a sarcasm detector. That's a really useful invention." -- Comic Book Guy

http://www.youtube.com/watch?v=mSy5mEcmgwU


Nice of him to provide some test data.


We'll bring the pony and the stick.


I was going to say, if you buy the right figurine, you can give the client a moon-pony.


Never overlook the power of organizational CYA. Likely the next announcement will be that they've discovered automated sarcasm analysis of twitter feeds is, in fact, as suspected, completely impossible. The announcement after that one will be a policy change to cease analyzing twitter feeds, because it's impossible to triage fast enough to keep up and it's all noise anyway. Then the announcement after something tragic happens will be a steaming pile of CYA about how the world's best minds couldn't find a way to automate the detection of XYZ, but if our budget were only a little larger ... (never let a crisis go to waste...)

This is beginning to read like a Stross book plot, and maybe he's pissed because yet again he's been scooped, with the world getting weirder faster than his plots are getting weird, like what happened with the Halting State series (as I recall). The Laundry series is still safe from this danger, at least as far as we know...



