Cornell Natural Language Visual Reasoning Dataset (cornell.edu)
159 points by indescions_2017 on Nov 14, 2017 | 5 comments



SHRDLU, anyone? http://hci.stanford.edu/winograd/shrdlu/

For almost 50 years, we've been talking to computers about blocks in a box!


Pretty amazing.

It seems that today people write off those demos as showy, shallow, and misleadingly optimistic. But if you think about it, the exact same thing can be said about modern ML demonstrations: a lot of them hint at nearly human-level intelligence, but that's often achieved by carefully restricting the problem domain and showing only the most successful examples.

Considering that old-school AI was operating on laughably weak hardware, with tiny hand-made datasets and no crowd-sourcing options... I wonder: did attempts to generalize some of that research fail because the approach was fundamentally flawed, or because those follow-up efforts simply weren't as good as the initial projects?

I mean, the Wikipedia article talks about a "more realistic level of ambiguity and complexity", but the demonstrated domain is already more complex than 90% of the programming problems I solve on a day-to-day basis.

* * *

I thought Minsky also did something similar, but with physical blocks and a robotic arm, and without the NLP interface. Does anyone know about that? Or am I making it up? I was looking for information about this recently and couldn't find anything.


If you're talking about human-level intelligence, I think you're on to something. Old-fashioned AI was probably closer to the right track than we are now.

> did attempts to generalize some of that research fail because the approach was fundamentally flawed, or was it because such efforts themselves weren't as good as initial projects?

In many cases there haven't _been_ attempts to follow up on that early work; it's more that the zeitgeist shifted to other things. For example, SHRDLU came just a couple of years before the first AI winter, and by the time spring came (~1980) people had largely moved on. (Which isn't to say there weren't also flaws in the approach.)


> I thought Minsky also did something similar, but with physical blocks and a robotic arm

Google says http://museum.mit.edu/150/9


I actually tried to do something similar to SHRDLU in JavaScript a few years ago. I thought it would be fun to see how much I could accomplish over a weekend, and I slowly realized just how hard this type of project (NLP/NLU) is.

https://danielborowski.github.io/site/etaoin/
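
For illustration, here's a minimal hypothetical sketch (TypeScript, not taken from the linked project) of the kind of naive template-matching such a weekend attempt tends to start with; the point is how quickly anything outside the anticipated phrasings falls straight through:

    // Toy blocks-world command interpreter: a hypothetical sketch of the naive
    // pattern-matching approach a weekend SHRDLU clone might start with.
    // It "understands" only the exact phrasings it anticipates, which is
    // roughly where the difficulty of real NLP/NLU becomes obvious.

    type Block = { color: string; on: string | null }; // on: color of supporting block, or null for the table

    const world: Record<string, Block> = {
      red:   { color: "red",   on: null },
      green: { color: "green", on: null },
      blue:  { color: "blue",  on: "red" },
    };

    function interpret(command: string): string {
      const cmd = command.trim().toLowerCase();

      // Template 1: "put the X block on the Y block"
      let m = cmd.match(/^put the (\w+) block on the (\w+) block$/);
      if (m) {
        const [, mover, target] = m;
        if (!world[mover] || !world[target]) {
          return `I don't know about a ${!world[mover] ? mover : target} block.`;
        }
        world[mover].on = target;
        return `OK, the ${mover} block is now on the ${target} block.`;
      }

      // Template 2: "what is on the X block?"
      m = cmd.match(/^what is on the (\w+) block\??$/);
      if (m) {
        const [, target] = m;
        const above = Object.values(world).filter(b => b.on === target).map(b => b.color);
        return above.length ? `The ${above.join(" and ")} block.` : "Nothing.";
      }

      // Pronouns ("put it there"), relative clauses, negation, ambiguity --
      // anything outside the anticipated templates -- falls straight through.
      return "I don't understand that.";
    }

    console.log(interpret("Put the green block on the blue block"));
    console.log(interpret("What is on the blue block?"));
    console.log(interpret("Put it back where it was")); // -> "I don't understand that."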



