
they really should drop the term ‘AI’ and call it what it is - statistics


Honestly, after studying neural networks for the past year-ish, you just said one of the smartest things in this forum.

I'd just like to say it should be complex statistics. But that's just me.


Would you say that AlphaGo is really just statistics?

Even AlphaZero, which, iirc, is trained entirely using self-play, with no starting data from other players?


If I recall correctly their neural network was mostly trained to judge board situations accurately. So yes, I suppose you could consider it statistics.

Of course the situation becomes rather interesting when you start training it against itself, but you're still fundamentally trying to find a good statistic to estimate your chances of winning.
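The "statistic" in question is literally a sample mean. Here's a minimal sketch (a toy stand-in, not AlphaZero's actual training loop; `estimate_win_rate` and `toy_position` are hypothetical names) of estimating a position's win probability from simulated playouts:

```python
import random

def estimate_win_rate(play_game, n_games=10_000, seed=0):
    """Estimate P(win) for a position by simulation: the sample mean of
    win/loss outcomes is exactly the statistic a value network is trained
    to predict."""
    rng = random.Random(seed)
    wins = sum(play_game(rng) for _ in range(n_games))
    return wins / n_games

# Toy stand-in for self-play rollouts: a position that wins ~70% of playouts.
toy_position = lambda rng: rng.random() < 0.7

print(estimate_win_rate(toy_position))  # close to 0.7
```

Self-play just changes where the playouts come from; the quantity being estimated is still this expectation.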


My understanding of “statistics” is that it is about either probability distributions, or gleaning some aggregate information from an existing data set, where that data set is taken to be either a (not necessarily uniform) random sample from some distribution or a description of the entire population.

Perhaps I am not using the right definition of “statistics”?


Given that your definition of statistics contains nested levels of 'either/or', it shouldn't surprise you that it doesn't generalize well.

More generally I'd consider statistics to be the applied version of probability theory. Of course in this case the very thing they were trying to compute also fit the definition of a 'statistic'.

If you consider this to be too broad, then keep in mind that it's simply better when you can apply the concepts and techniques from probability theory to more things.


While AI learners are not actually perfect generalizers, so to speak, they are also quite clearly not purely statistical correlation machines, and there is a lot of evidence for this, such as the surprising similarities of ConvNets to low-level human perception, etc.

It seems to me that there is a category of people who are eager to dismiss deep learning altogether and say "iT's JuSt stAtIStics" even though there is a good amount of evidence to show that it isn't the case. That isn't real science, it's human bias.


> they are also quite clearly not purely statistical correlation machines, and there is a lot of evidence for this, such as the surprising similarities of ConvNets to low-level human perception, etc.

Wrong.

If you actually study signal processing, you will find that CNNs aren't magically rooted in something other than "statistical correlation machines." CNNs work precisely by computing cross-correlation!

"Convolution" in a CNN is a misnomer: it's the same as computing cross-correlation in signal-processing terms, since the kernel is slid over the input without being flipped.
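The distinction is easy to check numerically. A minimal 1-D sketch (hypothetical helper names, 'valid' mode only): the DL-style "convolution" is a sliding dot product with no kernel flip, which is cross-correlation; true convolution flips the kernel first.

```python
def correlate(signal, kernel):
    """1-D 'valid' cross-correlation: slide the kernel WITHOUT flipping it.
    This is what deep-learning frameworks actually compute as 'convolution'."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def convolve(signal, kernel):
    """1-D 'valid' convolution: the same sliding sum, but the kernel is
    flipped first, per the signal-processing definition."""
    return correlate(signal, kernel[::-1])

signal = [1.0, 2.0, 3.0, 4.0]
kernel = [1.0, 0.0, -1.0]  # asymmetric, so flipping actually matters

print(correlate(signal, kernel))       # [-2.0, -2.0]
print(convolve(signal, kernel))        # [2.0, 2.0]
# Convolving with the pre-flipped kernel reproduces cross-correlation:
print(convolve(signal, kernel[::-1]))  # [-2.0, -2.0]
```

For a symmetric kernel the two coincide, which is part of why the misnomer causes so little practical trouble.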


Yeah, I'm aware that it's cross-correlation from the signal processing world, there are a lot of misnomers like that in the DL world, e.g. "deconvolutions", etc.

> Wrong.

The best way to respond to this, somewhat humorously, would be "Wrong".


That wasn't the point. The point was, contrary to your claims, CNNs are indeed all about statistical correlation.


It seems odd that GANs would learn to generate images with precise yet lifelike shape and texture based purely on statistical correlation.


No, it doesn't, because that's exactly what the CNNs used by the GANs are fitting on: cross-correlation.


Sounds to me like you're being overly reductive of statistics and probability.


There are a lot of folks in this thread who are AI "experts", fighting tooth and nail to defend their startup/PhD/career path, even if it means defending complete nonsense.



