
You cannot not reduce it to statistics; the vast majority of your cognitive activity is statistical in nature, in the more general sense of the word. The actual high-level structured thoughts are a very thin layer on top of a huge statistical information-processing powerhouse.



Yes, everything can theoretically be reduced to statistics. Next time I build a web app, I'll first design it from the ground up using a Turing machine ticker tape, since all computation can theoretically be reduced to a Turing machine. What's the point of bothering with JavaScript?
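
(If you want to see how bad that trade is, here's a minimal Turing-machine sketch in Python. The machine, its states, and its transition table are all my own toy example, nothing more: it just flips a run of 1s to 0s. Now imagine writing your form validation this way.)

    # Minimal Turing machine sketch (illustrative only).
    # Transition table: (state, symbol) -> (write, move, next_state)
    RULES = {
        ("flip", "1"): ("0", +1, "flip"),  # flip 1 -> 0, move right
        ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
    }

    def run(tape, state="flip", head=0):
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            write, move, state = RULES[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            head += move
        return "".join(tape)

    print(run("111"))  # -> "000"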


You need to use the right level of abstraction for the problem at hand. For modeling intelligence, abstracting away the probabilistic/statistical aspects of cognition greatly limits your options. Continuing the web-development analogy, abstracting away statistics in cognitive science is like abstracting away JavaScript/HTML and using WYSIWYG GUI tools, with the same effect: it's only applicable to toy problems and doesn't scale to real-world scenarios :)


There's a huge middle ground between reducing cognition to statistics (as you seem to be proposing) and "abstracting away" the statistics. You seem to be accusing any approach that isn't "statistics all the way down" of ignoring the statistical nature of the mind, when in fact there are many approaches to cognition that make heavy use of advances in statistical theory without reducing cognition to statistics.

What you're proposing is an overly reductionist ontology, IMO, and yet I agree that statistics are important, if not critical. (Hint: not the boring linear kind of statistics that you so often see in Big Data)


No, I'm not arguing for a statistics-only scheme, just that you can't ignore it or abstract it away, and that its role in cognition is crucial. I think we agree with each other.


Well, you said "you cannot not reduce it [AI] to statistics". And I'm saying, not only is it possible to not reduce AI to statistics, it is desirable. It's the reduce part that I have a hard time with.


Possibly poor wording; my intent was to say that you can't dismiss or underestimate statistics in modeling the mind, because the damn thing is a sophisticated statistical information retrieval system (with some higher-level structure on top for doing things like debating mind modeling on HN).
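
(For a bare-bones illustration of what I mean by statistical retrieval, here is a toy sketch. The corpus and code are mine, purely illustrative, not a claim about how the brain implements it. It ranks documents by word-frequency overlap:)

    # Toy statistical retrieval: rank documents by word-count similarity.
    # (Sketch only; real systems weight terms, e.g. with tf-idf.)
    from collections import Counter
    import math

    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    docs = ["the mind as statistical machine",
            "turing machines and tapes",
            "statistics of cognition and the mind"]
    query = vec("statistical mind")
    print(max(docs, key=lambda d: cosine(query, vec(d))))
    # -> "the mind as statistical machine"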


What makes the right level of abstraction right? It seems like the units of abstraction have to be amenable to application (analogy), but rightness is fluid. If you have a Turing machine, it might make more sense to write a JS compiler first because of familiarity and the perceived difficulty of translating your thoughts directly, but that assessment is a statistical process in your own head. If you were a fresh mind, you would never create a JS compiler. The aggregated feedback of our social consciousness over time is what sets the 'rightness' of any approach, by modulating the difficulty and expressiveness of each concept for each individual.


What makes it right is purely pragmatic: a function of our abilities and limitations, scientific and computational resources, time constraints, etc. We could in theory run quantum simulations of everything, given the resources, but we don't have them.


who cares?

No one is arguing that they've found a domain that statistics can't be applied to or gathered from.

What is implied is that statistics should not be chosen as the preferred main engine of consciousness, because as far as we can tell consciousness is more than just a gestalt vote among the cells of the host it's housed in.


The issue being discussed is the choice of the right level of abstraction and the right tools for modeling a given real-world phenomenon. My comment says that statistics is essential in modeling this particular phenomenon, based on my experience and intuition and that of many experts in the field. Responding with "who cares?" doesn't make sense to me. What part of my statement do you disagree with?


I don't disagree with any of it, and I think perhaps my nonchalant "who cares?" somehow conveyed to you that I think something you stated was false. I don't.

But why waste time on this?

> You cannot not reduce it to statistics; the vast majority of your cognitive activity is statistical in nature, in the more general sense of the word.

Most here believe statistics are globally applicable, even in domains dealing with the odd, the esoteric, or the random. With that said...

- Hofstadter tries non-strictly-statistical methods

- "but you can't be non-statistical!"

- Hofstadter gets lumped into statistical AI

- "but Hofstadter is a non-conformist!"

- a separate subgroup is created which Hofstadter can exist within, with his experiments

- "but you can't be non-statistical!"

... and on and on. Is this the kind of recursion GEB was talking about?

A lot of us already take it as given that statistics are universally applicable. Stating it is a waste of time and adds little to the discussion overall. Sorry that it came off so negatively, either in my first post or now; I don't mean it that way.



