
Afaik this was stated in my Intro to ML course. A kernel machine can approximate any function when the similarity (kernel) function corresponds to an infinite-dimensional feature space. Similarly, I think they mentioned that an infinitely wide MLP is also a universal approximator, so in that sense it's all you need.
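
To make "infinite-dimensional similarity function" concrete, here's a toy kernel ridge regression sketch with an RBF kernel (whose implicit feature map is infinite-dimensional). The data and hyperparameters are made up for illustration, not from the course:

    # Kernel ridge regression with an RBF kernel on toy 1-D data.
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    lam = 1e-3                              # ridge regularization
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    X_test = np.linspace(-3, 3, 50)[:, None]
    # Prediction is a weighted sum of similarities to the training points.
    y_pred = rbf_kernel(X_test, X) @ alpha

The prediction is literally "how similar is this test point to each training point, weighted by learned coefficients", which is why the kernel view and the training-example view line up.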

Also, this all breaks down when you introduce reinforcement learning methods.




> Afaik this was stated in my Intro to ML course.

Isbell?


CS4780 at Cornell. To be clear, I took the class; I didn't teach it.


That was my guess, too :-)


What do you mean by "breaks down"?


Breaks.

Methods that expect the same input to map to the same output don't work once feedback enters the loop: in RL, the target for a given state depends on rewards that arrive later and on the current policy, so there's no fixed input-output pair to fit.


There's no longer a fixed set of training examples to be close to: the update just moves along the gradient toward high-reward actions in the RL environment and away from low-reward ones.
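
For concreteness, here's a bare-bones REINFORCE-style update on a 3-armed bandit (my own toy sketch; the arm rewards and learning rate are arbitrary). The step size on each action's log-probability is scaled by the reward it earned, so there's no fixed label for any input:

    # Toy REINFORCE update for a 3-armed bandit with a softmax policy.
    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.1, 0.5, 0.9])   # hypothetical expected reward per arm
    logits = np.zeros(3)
    lr = 0.1

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    for _ in range(2000):
        probs = softmax(logits)
        a = rng.choice(3, p=probs)
        r = true_means[a] + 0.1 * rng.normal()   # sampled reward, not a label
        # Gradient of log pi(a) w.r.t. the logits for a softmax policy:
        grad_logp = -probs
        grad_logp[a] += 1.0
        logits += lr * r * grad_logp             # step scaled by the reward received

    print(softmax(logits))  # probability mass concentrates on the highest-reward arm

High-reward actions get their logits pushed up more than low-reward ones, which is the "going along the gradient of high reward actions" part; there's never a stored example saying "in this state, output arm 2".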





