Does DL provide mechanisms for feeding back into theory? As in, does a successful deep convolutional neural network provide a means to extract enough from its structure and behavior to potentially not NEED the CNN for prediction in a future iteration? Gradient projections can be used to gather a total-derivative quantity and to compare sensitivities across inputs. We can regularize to prevent overfitting with cross-validation, L-curves, etc. But what about hypothesis generation?
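For concreteness, here's a minimal sketch of the gradient-sensitivity extraction I mean, assuming a PyTorch workflow; the toy two-layer model and random inputs are placeholders standing in for a trained CNN and real data:

    import torch
    import torch.nn as nn

    # Placeholder "trained" network; in practice this would be the fitted CNN.
    model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))

    # Batch of inputs at which to probe the model's local behavior.
    x = torch.randn(8, 4, requires_grad=True)

    # Backprop the scalar sum of outputs; since samples are independent,
    # x.grad[i, j] is dy_i/dx_j, the total derivative of output i w.r.t. input j.
    model(x).sum().backward()

    # Averaging |dy/dx| over the batch gives a crude global ranking of which
    # inputs the network actually relies on -- a sensitivity, not a hypothesis.
    sensitivity = x.grad.abs().mean(dim=0)
    print(sensitivity)

That gets you a ranked list of influential inputs, but it still stops short of the hypothesis-generation step I'm asking about.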

For many of us who have dipped our toes into ML tooling but don't have a great application for it in our work areas, this is the kind of thing we would like: a NN that predicts well AND has a well-understood methodology that gives us actual insight, not just a black box.

Maybe what I have in my head is a deterministic gradient-based analog of an evolutionary algorithm? I'm not sure.
