
I think co-evolution should be investigated for training encoder-decoder architectures.

The idea is that you pair multiple decoders with each encoder, and multiple encoders with each decoder (randomly sampling pairs if the populations are large). The selective pressure is a feedback loop between the encoder and decoder populations that requires members to produce and interpret the latent vector as well as possible. In theory, this creates a form of generalization pressure wherein the encoders and decoders must perform well across a wide range of possible upstream/downstream states. I think with large enough populations, this could be robust to premature convergence and overfitting.
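A minimal sketch of what that loop could look like, using plain NumPy and linear encoder/decoder "genomes" (weight matrices). The population sizes, mutation scheme, and toy data are all illustrative assumptions, not a reference implementation:

  import numpy as np

  rng = np.random.default_rng(0)
  DATA_DIM, LATENT_DIM = 16, 4
  POP_SIZE, PAIRS_PER_MEMBER, GENERATIONS = 20, 5, 100

  X = rng.normal(size=(256, DATA_DIM))  # toy dataset

  def new_encoder():
      return rng.normal(scale=0.1, size=(DATA_DIM, LATENT_DIM))

  def new_decoder():
      return rng.normal(scale=0.1, size=(LATENT_DIM, DATA_DIM))

  encoders = [new_encoder() for _ in range(POP_SIZE)]
  decoders = [new_decoder() for _ in range(POP_SIZE)]

  def reconstruction_loss(enc, dec):
      # Encode, decode, and score mean squared error against the input.
      return float(np.mean((X @ enc @ dec - X) ** 2))

  def mutate(w, scale=0.02):
      return w + rng.normal(scale=scale, size=w.shape)

  for gen in range(GENERATIONS):
      enc_fit = np.zeros(POP_SIZE)
      dec_fit = np.zeros(POP_SIZE)
      # Each encoder is scored against several randomly sampled decoders
      # (and vice versa), so fitness reflects performance across partners.
      for i in range(POP_SIZE):
          enc_partners = rng.choice(POP_SIZE, size=PAIRS_PER_MEMBER, replace=False)
          enc_fit[i] = np.mean([reconstruction_loss(encoders[i], decoders[j])
                                for j in enc_partners])
          dec_partners = rng.choice(POP_SIZE, size=PAIRS_PER_MEMBER, replace=False)
          dec_fit[i] = np.mean([reconstruction_loss(encoders[j], decoders[i])
                                for j in dec_partners])
      # Keep the better half of each population (lower mean loss is better)
      # and refill with mutated copies of the survivors.
      keep = POP_SIZE // 2
      enc_rank = np.argsort(enc_fit)[:keep]
      dec_rank = np.argsort(dec_fit)[:keep]
      encoders = [encoders[k] for k in enc_rank]
      decoders = [decoders[k] for k in dec_rank]
      encoders += [mutate(encoders[k % keep]) for k in range(POP_SIZE - keep)]
      decoders += [mutate(decoders[k % keep]) for k in range(POP_SIZE - keep)]
      if gen % 20 == 0:
          print(f"gen {gen}: best encoder fitness {enc_fit[enc_rank[0]]:.4f}")

Gradient-based training of the paired members could be slotted in as an inner loop between generations; the point of the sketch is only the many-to-many pairing and selection pressure.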



I remember playing around with RapidMiner about 10 years ago, and it had a hyperparameter-optimizing primitive that used evolutionary search. It was quite pleasing. RapidMiner doesn't do any of the modern stuff, though. Asking ChatGPT turns up mostly pretty old (abandoned) projects from around 2017.
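For reference, the general pattern such a primitive implements looks roughly like this; the objective function below is a made-up stand-in for a model's validation score, not anything RapidMiner-specific:

  import random

  random.seed(0)

  def validation_score(lr, reg):
      # Hypothetical objective: pretend the best settings are lr=0.01, reg=0.1.
      return -((lr - 0.01) ** 2 + (reg - 0.1) ** 2)

  def mutate(params):
      # Multiplicative jitter keeps the hyperparameters positive.
      return {k: max(1e-6, v * random.uniform(0.5, 2.0)) for k, v in params.items()}

  population = [{"lr": random.uniform(1e-4, 1.0), "reg": random.uniform(1e-3, 1.0)}
                for _ in range(12)]

  for generation in range(30):
      scored = sorted(population,
                      key=lambda p: validation_score(p["lr"], p["reg"]),
                      reverse=True)
      survivors = scored[:4]                                        # selection
      population = survivors + [mutate(random.choice(survivors)) for _ in range(8)]  # variation

  best = max(population, key=lambda p: validation_score(p["lr"], p["reg"]))
  print(best)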



