
I can see where you’re coming from, but that’s not quite it. Unlike an RNN, the main transformer still processes sequences non-recurrently. The “sidecar” model just encodes internal activations into compressed latent states, allowing introspection and rollback without changing the underlying transformer architecture.
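For the sake of concreteness, here's a minimal sketch of what such a sidecar might look like. Everything here (the `SidecarEncoder` name, the mean-pooling, the latent size) is an illustrative assumption on my part, not the actual design; the only point is that the base transformer stays frozen and unmodified while a small encoder reads its activations.

    # Hypothetical sketch: frozen base transformer + small "sidecar" encoder
    # that compresses per-layer activations into a latent snapshot.
    import torch
    import torch.nn as nn

    class SidecarEncoder(nn.Module):
        """Compresses a stack of hidden states into one small latent vector."""
        def __init__(self, d_model: int, n_layers: int, latent_dim: int):
            super().__init__()
            self.proj = nn.Linear(d_model * n_layers, latent_dim)

        def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
            # hidden_states: one (batch, seq, d_model) tensor per layer.
            # Pool over the sequence dimension, concatenate layers, project.
            pooled = torch.cat([h.mean(dim=1) for h in hidden_states], dim=-1)
            return self.proj(pooled)  # (batch, latent_dim)

    d_model, n_layers, latent_dim = 64, 4, 16
    base = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), n_layers
    )
    sidecar = SidecarEncoder(d_model, n_layers, latent_dim)

    x = torch.randn(2, 10, d_model)      # (batch, seq, d_model)
    hidden, h = [], x
    with torch.no_grad():                # base model stays frozen
        for layer in base.layers:
            h = layer(h)
            hidden.append(h)             # capture per-layer activations

    snapshots = [sidecar(hidden)]        # checkpointed latent "snapshots"
    # Rollback = restoring an earlier snapshot for introspection,
    # without re-running or modifying the base transformer.
    restored = snapshots[0]

The key property is that the sidecar only observes activations; the transformer's forward pass and weights are untouched, which is why it isn't recurrence in the RNN sense.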

