
If you use the provided PyTorch code and have a modern CPU with enough physical RAM, you can do this today. As you suggest, inference/generation will take anywhere from hours to days on a CPU rather than on a GPU or other ML accelerator.
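As a minimal sketch of what CPU-only inference looks like in PyTorch (the model here is a placeholder, not the code the comment refers to), the key pieces are loading/moving everything to `"cpu"` and disabling autograd during generation:

```python
import torch

# Stand-in for the real network; a checkpoint trained on GPU loads the
# same way via torch.load(path, map_location="cpu").
model = torch.nn.Linear(16, 4).to("cpu").eval()

# Optionally pin the thread count to the physical core count for
# better CPU throughput (value here is illustrative).
torch.set_num_threads(8)

x = torch.randn(1, 16)                 # dummy input batch
with torch.inference_mode():           # skip autograd bookkeeping
    y = model(x)

print(y.shape)                         # torch.Size([1, 4])
```

The same pattern applies to a full generative model; it just runs orders of magnitude slower than on an accelerator, hence the hours-to-days estimate.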

