
There is no guarantee that local processing will have lower latency than remote processing. Given the huge compute needs of some AI models (e.g. ChatGPT), the time saved by running on more powerful remote hardware likely dwarfs the relatively small time needed to transmit a request.
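A back-of-envelope sketch of that trade-off (all numbers below are made-up assumptions for illustration, not measurements):

```python
# Hypothetical latency comparison. Every figure here is an assumption
# chosen only to illustrate the shape of the trade-off.
network_round_trip_s = 0.05   # assumed request/response transmission overhead
remote_inference_s = 0.50     # assumed inference time on large remote hardware
local_inference_s = 5.00      # assumed inference time for the same model locally

remote_total = network_round_trip_s + remote_inference_s
local_total = local_inference_s

print(f"remote: {remote_total:.2f}s  local: {local_total:.2f}s")
print("remote wins" if remote_total < local_total else "local wins")
```

Under these assumptions the 50 ms of network overhead is negligible next to the multi-second gap in inference time; the comparison flips only when the local and remote compute times are close enough that transmission overhead matters.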

