
Well, we could do a much, much better job of it, but Qualcomm does in fact compete with NVIDIA for use cases like this (inference), both in mobile devices and in the data center.

Disclaimer: I work at Qualcomm.




What does Qualcomm offer that's optimized for running these LLMs?


The Hexagon NSP is reasonably well suited for running ML in general. I know it's used for some image/CV use cases, and I think it will work well for language models, though it may be suboptimal for the recent large ones.

This processor shows up in both Snapdragon SoCs and the Cloud AI 100.
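
For reference, a minimal sketch of one way to target the Hexagon NPU from Python: ONNX Runtime's QNN execution provider (the HTP backend). The model file, input shape, and backend library name below are assumptions for illustration, not an official Qualcomm example, and it presumes an onnxruntime build that includes the QNN EP (e.g. onnxruntime-qnn on Windows on Snapdragon).

  import numpy as np
  import onnxruntime as ort

  # Ask ONNX Runtime to offload the graph to the Hexagon NPU through the
  # QNN HTP backend, falling back to CPU for any unsupported operators.
  # "QnnHtp.dll" is the Windows library name; on Linux/Android it would
  # typically be "libQnnHtp.so".
  session = ort.InferenceSession(
      "model.onnx",  # hypothetical model exported ahead of time
      providers=[
          ("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"}),
          "CPUExecutionProvider",
      ],
  )

  # Feed a dummy input; a fixed [1, 3, 224, 224] float32 tensor is assumed
  # here purely for illustration.
  inp = session.get_inputs()[0]
  dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
  outputs = session.run(None, {inp.name: dummy})
  print(outputs[0].shape)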



