Hacker News

Yeah. It’s good with YOLO and DINO, though. My M2 Max can compute DINO embeddings faster than a T4 (which is the GPU in AWS’s g4dn instance type).
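A rough sketch of how one might run that comparison, assuming PyTorch with the MPS backend on the Mac side and the official `facebookresearch/dino` hub entry point (the model name, batch size, and iteration counts here are illustrative, not the commenter's actual benchmark):

```python
# Sketch: comparing DINO ViT-S/16 embedding throughput on Apple Silicon
# (MPS backend) vs a CUDA GPU such as the T4. The helpers are plain
# Python; the torch-dependent part is kept under the __main__ guard.
import time


def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Prefer CUDA, then Apple's Metal (MPS) backend, else CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"


def throughput(timings_s, batch_size: int) -> float:
    """Images per second from a list of per-batch wall-clock timings."""
    return batch_size * len(timings_s) / sum(timings_s)


if __name__ == "__main__":
    import torch  # torch >= 1.12 ships the MPS backend

    device = pick_device(
        torch.cuda.is_available(),
        getattr(torch.backends, "mps", None) is not None
        and torch.backends.mps.is_available(),
    )
    # DINO ViT-S/16 from the official hub repo (downloads weights on first run).
    model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
    model = model.to(device).eval()

    batch = torch.randn(32, 3, 224, 224, device=device)
    with torch.no_grad():
        model(batch)  # warmup: triggers kernel compilation/caching
        times = []
        for _ in range(10):
            t0 = time.perf_counter()
            model(batch)  # [32, 384] CLS embeddings for ViT-S/16
            if device == "cuda":
                torch.cuda.synchronize()  # wait for async CUDA kernels
            times.append(time.perf_counter() - t0)
    print(f"{device}: {throughput(times, 32):.1f} img/s")
```

Note that MPS timing without an explicit sync is approximate; on recent torch versions `torch.mps.synchronize()` can tighten it up.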


MLX will probably be even faster than that, if the model has already been ported, with faster startup time too. That’s my main pet peeve, though: there’s no technical reason why PyTorch couldn’t be just as good. It’s just underfunding and neglect.


T4s are like six years old.



