
That's just the base/stock/instruct model for general use cases. There's got to be a finetune specialized in translation, right? Any recommendations?

Plus, Mistral Small 3.2 has too many parameters; not every device can run it fast. It probably isn't the exact translation model Chrome uses.



I haven’t tried it myself, but NLLB-200 comes in various sizes, going down to 600M parameters:

https://github.com/facebookresearch/fairseq/tree/nllb/
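For reference, here's a rough sketch of running the smallest distilled NLLB-200 checkpoint through Hugging Face transformers (rather than fairseq directly). The checkpoint name `facebook/nllb-200-distilled-600M` and the FLORES-200 language codes are from the NLLB release; the small code table below is just a convenience I added for illustration.

```python
# Sketch: local translation with NLLB-200 distilled 600M via transformers.
# Requires: pip install transformers torch sentencepiece
try:
    from transformers import pipeline
except ImportError:
    pipeline = None  # transformers not installed

# NLLB uses FLORES-200 language codes; a few common ones for convenience.
# (The full list is in the fairseq/NLLB repo.)
FLORES = {
    "en": "eng_Latn",
    "fr": "fra_Latn",
    "de": "deu_Latn",
    "ja": "jpn_Jpan",
}

def translate(text, src="en", tgt="fr"):
    """Translate text between two languages using NLLB-200 (600M)."""
    if pipeline is None:
        raise RuntimeError("transformers is not installed")
    translator = pipeline(
        "translation",
        model="facebook/nllb-200-distilled-600M",
        src_lang=FLORES[src],
        tgt_lang=FLORES[tgt],
    )
    return translator(text)[0]["translation_text"]

if __name__ == "__main__" and pipeline is not None:
    print(translate("The quick brown fox jumps over the lazy dog."))
```

The 600M model downloads a couple of gigabytes on first use, so it's still not tiny, but it's far lighter than a general-purpose instruct model.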

If running locally is too difficult, you can use the llm CLI to access hosted models too.



