
Thanks!

I want to download my first HuggingFace model and play with it. If you know of a resource that can help me decide what to start with, please share. If you don't, no worries. Thanks again.




Most HF models have a code snippet on their model card that you can use to run inference. The transformers library will take care of the download as a dependency when you run the code. Typically, a Python 3.10-3.11 environment is sufficient. Example: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct#t...
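For reference, the snippet on that kind of model card looks roughly like this (the model ID is the one from the linked page; the generation parameters are illustrative, not canonical):

```python
# Minimal sketch of running a HF chat model with transformers.
# The weights are downloaded on first run and cached under ~/.cache/huggingface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"  # see the note below about "mps" on Macs

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Instruct models expect their chat template, not raw text.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

outputs = model.generate(input_ids, max_new_tokens=50, temperature=0.2, do_sample=True)
print(tokenizer.decode(outputs[0]))
```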

If you have an Apple Silicon Mac (e.g. a MBP), you need to adjust the device name in the examples from "cuda" to "mps".
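If you'd rather not hardcode the device, a small sketch that picks it automatically (assumes a reasonably recent PyTorch build, where the MPS backend is available):

```python
import torch

def pick_device() -> str:
    # Prefer CUDA (NVIDIA GPU), then MPS (Apple Silicon), else fall back to CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
print(device)
```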


If you're on a Mac, https://lmstudio.ai is a quick way to get things running with a decent UI and an OpenAI-compatible REST API (which is the de facto standard these days). And the GGUF models it downloads can be used directly via llama.cpp later, if you're so inclined.


Their docs are very fun to read. I'd probably recommend starting with the "transformers" library for Python if you want to play with some language models, e.g. BERT:

https://huggingface.co/docs/transformers/en/model_doc/bert



