The easiest way to run a local LLM these days is Ollama. You don't need PyTorch or even Python installed.
https://mobiarch.wordpress.com/2024/02/19/run-rag-locally-us...
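A minimal sketch of the workflow, assuming the standard Ollama CLI is installed (the `mistral` model name is illustrative; any model from the Ollama library works):

```shell
# Pull the model weights once (no Python or PyTorch required)
ollama pull mistral

# Run an interactive prompt against the local model
ollama run mistral "Summarize retrieval-augmented generation in one sentence."
```

Ollama also exposes a local HTTP API (by default on port 11434), which is what RAG tooling typically talks to instead of the CLI.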
Hugging Face can be confusing at first, but in the end it is a very well-designed framework.
https://mobiarch.wordpress.com/2024/03/02/start-using-mistra...
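For comparison, a minimal sketch of the Hugging Face `transformers` route, which does require Python and PyTorch. This assumes `pip install transformers torch` and access to the Mistral checkpoint named below (illustrative; gated models may need a Hugging Face token):

```python
# Sketch: load a Mistral chat model with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative checkpoint


def chat(prompt: str, max_new_tokens: int = 128) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Format the prompt with the model's built-in chat template
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, return_tensors="pt"
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the tokens generated after the prompt
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The same few classes (`AutoTokenizer`, `AutoModelForCausalLM`) cover most model families, which is what makes the framework cohere once the initial confusion passes.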