user: danielhanchen
created: September 8, 2021
karma: 853
about: Unsloth github.com/unslothai/unsloth - finetune Llama 2x faster + use 70% less VRAM

1. Used to work at NVIDIA RAPIDS cuML
2. Discord: https://discord.gg/unsloth
3. GitHub: https://github.com/danielhanchen
4. Twitter / X: x.com/danielhanchen
5. Email: my handle @ gmail.com
6. Bug fixes for Gemma: https://news.ycombinator.com/item?id=39671146
7. Bug fixes for Gradient Accumulation: https://x.com/danielhanchen/status/1846235913443262891?lang=en