
So there are 2 main reasons:

1. DDP itself has an overhead since it has to synchronize gradients at each training step - both GPU0 and GPU1 have to hand their gradients over before the weights can be updated (rough sketch after point 2).

2. Huggingface doesn't seem to be well optimized for DDP, mainly due to inefficient data movement. We fixed that, and interestingly it's faster even on 1 GPU.
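To make point 1 concrete, here's a minimal DDP sketch (illustrative only, not Unsloth's or Huggingface's actual training loop; the model, optimizer, and batch are placeholders). The gradient all-reduce that fires inside backward() is the per-step synchronization cost between GPU0 and GPU1:

    # launch with: torchrun --nproc_per_node=2 ddp_sketch.py
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(4096, 4096).cuda(rank)   # placeholder for the real model
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(2, 4096, device=f"cuda:{rank}")  # each GPU trains on its own shard of the batch
        loss = ddp_model(x).pow(2).mean()
        # backward() is where DDP all-reduces gradients across GPU0 and GPU1
        # so every rank ends up with the same averaged gradient -- this
        # communication is the per-step overhead from point 1.
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()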



I agree that synchronization causes overhead, so 2x GPUs won't achieve the ideal 0.5x total runtime. But here, taking your Alpaca benchmark as an example, going from 1 to 2 GPUs gives 3.6x the runtime with Huggingface, or 1.15x with Unsloth Max.

In other words, every benchmark, in either HF or Unsloth, is slower in absolute terms when going from 1 to 2 GPUs. That makes me think something is wrong with the test.

Could you share your benchmark code?


You can use QLoRA's official finetuning notebook https://colab.research.google.com/drive/1VoYNfYDKcKRQRor98Zb... as a reference! Obviously I can't provide the code we have, but if you use the same datasets and the same settings (bsz = 2, ga = 4, max_grad_norm = 0.3, num_epochs = 1, seed = 3407, max_seq_len = 2048) you should be able to replicate it.
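For anyone trying to reproduce it, this is roughly how those settings map onto a standard Huggingface/TRL run. This is a sketch following the usual QLoRA-notebook pattern, not the actual benchmark code: the small model, the alpaca-cleaned dataset, and the prompt template are stand-ins, and it uses the older trl SFTTrainer API (dataset_text_field / max_seq_length), which has shifted in newer versions.

    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
    from trl import SFTTrainer

    model_name = "facebook/opt-125m"                 # placeholder, not the benchmark model
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # stand-in Alpaca-style dataset, flattened into a single "text" column
    dataset = load_dataset("yahma/alpaca-cleaned", split="train")
    dataset = dataset.map(lambda ex: {
        "text": f"### Instruction:\n{ex['instruction']}\n\n"
                f"### Input:\n{ex['input']}\n\n### Response:\n{ex['output']}"
    })

    args = TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,   # bsz = 2
        gradient_accumulation_steps=4,   # ga = 4
        max_grad_norm=0.3,
        num_train_epochs=1,
        seed=3407,
    )

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=2048,             # max_seq_len = 2048
        args=args,
    )
    trainer.train()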



