"This paper presents a distributed model-parallel training framework that enables training large neural networks on small CPU clusters with low Internet bandwidth." Low bandwidth being <1Gbps. They've also tested it with GPUs as well.
Maybe there's the possibility of a completely new AI architecture that can still be trained efficiently when there are very low-bandwidth connections between nodes? Specifically targeting this use case would make sense, given the millions of underutilized GPUs sitting in people's desktop computers. (One existing approach is sketched after the links below.)
https://arxiv.org/pdf/2201.12667
Possibly also relevant: https://arxiv.org/pdf/2106.10207
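For a concrete sense of what "efficiently trainable over low bandwidth" can look like today, here is a minimal sketch of local SGD with periodic parameter averaging, one well-known way to cut communication frequency. This is not the method of either linked paper; the function name, the sync interval, and the training-loop shape are all assumptions for illustration:

    import torch
    import torch.distributed as dist

    def local_sgd_step(model, optimizer, loss_fn, batch, step, sync_every=64):
        # Train purely locally; only every `sync_every` steps do workers
        # average their parameters, cutting communication roughly
        # sync_every-fold versus per-step gradient all-reduce. Assumes
        # dist.init_process_group() was already called on every worker.
        inputs, targets = batch
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

        if step % sync_every == 0:
            world = dist.get_world_size()
            for p in model.parameters():
                dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
                p.data /= world
        return loss.item()

The trade-off is statistical rather than architectural: workers drift apart between syncs, so the averaging interval has to be tuned against convergence. That's exactly the kind of gap an architecture designed from the start for communication-sparse training could close.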