Nope. People aren't addicted to CUDA. In fact, it's hell to work with CUDA. Just to download the binaries you have to go through a registration process, the docs are shit, and version dependencies are a nightmare. The only reason CUDA is in use is that in the old days it was the only game in town, and the Caffe framework integrated it from some very early code researchers wrote. People then kept using that baseline code all the way through to TF and PyTorch. Thanks to the TPU, frameworks are already being forced to be hardware-agnostic, and new alternatives will be much easier to integrate.
If the Groq chip delivers what it's promising, then you can bet it will be integrated into most frameworks within a few months and people will soon forget about CUDA. Most people who work with deep learning neither write CUDA-specific code nor care that CUDA is being used under the hood, as long as things are being massively parallelized.
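As a minimal sketch of that point (not from the original comment, and assuming a standard PyTorch setup): typical training code never touches CUDA directly, it just asks for whatever accelerator is available, so swapping the backend only changes the device selection line, not the model or training loop.

```python
import torch
import torch.nn as nn

# Pick whatever accelerator is available; "cuda" is just one possible backend here.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Model and training step are written once, independent of the backend.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch just to show the device-agnostic training step.
x = torch.randn(32, 784, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```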
> If the Groq chip delivers what it's promising, then you can bet it will be integrated into most frameworks within a few months and people will soon forget about CUDA.
Groq seems careful not to promise any price point. Even if Groq delivers on every promise, its adoption will be chancy if the chip is expensive.