Also, Siri. And consider: you’re scaling AI on Apple’s hardware too, and you can develop your own local custom AI on it; there’s more memory available for linear algebra in a maxed-out MBP than in the biggest GPUs you can buy.
They scale the effective VRAM capacity with unified memory, and that plus a ton of software support is enough to make Apple's hardware plenty competitive with the corresponding NVIDIA hardware for the specific task of running big AI models locally.
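For concreteness, here's a minimal sketch of what that looks like in practice, assuming Apple's MLX framework with the mlx-lm package installed (pip install mlx-lm); the model name is just one example of a 4-bit quantized model from the mlx-community hub, not a specific recommendation:

```python
# Load and run a quantized LLM entirely in unified memory on Apple silicon.
# Any quantized model that fits in the machine's unified memory works the
# same way; a maxed-out MBP can hold far larger models than this 7B example.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
response = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in one sentence.",
    max_tokens=100,
)
print(response)
```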
> there’s more memory available for linear algebra in a maxed-out MBP than in the biggest GPUs you can buy.
But this hardly applies to 95% (if not more) of the people running Apple's hardware. The fastest CPU/GPU isn't worth much if you can't fit even a marginally useful LLM in the 8GB (or less on iPhones/iPads) of memory your device has.
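To put rough numbers on that (a back-of-the-envelope sketch with assumed figures only): the model weights alone need roughly params × bytes-per-param, before you account for the KV cache, activations, or the OS itself:

```python
# Rough weight-only memory footprint: params * bytes per param.
# Ignores KV cache, activations, and OS overhead, so real usage is higher.
def weight_footprint_gib(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * (bits_per_param / 8) / 2**30

for name, params, bits in [
    ("7B @ 4-bit", 7, 4),
    ("7B @ 16-bit", 7, 16),
    ("70B @ 4-bit", 70, 4),
]:
    print(f"{name}: ~{weight_footprint_gib(params, bits):.1f} GiB")

# 7B @ 4-bit:  ~3.3 GiB  -> very tight on an 8GB device once the OS takes its share
# 7B @ 16-bit: ~13.0 GiB -> doesn't fit in 8GB at all
# 70B @ 4-bit: ~32.6 GiB -> needs a high-memory MBP, not a base model
```

So the unified-memory advantage is real, but only on the high-memory configurations; base 8GB machines are stuck at the very small end of the model range.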