There is a tiny one in the iPhone 7, but that's for flexibility on current tasks, not future-proofing.
In terms of AI, there is little reason to run it on the phone unless it's heavily used or needs to be low latency. Consider: if they add $100 of computing power to a phone that sits unused 99% of the time, they could instead build a server from those same $100 worth of parts and have it serve 100 phones, saving $90+ per phone even after upkeep and other costs.
PS: This is the same logic behind why the Amazon Echo is so cheap: it simply does not need much hardware.
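A quick back-of-envelope sketch of that tradeoff (the $100 figure, the 1% utilization, and the 100-phones-per-server ratio are all illustrative assumptions, not real numbers):

    // Back-of-envelope cost comparison: dedicated AI hardware in every phone
    // vs. the same parts in a shared server. All numbers are illustrative.
    let extraHardwareCost = 100.0   // assumed $ of extra compute added per phone
    let utilization = 0.01          // assumed fraction of time a phone actually uses it

    let phonesPerServer = 1.0 / utilization          // ~100 phones can time-share one server
    let costPerPhoneOnDevice = extraHardwareCost     // every phone pays full price
    let costPerPhoneShared = extraHardwareCost / phonesPerServer

    print("On-device: $\(costPerPhoneOnDevice) per phone")
    print("Shared server: $\(costPerPhoneShared) per phone")
    print("Saving: $\(costPerPhoneOnDevice - costPerPhoneShared) per phone, before upkeep")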
It could be both. Perhaps Apple concluded that 1) they're subpar with cloud services and will have difficulty competing, 2) there's a growing need/demand for more privacy and less 'cloud', and 3) Apple's products are already, on the whole, recommended when it comes to privacy.
And based on that, they figured privacy was a good thing to aim for: play to their strengths and differentiate on that basis.
Facebook has Caffe2Go. Apple is working on this (and already has bindings optimized to use the ARM vector unit for DNN evaluation).
Running on-device, if it can be done with reasonable power, is a win for everyone: better privacy, better latency, and more robust operation in the face of intermittent connectivity.
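As a rough illustration of what "use the ARM vector unit for DNN evaluation" looks like in practice, here is a minimal sketch of a single fully connected layer evaluated with Accelerate's vDSP routines. The sizes, weights, and layer are made up for the example; this is not Apple's actual DNN code path, just the general shape of vectorized on-device inference:

    import Accelerate

    // One fully connected layer: output = relu(input * weights + bias).
    // All sizes and values are made up for illustration.
    let inputSize = 4
    let outputSize = 3

    let input: [Float] = [0.5, -1.0, 2.0, 0.25]        // 1 x inputSize
    let weights: [Float] = (0..<(inputSize * outputSize)).map { _ in
        Float.random(in: -0.1...0.1)                   // inputSize x outputSize, row-major
    }
    let bias: [Float] = [0.1, 0.0, -0.2]

    // Matrix multiply on the SIMD unit:
    // C(1 x outputSize) = A(1 x inputSize) * B(inputSize x outputSize)
    var product = [Float](repeating: 0, count: outputSize)
    vDSP_mmul(input, 1, weights, 1, &product, 1,
              1, vDSP_Length(outputSize), vDSP_Length(inputSize))

    // Add the bias vector.
    var biased = [Float](repeating: 0, count: outputSize)
    vDSP_vadd(product, 1, bias, 1, &biased, 1, vDSP_Length(outputSize))

    // ReLU: zero out anything below the threshold of 0.
    var threshold: Float = 0
    var activated = [Float](repeating: 0, count: outputSize)
    vDSP_vthres(biased, 1, &threshold, &activated, 1, vDSP_Length(outputSize))

    print(activated)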