
There is a tiny one in the iPhone 7, but it's there for flexibility on current tasks, not future-proofing.

In terms of AI, there is little reason to run it on the phone unless it's heavily used or needs to be low latency. Consider: if they add $100 of computing power to a phone that sits unused 99% of the time, they could instead build a server using those same $100 worth of parts and serve 100 phones with it, saving $90+ per phone even after upkeep etc.

PS: This is the same logic behind why the Amazon Echo is so cheap: it simply does not need much hardware.
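For what it's worth, here's a quick Python sketch of that amortization argument. Every figure (the $100 of parts, the 1% utilization, the 100-phones-per-server ratio) is the assumption made above, not a real hardware price.

    # Back-of-the-envelope sketch of the amortization argument above.
    # Every figure here is the comment's assumption, not a real price.

    on_device_cost = 100.0   # assumed $ of extra compute added to each phone
    utilization = 0.01       # phone-side compute assumed busy 1% of the time

    # If the compute sits idle 99% of the time, one server built from the
    # same parts can (in this idealized model) be time-shared across
    # 1/utilization phones.
    phones_per_server = 1 / utilization                         # 100 phones
    server_cost_per_phone = on_device_cost / phones_per_server  # $1.00

    savings = on_device_cost - server_cost_per_phone
    print(f"Savings per phone: ~${savings:.0f}")  # ~$99, i.e. the "$90+" above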




Privacy is another reason to keep the computation local.


I believe that's actually the main reason to choose a "local" AI.


Good luck getting any of the big players to acknowledge that. >.<


That's exactly why Apple does things like analyze your photo library locally on the phone - for privacy.


Yes, surely this has no relation to their inability to build reliable cloud products...


It could be both. Perhaps Apple concluded that 1) they're subpar with cloud services and will have difficulty competing, 2) there's a growing need/demand for more privacy and less 'cloud', and 3) Apple's products are already, on the whole, recommended when it comes to privacy.

And based on that they figured privacy was a good thing to aim for. Play to their strengths and differentiate based on that.


I believe many of them do. Google has TensorFlow Lite: https://techcrunch.com/2017/05/17/googles-tensorflow-lite-br...

Facebook has Caffe2Go. Apple is working on this too (and already has bindings optimized to use the ARM vector unit for DNN evaluation).

Running on device, if it can be done with reasonable power, is a win for everyone. Better privacy, better latency, and more robust operation in the face of intermittent connectivity.
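As a concrete illustration, here's a minimal sketch of what on-device inference looks like with TensorFlow Lite's Python interpreter (using the current tf.lite API rather than the tf.contrib.lite of the linked announcement); "model.tflite" and the zero-filled input are placeholders.

    # Minimal sketch of on-device inference with TensorFlow Lite's Python
    # interpreter. "model.tflite" is a placeholder model file.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy input matching the model's expected shape and dtype.
    dummy = np.zeros(input_details[0]["shape"],
                     dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    result = interpreter.get_tensor(output_details[0]["index"])
    print(result.shape)

No network round trip anywhere in that loop, which is the whole point: the latency and connectivity arguments above fall away once the model ships with the app.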



