Been using TensorFlow embedded in a mobile app for a few months and honestly, I'm constantly surprised at how well thought-out the tooling is and how quickly you can get results. That said, a few things are still unnecessarily dense (installing dependencies, optimizing hyperparameters, and some of the embedded/XLA stuff is very raw). Kudos to the team though. It sounds like they're on the right track with TF overall, and focusing on performance (including the XLA stuff) plus ease of use (the high-level Keras API) is exactly what I want as a user right now. Keep up the great work, y'all.
Would you happen to know if it requires additional code to support the Hexagon digital signal processor (DSP) from Qualcomm, or is it automatic (kind of like switching between TensorFlow-CPU and TensorFlow-GPU)? I mainly work with TensorFlow on a PC, so I'm not too familiar with the embedded variants of TensorFlow. Thanks!
I don’t have any experience with that unfortunately. I’ve seen a couple of talks/demos/announcements about it and it sounds like it’s automatic, but I haven’t been able to find the SDK or any tutorial for it, so I’m not 100% sure. The Qualcomm speaker this morning said there would be more details about it later today but I don’t see anything on the Agenda [0]. Maybe Pete’s session at 12:40 will cover it?
One thing to note is that this isn't available on production phones yet, because we need a signed driver to run within Android. You should be able to run this on a Dragonboard 820 development board though, using the instructions in the README.
This is all very new though, so apologies in advance for any hiccups getting up and running. My email's petewarden at google.com if you are trying this and hit problems.
Essentially there are two ways to do this. The "old" way is to export your TensorFlow neural network into a protobuf file, then load the TensorFlow interpreter in your iOS/Android app, feed it the neural net, and run inference directly on the device. The GitHub repo [0] has a good set of examples of what that looks like in practice.
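For a concrete feel of the "old" way, here's a minimal sketch of freezing a graph to a protobuf from Python. The tiny model, tensor names, and file path are hypothetical stand-ins, not taken from the repo:

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Hypothetical toy model; substitute your own graph and tensor names.
x = tf.placeholder(tf.float32, shape=[None, 4], name='input')
w = tf.Variable(tf.random_normal([4, 2]))
y = tf.nn.softmax(tf.matmul(x, w), name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Bake the variable values into constants so the graph is self-contained.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output'])
    # Serialize the frozen GraphDef; this .pb file is what the on-device
    # interpreter loads.
    with tf.gfile.GFile('model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())
```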
The new, still experimental way is to compile your neural net into executable code with their XLA / tfcompile tool, and link that into your app. They are adding more docs on this to the TensorFlow website [1].
They don't want you to go there. Remember who started the latest big AI projects (Google, Amazon, Microsoft, Facebook); I don't think they will stop grabbing data.
I think they'll develop a hivemind, where mobile adds to the pool. In short Skynet ;)
It's worth pointing out that TensorFlow is basically Google's clone of Theano, including a lot of the same design decisions. They've improved some things, but it's not like Google handed us the secret of fire here. It's just a good implementation of the same ideas a lot of people have been working on for years.
TensorFlow is not a clone of Theano. It's based on Google's earlier platform, DistBelief, mostly known outside of Google as the engine behind the 2012 YouTube cat-video paper. Like DistBelief, TensorFlow was designed from the ground up to be scalable across multiple nodes.
Theano, on the other hand, seems focused on optimizing single-machine, single-GPU code. It only recently gained the ability to run each function on a different GPU.
To be honest, it doesn't matter either way, or even whether there is something "better" out there (say, if Theano were...).
TensorFlow has already become the winner, from my reading around, so I'm going to continue learning it rather than another framework until I've become fairly proficient. By which time, why change?
TensorFlow does not make AI or DL "more accessible". It's not easier to use than Theano. Both have good documentation, and both have lots of code examples/model implementations.
If you're looking for something that would make it easier to learn DL, you should try Keras: it's a higher-level library that can use either Theano or TF as a backend.
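For a taste of what that looks like, a minimal Keras sketch (the layer sizes are arbitrary, and x_train/y_train stand in for your own data; identical code runs on either backend):

```python
from keras.models import Sequential
from keras.layers import Dense

# Tiny binary classifier; runs unchanged on the Theano or TF backend.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train)  # x_train/y_train are your own arrays
```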
DistBelief was a CPU-only, special-purpose neural network system that would have been difficult to modify to support arbitrary neural architectures the way Theano does. TensorFlow is not based on DistBelief in any meaningful way, other than that they were written by mostly the same people.
I agree 100%. I'm not sure what AMD is thinking, but without support from the major ML tools there is no chance of competing against Nvidia in this space - and this space will only grow larger.
I totally agree with you as well. I was looking for a new graphics card and was debating between the GTX 1050 and the RX 480. I ended up getting the 1050 since it has CUDA and cuDNN support, even though the RX 480 has better specs.
> Plus, soon Google will open-source code that will multiply the speed of TensorFlow — specifically version three of Google’s Inception neural network model — by 58.
Uh, nope, that was the speedup on 64 GPUs (or CPU cores, I can't remember) - i.e. it scales roughly linearly, something TF hasn't always been good at versus other frameworks. I'm amazed a journalist with (I assume) basic technical competence could make this mistake.
I have a couple of applications in mind, mostly time series predictions. But the machine learning field seems to be vast and I don't know where to start.
The ML/DNN rabbit hole goes deep. If the video above leaves you wanting more, http://www.deeplearningbook.org/ does a good job of drilling into the specifics of the various techniques. The examples on the TensorFlow webpage are also very good.
http://cs231n.github.io/ is a great site for beginners. I've been following it alongside the Udacity Self-Driving Car nanodegree, and the CS231n material has helped me understand the concepts significantly.
Edit: I should mention that the class mainly focuses on neural networks and image recognition. However, once you have the foundation, you can apply your skillset to a vast range of applications.
Don't assume that just because something isn't using deep nets it isn't state of the art, or that it won't get the job done well. That would be like thinking Python's built-in sort function isn't sufficient because it doesn't use Spark.
Deep learning is primarily the study of multi-layered neural networks, spanning a wide range of model architectures. This course is taught in the MSc program in Artificial Intelligence at the University of Amsterdam. In it, we study the theory of deep learning, namely modern, multi-layered neural networks trained on big data. The course focuses particularly on computer vision and language modelling, perhaps the two most recognizable and impressive applications of deep learning.
I would recommend starting with a spreadsheet-sized dataset (no more than a few thousand records) where you want to predict one of the columns, and using binary decision trees to try to predict its value. Use either Azure ML Studio or Jupyter with the scikit-learn library, depending on your comfort level with programming.
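If you go the Jupyter/scikit-learn route, the workflow sketched above might look roughly like this (the CSV path and the 'target' column name are hypothetical placeholders):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv('my_data.csv')   # spreadsheet-sized dataset
X = df.drop('target', axis=1)     # predictor columns
y = df['target']                  # the column you want to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))
```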
If you are based in the SF Bay Area, you can come to Data Weekends (www.dataweekends.com). They are two-day workshops for getting started with machine learning and deep learning (full disclosure: I run them).
I am looking at the Martin Wicke talk. The Estimator API is very reminiscent of SparkML (see the sketch below). Nice to see that the TensorFlow crew are flexible enough to take good ideas from projects such as SparkML and Keras (now included natively in the TF stack).
Other highlights include the hotspot compiler (I was not that impressed so far, but it's early days for them) and the embedding visualizations (which looked quite cool) for visually inspecting learnt manifolds.
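For reference, here's roughly what the canned-estimator style looks like; at the time of TF 1.0 these classes lived in tf.contrib.learn (they later moved to tf.estimator). The toy data is random, just to show the fit() shape:

```python
import numpy as np
import tensorflow as tf

# Random toy data: 120 examples, 4 features, 3 classes.
x_train = np.random.rand(120, 4).astype(np.float32)
y_train = np.random.randint(0, 3, size=120)

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3)
# fit/evaluate/predict mirror the SparkML-style high-level workflow.
classifier.fit(x=x_train, y=y_train, steps=200)
```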
I stumbled across a three-chapter preview of the upcoming book Learning TensorFlow on Safari Books Online and went through them in a sitting. It was so accessible - both the book and TensorFlow itself - and inspired me to start learning math so that when the rest of the book comes out I will be better prepared to go deeper. I love learning in general, but haven't been this excited about learning something totally new (for me) in a long time.
A good book for learning deep-learning concepts. The official TensorFlow tutorials are also good for the programming side, which the book doesn't cover.
I'll discuss this a bit during my talk at the dev summit.
The short answer is no.
The long answer is yes, but only if you create the model in Python, export it, and then feed training data in other languages. There are some people doing exactly that.
Long term, I'd like to give all languages equal footing, but there's quite a bit of work left.
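To make the "long answer" concrete, here's a sketch of that pattern: define the graph with named ops in Python, export it, and let another language binding drive training by running those ops (all names here are arbitrary):

```python
import tensorflow as tf

# Simple linear-regression graph with named feed and train ops.
x = tf.placeholder(tf.float32, [None, 1], name='x')
y = tf.placeholder(tf.float32, [None, 1], name='y')
w = tf.Variable(tf.zeros([1, 1]), name='w')
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y), name='loss')
tf.train.GradientDescentOptimizer(0.1).minimize(loss, name='train')
tf.global_variables_initializer()  # creates an op named 'init' by default

tf.train.write_graph(tf.get_default_graph().as_graph_def(), '.', 'train.pb')
# A C++/Java/Go program can load train.pb, run 'init' once, then repeatedly
# run 'train' while feeding batches for 'x' and 'y'.
```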
Forgive my ignorance, but why is it Python-only?
Does Python have intrinsic qualities that other languages lack, or is it that the huge initial investment in creating TensorFlow was based on Python, and duplicating that effort elsewhere would require too much work?
Traditionally, most neural network architectures have been implemented in C/C++ for performance reasons. But ML researchers are not hackers, for the most part, and Python has the lowest impedance mismatch for interfacing with C/C++ of all the major languages. Julia was popular for a bit, but now Python is dominant. Programs tend to be very small and not modular, so static type checking is less important than it would be for catching errors in larger systems.
It's not just the lowest impedance mismatch; it's also a framework coming out of Google, where Python and Java were really the only two language choices for a high-level interface, and of the two, Python is the clear winner in prototyping and scientific-community acceptance. I think that's because of the ease of experimentation and the expressiveness of the language.
TensorFlow comes with an easy-to-use Python interface and no-nonsense interfaces in other languages to build and execute computational graphs. Write stand-alone TensorFlow Python, C++, Java, or Go programs, or try things out in an interactive TensorFlow IPython notebook where you can keep notes, code, and visualizations logically grouped. This is just the start though — we're hoping to entice you to contribute interfaces to your favorite language — be it Lua, JavaScript, or R.
Yes, though the last I remember reading about this, the symbolic differentiation only worked in Python, and so training from other languages wasn't quite there. I think the language on the page has always been similar to the above.
Well, there's Gorgonia [0] (shameless promo: I wrote it). It's like TF/Theano. I'm finishing up porting/upgrading the CUDA-related code from the older version (long story short: I needed a dependency parser, so I hacked on the CUDA stuff, and now I'm paying the price for not engineering it properly).
Could MacBook Pros (with Intel HD Graphics 3000, 384 MB, to be more specific) train on the GPU? I've always wanted to train models, but without using the GPU it is really slow.
I doubt the integrated Intel card would be supported, and even if it were, using the CPU would be just as good if not better. A lot of the high performance you see on GPUs comes from the very highly optimized libraries available for Nvidia cards (like cuDNN) and so on.
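If you want to check what TensorFlow will actually use on your machine, a quick sketch (device_lib and log_device_placement are standard ways to inspect device availability and op placement):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices TensorFlow can see; a GPU only shows up with a supported
# CUDA card and the matching drivers installed.
print([d.name for d in device_lib.list_local_devices()])

# Log where a trivial op actually gets placed (CPU vs. GPU).
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(tf.constant([1.0, 2.0])))
```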
Great news. I have several TensorFlow examples in a new book I am writing. I need to read up on the new higher-level APIs, and can hopefully shorten the book's example programs.
We changed the URL from https://www.tensorflow.org/, which doesn't say anything about 1.0, to an article which gives a bit of background. If someone suggests a better URL we can change it again.
The URL when I first clicked this was the GitHub release notes, which is far more informative and apropos to the HN audience than either the TF landing page or vague VentureBeat pseudonews.