They say that they make use of GPUs. Anybody know if this means GPU VMs are coming to Google Compute Engine? That would be amazing. AWS GPU instances are popular but somewhat limited as they use older hardware.
Interesting to see the speech and vision APIs, among others. I wonder what this means for MetaMind, Clarifai, and the many other startups offering APIs for doing something similar.
Pretty nice model, and it makes sense why they open-sourced TensorFlow: write and train using the API, and feel safe knowing you can run it on your own hardware as well.
You should see the (pile-of-numbers) project ID when you visit https://console.cloud.google.com in the upper-left inset. Alternatively, anywhere in the console you can hit the Settings gear and select Project Info, which shows both your numeric project number and your alphanumeric project identifier.
Yes. Have your advisor email me (dga at cs . cmu . edu) and I'll send him / her pointers to some of the programs Google has for faculty. Note: the Cloud ML alpha is a limited, invite-only thing, so it's unlikely to work for you on a short timescale, but nothing stops you from using GCE as a source of cycles on which to run your own install of TF in the meantime.
So if I understand the situation correctly, by using this offer we all feed the AI Google is building, thereby creating another Google monopoly: the AI system that gets the most training will be the strongest, attracting even more users and training data, and becoming stronger still.
At the risk of giving a serious answer when your phrasing makes it seem like you're trolling: No. With the Cloud ML service, the data, the model, and the training results are yours (except insofar as Google's computers have to touch them to train and store them -- and the results of that training are still yours).
At a high level, you can think of it as having mostly the same properties as if you rented a bunch of GCE machines (or AWS machines) and ran TensorFlow on them, with your data stored in GCS (or S3 or whatever). The difference is that Cloud ML handles the scaling pain for you -- managing the machines, starting and keeping tasks running, load balancing, etc.
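To make the comparison concrete, here's a toy sketch of the kind of cluster bookkeeping you'd maintain by hand when running TensorFlow yourself on rented VMs -- the wiring that a managed service automates for you (along with restarts, scaling, and load balancing). The hostnames and the helper function are hypothetical illustrations, not any real Cloud ML or TensorFlow API:

```python
# Illustrative only: the manual bookkeeping a managed service replaces.
# On self-managed GCE/AWS VMs, you'd keep a cluster spec like this
# up to date yourself as machines come and go (hostnames are made up).
cluster_spec = {
    "ps":     ["ps0.example.internal:2222"],
    "worker": ["worker0.example.internal:2222",
               "worker1.example.internal:2222"],
}

def task_address(spec, job, index):
    """Return the host:port where a given task should run."""
    return spec[job][index]

# Enumerate every task you'd have to start and keep alive by hand.
for job, hosts in cluster_spec.items():
    for i in range(len(hosts)):
        print(job, i, "->", task_address(cluster_spec, job, i))
```

The point is just that none of this wiring is your model or your data; it's operational overhead, which is the part Cloud ML takes off your plate.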
(disclaimer: I'm working on TensorFlow, not Cloud ML.)