Hacker News

The best way to advance AI is probably to make the hardware faster, especially now that Moore's Law is in danger of going away. The people doing AI research generally seem to be fumbling around in the dark, but you can at least be certain that better hardware would make things easier.



I am not in the AI space at all, but I am under the impression that Python is the most-used language for it. If workload is becoming an issue, wouldn't the low-hanging fruit be a more performance-oriented programming language?

With how fast things are evolving, developing something like an ASIC ($$$) for this might be outdated before it even hits release, no?


The heavy lifting in neural networks is all offloaded to C++ code and the GPU. Very little of the computation time is spent in Python itself.
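To make that concrete, here's a minimal sketch (not from the thread) comparing a pure-Python matrix multiply with the same multiply dispatched through NumPy, which hands the work to compiled BLAS routines. ML frameworks like TensorFlow and PyTorch follow the same pattern: Python is just the orchestration layer.

```python
import time
import numpy as np

n = 200
a = [[1.0] * n for _ in range(n)]
b = [[1.0] * n for _ in range(n)]

# Pure-Python triple-loop matrix multiply: every add and multiply
# goes through the interpreter.
start = time.perf_counter()
c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
py_time = time.perf_counter() - start

# NumPy dispatches the identical computation to compiled code in one call.
start = time.perf_counter()
c_np = np.array(a) @ np.array(b)
np_time = time.perf_counter() - start

print(f"pure Python: {py_time:.3f}s  NumPy: {np_time:.4f}s")
```

The compiled version is typically orders of magnitude faster, which is why the choice of Python as the front-end language costs relatively little.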

ASICs will definitely be very helpful, and there's currently a bit of a rush to develop them. Google's TPUs might be one of the first efforts, but several other companies and startups are looking to have offerings too.


> Google's TPUs might be one of the first efforts, but several other companies and startups are looking to have offerings too.

Nvidia's Volta series includes tensor cores as well [1]. So far, I think they've only released the datacenter version, which is available on EC2 P3 instances [2].

[1] https://www.nvidia.com/en-us/data-center/volta-gpu-architect...

[2] https://aws.amazon.com/ec2/instance-types/p3/#Product_Detail...




