
Any thoughts on why AWS/Xilinx didn't go for a mid-range FPGA to help validate customer requirements?

My guess is that Amazon will have to be very careful not to price themselves out of the market for mid-range Deep Learning-based cloud apps.

Wild guesstimate, but I think it'll cost more than $20/hr per instance.



Based on my speculation, and to make a long analysis short: fewer, bigger FPGAs make for a better user experience in the cloud than many smaller ones. The big applications (machine learning, data analysis, etc.) are all going to consume as much FPGA fabric as they can get. Even "mid-range" Deep Learning will consume these FPGAs like candy. Non-deep-learning workloads will too; they can always go more parallel and get the job done faster.

Amazon is betting that they can get better pricing than anyone else. They probably can. No one else will be buying these FPGAs in the quantities Amazon will if these instances become popular (within their niche). So for medium-sized players it'll be cheaper to rent the FPGAs from Amazon, even with the AWS markup, than to buy the boards themselves. That holds especially for dynamic workloads, where renting instead of owning is what saves you money (which is generally the advantage of cloud resources anyway).
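To make the rent-vs-buy trade-off concrete, here's a rough break-even sketch in Python. Every number in it (board cost, ownership overhead, hourly rent, utilization) is a made-up placeholder, not actual AWS or Xilinx pricing; the point is only that low utilization stretches the payback period, which is where renting wins.

    # Rough rent-vs-buy break-even sketch for a big FPGA board.
    # All numbers below are assumptions for illustration, NOT real AWS/Xilinx pricing.

    BOARD_COST = 50_000.0    # assumed purchase price of board + host server, USD
    OVERHEAD = 1.5           # assumed multiplier for power, cooling, hosting, ops
    HOURLY_RENT = 20.0       # assumed cloud instance price, USD/hour

    def break_even_calendar_years(utilization: float,
                                  board_cost: float = BOARD_COST,
                                  overhead: float = OVERHEAD,
                                  hourly_rent: float = HOURLY_RENT) -> float:
        """Calendar years until owning one board beats renting, given the
        fraction of wall-clock time the FPGA is actually busy."""
        busy_hours_needed = board_cost * overhead / hourly_rent
        return busy_hours_needed / (utilization * 24 * 365)

    if __name__ == "__main__":
        for util in (1.0, 0.5, 0.1):
            years = break_even_calendar_years(util)
            print(f"{util:>4.0%} utilization -> owning pays off after ~{years:.1f} years")

With these (made-up) numbers, a board that's busy 24/7 pays for itself in well under a year, but at 10% utilization the payback stretches to several years, by which point the hardware is likely a generation behind. That's the bursty-workload case where renting comes out ahead.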

That's my guess anyway.


It would not be inconceivable that Amazon just buys Xilinx (before someone else does).



