
So the cloud TPUs are more powerful...? Or what are you saying?


Yeah, it’s a silly branding thing.

One TPU (not even a pod, just a regular old TPUv2) has 96 CPU cores with 1.4 TB of RAM, and that's not even counting their hardware acceleration. I'd love to buy one.


Huh, this doesn't seem right. Based on those numbers you seem to be referring to pods, but even then I'm not familiar with any such configuration existing.

A single TPUv2 chip has two cores with 8 GB of HBM each. A single device comes in the v2-8 configuration: four chips, eight cores, and 64 GB of memory total.

Pod variants come in v2-32 to v2-512 configurations.
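
If it helps, the naming is just the core count, and HBM scales with it at 8 GB per core; a throwaway sketch (sizes are the ones mentioned above):

    # v2-N naming: N is the number of TPU cores; HBM scales at 8 GB per core.
    HBM_PER_CORE_GB = 8
    for cores in (8, 32, 512):
        print(f"v2-{cores}: {cores} cores, {cores * HBM_PER_CORE_GB} GB HBM")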


A single TPUv2 host has 8 TPU cores with 64 GB of total HBM (8 GB per core), but like GPUs, TPUs can't directly access a network, so the host also needs CPUs and standard RAM to feed them data. The accelerators are fast, and the host has to be fast enough to keep them fed, so it's pretty beefy. But FWIW, a TPUv2 host has somewhere around 330 GB of RAM, not 1.4 TB.
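
For the curious, here's a minimal sketch of how those cores show up to a program, assuming JAX on a Cloud TPU VM (memory_stats() support varies by backend and jaxlib version):

    # Enumerate the TPU cores attached to this host.
    import jax

    devices = jax.devices("tpu")
    print(f"{len(devices)} TPU cores on this host")  # expect 8 on a v2-8

    for d in devices:
        # Where supported, memory_stats() reports per-core HBM, which is
        # where the 8 cores x 8 GB = 64 GB figure above comes from.
        stats = d.memory_stats()
        print(d.id, f"{stats['bytes_limit'] / 2**30:.0f} GiB HBM limit")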


Thanks for clarifying. I took the commenter to be referring to the accelerator itself, since the conversation was about TPU availability for purchase.

I know just enough about the architecture to use TPUs for research training runs, but I'm not sure what's so special about the host.

Sure, it's beefy, but there are much beefier servers readily available.


There's nothing super-special about the host. The accelerators are the special part (and, as described elsewhere, they are orders of magnitude more powerful than the Edge TPU). However, if you're an academic/independent researcher, being able to access a system with that much system memory/CPU cores for free through TPU Research Cloud is potentially appealing even without the accelerators.


Edge TPUs are low-cost, low-power inference devices the size of a dime. I have a hundred of them sitting in a closet. (Alas. Anyone want to buy 100 Coral Minis? :-)

The TPUs you rent that are being discussed here are capable of training, consume hundreds of watts and have a heatsink bigger than your fist and really spectacular network links. They're analogous to Nvidia's highest end GPUs from a "what can you do with them" perspective.

Both are custom chips for deep learning but they're completely different beasts.
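
To make the contrast concrete, the Edge TPU side looks roughly like this; a minimal inference sketch assuming the pycoral package and a model already compiled for the Edge TPU (the file name here is made up):

    import numpy as np
    from pycoral.adapters import classify, common
    from pycoral.utils.edgetpu import make_interpreter

    # Load a quantized model compiled for the Edge TPU (hypothetical name).
    interpreter = make_interpreter("model_edgetpu.tflite")
    interpreter.allocate_tensors()

    # Feed a stand-in for a real image; the device only does inference.
    width, height = common.input_size(interpreter)
    common.set_input(interpreter, np.zeros((height, width, 3), dtype=np.uint8))
    interpreter.invoke()

    for c in classify.get_classes(interpreter, top_k=1):
        print(c.id, c.score)

No training, no giant interconnect; just small quantized models at a few watts.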


Can I hook a microphone up to a Coral Mini and run Whisper? I'd love to have a home assistant that wasn't on the cloud.
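
Something like this is what I'm picturing; a CPU-only sketch (assuming the openai-whisper, sounddevice, and soundfile packages), since the Edge TPU itself only runs quantized TFLite models:

    import sounddevice as sd
    import soundfile as sf
    import whisper

    # Record a short clip from whatever mic is attached, then transcribe
    # locally. Whisper runs on the CPU here, not on the Edge TPU.
    SECONDS, RATE = 5, 16000
    audio = sd.rec(int(SECONDS * RATE), samplerate=RATE, channels=1)
    sd.wait()  # block until the recording finishes
    sf.write("clip.wav", audio, RATE)

    model = whisper.load_model("tiny")  # smallest checkpoint
    print(model.transcribe("clip.wav")["text"])

Whether the Mini's CPU can keep up with even the tiny model is exactly what I'd want to find out.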

As for the rest of them, list them on Amazon and let them do the fulfillment. That $10k of hardware isn't going to sell itself from your closet. (Yet. LLMs are making great strides.)


It has a microphone built in.

And that's a good idea, thanks. I've been dreading the prospect of using eBay.


They are entirely different chips: roughly an order of magnitude apart in transistor count and die size.


yes



