Hacker News

You need a lot of space for that, cooling, and a good fuse that won't trip when you turn it on. I would totally just pay the money for an M4 Ultra Mac Studio with 128 GB of RAM (or an M4 Max with 64 GB). It is a much cleaner setup, especially if you aren't interested in image generation (which the Macs are not good at yet).

If I could spend $4k on a non-Apple turn key solution that I could reasonably manage in my house, I would totally consider it.




Well, that's your call. If you're the sort of person who's willing to spend $2,000 on an M4 Ultra (which doesn't quite exist yet, but we can pretend it does), then I honest to god do not understand why you'd refuse to spend that same money on a Jetson Orin with the same amount of memory, in a smaller footprint, with better performance and lower power consumption.

Unless you're specifically speccing out a computer for mobile use, the price premium you spend on a Mac isn't for better software or faster hardware. If you can tolerate Linux or Windows, I don't see why you'd even consider Mac hardware for your desktop. In the OP's position, suggesting Apple hardware literally makes no sense. They're not asking for the best hardware that runs MacOS, they're asking for the best hardware for AI.

> If I could spend $4k on a non-Apple turn key solution that I could reasonably manage in my house, I would totally consider it.

You can't pay Apple $4k for a turnkey solution, either. MacOS is borderline useless for headless inference: Vulkan compute and OpenCL are both MIA, package managers break on regular system updates and don't support rollback, LTS support barely exists, most coreutils are outdated and unmaintained, Asahi supports things that MacOS doesn't and vice-versa... you can't fool me into thinking that's a "turn key solution" any day of the week. If your car requires you to pick a package manager after you turn the engine over, then I really feel sorry for you. The state of MacOS for AI inference is truly no better than what Microsoft did with DirectML. By some accounts it's quite a bit worse.


An M4 Ultra with enough RAM will cost more than $2,000. An M2 Ultra Mac Studio with 64 GB is $3,999, and you probably want more RAM than that to run the bigger models the Ultra can handle (it is basically 2x as powerful as the Max, with more memory bandwidth). An M2 Max with 64 GB of RAM, which is more reasonable, will run you $2,499. I have no idea if those prices will hold when the M4 Mac Studios finally come out (an M4 Max MBP with 64 GB of RAM starts at $3,900 ATM).

> You can't pay Apple $4k for a turnkey solution, either.

I've seen/read plenty of success stories of Metal ports of models being run via LM Studio without much configuration, setup, or hardware scavenging, so we can just disagree there.


>You need a lot of space for that, cooling, and a good fuse

Or live in Europe, where any wall socket can give you closer to 3 kW. For crazier setups like charging your EV, you can have three-phase plugs with ~22 kW to play with. 1 m² of floor space isn't that substantial either, unless you already live in a closet in the middle of the most crowded city.


Three-phase 240 V at 16 A is just about 11 kW. You're not going to find anything above that in a residential setting unless it was purpose-built.

That's still a lot of power, though, and does not invalidate your point.
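For what it's worth, the arithmetic behind those figures checks out. A minimal sketch, assuming the standard European 230/400 V system with 16 A per-phase breakers (the 32 A value for the EV plug is my assumption, not from the thread):

```python
import math

# Three-phase power: P = sqrt(3) * V_line * I
# Assumed: 400 V line-to-line (European 230/400 V system), 16 A per phase.
V_LINE = 400  # volts, line-to-line
AMPS = 16     # amps per phase

p_watts = math.sqrt(3) * V_LINE * AMPS
print(f"{p_watts / 1000:.1f} kW")  # 11.1 kW -- "just about 11 kW"

# Same formula for the ~22 kW EV plug, assuming a 32 A circuit:
print(f"{math.sqrt(3) * V_LINE * 32 / 1000:.1f} kW")  # 22.2 kW
```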



