Yeah, that explains why I can't buy one of these. Now, what I am puzzled by is why none of the various PCIe (standard form factor) cards that offer M.2 NVMe slots will work with any of my older computers; they have PCIe lanes to spare, but I suppose the system firmware itself just doesn't know what to do with them.
If you want to (for example) put 4 NVME drives (4 lanes each) in a 16x slot, then you need two things:
1. The 16x slot actually needs to have 16 lanes (on consumer motherboards there is only one slot like this, and on many the second 16x slot shares the lanes, so it will need to be empty)
2. You need to configure the PCIe controller to treat that slot as 4 separate slots (this is called PCIe bifurcation).
For recent Ryzen CPUs, the first 16x slot usually goes directly to the CPU, and the CPU supports bifurcation, so (assuming your BIOS allows enabling bifurcation; most recent ones do) all you need to do is figure out which PCIe lanes go to which slots (the motherboard manual will have this).
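Once bifurcation is enabled and the card is in the CPU-connected slot, a quick sanity check is to count how many NVMe controllers the OS can see. A minimal sketch, assuming a Linux host with lspci installed and four drives on the riser (adjust the expected count to your setup):

```python
# Count NVMe controllers visible to the OS after enabling bifurcation.
# Assumes a Linux host with lspci installed; 4 is just the expected count
# for a 4-drive x16 riser - adjust as needed.
import subprocess

def count_nvme_controllers() -> int:
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return sum("Non-Volatile memory controller" in line for line in out.splitlines())

if __name__ == "__main__":
    n = count_nvme_controllers()
    print(f"{n} NVMe controller(s) visible")
    if n < 4:
        print("Fewer than 4 - check that bifurcation (x4/x4/x4/x4) is enabled for this slot")
```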
If you aren't going to use the integrated graphics, you'll still need a 16x slot for your GPU. That slot will at best have 4 lanes, and on most motherboards (all 800 series chipset motherboards from all major manufacturers) it is multiplexed through the chipset, so now your GPU is sharing bandwidth with e.g. USB and Ethernet, which seems less than ideal (I've not benchmarked, maybe someone else has?).
In the event that you want to do 4x NVMe in a 16x slot, I found that the "MSI X670E ACE" has a 4x slot that does not go through the chipset, and so does the "ASRock B650M Pro x3D RS WiFi"; either of those should work with a 9000 series Ryzen CPU.
ThreadRipper CPUs have like a gajillion PCIe lanes, so there shouldn't be any issues there.
I have also been told that there are some (expensive) PCIe cards that present as a single PCIe device to the host. I haven't tried them.
Thanks, very helpful! My use case is a bit different: NVMe drives are now cheap and fast as hell, I want to put them in older existing machines in the ~10-15 year old sort of category that have PCIe but not much else. Buying a new mobo kinda defeats the purpose... but as I type this, I can't remember what the purpose really is anyway, lol!
Adapting a PCIe slot to a single M.2 slot should always work, so long as the SSD is actually using the PCIe lanes and not the SATA port that M.2 can also support.
Depending on how old the motherboard, or UEFI, is - it may not be able to directly boot from NVMe. Years ago I modified the UEFI on my Haswell-era board to add a DXE to support NVMe boot. You shouldn’t need to do that to see the NVMe device within your OS though.
As the other reply mentioned, if you want to run multiple SSDs on a cheap adapter your platform needs to support bifurcation, but if it doesn't support bifurcation, not all hope is lost. PCIe switches have become somewhat cheaper - you can find cards based on the PEX8747 for relatively little under names like PE3162-4IL. The caveat here is that you're limited to PCIe 3.0; switches for 4.0+ are still very expensive.
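Some rough numbers on why the PCIe 3.0 limit matters, using approximate per-lane throughput after 128b/130b encoding (exact figures vary a little with protocol overhead):

```python
# Rough bandwidth math for a PCIe 3.0 switch card (e.g. PEX8747-based) vs. native PCIe 4.0.
# Per-lane figures are approximate usable throughput after 128b/130b encoding.
PCIE3_PER_LANE_GBS = 0.985   # ~8 GT/s * 128/130 / 8 bits
PCIE4_PER_LANE_GBS = 1.969

per_drive_link = 4 * PCIE3_PER_LANE_GBS      # each SSD sits behind an x4 PCIe 3.0 link
uplink = 16 * PCIE3_PER_LANE_GBS             # shared x16 Gen3 uplink to the host
native_gen4_drive = 4 * PCIE4_PER_LANE_GBS   # what a Gen4 SSD could do in a native slot

print(f"per-drive cap behind the switch: ~{per_drive_link:.1f} GB/s")
print(f"aggregate cap (x16 Gen3 uplink):  ~{uplink:.1f} GB/s")
print(f"native Gen4 x4 drive:             ~{native_gen4_drive:.1f} GB/s")
```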
> I suppose the system firmware itself just doesn't know what to do with them.
Anything since socket 1155 can work, and even Westmere/Nehalem should work too, except you don't really want a system that old.
I have a 4TB M.2 drive in an x1 slot in my GA-Z77X; works fine.
It's not a boot drive and I didn't bother to check whether the BIOS supports booting from NVMe, though a quick search says support for that started with the Z97 chipset/socket 1150.
> use a little sata SSD as /boot
For Linux you can even use some USB thumbdrive, especially a small-profile one like the Kingston DataTraveler Micro G2.
I have an older workstation with no NVMe sockets. You can find 4 socket NVMe PCIe cards that let you bifurcate a 16x PCIe socket into four 4x NVMe sockets. Great way to make a local flash ZFS pool.
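As a rough sketch of the ZFS side (the device paths, the pool name "flash", and the striped-mirror layout are just placeholder choices; the script only prints the command rather than running it):

```python
# Sketch: assemble a `zpool create` command for the 4 NVMe drives exposed by a bifurcated card.
# Pool name and layout are placeholders - two mirrored pairs is just one reasonable choice.
import glob

drives = sorted(glob.glob("/dev/nvme[0-9]n1"))
if len(drives) != 4:
    raise SystemExit(f"expected 4 NVMe namespaces, found {len(drives)}: {drives}")

cmd = ["zpool", "create", "flash",
       "mirror", drives[0], drives[1],
       "mirror", drives[2], drives[3]]
print("suggested command:", " ".join(cmd))
```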
The UNVR Network Video Recorder is 320€ incl. VAT. Does this exist as software which I can download for free and run in a VM, so that the UniFi camera, which would cost at least 100€, can store its data there?
This is such funny gatekeeping. I don't know what % of the U.S. population could even run 1 mile at all without stopping, but I'm certain it's well under 50%, much less do at least that every day for 10 years. This is an impressive feat. For real, shame on you for crapping on this person's very respectable achievement.
It's not "your" HN, HN doesn't do algorithmic/per-user ranking. (Ed.: Actually a refreshing breath of wide social cohesion on a platform, IMHO. We have enough platforms that create bubbles for you.)
It's top1 on everyone's HN because a sufficient number of people (including myself) thought it a nice writeup about fat ARM systems.
I haven’t been following hardware for a while, granted, but this is the first time I’ve seen a desktop build with an arm64 CPU. Didn’t know you could just… buy one.
For what it's worth, I've been using a Lenovo X13s for some 3 months now. It's not a desktop, and it took years for core components to be supported in mainline Linux, but I do use it as a daily driver now. The only thing that's still not working is the webcam.
Would you call a Threadripper system "a normal build"? For many people they are normal builds, because they need more computing power or more PCIe lanes than a "normal user" desktop has.
On the other side you have those who pretend to use a Raspberry Pi 3 as "an Arm desktop" despite it having only 1GB of RAM and 4 sluggish cores.
torchft can handle much larger scales, but for a public multi-day demonstration run this is what we had available. The point of this blog post was to demonstrate the correctness of the quorum algorithm and recovery with a stock PyTorch stack, not so much peak FLOPS.
Stay tuned though -- planning on doing some much larger demos on B200s!
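For anyone unfamiliar with the terms: the following is not torchft's actual API, just a toy sketch of the general quorum-and-recovery shape the post demonstrates - live replicas agree on membership and a step to resume from, and a restarted replica copies state from a healthy peer before rejoining (all names here are made up for the sketch):

```python
# Toy illustration of quorum + recovery for fault-tolerant data-parallel training.
# Not torchft's API; Replica, compute_quorum, recover are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class Replica:
    rank: int
    alive: bool = True
    step: int = 0
    state: dict = field(default_factory=dict)   # stand-in for model/optimizer state

def compute_quorum(replicas):
    """Quorum = the set of live replicas; training resumes from the max step among them."""
    live = [r for r in replicas if r.alive]
    target_step = max(r.step for r in live)
    return live, target_step

def recover(replica, donor, target_step):
    """A restarted replica copies state from a healthy peer, then rejoins the quorum."""
    replica.state = dict(donor.state)
    replica.step = target_step
    replica.alive = True

replicas = [Replica(rank=i, step=100, state={"w": 1.0}) for i in range(4)]
replicas[2].alive = False                  # simulate a crash
live, step = compute_quorum(replicas)      # the remaining 3 keep training
recover(replicas[2], donor=live[0], target_step=step)   # rejoin after restart
print([(r.rank, r.alive, r.step) for r in replicas])
```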
I was curious about this so I had o3 do a bit of research. Turns out 300 L40s have more compute than any supercomputer before 2013 (and arguably before 2016, depending on how you count reduced-precision FLOPs).
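Back-of-the-envelope version of that comparison, using approximate public spec numbers for the L40 and Linpack Rmax figures for the historical #1 machines (so FP32/FP8 vs. FP64, apples to oranges by design):

```python
# Rough comparison: 300x NVIDIA L40 vs. historical Top500 leaders.
# Spec numbers are approximate; supercomputer figures are Linpack Rmax (FP64).
L40_FP32_TFLOPS = 90.5        # approx. peak FP32
L40_FP8_TFLOPS  = 362.0       # approx. dense FP8 tensor throughput

cluster_fp32_pflops = 300 * L40_FP32_TFLOPS / 1000   # ~27 PFLOPS
cluster_fp8_pflops  = 300 * L40_FP8_TFLOPS / 1000    # ~109 PFLOPS

titan_2012_pflops   = 17.6    # Titan, Top500 #1 in late 2012
tianhe2_2013_pflops = 33.9    # Tianhe-2, Top500 #1 from 2013 to 2015

print(f"300x L40, FP32: ~{cluster_fp32_pflops:.0f} PFLOPS vs Titan (2012): {titan_2012_pflops} PFLOPS")
print(f"300x L40, FP8:  ~{cluster_fp8_pflops:.0f} PFLOPS vs Tianhe-2 (2013-2015): {tianhe2_2013_pflops} PFLOPS")
```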