I suppose this is one of the things you can do when you take Dell private: no institutional shareholders to sue you for 'threatening their value' with a radical product idea.
As a systems guy I love the concept, but I'm a bit sad about the implementation. I would have loved to see the backplane of these things connect to a 'switch module' and take the connectors off the front. Basically a 48-port GbE switch with quad 10GbE uplinks out the "end" of the case would have been much nicer. Install a nice SDN stack in the switch hardware, so that you can virtualize the switch topology on the fly, and you've got a box that you can configure in lots of ways while still getting some economies of scale in both the CPU and the switch infrastructure (there's a sketch of what I mean at the end of this comment).

Install a 24-port 10GbE switch at the top of the rack and you've got 6 "copper" chassis (18U) plus the 24-port switch (1U): a rack with 288 hosts, 2.3T of RAM, 288T of storage, and, assuming a non-blocking 24-port switch, roughly 833Mbit/s between any two hosts (each chassis's quad 10GbE uplinks shared across its 48 hosts: 40Gbps / 48 ≈ 833Mbit/s). Add a 1U boot/config management server to the rack and that is a heck of a gizmo.
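To make the arithmetic explicit, here it is as a tiny Python script. The per-sled figures (48 sleds per 3U chassis, 8GB RAM and 1TB of disk each) are my assumptions backed out of the totals above, not published specs:

    # Back-of-the-envelope rack math for the hypothetical build above.
    # Assumed: 48 sleds/chassis, 8GB RAM and 1TB disk per sled,
    # quad 10GbE uplinks per chassis, six chassis per rack.
    SLEDS_PER_CHASSIS = 48
    CHASSIS_PER_RACK = 6
    RAM_GB_PER_SLED = 8
    DISK_TB_PER_SLED = 1
    UPLINKS_PER_CHASSIS = 4
    UPLINK_GBPS = 10

    hosts = SLEDS_PER_CHASSIS * CHASSIS_PER_RACK       # 288 hosts
    ram_tb = hosts * RAM_GB_PER_SLED / 1000            # ~2.3T of RAM
    disk_tb = hosts * DISK_TB_PER_SLED                 # 288T of storage

    # Worst-case cross-chassis bandwidth: one chassis's quad 10GbE
    # uplinks split evenly across its 48 sleds, assuming the 24-port
    # top-of-rack switch itself is non-blocking.
    per_host_mbps = UPLINKS_PER_CHASSIS * UPLINK_GBPS * 1000 / SLEDS_PER_CHASSIS

    print(f"{hosts} hosts, {ram_tb:.1f}T RAM, {disk_tb}T storage, "
          f"~{per_host_mbps:.0f}Mbit/s host-to-host across chassis")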
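And since "install a nice SDN stack" is doing a lot of work up above, here's the promised sketch of the topology-virtualization idea. None of this is a real SDN controller API; SwitchModule, VirtualSwitch, carve, and release are all names I made up, standing in for whatever OpenFlow-style flow rules the switch silicon would actually program:

    # Minimal sketch: slice one chassis's 48-port switch into virtual
    # switches on the fly, no recabling. All names here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualSwitch:
        """A slice of the physical switch: its own isolated set of ports."""
        name: str
        ports: set[int] = field(default_factory=set)

    class SwitchModule:
        """One chassis switch: 48 sled-facing GbE ports, 4 x 10GbE uplinks."""

        def __init__(self, gbe_ports: int = 48, uplinks: int = 4):
            self.free_ports = set(range(gbe_ports))
            self.uplinks = uplinks
            self.vswitches: dict[str, VirtualSwitch] = {}

        def carve(self, name: str, port_count: int) -> VirtualSwitch:
            """Carve a virtual switch out of unallocated ports."""
            if port_count > len(self.free_ports):
                raise ValueError("not enough free ports")
            ports = {self.free_ports.pop() for _ in range(port_count)}
            vsw = VirtualSwitch(name, ports)
            self.vswitches[name] = vsw
            return vsw

        def release(self, name: str) -> None:
            """Tear a virtual switch down; its ports go back in the pool."""
            self.free_ports |= self.vswitches.pop(name).ports

    # Reconfigure the same hardware two ways without touching a cable:
    sw = SwitchModule()
    sw.carve("web-tier", 32)
    sw.carve("storage", 16)
    sw.release("web-tier")          # repurpose those 32 sleds...
    sw.carve("batch-compute", 32)   # ...as an isolated batch partition

The point being that the "topology" becomes a config-management artifact you push from that 1U boot/config server, which is where the economies of scale come from.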