
I'm still hoping to find a more detailed article about modern X86-64 NonStop, complete with Mackie Diagrams.

The last one I can find is for the NonStop Advanced Architecture (on Itanium), with ServerNet. I gather that this was replaced with the NonStop Multicore Architecture (also on Itanium), with Infiniband, and I assume the x86-64 version is basically the same design ported to x86-64, running in pseudo big-endian mode.




A hypervisor (software) approach is one way to accomplish this far more cheaply, and with much more configurability and reusability, than relying on dedicated hardware. VMware's x86-64 fault-tolerance feature runs two VMs on different hosts using the lockstep method. If either fails, the hypervisor moves the (V)IP over to the surviving VM via ARP and spawns a replacement. More often than not, it's a way to run a critical machine that cannot accept any downtime and cannot otherwise be (re)engineered for HA in the conventional way with other building blocks. In general, one should avoid this and instead prefer either always-consistent quorum/two-phase-commit transactions at the cost of availability or throughput, or eventual consistency via gossip updates at the cost of inconsistency and potential data loss.
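To make the two-phase-commit side of that trade-off concrete, here's a minimal sketch (all class and function names are hypothetical, not any real product's API): the commit only proceeds if every participant votes yes, so one unreachable participant blocks progress — consistency bought at the price of availability.

```python
class Participant:
    """Hypothetical 2PC participant that stages a value before committing."""

    def __init__(self, name):
        self.name = name
        self.staged = None
        self.committed = []

    def prepare(self, value):
        # Vote yes iff we can durably stage the value. A timeout or crash
        # here would count as a "no" vote and abort the whole transaction.
        self.staged = value
        return True

    def commit(self):
        self.committed.append(self.staged)
        self.staged = None

    def abort(self):
        self.staged = None


def two_phase_commit(participants, value):
    # Phase 1: collect votes; any "no" aborts everyone.
    if all(p.prepare(value) for p in participants):
        # Phase 2: every participant voted yes, so commit everywhere.
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False
```

The availability cost is visible in `all(...)`: the coordinator cannot commit (or safely give up) until every participant answers, which is exactly what gossip-based eventual consistency avoids by accepting temporary divergence instead.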


What do you want to know?


What has changed since Itanium? What counts as a logical NonStop CPU now? As I (mis?)understand it, under Itanium a physical server blade was called a slice. It had multiple CPU sockets (called Processing Elements) and memory that was partitioned with MMU mapping and Itanium protection keys so each Processing Element could only access a portion of it. All IO from a Processing Element went out over ServerNet (or Infiniband) to a pair of Logical Sync Units, where it was checked/compared against IO from another Processing Element running the same code on a different physical server blade. The 2 (or 3) Processing Elements combined to form a single logical CPU. I wonder if this is still the case? I believe there was a follow-on (I assume when Itanium went multi-core) called the NonStop Multicore Architecture, but I haven't found a paper on it.
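As I understand the checking step, it amounts to output comparison across replicas — roughly this kind of vote (a hypothetical sketch, not how the Logical Sync Units are actually implemented): with two replicas a mismatch can only be detected, while with three the majority masks a single faulty Processing Element.

```python
from collections import Counter

def vote(outputs):
    """Compare IO outputs from replicated processing elements.

    Returns (value, ok). With 2 replicas, ok means both agreed
    (detect-only); with 3, the majority value wins and a lone
    dissenter is outvoted (fault masking).
    """
    counts = Counter(outputs)
    value, n = counts.most_common(1)[0]
    if len(outputs) == 2:
        return value, n == 2              # duplex: can only detect a mismatch
    return value, n > len(outputs) // 2   # triplex: majority masks one fault
```

That duplex-vs-triplex distinction is why the number of Processing Elements per logical CPU matters: two give you fail-fast behavior, three give you continued operation through a single fault.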

Also, I'm curious how the Disk Process fits in with Storage Clustered IO Modules (CLIMs). Do CLIMs just act as a raw disk, with the Disk Process talking to them the way it would talk to a locally attached disk? Or is there more integration with the CLIM — say, a portion of the Disk Process has been ported to Linux, or Enscribe has been ported to run on the CLIMs?

Same question for how Networking CLIMs fit in.




