...
This deployment is similar to Data Zone Local High Availability. The difference is that anti-affinity rules are used to ensure that DZ nodes run on N = 2 or more different sites (data centers, availability zones). Each site runs N = 1 or more DZ replicas.
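The effect of such an anti-affinity rule can be sketched as a placement check. The following Python snippet is illustrative only, not an orchestrator API: the site names, node names, and both helper functions are hypothetical.

```python
def place_replicas(replicas: int, sites: list[str]) -> dict[str, list[str]]:
    """Spread DZ replicas round-robin over sites, mimicking an
    anti-affinity rule (hypothetical helper, not a real scheduler)."""
    if len(sites) < 2:
        raise ValueError("anti-affinity across sites requires N >= 2 sites")
    placement: dict[str, list[str]] = {site: [] for site in sites}
    for i in range(replicas):
        placement[sites[i % len(sites)]].append(f"dz-node-{i}")
    return placement

def survives_single_site_failure(placement: dict[str, list[str]]) -> bool:
    """True if losing the largest site still leaves at least one replica."""
    counts = [len(nodes) for nodes in placement.values()]
    return sum(counts) - max(counts) >= 1

# Four DZ replicas over two sites: each site ends up with two replicas,
# so a full site outage still leaves the DZ operational.
layout = place_replicas(4, ["site-a", "site-b"])
print(layout)
print(survives_single_site_failure(layout))  # True
```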
...
It protects against:
- Single Site failure in DZ
- Hypervisor failure in DZ. Even if an entire site’s Hypervisor fails, the other sites’ Hypervisors remain functional.
If N > 2, it also protects against:
...
This deployment is similar to Control Zone Local High Availability. The difference is that anti-affinity rules are used to ensure that CZ nodes run on N = 2 sites (data centers, availability zones). Each site runs N = 1 or more CZ replicas.
...
It protects against:
- Single Site failure in CZ
...
A variant of the above scenario is to use two independent CZs that share parts of the DZ, specifically CouchBase. CouchBase replication is used to ensure HA for this shared instance. No other parts, e.g. the MZDB and the CZ file systems, are shared.
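As a sketch of what the shared-CouchBase setup could look like, the snippet below creates a cross data center replication (XDCR) from site A to site B through CouchBase’s administrative REST API, using Python’s requests library. Host names, credentials, and the bucket name are placeholders, and the exact endpoints and parameters should be verified against the CouchBase version in use.

```python
import requests

SITE_A = "http://cb-site-a:8091"      # placeholder: CouchBase admin endpoint, site A
AUTH = ("Administrator", "password")  # placeholder credentials

# 1. Register site B as a remote cluster reference on site A.
resp = requests.post(
    f"{SITE_A}/pools/default/remoteClusters",
    auth=AUTH,
    data={
        "name": "site-b",
        "hostname": "cb-site-b:8091",  # placeholder: site B admin address
        "username": "Administrator",
        "password": "password",
    },
)
resp.raise_for_status()

# 2. Start a continuous XDCR replication of the session bucket to site B.
resp = requests.post(
    f"{SITE_A}/controller/createReplication",
    auth=AUTH,
    data={
        "fromBucket": "sessions",      # placeholder: bucket holding session state
        "toCluster": "site-b",
        "toBucket": "sessions",
        "replicationType": "continuous",
    },
)
resp.raise_for_status()
```

For a symmetric setup, the same replication is also created in the opposite direction, from site B to site A.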
...
This gives zero-time failover for Diameter use cases, while session state replication allows failover from one site to the other. A requirement for this is that sticky sessions are used when accessing the DZ.
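A sticky-session policy can be as simple as hashing the Diameter Session-Id to a fixed DZ replica, so that every request of a session reaches the node that already holds its state; the replica names and the route helper below are hypothetical.

```python
import hashlib

DZ_REPLICAS = ["dz-site-a-0", "dz-site-a-1", "dz-site-b-0", "dz-site-b-1"]

def route(session_id: str, replicas: list[str] = DZ_REPLICAS) -> str:
    """Map a Diameter Session-Id deterministically to one DZ replica."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return replicas[int.from_bytes(digest[:8], "big") % len(replicas)]

# All requests in one Diameter session hit the same replica ...
assert route("host;1234;5678") == route("host;1234;5678")

# ... and on failover the session can resume on the other site, because its
# state has been replicated there; routing is restricted to the survivors.
surviving = [r for r in DZ_REPLICAS if not r.startswith("dz-site-a")]
print(route("host;1234;5678", surviving))
```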
...
This deployment is nearly identical to Multiple-Site High Availability for all zones, except that the sites are located in different regions, i.e. geographically separate. It is often referred to as “geographic disaster recovery”. This deployment type can be seen as an addition either to a local HA deployment (one live site and one disaster recovery site) or to a multi-site HA deployment (two or more live sites and one disaster recovery site).
...
The inter-site latency in this case is assumed to be > 3 ms, making synchronous replication infeasible.
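A back-of-the-envelope calculation shows why. With synchronous replication, each commit waits for one inter-site round trip, so the round-trip time becomes a hard floor on write latency; the figures below are illustrative.

```python
# Illustrative figures: 3 ms one-way inter-site latency, 0.5 ms local commit.
one_way_ms = 3.0
local_commit_ms = 0.5

# Synchronous replication: commit = local work + one round trip to the remote site.
sync_commit_ms = local_commit_ms + 2 * one_way_ms   # 6.5 ms per write
sync_writes_per_sec = 1000 / sync_commit_ms         # ~154 sequential writes/s

# Asynchronous replication: commit completes locally; the remote copy lags behind.
async_commit_ms = local_commit_ms                   # 0.5 ms per write
async_writes_per_sec = 1000 / async_commit_ms       # 2000 sequential writes/s

print(f"sync:  {sync_commit_ms} ms/commit, ~{sync_writes_per_sec:.0f} writes/s per session")
print(f"async: {async_commit_ms} ms/commit, ~{async_writes_per_sec:.0f} writes/s per session")
```

The cost of going asynchronous is a small window of not-yet-replicated data if the live site is lost, which is the usual trade-off in disaster recovery deployments.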
...