...

In this deployment, DZ storage is highly available within a single site by keeping N = 2 or more replicas of all data.
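
CouchBase is named later in this section as part of DZ; as a minimal sketch (assuming the standard Couchbase REST API, with placeholder hostname, credentials and bucket name), a bucket could be created with one replica copy so that every item is kept in N = 2 copies within the site. Note that Couchbase counts replicas in addition to the active copy.

    import requests

    # Hypothetical admin endpoint and credentials for one DZ CouchBase node.
    CLUSTER = "http://cb-node1.example.com:8091"
    AUTH = ("Administrator", "password")

    # replicaNumber counts copies in addition to the active one, so a value of 1
    # keeps every item in two copies (N = 2) within the site.
    resp = requests.post(
        f"{CLUSTER}/pools/default/buckets",
        auth=AUTH,
        data={
            "name": "dz-data",          # placeholder bucket name
            "bucketType": "couchbase",
            "ramQuotaMB": 1024,
            "replicaNumber": 1,
        },
    )
    resp.raise_for_status()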

...

It protects against:

  • Single Component failure in DZ.

...

For deployments using Kafka persistent storage, local HA is achieved with N = 3 or more brokers and a replication factor of 2 or more. The replicas should not share storage, and the broker instances should run on different hosts.
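
A minimal sketch of such a topic layout, assuming the confluent_kafka Python client and placeholder broker addresses and topic name: on a three-broker cluster, a topic created with replication_factor=2 keeps every partition on two different brokers, so a single broker failure loses no data.

    from confluent_kafka.admin import AdminClient, NewTopic

    # Placeholder broker addresses; in the HA layout these run on different
    # hosts with separate storage.
    admin = AdminClient({"bootstrap.servers": "broker1:9092,broker2:9092,broker3:9092"})

    # Two copies of every partition, placed on two of the three brokers.
    topic = NewTopic("dz-events", num_partitions=6, replication_factor=2)

    # create_topics() is asynchronous; wait on the per-topic future to confirm.
    for name, future in admin.create_topics([topic]).items():
        future.result()  # raises if creation failed
        print(f"created topic {name}")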

Control Zone Local High Availability

In this deployment, CZ contains N = 2 nodes in an active-standby configuration.
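
As a rough sketch of the active-standby idea (the health-check URL, polling interval and promote hook below are placeholders, not the product's actual failover procedure), the standby node could poll the active node and take over only after repeated consecutive failures:

    import time
    import requests

    ACTIVE_HEALTH_URL = "http://cz-active.example.com:9000/health"  # placeholder
    FAILURE_THRESHOLD = 3
    failures = 0

    def promote_standby():
        # Placeholder for the product-specific failover procedure.
        print("active CZ node unreachable -> promoting standby")

    while True:
        try:
            requests.get(ACTIVE_HEALTH_URL, timeout=2).raise_for_status()
            failures = 0
        except requests.RequestException:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                promote_standby()
                break
        time.sleep(5)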

...

It protects against:

  • Single Component Failure in CZ. If the active node fails, a failover procedure to the standby node is performed.

...

This deployment is similar to Execution Zone Local High Availability. The difference is that anti-affinity rules are used to ensure that EZ nodes run on N = 2 or more sites (data centers, availability zones).
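
The anti-affinity rule can be stated as a simple invariant over the placement. The sketch below, using a hypothetical inventory, checks that EZ nodes span at least two sites and that no two nodes share a hypervisor host:

    # Hypothetical placement: node -> (site, hypervisor host).
    placement = {
        "ez-node-1": ("site-a", "hv-a1"),
        "ez-node-2": ("site-b", "hv-b1"),
        "ez-node-3": ("site-b", "hv-b2"),
    }

    sites = {site for site, _ in placement.values()}
    hosts = [host for _, host in placement.values()]

    assert len(sites) >= 2, "EZ nodes must be spread over N = 2 or more sites"
    assert len(hosts) == len(set(hosts)), "no two EZ nodes may share a hypervisor host"
    print(f"placement OK: {len(placement)} EZ nodes on {len(sites)} sites")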

...

It protects against:

  • Single Site failure in EZ

  • Hypervisor failure in EZ. Even if an entire site’s Hypervisor fails, the other sites’ Hypervisors remain functional.

If N > 2, it also protects against:

...

This deployment is similar to Data Zone Local High Availability. The difference is that anti-affinity rules are used to ensure that DZ nodes run on N = 2 or more different sites (data centers, availability zones). Each site runs N = 1 or more DZ replicas.
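
The corresponding invariant for DZ is that losing any single site still leaves at least one replica of the data. A small sketch with a hypothetical placement:

    from collections import Counter

    # Hypothetical placement: the site hosting each DZ replica.
    replica_sites = ["site-a", "site-a", "site-b", "site-b"]

    per_site = Counter(replica_sites)
    assert len(per_site) >= 2, "DZ replicas must span N = 2 or more sites"
    for site in per_site:
        remaining = sum(n for s, n in per_site.items() if s != site)
        assert remaining >= 1, f"losing {site} would leave no DZ replica"
    print("DZ replicas per site:", dict(per_site))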

...

It protects against:

  • Single Site failure in DZ

  • Hypervisor failure in DZ. Even if an entire site’s Hypervisor fails, the other sites’ Hypervisors remain functional.

If N > 2, it also protects against:

...

This deployment is similar to Control Zone Local High Availability. The difference is that anti-affinity rules are used to ensure that CZ nodes run on N = 2 sites (data centers, availability zones). Each site runs one CZ node.

...

It protects against:

  • Single Site failure in CZ

...

A variant of the above scenario uses two independent CZ instances, which share parts of DZ, specifically CouchBase. CouchBase replication is used to ensure HA for this shared instance. No other parts of DZ, e.g. MZDB and CZ file systems, are shared.
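
One way the shared CouchBase data could be replicated between the two sites is Couchbase XDCR. The sketch below (placeholder hosts, credentials and bucket name) registers site B as a remote cluster on site A and starts a continuous replication; for symmetric failover, the reverse replication from site B to site A is set up in the same way.

    import requests

    SITE_A = "http://cb-site-a.example.com:8091"  # placeholder admin endpoint
    AUTH = ("Administrator", "password")          # placeholder credentials

    # 1. Register site B as a remote cluster on site A.
    requests.post(
        f"{SITE_A}/pools/default/remoteClusters",
        auth=AUTH,
        data={
            "name": "site-b",
            "hostname": "cb-site-b.example.com:8091",
            "username": "Administrator",
            "password": "password",
        },
    ).raise_for_status()

    # 2. Replicate the shared bucket continuously from site A to site B.
    requests.post(
        f"{SITE_A}/controller/createReplication",
        auth=AUTH,
        data={
            "fromBucket": "shared-sessions",  # placeholder bucket name
            "toCluster": "site-b",
            "toBucket": "shared-sessions",
            "replicationType": "continuous",
        },
    ).raise_for_status()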

...

This gives zero-time failover for Diameter use cases, while session state replication allows failover from one site to the other. A requirement for this is that sticky sessions are used when accessing DZ.
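
The sticky-session requirement means that all requests belonging to one session are routed to the same site. One simple (hypothetical) way to express this is to derive the target site from the session ID:

    import hashlib

    SITES = ["site-a", "site-b"]  # placeholder site list

    def site_for_session(session_id: str) -> str:
        # The same session ID always hashes to the same site.
        digest = hashlib.sha256(session_id.encode("utf-8")).digest()
        return SITES[digest[0] % len(SITES)]

    assert site_for_session("diameter-session-42") == site_for_session("diameter-session-42")
    print(site_for_session("diameter-session-42"))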

...

This deployment is nearly identical to Multiple-Site High Availability for all zones, except that the sites are located in different regions, i.e. geographically separate. It is often referred to as “geographic disaster recovery”. This deployment type can be seen as an addition to a local HA deployment (one live site and one disaster recovery site), or a multi-site HA deployment (two or more live sites and one disaster recovery site).

...

The inter-site latency in this case is assumed to be > 3 ms, making synchronous replication infeasible.
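
The impact can be made concrete with a small calculation: a synchronous write cannot be acknowledged before at least one inter-site round trip has completed, so the RTT is added to every write (the 10 ms value below is hypothetical, above the assumed 3 ms bound):

    rtt_ms = 10.0  # hypothetical inter-site round-trip time

    added_latency_ms = rtt_ms
    max_sequential_writes_per_s = 1000.0 / rtt_ms
    print(f"extra latency per write: {added_latency_ms:.0f} ms, "
          f"sequential write ceiling: {max_sequential_writes_per_s:.0f}/s")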

...