1. PCC Architecture Overview

The Policy Charging and Control (PCC) solution consists of the following components:

  •  - Containing the Access, Control and Execution Zones for the execution of workflows.

  • Real-time Data Repository - For storage of runtime information, available to any Execution Zone. This is referred to as the Data Repository throughout this document.

  • Monitoring - For monitoring of the overall solution. Note that monitoring does not apply to Redis.

The Policy Charging and Control solution is designed for high availability with high throughput and low latency. The setup of the solution depends on which type of data repository you have selected: Couchbase, MySQL Cluster, or Redis (ElastiCache Cluster on AWS).

PCC Setup when Using Couchbase

When using Couchbase, the setup is divided into the following parts:

PCC Architecture when using Couchbase

  • [CZ01] and [CZ02] - Two separate Platform Containers, their databases, and various monitoring and management processes for the collection of statistics. Two [CZs] are required for failover purposes, and one of them will be on stand-by.

  • [EZ01] and [EZ02] - Two Execution Containers, hosting the pico instances, i.e. ECSAs, that run the workflows required by the PCC solution. Two are required for failover purposes.

  • [DR01], [DR02] and [DR03] - A Couchbase cluster that contains the Data Repository where all data pertaining to processing is stored. The minimum cluster consists of three nodes with one replica per bucket.
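As an illustration only, a minimal three-node cluster with one replica per bucket could be set up with the Couchbase CLI roughly as sketched below. All host names, credentials, the bucket name and the RAM quota are placeholders, not part of the PCC installation procedure; consult the Couchbase documentation for the version you deploy.

```shell
# Sketch only -- host names, credentials and bucket name are examples.
# Initialize the first node of the Data Repository cluster on [DR01].
couchbase-cli cluster-init -c dr01.example.com:8091 \
  --cluster-username Administrator --cluster-password password \
  --services data

# Add [DR02] and [DR03] to the cluster, then rebalance.
couchbase-cli server-add -c dr01.example.com:8091 \
  -u Administrator -p password \
  --server-add dr02.example.com:8091 \
  --server-add-username Administrator --server-add-password password \
  --services data
couchbase-cli server-add -c dr01.example.com:8091 \
  -u Administrator -p password \
  --server-add dr03.example.com:8091 \
  --server-add-username Administrator --server-add-password password \
  --services data
couchbase-cli rebalance -c dr01.example.com:8091 -u Administrator -p password

# Create a bucket with one replica, matching the minimum recommendation above.
couchbase-cli bucket-create -c dr01.example.com:8091 -u Administrator -p password \
  --bucket pcc_data --bucket-type couchbase \
  --bucket-ramsize 1024 --bucket-replica 1
```

With one replica per bucket, the cluster can survive the loss of any single [DR] node without data loss, which is why three nodes is the minimum.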

Names within [ ] are used throughout the document to describe on which machines the corresponding function is to be installed, e.g. [CZ01] or [EZ01]. When the number is omitted, e.g. [CZ], the name refers to both [CZ01] and [CZ02], etc.

This instruction is written for the recommended minimal High Availability installation of the Policy Charging and Control solution, which consists of seven machines when using Couchbase. In order for the solution to be highly available, an additional process surveillance tool is required to ensure that all processes are up and running, and to handle the failover of e.g. [CZ] to a different machine in case it goes down. See 9. High Availability for PCC for more information.

When Couchbase is used as data repository, the functionality is split between a minimum of three nodes in a cluster.

PCC Setup when Using MySQL Cluster

When using MySQL Cluster, the setup is divided into the following parts:

PCC Architecture when using MySQL Cluster

  • [CZ01] and [CZ02] - Two separate Platform Containers, their databases, and various monitoring and management processes for the collection of statistics. Two [CZs] are required for failover purposes, and one of them will be on stand-by.

  • [EZ01] and [EZ02] - Two Execution Containers, hosting the pico instances, i.e. ECSAs, that run the workflows required by the PCC solution. Two are required for failover purposes.

  • [DR01] and [DR02] - Contains the Data Repository where all data pertaining to processing is stored.

Names within [ ] are used throughout the document to describe on which machines the corresponding function is to be installed, e.g. [CZ01] or [EZ01]. When the number is omitted, e.g. [CZ], the name refers to both [CZ01] and [CZ02], etc.

This instruction is written for the recommended minimal High Availability installation of the Policy Charging and Control solution, which consists of six machines when using MySQL Cluster. In order to scale the solution, see the PCC System Administrator's Guide. In order for the solution to be highly available, an additional process surveillance tool is required to ensure that all processes are up and running, and to handle the failover of e.g. [CZ] to a different machine in case it goes down. See 9. High Availability for PCC for more information.

If High Availability is not required, the minimum number of machines is two: one containing [CZ] and [EZ], and one containing [DR].

When MySQL is used as data repository, the database consists of the following components:

  • The MySQL Management node, which maintains the MySQL Cluster configuration and distributes it to the Data Management Nodes, the MySQL Server, and the Execution Contexts. The MySQL Management node runs as part of [CZ].
  • The Data Management Nodes, which provide data storage. The data storage can be expanded online by adding more Data Management Nodes. The Data Management Nodes provide the functionality of [DR].
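As an illustration of how these components map onto the machines above, a minimal MySQL Cluster config.ini for this layout might look as follows. The host names are placeholders, and the actual configuration file used in a PCC installation may differ.

```ini
; Illustrative config.ini sketch -- host names are examples only.
[ndb_mgmd]
; MySQL Management node, runs as part of [CZ01]
NodeId=1
HostName=cz01.example.com

[ndbd default]
; One replica per fragment, i.e. data is mirrored between [DR01] and [DR02]
NoOfReplicas=2

[ndbd]
; Data Management Node on [DR01]
HostName=dr01.example.com

[ndbd]
; Data Management Node on [DR02]
HostName=dr02.example.com

[mysqld]
; MySQL Server for administrative access
HostName=cz01.example.com
```

With NoOfReplicas=2, each data fragment is stored on both Data Management Nodes, so the Data Repository remains available if either [DR] machine goes down.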

PCC Setup when Using Redis

When using Redis for PCC in AWS, you use Amazon ElastiCache. Monitoring functions are not included in this setup.

The setup is divided into the following parts:

PCC Architecture when using Redis

  • [CZ01] and [CZ02] - Contain the Control Zone and various management processes for the management of the data repository. Two [CZs] are required for failover purposes, and one of them will be on stand-by.
  • [EZ01+DR01], [EZ02+DR02] - Contain the Execution Zones, where the workflows are executed, as well as the Data Repository where all data pertaining to processing is stored. Two [EZs] are required for failover purposes.

This instruction is written for the recommended minimal High Availability installation of the Policy Charging and Control solution. For information on how many shards to create, see https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Shards.html. For information on high availability for Amazon ElastiCache, see https://aws.amazon.com/documentation/elasticache/.

This instruction is based on the assumption that each "component" above corresponds to one machine, i.e., the names within [ ] are used throughout the document to describe on which machines the corresponding function is to be installed. Referring to [CZ] means the machines containing all the functions listed in that component.
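To make the [EZ+DR] layout concrete, connectivity from an Execution Zone machine to the ElastiCache cluster can be verified with redis-cli. The endpoint below is a placeholder for your cluster's configuration endpoint, not an actual PCC host name.

```shell
# Placeholder endpoint -- use the configuration endpoint of your ElastiCache cluster.
redis-cli -c -h pcc-dr.xxxxxx.clustercfg.eu-west-1.cache.amazonaws.com -p 6379 ping
# A healthy cluster replies: PONG

# Show the shard and replica layout of the cluster.
redis-cli -c -h pcc-dr.xxxxxx.clustercfg.eu-west-1.cache.amazonaws.com -p 6379 cluster nodes
```

The -c flag enables cluster mode so that redis-cli follows redirections between shards, which is needed when the ElastiCache cluster has cluster mode enabled.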