Architecture Overview PCC(3.0)

The Policy Charging and Control (PCC) solution consists of the following components:

  • Access, Control and Execution Zones - For the execution of workflows.

  • Real-time Data Repository - For storage of runtime information, available to any Execution Zone. It is referred to as the Data Repository throughout this document.

  • Monitoring - For monitoring of the overall solution. Note that monitoring does not apply to Redis.

The Policy Charging and Control solution is designed for high availability, with high throughput and low latency.

The setup is divided into the following parts:

Figure: PCC Architecture

  • [CZ01] and [CZ02] - Two separate Control Zone installations, their databases, and various monitoring and management processes for the collection of statistics. Two [CZs] are required for failover purposes, and one of them is on standby.

  • [EZ01] and [EZ02] - Two EC Groups, hosting the pico instances, i.e. ECs, that run the workflows required by the PCC solution. Two EC Groups are required for failover purposes.

  • [DR01], [DR02] and [DR03] - A Couchbase cluster that contains the Data Repository, where all data pertaining to processing is stored. The minimum cluster consists of three nodes with one replica per bucket.

Names within [ ] are used throughout this document to indicate on which machines the corresponding function is to be installed, e.g. [CZ01] or [EZ01]. When the number is omitted, e.g. [CZ], the name refers to both [CZ01] and [CZ02], and so on.
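The mapping from the bracketed names to physical hosts can be kept in a small inventory. The following Python sketch is purely illustrative: the hostnames are hypothetical placeholders, and the expansion rule simply mirrors the naming convention described above.

    # Illustrative sketch only: hostnames below are hypothetical placeholders.
    # The seven-machine minimal HA layout described in this document:
    # two Control Zone machines, two EC Group machines and three
    # Data Repository (Couchbase) machines.
    INVENTORY = {
        "CZ01": "cz01.example.com",
        "CZ02": "cz02.example.com",
        "EZ01": "ez01.example.com",
        "EZ02": "ez02.example.com",
        "DR01": "dr01.example.com",
        "DR02": "dr02.example.com",
        "DR03": "dr03.example.com",
    }

    def expand(name: str) -> list[str]:
        """Expand a name without a number, e.g. 'CZ', to all matching
        numbered entries, e.g. ['CZ01', 'CZ02']."""
        if name in INVENTORY:
            return [name]
        return sorted(k for k in INVENTORY if k.startswith(name))

    assert len(INVENTORY) == 7          # minimal HA installation, seven machines
    assert len(expand("DR")) >= 3       # at least three Data Repository nodes
    print(expand("CZ"))                 # ['CZ01', 'CZ02']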

This instruction is written for the recommended minimal High Availability installation of the Policy Charging and Control solution, which consists of seven machines. In order for the solution to be highly available, an additional process surveillance tool is required to ensure that all processes are up and running, and to fail over e.g. [CZ] to a different machine in case it goes down. See High Availability for PCC(3.0) for more information.
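The process surveillance tool itself is not part of PCC and its configuration is product specific. As a rough illustration only, the sketch below shows the kind of liveness check and failover decision such a tool performs; the hostnames, port and check interval are hypothetical assumptions, not values defined by this instruction.

    import socket
    import time

    # Hypothetical example values; a real surveillance tool would use its
    # own configuration and health checks.
    ACTIVE_CZ, STANDBY_CZ = "cz01.example.com", "cz02.example.com"
    CHECK_PORT = 9000          # hypothetical service port on the [CZ] machine
    CHECK_INTERVAL = 10        # seconds between liveness checks

    def is_alive(host: str, port: int, timeout: float = 3.0) -> bool:
        """Very simple liveness check: can a TCP connection be opened?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        if not is_alive(ACTIVE_CZ, CHECK_PORT):
            # In a real setup the surveillance tool would restart the
            # processes or fail over to the standby [CZ] machine here.
            print(f"{ACTIVE_CZ} is down - failing over to {STANDBY_CZ}")
            ACTIVE_CZ, STANDBY_CZ = STANDBY_CZ, ACTIVE_CZ
        time.sleep(CHECK_INTERVAL)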

The functionality of the Data Repository is split across a minimum of three nodes in a cluster.
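Because the Data Repository is spread over at least three Couchbase nodes, clients should be pointed at all of them rather than at a single node, so that the loss of one node does not make the repository unreachable. The following sketch assumes the Couchbase Python SDK 4.x (the couchbase package); the hostnames, credentials and the bucket name pcc are hypothetical examples, not values defined by this instruction.

    # Illustrative sketch, assuming Couchbase Python SDK 4.x; all names and
    # credentials below are hypothetical examples.
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions

    # List all three Data Repository nodes so the client can bootstrap even
    # if one of them is down.
    cluster = Cluster(
        "couchbase://dr01.example.com,dr02.example.com,dr03.example.com",
        ClusterOptions(PasswordAuthenticator("Administrator", "password")),
    )

    bucket = cluster.bucket("pcc")            # hypothetical bucket name
    collection = bucket.default_collection()

    # Runtime information is stored and read as ordinary key/value documents.
    collection.upsert("session::example", {"state": "active"})
    print(collection.get("session::example").content_as[dict])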

This instruction assumes that each component above corresponds to one machine. Referring to [CZ] therefore means the machines containing all the functions listed for that component.