Usage Engine is an enterprise-grade data processing platform that combines strong integration capabilities with class-leading data processing functionality to derive useful information and drive real-time decision-making across many applications, such as billing, operations, revenue assurance, service assurance, entitlement enforcement, service control, and business intelligence. A typical application of the software's features and capabilities is outlined below:
The transaction chain starts with usage generated by the consumption of a service or product. The service delivery platform(s) generate and log usage records based on actual consumption. Virtually any type of service delivery platform and associated method of generating usage information is supported, so the types of systems Usage Engine interfaces with vary significantly. Common examples include telecommunications network elements, databases, payment processors, SaaS platforms, APIs, IoT devices, and messaging queues.
The first step performed by Usage Engine is to connect to the service delivery platforms and extract the relevant usage information, either via a file-based interface or over a real-time interface (uni- or bi-directional).
After Usage Engine has collected the usage information, it needs to decode the data according to the way it was encoded by the service delivery platform. There are many ways to encode and serialize data, and Usage Engine supports most data encoding schemas through its data formatting subsystem. Examples include text-based formats such as ASCII, CSV, XML, and JSON. Many binary formats, such as Google Protocol Buffers, Avro, ASN.1, AMA, and other complex fixed- and variable-length formats, are also supported.
The decoder breaks down input data into transactions that are processed individually in subsequent steps, allowing for full control over every single usage transaction.
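To illustrate the idea, here is a minimal sketch of a decoder for one of the simpler encodings (CSV), breaking a raw file into individual transactions. The field names are illustrative assumptions, not part of any actual Usage Engine schema.

```python
import csv
import io

def decode_csv(raw_bytes):
    """Decode a CSV-encoded usage file into one dict per transaction."""
    reader = csv.DictReader(io.StringIO(raw_bytes.decode("utf-8")))
    for row in reader:
        yield dict(row)

# Hypothetical input file with one usage record.
raw = b"subscriber_id,bytes_used,event_time\n4670000001,1024,2024-01-01T10:00:00Z\n"
transactions = list(decode_csv(raw))  # each element is one usage transaction
```

Because the decoder yields one record at a time, every downstream step can operate on a single transaction, which is what enables the per-transaction control described above.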
After usage data has been collected and decoded, a common processing step is to normalize the transactions. This makes the data easier to work with, as many systems represent similar information in different ways.
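A normalization step might, for example, map source-specific field names onto one canonical name and convert units so that all sources report the same measure. This is a minimal sketch with hypothetical field names (`msisdn`, `kb_used`, and so on), not actual Usage Engine configuration.

```python
# Hypothetical mapping from source-specific field names to a canonical schema.
FIELD_MAP = {"msisdn": "subscriber_id", "callingNumber": "subscriber_id"}

def normalize(txn):
    """Rename fields to canonical names and convert units to bytes."""
    out = {FIELD_MAP.get(key, key): value for key, value in txn.items()}
    if "kb_used" in out:  # one source reports kilobytes instead of bytes
        out["bytes_used"] = int(out.pop("kb_used")) * 1024
    return out

record = normalize({"msisdn": "4670000001", "kb_used": "2"})
```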
Ensuring correct data quality is an essential step in leveraging usage data in a business process. Examples of data quality issues include corrupted files, duplicate transactions, transactions missing key information, and badly formatted data that cannot be used as is. To avoid data and revenue loss, Usage Engine has capabilities to correct and repair detected data quality issues, but transactions can also be discarded at this point.
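Two of the quality checks mentioned above, duplicate detection and missing key information, can be sketched as a simple filter. The choice of duplicate key is an assumption for illustration; a real deployment would define its own key fields and repair rules.

```python
def check_quality(transactions):
    """Split transactions into accepted and discarded based on simple checks."""
    seen, accepted, discarded = set(), [], []
    for txn in transactions:
        key = (txn.get("subscriber_id"), txn.get("event_time"))
        if None in key or key in seen:  # missing key information, or duplicate
            discarded.append(txn)
        else:
            seen.add(key)
            accepted.append(txn)
    return accepted, discarded

txns = [
    {"subscriber_id": "a", "event_time": "t1"},
    {"subscriber_id": "a", "event_time": "t1"},  # duplicate
    {"event_time": "t2"},                        # missing subscriber_id
]
accepted, discarded = check_quality(txns)
```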
Aggregation and Correlation
Aggregation and correlation are two very powerful capabilities of Usage Engine: aggregation is the process of combining multiple transactions from a single source system, while correlation does the same across multiple sources. Aggregation and correlation of usage information are often necessary for several reasons: multiple usage records (from one or more sources) may be needed to correctly describe the service consumption; more usage records may be generated than downstream business systems can handle, so the data volumes must be reduced; or separate data sets must be combined to derive useful information.
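The single-source aggregation case can be sketched as combining partial usage records into per-session totals. The `session_id` grouping key and counter names are assumptions for the example.

```python
from collections import defaultdict

def aggregate(transactions, key="session_id"):
    """Combine partial usage records into one total per grouping key."""
    totals = defaultdict(lambda: {"bytes_used": 0, "records": 0})
    for txn in transactions:
        agg = totals[txn[key]]
        agg["bytes_used"] += txn["bytes_used"]
        agg["records"] += 1
    return dict(totals)

result = aggregate([
    {"session_id": "s1", "bytes_used": 100},
    {"session_id": "s1", "bytes_used": 50},
    {"session_id": "s2", "bytes_used": 10},
])
```

Correlation follows the same pattern, except the records being combined come from different source systems and the grouping key must be derivable from each of them.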
Usage Binding and Identity
Usage delivery platforms almost always lack the ability to put the service consumption into the perspective of how the service or product is being delivered. Binding the usage information to a meaningful identity is a crucial step in deriving useful actions and insights from the usage data. This often requires enriching the usage data by referencing external systems and data sets, such as subscriber information, product definitions, organizational hierarchies, or network topology information.
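At its simplest, such enrichment is a lookup against a reference data set keyed on an identity carried in the transaction. The subscriber table below is entirely hypothetical.

```python
# Hypothetical reference data set keyed by subscriber identity.
SUBSCRIBERS = {"4670000001": {"account": "ACME Corp", "plan": "gold"}}

def enrich(txn, reference=SUBSCRIBERS):
    """Bind a transaction to an identity by joining in reference data."""
    extra = reference.get(txn.get("subscriber_id"), {})
    return {**txn, **extra}

enriched = enrich({"subscriber_id": "4670000001", "bytes_used": 1024})
```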
Any custom business rule that needs to run on the usage data can be automated as part of the transactional data processing flow.
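One common shape for such automation is a pipeline of rule functions applied to each transaction in order. The over-quota rule and its 1 GB threshold are invented for illustration.

```python
def flag_over_quota(txn):
    # Hypothetical rule: flag transactions that exceeded 1 GB of usage.
    txn["over_quota"] = txn.get("bytes_used", 0) > 1_000_000_000
    return txn

RULES = [flag_over_quota]  # rules run in order on every transaction

def apply_rules(txn):
    for rule in RULES:
        txn = rule(txn)
    return txn
```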
Once clean, actionable, and usable information has been derived from the raw usage data, it can be used to enable a range of downstream business processes such as billing, analytics, revenue assurance, CRM, CPQ, and ERP.
The same set of integration and data formatting capabilities used for data ingestion can be leveraged when delivering data to these downstream systems.
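For example, the outbound side of the flow re-encodes each processed transaction into whatever format the downstream system expects. This sketch assumes a JSON-consuming target; any of the formats listed earlier could be used instead.

```python
import json

def encode_for_downstream(txn):
    """Serialize a processed transaction for a downstream system (JSON here)."""
    return json.dumps(txn, sort_keys=True).encode("utf-8")

payload = encode_for_downstream({"subscriber_id": "4670000001", "bytes_used": 1024})
```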
Uni- and Bi-directional Flows
This entire process can be set up as a one-way data flow, or as a bi-directional data flow where actions are derived and triggered by sending the appropriate information and actions back to the originating service delivery platforms.
The process is also surrounded by a governance layer that ensures auditability and traceability of the usage data processing that has taken place.