GoTransverse

Overview

Note!

The function described in this document is only available as part of an add-on package. Please contact your Account Manager for further information.

The GoTransverse forwarder function allows records to be sent to GoTransverse API tenants. To use it, you need a prepared environment in advance. GoTransverse is an advanced cloud-based billing platform that can store and retrieve usage-based records. This function supports GoTransverse 2.0 and the associated REST API; see the official documentation for more information.

To connect to the GoTransverse tenant environment, you need to have your credentials at hand. To configure the GoTransverse functions, the following is required from your GoTransverse account:

Environment

Field

Description

Tenant

Selects the tenant type to be configured for the forwarder function. The supported options are:

  • GoTransverse API Sandbox –  Used to forward data to Sandbox and test environments. 

  • GoTransverse API Production – Used to forward data to Production environments. 

  • Custom – Used to forward the data to custom billing tenants that support the GoTransverse protocols. 

URL

Specifies the URL to use when the Custom tenant is selected.

GoTransverse credentials

Enter the Secret Access Key in this input field. Optionally, the https://infozone.atlassian.net/wiki/x/wg54 functionality of Usage Engine can be enabled.

Operational settings

There are two supported modes of operation for handling existing records:

Field

Description

overwrite existing

The function will overwrite existing records if there is a match in the field names. 

fail stream

The function will terminate execution if an existing entry is found, thereby avoiding duplicate records.

Field mapping

Field

Description

Field Mapping

Allows custom fields to be mapped using a source-target mechanism. The following Target Fields can be mapped to Source Fields; those marked with an asterisk (*) are mandatory:

*description, *start_time, *service_resource_identifier, *usage_uom, *usage_amount, end_time, reference_id, and sequence_id.

The following formats are supported for custom mapping of the given parameters. Fields are processed in accordance with the map format:

  • Text + [01 - 06], here are some examples:

    "text01"
    "text02"
    "text03"
    "text04"
    "text05"
    "text06"

  • Number + [01 - 05], here are some examples:

    "number01": 0,
    "number02": 0,
    "number03": 0,
    "number04": 0,
    "number05": 0

  • Date + [01 - 05], here are some examples:

    "date01": "2019-08-24T14:15:22Z",
    "date02": "2019-08-24T14:15:22Z",
    "date03": "2019-08-24T14:15:22Z",
    "date04": "2019-08-24T14:15:22Z",
    "date05": "2019-08-24T14:15:22Z"

  • Boolean  + [01 - 05], here are some examples:

    "boolean01": true,
    "boolean02": true,
    "boolean03": true,
    "boolean04": true,
    "boolean05": true,

These fields can be selected in any order.
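
As an illustration only, a fully mapped usage event could look like the example below. All values are hypothetical; the mandatory fields use the Target Field names listed above, and text01, number01, date01 and boolean01 stand in for any of the optional custom fields:

    {
      "description": "Monthly data usage",
      "start_time": "2019-08-24T14:15:22Z",
      "end_time": "2019-08-24T15:15:22Z",
      "service_resource_identifier": "srid",
      "usage_uom": "MB",
      "usage_amount": 1024,
      "reference_id": "ref-001",
      "text01": "billing-plan-a",
      "number01": 42,
      "date01": "2019-08-24T14:15:22Z",
      "boolean01": true
    }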

API response collection

An optional feature available for this function is the API Response Collection mode. It can be enabled by toggling on the option found below the Field Mapping section. Usage Engine supports AWS credentials authorization; enter the appropriate information in the input fields. In addition, the exact file location where the log files are to be written needs to be specified. The available fields in this section are the following:

Field

Description

Access Key

Enter the appropriate AWS access key.

Secret Key

Enter the relevant AWS secret key.

Bucket

Specify the bucket name.

Folder

Specify the folder path.

Log File Name

Type the log file name. A timestamp will be appended automatically to the file name.

The function appends the .json extension to the file name, so you do not need to add an extension yourself.

Note!

For existing streams, if you would like to use the new filename saving method as described above, you can check the New filename saving method check box.

Upon successful AWS authentication, Usage Engine automatically creates a folder called "Success" in the specified path, where the log files are placed. In case of an error, a folder called "Error" is created and the error logs are placed there.
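
For example, assuming a Bucket named mybucket, a Folder set to gotransverse/logs and a Log File Name of responses (all names and the timestamp format below are hypothetical), the resulting log objects could end up at locations such as:

    mybucket/gotransverse/logs/Success/responses_20190824141522.json
    mybucket/gotransverse/logs/Error/responses_20190824141522.json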

Configuration

There are certain specifics that apply to the GoTransverse forwarder and its configuration when handling data. This section lists important information about how data is brought from and to Usage Engine.

The function supports two types of input formats. They are given below with concrete examples of how they can be configured:

  1. Standard Format (One Usage Event per Payload)

     {
       payload: {
         service_resource_identifier: 'srid',
         description: 'some description',
       }
     }

  2. Array Format (Multiple Usage Events per Payload)

     {
       payload: {
         usageEvents: [
           {
             service_resource_identifier: 'srid',
             description: 'some description',
           },
           {
             service_resource_identifier: 'srid',
             description: 'some description',
           },
         ]
       }
     }

There are certain rules that pertain to the HTTP Requests, summarised in the following points (a grouping sketch follows the list):

  • Every HTTP Request made will only contain usage events of the SAME SRID.

  • Maximum of 50 Usage Events of the SAME SRID per HTTP Request.

  • A parallel HTTP Request will be made for every SRID that is not currently waiting/pending a response from GoT Endpoint.

  • Maximum of 10 parallel HTTP Requests at the same time. Each parallel request will be for a different SRID. If there are 2 batches of 50 usage events for the same SRID, the second batch will wait for the first request containing the first batch of 50 usage events to complete before another request containing the second batch is sent.

  • The function will only start sending HTTP Requests at the end of every Transaction, when it is ready to commit.
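
As a rough mental model only, and not the product's actual implementation, the grouping behaviour described above can be sketched in TypeScript as follows. UsageEvent, chunkBySrid and the two constants are illustrative names, not part of the function's configuration:

    // Illustration of the batching rules above: events are grouped by SRID and
    // each group is split into batches of at most 50 usage events per request.
    type UsageEvent = {
      service_resource_identifier: string;
      description: string;
      [key: string]: unknown;
    };

    const MAX_EVENTS_PER_REQUEST = 50; // at most 50 events of the same SRID per HTTP Request
    const MAX_PARALLEL_REQUESTS = 10;  // at most 10 SRIDs in flight at the same time

    // Hypothetical helper: returns, for every SRID, the ordered list of batches
    // that would each become one HTTP Request.
    function chunkBySrid(events: UsageEvent[]): Map<string, UsageEvent[][]> {
      const bySrid = new Map<string, UsageEvent[]>();
      for (const event of events) {
        const srid = event.service_resource_identifier;
        const group = bySrid.get(srid) ?? [];
        group.push(event);
        bySrid.set(srid, group);
      }
      const batches = new Map<string, UsageEvent[][]>();
      for (const [srid, group] of bySrid) {
        const chunks: UsageEvent[][] = [];
        for (let i = 0; i < group.length; i += MAX_EVENTS_PER_REQUEST) {
          chunks.push(group.slice(i, i + MAX_EVENTS_PER_REQUEST));
        }
        batches.set(srid, chunks);
      }
      return batches;
    }

Batches belonging to different SRIDs can then be sent in parallel, up to MAX_PARALLEL_REQUESTS at a time, while batches for the same SRID are sent one after another, each waiting for the previous response.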

Responses

Responses from the GoT Usage API are logged in the stream logs as JSON. Each log entry represents one HTTP Request. The stream logs will likely be truncated if a full batch of 50 usage events is sent in one request.