Stream import enhancements and new flush trigger options

This release contains three new capabilities for UsageCloud: a Stream management API for programmatically managing streams, the ability to import streams directly from the interface, and additional flush trigger options in the Script aggregator. Together, these updates give you more flexibility in how you build, move, and manage your stream configurations.

Stream management API

With the Stream management API, external applications can interact with your stream resources programmatically. Whether you are looking to back up configurations, audit changes, or move streams between environments, this API gives you direct access to your stream setup without using the interface.

The Stream management API can:

  • List streams - Retrieve a paginated list of all streams in a solution, with filtering by name, associated resources (collectors, meters, and aggregation stores), or solution ID.

  • Get stream configuration - Fetch the complete configuration and metadata for a specific stream, including functions (nodes), links (edges), function groups, and HTTP endpoints.

  • List stream versions - View the full version history for a stream, including all tagged versions and the current autosaved version.

  • Export stream - Export any stream version as JSON for backup, migration, or audit purposes.

  • Import stream - Import a complete stream configuration into a solution to create a new stream or update an existing one.
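
As a rough illustration, the following Python sketch shows how an external tool might call the list and export capabilities described above. The base URL, endpoint paths, query parameter names, and response fields are assumptions made for this example rather than the documented contract; the reference pages linked under "Import stream API in detail" below define the actual request format.

    import requests

    # Illustrative values only - the real base URL, paths, and parameter names
    # come from the Stream management API reference.
    BASE_URL = "https://api.example.com/stream-management/v1"  # assumed base URL
    TOKEN = "<application-access-token>"  # token with the Stream management scope

    headers = {"Authorization": f"Bearer {TOKEN}"}

    # List streams in a solution, filtered by name (results are paginated).
    resp = requests.get(
        f"{BASE_URL}/streams",
        headers=headers,
        params={"solutionId": "my-solution", "name": "billing"},  # assumed parameter names
    )
    resp.raise_for_status()
    streams = resp.json()

    # Export a specific stream as JSON for backup or migration.
    stream_id = streams["items"][0]["id"]  # assumed response shape
    export = requests.get(f"{BASE_URL}/streams/{stream_id}/export", headers=headers)  # assumed path
    export.raise_for_status()

    with open("billing-stream-export.json", "w") as f:
        f.write(export.text)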

Import stream API in detail

The import endpoint is one of the most powerful additions in this release. You send a complete stream configuration in JSON format along with the target solution, and the API either creates a new stream or updates an existing one. This makes it straightforward to move configurations from a development environment to production or to integrate stream imports into a deployment pipeline.

When importing via the API, you can:

  • Choose how to handle name conflicts - overwrite the existing stream, create a new copy, or skip the import entirely.

  • Optionally allow function deletions when replacing a stream, giving you full control over what gets updated.

  • Tag the imported version so it appears clearly in the stream’s version history.

Access to the Stream management API is secured using Application access tokens with the Stream management scope. For general details, see https://infozone.atlassian.net/wiki/spaces/DAZ/pages/907247690 and https://infozone.atlassian.net/wiki/x/PACvQQ; for details on setting up access, see https://infozone.atlassian.net/wiki/spaces/DAZ/pages/224920235.
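
As an example, a deployment pipeline step could push a previously exported configuration to a target solution along these lines. The endpoint path, field names, and conflict-handling values shown here are illustrative assumptions, not the documented request body; the reference pages linked above define the real format.

    import json
    import requests

    BASE_URL = "https://api.example.com/stream-management/v1"  # assumed base URL
    TOKEN = "<application-access-token>"  # token with the Stream management scope

    # Load a previously exported stream configuration.
    with open("billing-stream-export.json") as f:
        stream_config = json.load(f)

    # Import it into the target solution. Field names and allowed values are
    # assumptions for illustration; check the API reference for the documented body.
    resp = requests.post(
        f"{BASE_URL}/solutions/prod-solution/streams/import",  # assumed path
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "stream": stream_config,
            "onNameConflict": "overwrite",    # or "createCopy" / "skip" (assumed values)
            "allowFunctionDeletions": True,   # permit removal of functions when replacing
            "versionTag": "release-2024-06",  # appears in the stream's version history
        },
    )
    resp.raise_for_status()
    print(resp.json())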

Import streams from the interface

Stream imports are not limited to the API. You can now import streams directly from the interface, making it easy to bring in configurations without writing any code.

When the stream name is unique, the import happens immediately. When a stream with the same name already exists, a conflict resolution dialog opens, giving you three choices:

  • Replace - Overwrite the existing stream. The stream keeps its identity, but its functions, links, and configuration are updated to match the imported file.

  • Import as new - Create a separate stream alongside the existing one. The system handles naming automatically to avoid duplication.

  • Cancel - Abort the import.

If the stream you are importing removes functions that exist in the current version, a warning dialog lists the affected functions before you proceed. This is especially relevant for stateful functions such as Data aggregator or Deduplicate, where removing them can affect stored state.

The import process checks for file errors, schema issues, topology problems, and unavailable function types, and surfaces clear error messages at each stage. Version history is also updated when a stream is replaced via import, giving you a clear record of what changed and when.

See https://infozone.atlassian.net/wiki/x/E4C_QQ for more information.

Additional flush trigger options in the Script aggregator

The Script aggregator now gives you more control over when aggregated sessions are finalized and sent downstream. Previously, the only built‑in option was timeout‑based flushing, which often required workarounds (for example, setting a timeout to ‑1) to flush at transaction or stream boundaries. You can now choose the behaviour that best fits your use case directly from the configuration.

There are three flush trigger options:

  • On timeout (default) - Sessions flush when they exceed a configured timeout period. Use this for time‑based aggregation, such as hourly summaries, monthly billing aggregates, or session expiry.

  • On transaction end - All sessions flush at the end of each transaction, for example, after a file finishes processing. Use this when each input file should produce its own independent aggregation results.

  • On stream end - All sessions flush when the stream execution completes. Use this when you need a single aggregation result across all data processed during a run, regardless of transaction boundaries.

Only one flush trigger can be active at a time. Each trigger includes script block tabs where you can define custom logic to control session behaviour. See https://infozone.atlassian.net/wiki/spaces/DAZ/pages/850132993 for the full configuration reference.
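
To make the difference between the triggers concrete, the sketch below simulates all three behaviours over a run containing two transactions, each standing in for one input file. It is a conceptual illustration in Python, not the Script aggregator's actual scripting API or configuration syntax, and a simple record-count threshold stands in for the wall-clock timeout.

    from collections import defaultdict

    # Conceptual illustration only - not the Script aggregator's scripting API.
    # Each transaction stands in for one processed input file.
    def run_stream(transactions, flush_trigger, timeout_records=3):
        sessions = defaultdict(lambda: {"total": 0, "count": 0})
        flushed = []  # (session_key, aggregated_total) pairs sent downstream

        def flush(key):
            flushed.append((key, sessions[key]["total"]))
            del sessions[key]

        def flush_all():
            for key in list(sessions):
                flush(key)

        for transaction in transactions:
            for key, amount in transaction:
                session = sessions[key]
                session["total"] += amount
                session["count"] += 1
                # "On timeout": a record-count threshold stands in for the
                # configured timeout period in this sketch.
                if flush_trigger == "on_timeout" and session["count"] >= timeout_records:
                    flush(key)
            # "On transaction end": every open session flushes when a file finishes.
            if flush_trigger == "on_transaction_end":
                flush_all()

        # "On stream end": one result per session across the whole run.
        if flush_trigger == "on_stream_end":
            flush_all()
        return flushed

    files = [[("alice", 10), ("bob", 5), ("alice", 7)],
             [("alice", 2), ("bob", 1)]]
    print(run_stream(files, "on_transaction_end"))  # separate results per file
    print(run_stream(files, "on_stream_end"))       # single result set for the run

The two print statements show the practical difference: the transaction-end run produces an independent result set per file, while the stream-end run produces a single set of totals for the whole execution.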


We hope you enjoy these additions. As always, we appreciate your feedback. If you have any questions or concerns, please do not hesitate to reach out to us at https://digitalroute.my.site.com/s/.