...
With Usage Engine
...
...
This makes it possible to control the full life cycle of the EC and its workflows through a single API-controlled resource: starting and stopping it, creating and editing its configuration, and scaling its resources. By treating the EC and its full set of workflows as a single resource, you get a robust way to work with and integrate different use cases, such as load distribution, virtual function instantiation (for example, network slicing in a 5G telecom network), and online interface management.
EC Deployment is only available in Kubernetes-based deployments. It combines Kubernetes resources and Usage Engine resources into a single higher-level resource that can be scaled and managed through other life cycle operations as a single unit. Once deployed, the ECD is automatically managed by Kubernetes, which ensures that the ECD continuously runs in an active state. This means that if a workflow aborts, it is automatically restarted through Kubernetes.
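As an illustration, an ECD can be expressed as a single Kubernetes custom resource that describes both the EC and the workflows it runs. The sketch below is a minimal, hypothetical example; the exact apiVersion, kind, and field names are assumptions here and depend on the ECD custom resource definition in your installation.

```yaml
# Minimal, illustrative ECD manifest - field names are assumptions, not the definitive schema
apiVersion: mz.digitalroute.com/v1alpha1   # assumed API group and version
kind: ECDeployment
metadata:
  name: ecd-example
  namespace: uepe                          # created in the same namespace as the Platform
spec:
  # The EC and its workflows are described, started, and stopped together as one unit
  workflows:
    - template: Default.http_collection    # hypothetical workflow template name
```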
Info: EC Deployments must be created in the same namespace as your Platform.
The resources that are managed through ECDs are:
Kubernetes Deployment - The collection of Pods that execute the Execution Contexts, with parameters like CPU and memory usage limits, JVM configuration, and default number of replicas
Kubernetes Service - How IP ports are exposed externally from the cluster
Kubernetes Ingress - How HTTP interfaces are exposed externally from the cluster
Kubernetes HorizontalPodAutoscaler - How workflows are automatically scaled based on CPU or custom metrics
Workflows - Which workflows are executed on the EC and what parameters they are executed with
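To make the mapping concrete, the sketch below extends the earlier example with one possible spec section per managed resource. Again, the structure and field names are assumptions for illustration only; the authoritative schema is the ECD custom resource definition in your cluster.

```yaml
spec:
  replicas: 2                              # Deployment: default number of EC Pods
  resources:                               # Deployment: CPU and memory limits per Pod
    limits:
      cpu: "1"
      memory: 1Gi
  jvmArgs: ["-Xmx768m"]                    # Deployment: JVM configuration (assumed field)
  ports:                                   # Service: IP ports exposed outside the cluster
    - port: 8080
      protocol: TCP
  ingress:                                 # Ingress: external exposure of HTTP interfaces
    host: ecd-http.example.com
  autoscaling:                             # HorizontalPodAutoscaler: CPU- or metric-based scaling
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 75
  workflows:                               # Workflows run on the EC and their parameters
    - template: Default.http_collection    # hypothetical workflow template name
      parameters:
        port: "8080"
```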
ECDs can be managed through the EC Deployment Web Interface, through the Kubernetes API using tools like kubectl and helm, or by directly accessing the REST API.
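For example, once the ECD resource kind is installed in the cluster, day-to-day life cycle operations can be performed with standard kubectl commands. The resource name ecdeployments and the namespace below are assumptions; use the names from your own installation.

```sh
# Assumed resource name (ecdeployments) and namespace (uepe) - adjust to your installation
kubectl apply -f ecd-example.yaml -n uepe        # create or update the ECD and its workflows
kubectl get ecdeployments -n uepe                # list ECDs and check their status
kubectl edit ecdeployment ecd-example -n uepe    # edit the configuration in place
kubectl delete ecdeployment ecd-example -n uepe  # stop and remove the ECD
```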