With Usage Engine comes a new concept and resource called EC (Execution Context) Deployment, or ECD for short. The idea behind this concept is to hold a single EC with all its resources and workflows as one complete package that executes, scales, and balances on its own.
This makes it possible to control the full life cycle of the EC and its workflows through a single API-controlled resource: starting and stopping, creating and editing the configuration, and scaling the resources. By treating the EC and its full set of workflows as a single resource, we get a robust way to work and integrate with different use cases, such as load distribution, virtual function instantiation (for example, network slicing in a 5G telecom network), and online interface management.
EC Deployment is only available in Kubernetes-based deployments. It combines Kubernetes resources and Usage Engine resources into a single higher-level resource, making it easy to scale and to perform other life cycle operations on as a single unit. Once deployed, the ECD is automatically managed by Kubernetes, which ensures that the ECD continuously runs in an active state. This means that if a workflow aborts, it is automatically restarted by Kubernetes.
EC Deployments must be created in the same namespace as your Platform.
The resources that are managed through ECDs are:
Kubernetes Deployment – The collection of Pods that execute the Execution Contexts, with parameters such as CPU and memory limits, JVM configuration, and the default number of replicas
Kubernetes Service – How IP ports are exposed externally from the cluster
Kubernetes Ingress – How HTTP interfaces are exposed externally from the cluster
Kubernetes HorizontalPodAutoscaler – How workflows are automatically scaled based on CPU or custom metrics
Workflows – Which workflows are executed on the EC, and the parameters they are executed with
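To illustrate how these resources come together, the sketch below shows what a single ECD manifest covering them might look like. It is a hypothetical example only: the actual apiVersion, kind, and field names depend on the CRD schema shipped with your Usage Engine version, so consult the product reference for the real structure.

```yaml
# Hypothetical ECD manifest — all field names below are illustrative
# assumptions, not the authoritative schema.
apiVersion: example.com/v1    # placeholder; use the group/version of your installed CRD
kind: ECDeployment
metadata:
  name: ecd-example
  namespace: uepe             # must be the same namespace as your Platform
spec:
  replicas: 2                 # default number of EC replicas (Deployment)
  resources:                  # CPU and memory limits for the EC Pods
    limits:
      cpu: "1"
      memory: 1Gi
  jvmArgs:                    # JVM configuration for the EC process
    - "-Xmx512m"
  ports:                      # IP ports exposed via a Service
    - 8080
  workflows:                  # workflows executed on this EC
    - name: wf-example-1
```

The point of the single manifest is that editing and re-applying it updates the whole package, rather than each underlying Kubernetes resource separately.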
ECDs can be managed through the EC Deployment Web Interface (3.0), or through the Kubernetes API using tools like kubectl or helm, or by directly accessing the REST API.
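As a sketch of the kubectl route, assuming the ECD CRD is registered under the resource name `ecdeployments` and using placeholder resource and namespace names, day-to-day life cycle operations could look like this:

```shell
# List ECDs in the Platform's namespace (resource name is an assumption)
kubectl get ecdeployments -n uepe

# Inspect one ECD, including the status of its workflows
kubectl describe ecdeployment ecd-example -n uepe

# Scale the EC replicas (requires the CRD to expose a scale subresource)
kubectl scale ecdeployment ecd-example --replicas=3 -n uepe

# Apply an edited manifest to update the configuration
kubectl apply -f ecd-example.yaml -n uepe
```

Because the ECD is a regular Kubernetes resource, these commands follow the same patterns as for built-in resources such as Deployments.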