Continuous Integration/Continuous Delivery (CI/CD)
CI/CD is a process for delivering code to production by introducing automation into all stages of deployment: ongoing automation and continuous monitoring throughout the lifecycle of the code, from the integration and testing phases through delivery and deployment. CI/CD principles:
- Fewer manual tasks increase quality and let testers and developers focus on their core tasks.
- Frameworks encourage good practices.
- Automation shortens time to market, enables frequent rollouts, and encourages creativity.
The picture below shows the automated flow of solutions from development, via test, into production.
CI/CD Model and Structure
The steps shown in the picture are:
- Development Environment – Solutions are created in a development environment. This can be hosted locally on the designer’s laptop, as a tenant in a shared private cloud infrastructure, or in a public cloud infrastructure. The designer builds the solution using the Usage Engine low-code tools and tests it locally by executing the workflows with parameterizations that work in the development environment.
- Source Control – When the designer has finished the solution, he/she triggers an export of the solution in a JSON format suitable for version control. The JSON data is committed and pushed to a version control system such as Git.
- Test Suite Execution – A CI/CD automation tool such as Jenkins or GitHub Actions acts on the newly committed data and starts the test and build pipeline. The test pipeline starts up a Usage Engine test/integration environment and executes a suite of Python tests to verify the solution. The Python tests are implemented using the Usage Engine TestKit. A pipeline sketch is provided after this list.
- Packaging – If the test suite execution is successful, the pipeline triggers the build step. The build step transforms the solution into compiled Workflow Packages, which contain the full solution, including compiled code that is auto-generated from the implementation.
- Image Generation – The Workflow Packages are then used as input for image generation. A tool such as docker or nerdctl builds a container image readable by the container runtime in Kubernetes. Finished images are stored in a container image registry, such as Docker DTR, Amazon ECR, or similar.
- ECD Descriptor Implementation – ECD descriptor files are parameterizations of ECD Helm charts, i.e., YAML-formatted files of parameter values. They are designed by an operations team. The ECD descriptor files tell Usage Engine how the solution in the Workflow Packages should be deployed and orchestrated, and how it should interact with external systems. ECD descriptors are stored in version control; an illustrative descriptor is sketched after this list.
- ECD Descriptor Deployment – The CD part of the CI/CD pipeline ensures that the ECD descriptors are deployed in the target environment, turning the Workflow Packages into executing microservices. This is implemented using tools like Helm or ArgoCD; a GitOps-style example is sketched after this list.
- Automated Operations – An executing solution generates many kinds of metrics and other operational data. External monitoring systems can be configured to act on this data and feed back to the application and solution over the provided APIs, thereby closing the automation feedback loop. An example alerting rule is sketched after this list.
- Manual Observation – Apart from automation, Usage Engine also provides powerful UIs to monitor and operate the solutions. Tools like Grafana, Kibana, Jaeger, and Kiali can also be used to visualize and build dashboards on metrics, log data, tracing data, or dataflows.
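The sketch below illustrates the Test Suite Execution, Packaging, and Image Generation steps as a single GitHub Actions workflow, assuming the exported solution lives in the repository that triggers the pipeline. The test layout (tests/), the packaging script (build/package-solution.sh), the image name, and the registry address are placeholders chosen for illustration and are not part of any Usage Engine tooling.

```yaml
# Minimal CI pipeline sketch (GitHub Actions). Paths, scripts, and the
# registry are placeholders; substitute the tooling used in your project.
name: solution-ci
on:
  push:
    branches: [ main ]
jobs:
  test-and-build:
    runs-on: ubuntu-latest
    env:
      REGISTRY: registry.example.com              # placeholder image registry
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Run the Python test suite against a test environment
        run: |
          pip install -r tests/requirements.txt   # assumed to pull in the TestKit client
          python -m pytest tests/
      - name: Build Workflow Packages
        run: ./build/package-solution.sh          # placeholder for the actual build step
      - name: Build and push the container image
        run: |
          docker build -t "$REGISTRY/usage-solution:${{ github.sha }}" .
          docker push "$REGISTRY/usage-solution:${{ github.sha }}"
```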
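The ECD descriptor sketch below only illustrates the general shape of such a parameterization: a plain YAML values file kept in version control. The key names are invented for illustration and do not reflect the actual schema of the ECD Helm charts.

```yaml
# Illustrative shape only: key names are placeholders, not the real ECD chart schema.
ecd:
  name: usage-data-processing
  workflowPackage: usage-data-processing-1.0.0    # package produced by the build step
  replicas: 2
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
  parameters:
    kafkaBootstrapServers: kafka.example.com:9092 # placeholder external system endpoint
```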
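For the ECD Descriptor Deployment step, one common GitOps approach is an ArgoCD Application that watches the repository holding the ECD descriptors and keeps the target namespace in sync with it. The repository URL, path, and namespace below are placeholders.

```yaml
# GitOps-style CD sketch: ArgoCD continuously applies the ECD descriptors
# stored in version control to the target cluster and namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: usage-solution
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/ops/ecd-descriptors.git   # placeholder repository
    targetRevision: main
    path: production
  destination:
    server: https://kubernetes.default.svc
    namespace: usage-engine                                    # placeholder target namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```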
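For the Automated Operations step, an external monitoring system such as Prometheus can alert on the metrics a solution exposes. The rule below uses the standard Prometheus alerting-rule format; the metric name and threshold are placeholders for whatever the deployed solution actually publishes.

```yaml
# Example Prometheus alerting rule; metric name and threshold are placeholders.
groups:
  - name: usage-solution-alerts
    rules:
      - alert: SolutionThroughputDrop
        expr: rate(solution_records_processed_total[5m]) < 100
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Record throughput has dropped below the expected rate"
```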
This section contains the following subsections: