Control Zone
The Control Zone contains the Platform and one or more databases.
Platform
The Platform provides a range of services to other pico processes:
| Server | Description |
|---|---|
| Activation Manager | The Activation Manager handles all activations of workflow groups. Workflow groups can be activated automatically in one of two ways. |
| Alarm Manager | The Alarm Manager manages Alarm Raisers and their alarms. |
| Audit Log | The Audit Log is a server that receives audit log information from running workflows and feeds this information to the corresponding audit tables in the different databases. |
| Code Server | The Code Server is required by all pico-started applications. It is the first server started on the Platform, and it is responsible for maintaining the system's code repository. All software code is grouped into packages that are inserted into the Code Server repository by using the installation scripts or the Command Line Tool. |
| Event Server | The Event Server handles logging and triggering of entries of informative and error type, referred to as Events, within the system. The Event Server receives all events and hands them out upon request from various applications. |
| Group Server | The Group Server activates and manages the workflow groups. |
| Management Utilities | Management Utilities handles miscellaneous database-related operations, such as framework persistence configuration management and caching of database metadata. |
| Notification Server | Events in the system may be forwarded to any notifier. The mapping is made in the Event Notifications Configuration, and is stored and handled by the Notification Server. |
| Reference Manager | Different configurations may be related to each other; for example, an agent may depend on a format or a database profile. The Reference Manager keeps track of all such relations. |
| Statistics Manager | The Statistics Manager collects and manages statistics from the running system. This information is used for balancing workflow load, among other things, and may be viewed in the System Statistics Inspector, which is opened by selecting the System Statistics option in the Tools menu in Desktop. |
| System Log | The System Log contains services for storing, browsing, and deleting entries in the system. The entries may be viewed in the System Log Inspector, which is opened by selecting the System Log option in the Tools menu in Desktop. |
| Workflow Server | The Workflow Server loads all workflow data, both referential and configurational, upon workflow activation. It is also responsible for making sure that a workflow is up and running. |
Services that typically run on SCs can also be configured to run on the Platform.
Pico-start
Pico-start is a utility that automatically downloads new code from a repository maintained by the Code Server.
A client that is pico-start compliant only needs the code for the pico-start utility itself. When initializing, it establishes contact with the Code Server and dynamically requests the latest version of all code it needs. To improve performance, the client stores all code that it downloads on the local file system (in the pico cache). The next time it initializes, it compares the code version in the local cache with the code stored in the Code Server. If nothing has changed, the code in the cache is used.
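The cache check described above can be sketched as follows. This is a minimal, hypothetical illustration; the class and method names (`PicoCache`, `store`, `useCachedCopy`) are illustrative and not the actual platform API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pico-start cache check: a cached package is
// reused only when its version matches the version held by the Code Server.
public class PicoCache {
    private final Map<String, String> cachedVersions = new HashMap<>();

    /** Record a downloaded package version in the local pico cache. */
    public void store(String packageName, String version) {
        cachedVersions.put(packageName, version);
    }

    /** Use the cached copy only when it matches the Code Server's version. */
    public boolean useCachedCopy(String packageName, String serverVersion) {
        return serverVersion.equals(cachedVersions.get(packageName));
    }
}
```

A missing or outdated cache entry forces a fresh download; a matching entry avoids the transfer entirely.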
Not all code is transferred at once. Pico-start sends a request to the server each time the pico-started client detects a class that it does not have. For example, if a new agent package has been inserted into the system, the new code is loaded the next time a Workflow Editor window is opened through the Desktop.
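The on-demand behavior can be sketched like this. Again, this is an assumption-laden illustration, not the platform's loader: the server is stood in for by a plain function, and the point is only that the server is contacted once per missing class.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of on-demand code loading: class bytes are fetched
// from the server only on a cache miss, then served from the local cache.
public class OnDemandLoader {
    private final Map<String, byte[]> cache = new HashMap<>();
    private final Function<String, byte[]> server; // stands in for the Code Server
    private int serverRequests = 0;

    public OnDemandLoader(Function<String, byte[]> server) {
        this.server = server;
    }

    public byte[] load(String className) {
        return cache.computeIfAbsent(className, name -> {
            serverRequests++; // incremented only when the class is not cached
            return server.apply(name);
        });
    }

    public int serverRequests() {
        return serverRequests;
    }
}
```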
When an EC is started, a synchronization is performed and the latest picostart.jar file is automatically downloaded from the Platform if a new version is available. For more information, refer to the section MZ_HOME/lib.
Software Packages
The system groups all software code into packages that are inserted into the Code Server repository by using the installation scripts or the Command Line Tool.
Each package is keyed by a name. The name and a version name are encapsulated in the package file together with code, images, property files, etc. When a package is inserted, the package name is used to determine whether the package already exists in the Code Server.
Note!
If the package is already present, the existing code will be replaced completely with the code from the inserted package, regardless of the package version.
The package vendor is free to use the version name to identify the package version. Preferably, the version should relate to the official version of the code. The version name is a plain string.
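The name-keyed replacement behavior described in the note above can be illustrated with a small sketch. The types here (`PackageRepository`, `Pkg`) are hypothetical, not the Code Server's real API; the sketch only shows that insertion is keyed by name and that the version string plays no role in the replace decision.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: re-inserting a package with an existing name replaces
// the stored code completely, regardless of the version string it carries.
public class PackageRepository {
    /** A version name and code payload, as carried inside a package file. */
    public record Pkg(String version, byte[] code) {}

    private final Map<String, Pkg> byName = new HashMap<>();

    /** Insert a package; any existing entry with the same name is replaced. */
    public void insert(String name, Pkg pkg) {
        byName.put(name, pkg); // the name is the key; the version is informational only
    }

    public Pkg get(String name) {
        return byName.get(name);
    }
}
```

Note that even a "lower" version replaces a "higher" one, matching the warning that replacement happens regardless of the package version.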
You can view the installed packages by clicking on the User menu in Desktop and selecting Installed packages.
Installed packages
Code Environment
The code in the Code Server is stored on three different levels:
- Platform code
This is code related to the core Platform, including some user interface plug-ins, for example the Development Toolkit.
Note!
Updates made to the code on Platform level are only propagated to pico clients when they are restarted.
- Execution code
This is code related to workflow agents, APL plug-ins, Tasks, and Events. Updates made to the code on execution level are propagated to clients on demand.
- Generation code
This is code that is automatically generated by user configuration. This includes implementation code for Ultra formats and APL scripts.
Databases
Data that is persisted by the Platform is partially stored in the file system and partially in one or more databases.
The supported database types are Derby, Oracle, PostgreSQL, and SAP HANA. Derby is embedded in the system, while Oracle, PostgreSQL, and SAP HANA are installed separately.
The following is stored in the database(s):
- System log messages
- Duplicate Batch processing data
- Archiving data
- Error Correction System data
- User Access data (deprecated and configurable with Platform property)
- System Statistics
- Alarm data
- Group Server data
- License data
- Workflow transactions and workflow states
When you install a Platform container using Derby, the data listed above is split across multiple databases. When you use Oracle, PostgreSQL, or SAP HANA, the data is stored in a single database.