Memory Configuration Recommendations
The following recommendations describe one of many ways to configure heap space memory for MediationZone. The default settings will work fine in most cases; however, should you need to make changes due to high heap space usage, the following information can be used as a guide.
Configuring the right memory settings is a bit of trial and error. The most relevant memory settings (not counting garbage collection settings) are:
- Xmx = Maximum allocated heap space
- Xms = Minimum allocated heap space
- MaxMetaspaceSize = Maximum allocated space for compiled classes
You may also need to configure MaxDirectMemorySize, for example when the Shared Table Profile is used.
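As a purely illustrative sketch, the combined JVM arguments for an EC with a moderate memory footprint could look like the following; the values are placeholders and must be adapted to your own workload:
-Xms512M -Xmx1024M -XX:MaxMetaspaceSize=256M -XX:MaxDirectMemorySize=512M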
There are no exact rules for how to configure these settings, as the appropriate values differ based on a number of factors, e.g. the number of workflows running on the EC and which agents are used in the workflows (e.g. in-memory aggregation can use a lot of memory, and a batch workflow with large input files uses more memory than one with smaller input files). The best way to configure it is to set Xmx to a reasonable amount (for example, anything between 1024 and 2048 megabytes on a bare metal installation, and less in Kubernetes, assuming only a few workflows are assigned per pod). Then run a few tests with production-like data and check the memory usage in System Statistics.
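For example, assuming the same jvmargs pattern used later in this section, and with <container> and <pico> as placeholders for your own topology names, an initial Xmx for an EC could be set as follows:
mzsh topo set topo://container:<container>/pico:<pico>/obj:config.jvmargs \
'xmx:["-Xmx2048M"]'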
When looking at System Statistics, search for Pico with Minute resolution and look at the memory usage (preferably with Max selected instead of Average), and calculate an average of the bottoms of the graphs over time, which shows the actual memory usage. Then set the Xmx value slightly higher than that (approximately 30-40% higher, unless memory usage is e.g. 15 GB, in which case 30-40% might be excessive).
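As a purely illustrative calculation: if the bottoms of the graphs average around 1.5 GB, then 1.5 GB x 1.3 is approximately 2 GB, so an Xmx of roughly 2048 megabytes would be a reasonable setting.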
If memory usage is far below Xmx, Xmx can and should be decreased. Allocating excess memory, for example an Xmx of 10 GB for a workload that uses 2 GB, can lead to long garbage collection times, which in turn can have an impact on latency for real-time workflows.
Once the memory settings are in place, they should be monitored, both with regards to memory usage and garbage collection times. Garbage collection times should not go above 500-1000 milliseconds in most cases.
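If you need more detail than System Statistics provides, the JVM's own GC logging can be used to check garbage collection times. As a sketch, assuming the EC runs on JDK 9 or later, the following standard JVM argument (added to the pico's jvmargs in the same way as -Xmx) writes GC events with timestamps to a file; the log path is only an example:
-Xlog:gc*:file=/var/tmp/ec-gc.log:time,uptime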
Out of Memory
There are a few parameters for the JVM that might need to be adjusted for a given server installation. Depending on the amount of available primary memory and the amount of disk swap space, it might be necessary to inform the JVM how much memory it is allowed to allocate. The Unix process running an EC/ECSA can fork itself, depending on the configuration of individual agents. The Disk forwarding agent, for instance, can be configured to run an external binary after every forwarded file. The JVM performs a native fork call to do this, and the forked JVM process will initially have the same memory footprint as the parent process. If there is not enough primary memory and/or swap space available, the EC/ECSA will abort with the following exception:
java.io.IOException: Not enough space at java.lang.UNIXProcess.forkAndExec(Native Method)
If this happens, the maximum heap size for the JVM must be lowered, or additional memory must be added to the machine. Lowering the maximum heap size can be done by using the JVM argument -Xmx, which can be specified for all pico configurations.
The following line is an example of how to specify this JVM argument in the STR.
mzsh topo set topo://container:<container>/pico:<pico>/obj:config.jvmargs \
'xmx:["-Xmx128M"]'
Unfortunately, it is difficult to recommend a value. This JVM argument specifies the maximum heap size, meaning that the JVM will probably not reach this limit for a while, depending on how the JVM manages its heap. That, in turn, means that the forking will work for a while, and when the heap size in the JVM has grown large enough, the fork will fail if there are no free memory pages available in the machine.
The only possible recommendation is to lower the maximum heap size value, or to add more system resources (memory or swap disk). If the physical host is running more than one Execution Context, then the memory allocation of these Execution Contexts must be taken into account as well.
The JVM also has a kind of memory called direct memory, which is distinct from the normal JVM heap memory. You may need to increase the direct buffer memory when Shared Tables have been configured to use off-heap memory. This can be done by increasing the maximum direct memory using the JVM argument -XX:MaxDirectMemorySize.
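As a hedged sketch only, following the same jvmargs pattern as the -Xmx example above, the direct memory limit could be set as follows; the key name maxdirect and the 512 MB value are placeholders and not a documented default:
mzsh topo set topo://container:<container>/pico:<pico>/obj:config.jvmargs \
'maxdirect:["-XX:MaxDirectMemorySize=512M"]'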
Sometimes, allocating too much memory to a JVM can affect its performance, so ensure that just a sufficient amount of memory is allocated. However, make sure that the heap never "pages", i.e. is never swapped out to disk: the sum of all maximum heap sizes must fit in physical memory. Make sure to adapt these values to the memory available in the installed machine. Increasing the heap size for an EC/ECSA can make a big difference to performance.
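As a simple illustration: on a host with 8 GB of physical memory running two Execution Contexts, each with -Xmx2048M, roughly 4 GB is reserved for heaps alone, and the remaining memory must also cover metaspace, direct memory, other processes and the operating system. If the total does not fit in physical memory, the heaps will start to page and performance will degrade.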