10. log4j APL Logging Configurations
The system includes a log4j extension that enables generation of customized logs from agents configured with APL. This section describes how to configure these logs to meet deployment-specific requirements.
Configuration Files
The configuration files specify the path of the log files, log filtering rules, log levels, and formatting. These files must be stored in $MZ_HOME/etc/logging and named apl-log4j-<log level name>.properties or <ec-name>-apl-log4j-<log level name>.properties. Use the <ec-name>-prefixed filenames when you want to use different log settings for different Execution Contexts.
Example. Configuration filenames
$MZ_HOME/etc/logging/apl-log4j-myloglevel.properties
$MZ_HOME/etc/logging/ec1-apl-log4j-myloglevel.properties
The following configuration files are included by default:
apl-log4j-off.properties
apl-log4j-fatal.properties
apl-log4j-error.properties
apl-log4j-warn.properties
apl-log4j-info.properties
apl-log4j-debug.properties
apl-log4j-trace.properties
apl-log4j-all.properties
The files listed above differ only in their log level setting and are otherwise identical. The contents of these files define the logging behavior.
Example. Configuration file contents
log4j.rootLogger=ALL, a
log4j.appender.a=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.a.file=${mz.home}/log/{pico}_{workflow}.log
log4j.appender.a.layout=com.digitalroute.apl.log.JsonLayout
log4j.appender.a.layout.MdcFieldsToLog=pico, workflow, agent, tag
The first line in the example above sets the log level and declares an appender named a. The available log levels are listed below in order of severity, from highest to lowest:
OFF
FATAL
ERROR
WARN
INFO
DEBUG
TRACE
ALL
Messages with the same or higher severity than the selected level are logged. For instance, if the configured log level is WARN, messages with the severity ERROR and FATAL are logged as well. The remaining settings in the example mean that messages are logged to rotated, JSON formatted files in the $MZ_HOME/log directory. When an active log file has reached its maximum size, it is backed up and stored with a number suffix, and a new active log file is created. The default maximum size is 10 MB, and the default number of backup files is one (1).
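Example. Rotated log files (illustrative)
Assuming the file pattern in the example above, a pico named EC1, a workflow named Default.logtestwf.workflow_1 (the names used in the Log Output section below), and the standard log4j numbering convention for backups, rotation would produce files such as:
$MZ_HOME/log/EC1_Default.logtestwf.workflow_1.log (active log file)
$MZ_HOME/log/EC1_Default.logtestwf.workflow_1.log.1 (backup)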
Appenders
There are two different types of appenders: DRRollingFileAppender and DRRollingMultiFileAppender.
DRRollingFileAppender
Writes to a single defined file, based on the log4j.appender.<appender name>.file property.
DRRollingMultiFileAppender
Writes one file for each workflow instance it encounters, based on the log4j.appender.<appender name>.file property.
Which workflows are written to which appender is determined by the log4j.logger.<class name> property.
Examples. Appender configurations
# Default
log4j.appender.Default=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.Default.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.Default.layout=org.apache.log4j.PatternLayout
log4j.appender.Default.layout.ConversionPattern=[%d{dd MMM yyyy HH:mm:ss,SSS}];[%-5p];[pico=%X{pico}];[%t];[tag=%X{tag}];[%c]:%m%n
log4j.appender.Default.MaxFileSize=10MB
log4j.appender.Default.MaxBackupIndex=20
log4j.logger.Default=TRACE, Default
The appender named Default will write a single file for all workflows contained under the Default folder.
# PRIMARY
log4j.appender.PRIMARY=com.digitalroute.apl.log.DRRollingMultiFileAppender
log4j.appender.PRIMARY.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.PRIMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.PRIMARY.layout.ConversionPattern=[%d{dd MMM yyyy HH:mm:ss,SSS}];[%-5p];[pico=%X{pico}];[%t];[tag=%X{tag}];[%c]:%m%n
log4j.appender.PRIMARY.MaxFileSize=10MB
log4j.appender.PRIMARY.MaxBackupIndex=20
log4j.logger.RT_Folder.RT_TEST_WF=TRACE, PRIMARY
The appender named PRIMARY creates multiple files, one for each instance of the RT_Folder.RT_TEST_WF workflow.
# SECONDARY
log4j.appender.SECONDARY=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.SECONDARY.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.SECONDARY.layout=org.apache.log4j.PatternLayout
log4j.appender.SECONDARY.layout.ConversionPattern=[%d{dd MMM yyyy HH:mm:ss,SSS}];[%-5p];[pico=%X{pico}];[%t];[tag=%X{tag}];[%c]:%m%n
log4j.appender.SECONDARY.MaxFileSize=10MB
log4j.appender.SECONDARY.MaxBackupIndex=20
log4j.logger.RT_Folder.RT_TEST_WF=TRACE, SECONDARY
The appender named SECONDARY creates a single file for all instances of the RT_Folder.RT_TEST_WF workflow. The file takes the name of the first workflow instance it encounters, for example "RT_Folder.RT_TEST_WF.workflow_1".
Hint!
You can change the maximum file size and the number of backup files by adding the following lines:
log4j.appender.a.MaxFileSize=100MB
log4j.appender.a.MaxBackupIndex=10
You can add a filtering rule by adding the line log4j.logger.<configuration name>=<log level>. This is useful when you want to set different log levels for specific folders or configurations.
Example. Setting the general log level to ERROR and the log level for the agent named agent_1 to DEBUG
log4j.rootLogger=ERROR, a
log4j.appender.a=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.a.file=${mz.home}/log/{pico}_{workflow}.log
log4j.appender.a.layout=com.digitalroute.apl.log.JsonLayout
log4j.logger.Default.debug.workflow_1.agent_1=DEBUG
If you want to apply the filtering rule to all APL configurations in the Default folder, change the last line in the previous example to log4j.logger.Default=DEBUG.
Note!
For performance reasons it is recommended to use the DRRollingFileAppender and configure individual appenders for each workflow. Only use the DRRollingMultiFileAppender if you need individual files at the workflow instance level.
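As an illustration of this recommendation, a minimal sketch with one DRRollingFileAppender per workflow is shown below. The folder and workflow names (MyFolder.WF_A and MyFolder.WF_B), the appender names, the file paths, and the log levels are illustrative assumptions, not part of the default configuration.
# One DRRollingFileAppender per workflow (illustrative names)
log4j.appender.WFA=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.WFA.file=${mz.home}/log/log4j/wf_a.log
log4j.appender.WFA.layout=com.digitalroute.apl.log.JsonLayout
log4j.appender.WFA.MaxFileSize=10MB
log4j.appender.WFA.MaxBackupIndex=5
log4j.logger.MyFolder.WF_A=INFO, WFA
log4j.appender.WFB=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.WFB.file=${mz.home}/log/log4j/wf_b.log
log4j.appender.WFB.layout=com.digitalroute.apl.log.JsonLayout
log4j.appender.WFB.MaxFileSize=10MB
log4j.appender.WFB.MaxBackupIndex=5
log4j.logger.MyFolder.WF_B=INFO, WFB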
For more information about available settings, see the log4j documentation at https://logging.apache.org/log4j/1.2/manual.html.
mzsh Command
You can select the active logging configuration for each Execution Context by using the mzsh command apl-loglevel.
usage: apl-loglevel [ display | set <level> ] <ec-names>
The level argument to the command corresponds to the configuration filename in $MZ_HOME/etc/logging. For instance, the command below activates the configuration in ec1-apl-log4j-myloglevel.properties on the Execution Context named ec1 and ec2-apl-log4j-myloglevel.properties on ec2. If any of these files cannot be found, the command will, if possible, apply the settings of a file without the Execution Context prefix, i.e. apl-log4j-myloglevel.properties.
Example. Selecting and applying a configuration
$ mzsh mzadmin/<password> apl-loglevel set myloglevel ec1 ec2
You can view the configured log directory and refresh interval with apl-loglevel display.
Example. Display settings
$ mzsh mzadmin/<password> apl-loglevel display ec1 ec2
logdir = /opt/mz/etc/logging
refresh interval = 1000 ms
---------------------------------------------
Contents of: /opt/mz/etc/logging/ec1-apl-log4j.properties
---------------------------------------------
log4j.rootLogger=ALL, a
log4j.appender.a=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.a.file=${mz.home}/log/{pico}_{workflow}.log
log4j.appender.a.layout=com.digitalroute.apl.log.JsonLayout
log4j.appender.a.layout.MdcFieldsToLog=pico, workflow, agent, tag
---------------------------------------------
---------------------------------------------
Contents of: /opt/mz/etc/logging/ec2-apl-log4j.properties
---------------------------------------------
log4j.rootLogger=ALL, a
log4j.appender.a=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.a.file=${mz.home}/log/{pico}_{workflow}.log
log4j.appender.a.layout=com.digitalroute.apl.log.JsonLayout
log4j.appender.a.layout.MdcFieldsToLog=pico, workflow, agent, tag
---------------------------------------------
The log refresh interval is set to 1000 ms by default. You can change this value by setting the Platform property mz.logging.refreshinterval.
When you activate a configuration, the contents of the corresponding file are copied to <ec name>-apl-log4j.properties. Any changes in this file become effective at the next refresh interval.
APL Commands
The following functions are used to trigger logging within any of the function blocks in APL (see the usage sketch after the list):
- void log.fatal(any, any)
- void log.error(any, any)
- void log.warn(any, any)
- void log.info(any, any)
- void log.debug(any, any)
- void log.trace(any, any)
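As an illustration only, the sketch below shows how these functions might be called from a consume block in an Analysis agent. The message texts and the second argument values are assumptions made for the example; refer to the APL Reference Guide for the exact semantics of the two arguments.
consume {
    // Two arguments of type "any" are passed, matching the signatures above.
    // The values used here are illustrative only.
    log.debug("In consume", "example_tag");
    log.info("Processing input", "example_tag");
}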
For more information about these functions, see 23. Log and Notification Functions in the APL Reference Guide.
Log Output
The output log files are stored in the directory specified in the active logging configuration.
Example. Log file in JSON format
{"timestamp":"2015-12-20:22:44:10 UTC","level":"DEBUG","thread":"Default.logtestwf.workflow_1: TCP_IP_1_1","category":"Default.logtestwf.workflow_1.Analysis_1","message":"In consume","pico":"EC1","workflow":"Default.logtestwf.workflow_1","agent":"Analysis_1"}
The fields in the log output are described below.
Field | Description
---|---
timestamp | The time when the message was logged. The UTC timezone and the international standard date and time notation are used by default. For information about how to use SimpleDateFormat patterns for the timestamp, see the Java SimpleDateFormat documentation.
level | The log level, i.e. FATAL, ERROR, WARN, INFO, DEBUG, or TRACE.
thread | The name of the workflow thread.
category | The logged configuration. This field contains the category class of the appender that is defined in the configuration file.
message | The log message specified in the APL command.
pico | The name of the Execution Context.
Warning!
The ECs must be restarted if you manually delete or rename active log files or backup log files.
Hint!
If the log files are not generated as expected, review the EC logs. Your configuration files may contain errors.