log4j APL Logging Configurations
The system includes a log4j extension that enables generation of customized logs from agents configured with APL. This section describes how to configure these logs to meet deployment-specific requirements.
Configuration Files
There is a configuration file called apl-log4j.properties, containing default settings, located in the $MZ_HOME/etc/logging directory. It is used to specify the path of the log files, log filtering rules, log level, and formatting. If you want to use different settings for different Execution Contexts, you can copy this file and rename it to <ec-name>-apl-log4j.properties, where you can configure different settings for the specified EC.
To provide examples and templates, a number of pre-existing files named apl-log4j-<log level name>.properties are included, for example apl-log4j-trace.properties. You can either copy properties from these files, or copy a whole file and rename it to <ec-name>-apl-log4j.properties.
Example. Configuration filenames
Default configuration file:
$MZ_HOME/etc/logging/apl-log4j.properties
EC specific configuration file for ec1:
$MZ_HOME/etc/logging/ec1-apl-log4j.properties
Copying the default configuration file into an EC specific configuration file:
$ cp apl-log4j.properties ec1-apl-log4j.properties
Copying a template configuration file into an EC specific configuration file:
$ cp apl-log4j-trace.properties ec1-apl-log4j.properties
The following template configuration files are included by default:
apl-log4j-off.properties
apl-log4j-fatal.properties
apl-log4j-error.properties
apl-log4j-warn.properties
apl-log4j-info.properties
apl-log4j-debug.properties
apl-log4j-trace.properties
apl-log4j-all.properties
The files listed above have a different log level setting but are otherwise identical.
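For instance, apl-log4j-debug.properties and apl-log4j-trace.properties would be expected to differ only in the level set on the root logger line (a sketch, assuming the format shown in the example below):
# In apl-log4j-debug.properties:
log4j.rootLogger=DEBUG, a
# In apl-log4j-trace.properties:
log4j.rootLogger=TRACE, a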
The content of these files defines the logging behavior.
Example. Configuration file contents
log4j.rootLogger=ALL, a
log4j.appender.a=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.a.file=${mz.home}/log/{pico}_{workflow}.log
log4j.appender.a.layout=com.digitalroute.apl.log.JsonLayout
log4j.appender.a.layout.MdcFieldsToLog=pico, workflow, agent, tag
The first line in the example above sets the log level and declares an "appender" named a. The available log levels are listed below, in order of severity from highest to lowest:
OFF
FATAL
ERROR
WARN
INFO
DEBUG
TRACE
ALL
Messages of the same or higher severity than the selected level are logged. For instance, if the configured log level is WARN, messages with the severities ERROR and FATAL are logged as well. The other settings in the example above mean that log messages are written in JSON format to files in the $MZ_HOME/log directory, and that these files are rotated at regular intervals. When an active log file has reached its maximum size, it is backed up and stored with a number suffix, and a new active log file is created. The default maximum size is 10 MB, and the default number of backup files is one (1).
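For example, with the file pattern {pico}_{workflow}.log from the example above, an EC named EC1 running the workflow instance Default.logtestwf.workflow_1 could leave the following files after one rotation (a hypothetical listing, assuming the first backup gets the suffix 1):
$MZ_HOME/log/EC1_Default.logtestwf.workflow_1.log
$MZ_HOME/log/EC1_Default.logtestwf.workflow_1.log.1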
Appenders
There are two different types of appenders: DRRollingFileAppender and DRRollingMultiFileAppender.
DRRollingFileAppender
Writes to a single defined file, based on the log4j.appender.<appender name>.file property.
DRRollingMultiFileAppender
Writes one file for each workflow instance it encounters, based on the log4j.appender.<appender name>.file property.
Which workflows are written to which appender is determined by the log4j.logger.<class name> property.
Example. Appender configurations
# Default
log4j.appender.Default=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.Default.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.Default.layout=org.apache.log4j.PatternLayout
log4j.appender.Default.layout.ConversionPattern=[%d{dd MMM yyyy HH:mm:ss,SSS}];[%-5p];[pico=%X{pico}];[%t];[tag=%X{tag}];[%c]:%m%n
log4j.appender.Default.MaxFileSize=10MB
log4j.appender.Default.MaxBackupIndex=20
log4j.logger.Default=TRACE, Default
The appender named Default writes a single file for all workflows in the Default folder.
# PRIMARY
log4j.appender.PRIMARY=com.digitalroute.apl.log.DRRollingMultiFileAppender
log4j.appender.PRIMARY.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.PRIMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.PRIMARY.layout.ConversionPattern=[%d{dd MMM yyyy HH:mm:ss,SSS}];[%-5p];[pico=%X{pico}];[%t];[tag=%X{tag}];[%c]:%m%n
log4j.appender.PRIMARY.MaxFileSize=10MB
log4j.appender.PRIMARY.MaxBackupIndex=20
log4j.logger.RT_Folder.RT_TEST_WF=TRACE, PRIMARY
The appender named PRIMARY creates multiple files: one for each instance of the RT_Folder.RT_TEST_WF workflow.
# SECONDARY
log4j.appender.SECONDARY=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.SECONDARY.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.SECONDARY.layout=org.apache.log4j.PatternLayout
log4j.appender.SECONDARY.layout.ConversionPattern=[%d{dd MMM yyyy HH:mm:ss,SSS}];[%-5p];[pico=%X{pico}];[%t];[tag=%X{tag}];[%c]:%m%n
log4j.appender.SECONDARY.MaxFileSize=10MB
log4j.appender.SECONDARY.MaxBackupIndex=20
log4j.logger.RT_Folder.RT_TEST_WF=TRACE, SECONDARY
The appender named SECONDARY creates a single file for all instances of the RT_Folder.RT_TEST_WF workflow. The file takes the name of the first workflow instance it encounters, for example "RT_Folder.RT_TEST_WF.workflow_1".
Hint!
You can change the maximum file size and the number of backup files by adding the following lines:
log4j.appender.a.MaxFileSize=100MB
log4j.appender.a.MaxBackupIndex=10
You can add a filtering rule by adding the line log4j.logger.<configuration name>=<log level>. This is useful when you want to set different log levels for specific folders or configurations.
Example. Setting the general log level to ERROR, and to DEBUG for the agent named agent_1
log4j.rootLogger=ERROR, a
log4j.appender.a=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.a.file=${mz.home}/log/{pico}_{workflow}.log
log4j.appender.a.layout=com.digitalroute.apl.log.JsonLayout
log4j.logger.Default.debug.workflow_1.agent_1=DEBUG
If you want to apply the filtering rule to all APL configurations in the Default folder, change the last line in the previous example to log4j.logger.Default=DEBUG.
Note!
For performance reasons it is recommended to use the DRRollingFileAppender and configure individual appenders for each workflow. Only use the DRRollingMultiFileAppender if you need individual files on a workflow instance level.
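As a sketch of this recommendation, two single-file appenders can each be bound to one workflow configuration. The workflow and appender names below are hypothetical:
# One DRRollingFileAppender per workflow (hypothetical names)
log4j.appender.WF_A=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.WF_A.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.WF_A.layout=com.digitalroute.apl.log.JsonLayout
log4j.logger.Default.wf_a=TRACE, WF_A
log4j.appender.WF_B=com.digitalroute.apl.log.DRRollingFileAppender
log4j.appender.WF_B.file=${mz.home}/log/log4j/{workflow}.log
log4j.appender.WF_B.layout=com.digitalroute.apl.log.JsonLayout
log4j.logger.Default.wf_b=TRACE, WF_B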
For more information about available settings, see the log4j documentation at https://logging.apache.org/log4j/1.2/manual.html.
APL Commands
The following functions are used to trigger logging within any of the function blocks in APL:
- void log.fatal(any, any)
- void log.error(any, any)
- void log.warn(any, any)
- void log.info(any, any)
- void log.debug(any, any)
- void log.trace(any, any)
For more information about these functions, see Log and Notification Functions in the APL Reference Guide.
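For example, the DEBUG entry shown under Log Output below could be produced from the consume block of an Analysis agent. This is a minimal sketch; the exact meaning of the two arguments (here assumed to be the log message and an additional logged value) is described in the APL Reference Guide:
consume {
    // Log at DEBUG level from the consume block.
    // Assumption: the first argument is the message, the second an extra value.
    log.debug("In consume", input);
}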
Log Output
The output log files are stored in the directory specified in the active logging configuration.
Example. Log file in JSON format
{"timestamp":"2015-12-20:22:44:10 UTC","level":"DEBUG","thread":"Default.logtestwf.workflow_1: TCP_IP_1_1","category":"Default.logtestwf.workflow_1.Analysis_1","message":"In consume","pico":"EC1","workflow":"Default.logtestwf.workflow_1","agent":"Analysis_1"}
The fields in the log output are described below.
Field | Description |
---|---|
timestamp | The time when the message was logged. The UTC timezone and the international standard date and time notation are used by default. For information about how to use SimpleDateFormat patterns, see the Java SimpleDateFormat documentation. |
level | The log level, i.e. FATAL, ERROR, WARN, INFO, DEBUG, or TRACE. |
thread | The name of the workflow thread. |
category | The logged configuration. This field contains the category of the appender that is defined in the configuration file. |
message | The log message specified in the APL command. |
pico | The name of the Execution Context. |
Warning!
The ECs must be restarted if you manually delete or rename active log files or backup log files.
Hint!
If the log files are not generated as expected, review the EC logs. Your configuration files may contain errors.