
Problems related to submission of a Spark application are logged in the Platform log. Errors that occur after submission of a Spark application, i.e. runtime errors, are logged in the Spark environment. Error information related to the Kafka and Zookeeper services can be found in the SC logs for the respective service.

Runtime Errors

Cluster

Runtime errors that occur in the cluster are logged in MZ_HOME/external/spark/runtime/logs/.

Spark Application

Runtime errors that occur while the Spark application is running are logged in the file MZ_HOME/external/spark/runtime/work/driver-<number>/stderr.

Runtime errors on the executor level are logged in the file MZ_HOME/external/spark/runtime/work/app-<number>/<executorId>/stderr.
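These logs can also be inspected directly from a shell. The sketch below is runnable anywhere because it builds a mock work directory as a stand-in; in a real deployment, point WORK at MZ_HOME/external/spark/runtime/work instead.

```shell
# Mock work directory standing in for MZ_HOME/external/spark/runtime/work
WORK=$(mktemp -d)
mkdir -p "$WORK/driver-20160829162410-0001"
echo "sample runtime error" > "$WORK/driver-20160829162410-0001/stderr"

# Pick the most recently modified driver directory...
latest_driver=$(ls -td "$WORK"/driver-* | head -1)

# ...and show the last lines of its stderr
err_tail=$(tail -n 50 "$latest_driver/stderr")
echo "$err_tail"
```

Executor-level logs follow the same pattern under "$WORK"/app-<number>/<executorId>/stderr.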

You can also access these logs from the Spark Master Web UI:

  1. Click a Worker id under Running Drivers.


    Spark UI - Master

  2. Click stderr under Logs.


    Spark UI - Worker

KPI Processing Accumulators

When a Spark batch has finished processing, a set of accumulators is logged in the file MZ_HOME/external/spark/runtime/work/driver-<number>/stdout. These accumulators serve as a summary of what has been collected and calculated within the batch.

The following accumulators are logged:

  • CalculatedKPIs - Includes GeneratedKPIOutputs plus calculated KPIs that are not closed yet.

  • DiscardedKPIs - Incremented by one for each calculated KPI that belongs to a previously closed period.

  • FailedMetricCalculations - Incremented by one for each metric calculation that fails, e.g. due to invalid data in the input records. If several nodes in the node tree(s) contain the metric, one input record may affect several metric calculations.

  • FailedKPICalculations - Incremented by one for each KPI calculation that fails due to undefined metrics in the KPI expression. The accumulator is only incremented when both of the following conditions apply:

    - The period for the KPI ends during the Spark batch.

    - The KPI expression uses multiple metrics and one or more of these are undefined.

  • GeneratedKPIOutputs - Incremented by one for each successfully calculated and delivered KPI.

  • MissingExpressionForInputType - Incremented by one for each input record that does not match a metric and a dimension object in the service model.


Example - Counters in stdout

The example below indicates that 20 input records did not match a metric and a dimension expression in the service model.

 2016-08-29 16:24:10:24
=============MICROBATCH===============
CalculatedKPIs = 101000
DiscardedKPIs = 0
FailedMetricCalculations = 0
FailedKPICalculations = 0
GeneratedKPIOutputs = 50200
MissingExpressionForInputType = 20
 
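If you need to monitor a counter programmatically, the "name = value" lines can be parsed with standard shell tools. A minimal sketch, run here against a copy of the example output above; in practice you would read the driver's stdout file instead.

```shell
# Copy of the example microbatch block; in practice, read
# MZ_HOME/external/spark/runtime/work/driver-<number>/stdout instead.
stdout_file=$(mktemp)
cat > "$stdout_file" <<'EOF'
2016-08-29 16:24:10:24
=============MICROBATCH===============
CalculatedKPIs = 101000
DiscardedKPIs = 0
FailedMetricCalculations = 0
FailedKPICalculations = 0
GeneratedKPIOutputs = 50200
MissingExpressionForInputType = 20
EOF

# Extract the value of one accumulator; END {print v} keeps only the
# value from the last matching microbatch in the file.
missing=$(awk -F' = ' '/^MissingExpressionForInputType/ {v=$2} END {print v}' "$stdout_file")
echo "$missing"
```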


You can also access these accumulators from the Spark Master Web UI:

  1. Click a Worker id under Running Drivers.

  2. Click stdout under Logs.

Note!

The accumulators are logged using log4j, which means that the configured log level determines whether the accumulators are logged. The log level is configured in the property log4j.rootCategory in MZ_HOME/external/spark/runtime/conf/log4j.properties. The default log level in Spark is WARN, while the accumulators are logged at INFO level.
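For example, to make the INFO-level accumulator entries visible in the default log, the root level can be lowered in MZ_HOME/external/spark/runtime/conf/log4j.properties. A minimal sketch, assuming the appender is named console as in Spark's default log4j template:

```properties
log4j.rootCategory=INFO, console
```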


Note!

It is possible to log the accumulators to a separate log file by adding the following to log4j.properties:
log4j.appender.accumulatorlog=org.apache.log4j.RollingFileAppender
log4j.appender.accumulatorlog.File=accumulators.log
log4j.appender.accumulatorlog.layout=org.apache.log4j.PatternLayout
log4j.appender.accumulatorlog.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.logger.com.digitalroute.mz.spark.StreamOperations$=INFO, accumulatorlog
log4j.additivity.com.digitalroute.mz.spark.StreamOperations$=false

The file accumulators.log will then be created under the driver folder in MZ_HOME/external/spark/runtime/work.

