In preparation for using KPI Management in MediationZone, you need to extract the following scripts:

...

These scripts will be used for different procedures in the KPI Management - Distributed Processing sections.

Preparations before extracting scripts:

A prerequisite is that Spark, ZooKeeper, and Kafka are installed, and that ZooKeeper and Kafka are up and running. For more information, see KPI Management - External Software.

...

Extracting scripts and KPI app:

...

  1. Set up your preferred KPI profile.

...


...

  1. Find the kpi_spark*.mzp file among the installation files and copy it to where you want to keep your KPI application files.

  1. To extract the KPI app after building it, run the following command. It extracts the software needed by Spark for the KPI app, as well as the scripts needed for starting and configuring Spark.

    Code Block
    $ cd release/packages
    
    $ java -jar kpi_spark_9.1.0.0.mzp install

...

  1. You will find a new directory, mz_kpiapp, that contains all the app software.

    Code Block
    $ ls -l mz_kpiapp/
    
    app  # The MZ KPI app
    bin  # Shell scripts to handle the app
    jars # Extra jar files for the app
  1. Move the mz_kpiapp folder to its destination and add its bin directory to the PATH.

    Code Block
    Example:
    
    $ mv mz_kpiapp ~/
    $ export PATH=$PATH:/home/user/mz_kpiapp/bin
  1. Set the environment variable SPARK_HOME.

    Code Block
    $ export SPARK_HOME="your spark home"

...

  1. These extracted scripts, kpi_params.sh and spark_common_params.sh, are

...

  1. more examples than a finished configuration, so you need to modify the scripts under the bin folder according to your specifications and requirements.

    In kpi_params.sh, KAFKA_BROKERS needs to be configured with the hosts and ports of the Kafka brokers. For example:

    export KAFKA_BROKERS="192.168.1.100:9092,192.168.1.101:9092,192.168.1.102:9092"

    Unless the default username and password mzadmin/dr are used, the username and password of a user with access to the profile must be entered in the property MZ_PLATFORM_AUTH. The password is encrypted using the mzsh command encryptpassword. The memory settings may need to be altered depending on the expected load, as well as the UI port for the KPI app inside Spark (default 4040).
    In addition, the addresses and ports of the platform, Kafka, and ZooKeeper may need to be updated.
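    A hedged sketch of how these settings might look in kpi_params.sh. The variable names KAFKA_BROKERS and MZ_PLATFORM_AUTH are from the text; the broker addresses are the example values above, and the credential format (user/password, mirroring the default mzadmin/dr) and encrypted value are assumptions to adapt:

```shell
# Kafka brokers as comma-separated host:port pairs (example addresses)
export KAFKA_BROKERS="192.168.1.100:9092,192.168.1.101:9092,192.168.1.102:9092"

# Credentials for a user with access to the KPI profile.
# Encrypt the password first with: mzsh encryptpassword <password>
# The user/password format here mirrors the default mzadmin/dr and is an assumption.
export MZ_PLATFORM_AUTH="mzadmin/<encrypted password>"
```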

    In spark_common_params.sh, you may need to change the master host IP address and ports, if applicable. Also edit kpiapp/bin/spark_common_params.sh so that it contains the SPARK_HOME path.
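    A minimal sketch of the corresponding edits in spark_common_params.sh. Only SPARK_HOME is named by the text; the master host variable name and all values are illustrative placeholders, so check the names actually used in the extracted script:

```shell
# Path to the Spark installation (placeholder value)
export SPARK_HOME="/opt/spark"

# Master host IP - placeholder variable name and address; verify against
# the variables used in the extracted spark_common_params.sh
export SPARK_MASTER_HOST="192.168.1.100"
```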

...

  1. Access the conf folder of Apache Spark, rename the spark-defaults.conf.template file to spark-defaults.conf, and add the following configuration variables and options:

    Code Block
    spark.driver.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.executor.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.master.rest.enabled true

...

  1. Add the following to the jvmargs section of the Execution Context definition for the EC that will run the KPI Management workflows. You can open the configuration by running:
    mzsh mzadmin/<password> topo open kpi_ec

    Code Block
    jvmargs {
        args=[
                "--add-opens", "java.base/java.lang.invoke=ALL-UNNAMED",
                "--add-opens", "java.base/java.lang.reflect=ALL-UNNAMED",
                "--add-opens", "java.base/java.util=ALL-UNNAMED"
        ]
    }

Note: The lines "jvmargs {", "args=[", "]", and "}" are not necessarily new; they are included only to clarify where to edit.

Starting KPI

Note

Prerequisite

Before you continue: Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to Spark, and before you can create the topics you must start Kafka and ZooKeeper.

An example set of topics is the following:

kpi-input - For sending data to Spark

kpi-output - For Spark to write the output to, and thus back to the workflow

kpi-alarm - For errors from Spark
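The topics above can be created with Kafka's own CLI. A hedged sketch, assuming a broker on localhost:9092 and single-node defaults; the partition and replication counts are placeholders to adapt to your cluster:

```shell
# Create the three example topics (adjust partitions/replication as needed)
for topic in kpi-input kpi-output kpi-alarm; do
  kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --topic "$topic" \
    --partitions 1 \
    --replication-factor 1
done
```

Note that older Kafka versions (before 2.2) take --zookeeper <host:port> instead of --bootstrap-server.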

...

  1. Start up the Spark cluster and submit the app; here "kpiapp" is a configurable name:

    Code Block
    $ start_master_workers.sh
    $ submit.sh kpiapp

...

  1. Submit the app:

    Code Block
    $ submit.sh kpiapp ...

...

  1. You should now be able to see the workers and executors:

    Code Block
    $ jps
    
    # Output will look something like:
    pid1 Worker
    pid2 Worker
    pid3 CoarseGrainedExecutorBackend
    pid4 CoarseGrainedExecutorBackend
    pid5 DriverWrapper
    pid6 CodeServerMain
    pid8 Master