To be able to use KPI Management in MediationZone in a Private Container Deployment, such as Kubernetes, a number of scripts need to be extracted and some setup is required. This is described on this page.

The scripts are as follows:

  • flush.sh

  • kpi_params.sh

  • spark_common_param.sh

  • start_master_workers.sh

  • stop.sh

  • submit.sh

These scripts are used in the different procedures that you find in the sections for KPI Management - Distributed Processing.

Preparations before extracting the scripts:

A prerequisite is that Spark, ZooKeeper, and Kafka are installed and up and running. It is also mandatory to set up the Kafka host and port. For more information about this, see LINK needed
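
As a quick sanity check, you can verify that the services respond before continuing (a minimal sketch, assuming the default local ports used in the examples on this page, and 7077 for the Spark master):

Code Block
# Check that ZooKeeper, Kafka, and the Spark master are listening
# (assumed default ports; adjust to your environment)
nc -z 127.0.0.1 2181 && echo "ZooKeeper is up"
nc -z 127.0.0.1 9092 && echo "Kafka is up"
nc -z 127.0.0.1 7077 && echo "Spark master is up"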

Before running the command to extract the scripts, the following parameters need to be set as environment variables, as they will be written into some of the scripts:

Code Block
export KAFKA_BROKERS="127.0.0.1:9092"
export SPARK_UI_PORT=4040 
export MZ_PLATFORM_AUTH="mzadmin:DR-4-1D2E6A059AF8120841E62C87CFDB3FF4"
export MZ_KPI_PROFILE_NAME="kpi_common.SalesModel"
export MZ_PLATFORM_URL="http://127.0.0.1:9036"
export ZOOKEEPER_HOSTS="127.0.0.1:2181"
export SPARK_HOME=/opt/spark-3.3.2-bin-hadoop3-scala2.13
export KAFKA_HOME=/opt/kafka_2.13-3.3.1
export PATH=$SPARK_HOME/bin:$KAFKA_HOME/bin:$PATH
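
After setting the variables, you can verify that the tools resolve from the configured locations (a minimal sketch; the exact version output depends on your installation):

Code Block
# Confirm that the Spark and Kafka binaries are found via PATH
spark-submit --version
kafka-topics.sh --version
echo $KAFKA_BROKERS $ZOOKEEPER_HOSTS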

...

1. Set up your preferred KPI configuration profile, or use the simplified example configuration, which can be found in kpi_tst.zip, and start up the platform.

2. Find the kpi_spark*.mzp package among the installation files and copy it to the location where you want to keep your KPI application files.

...

5. Move the mz_kpiapp folder to its intended location and add its bin folder to the PATH environment variable.

Code Block
Example:

$ mv mz_kpiapp ~/
$ export PATH=$PATH:/home/user/mz_kpiapp/bin
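
To make the PATH setting survive new shell sessions, you can append it to your shell profile (a sketch, assuming bash and that mz_kpiapp was moved to your home directory as in the example above):

Code Block
$ echo 'export PATH=$PATH:$HOME/mz_kpiapp/bin' >> ~/.bashrc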

...

  1. In the conf folder of Apache Spark, rename the spark-defaults.conf.template file to spark-defaults.conf and add the following configuration variables and options (a shell sketch for this step follows after the list):

    Code Block
    spark.driver.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.executor.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.master.rest.enabled true
  2. Add the following to the jvmargs section of the execution context definition for the EC that will run the KPI Management workflows. Open the definition with, for example: “mzsh mzadmin/<password> topo open kpi_ec”

    Code Block
    jvmargs {
        args=[
                "--add-opens", "java.base/java.lang.invoke=ALL-UNNAMED",
                "--add-opens", "java.base/java.lang.reflect=ALL-UNNAMED",
                "--add-opens", "java.base/java.util=ALL-UNNAMED"
        ]
    }

NB! The lines “jvmargs {“, “args=[“, “]” and “}” are not necessarily new; they are included only to clarify where to edit.
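
As referenced in step 1, renaming the Spark configuration file can be done from the shell, for example as follows (a sketch, assuming the SPARK_HOME location exported earlier on this page):

Code Block
cd $SPARK_HOME/conf
mv spark-defaults.conf.template spark-defaults.conf
# After adding the options above with a text editor, verify them:
grep -E "add-opens|rest.enabled" spark-defaults.conf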

Starting KPI

Note

Prerequisite

Before you continue: Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to the Spark service. Before you can create the topics, you must start the Kafka and ZooKeeper services.

An example set of topics is the following (a sketch showing how to create them follows after this list):

kpi-input - For sending data to Spark

kpi-output - For Spark to write the output to, and thus send it back to the workflow

kpi-alarm - For errors from Spark
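
As mentioned in the note above, the topics must exist before an application is submitted. They can be created with the kafka-topics.sh tool that ships with Kafka (a minimal sketch, assuming a single local broker and the example topic names; the partition and replication values are placeholders to adjust for your deployment):

Code Block
# Create the example topics on the local broker
for topic in kpi-input kpi-output kpi-alarm; do
  kafka-topics.sh --create \
    --topic $topic \
    --bootstrap-server 127.0.0.1:9092 \
    --partitions 1 \
    --replication-factor 1
done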

...