
To handle the KPI management system in a Private Container Deployment, such as Kubernetes, you must prepare a number of scripts according to the instructions below. You create the following scripts:

  • flush.sh

  • kpi_params.sh

  • spark_common_param.sh

  • start_master_workers.sh

  • stop.sh

  • submit.sh

These scripts are used by the procedures described in the sections for KPI Management - Distributed Processing.

Preparations before creating scripts:

A prerequisite is that Spark, ZooKeeper, and Kafka are installed and up and running.

You must also set up the Kafka host and port. To do this, run the following commands in your environment:

export KAFKA_BROKERS="127.0.0.1:9092"
export SPARK_UI_PORT=4040 
export MZ_PLATFORM_AUTH="mzadmin:DR-4-1D2E6A059AF8120841E62C87CFDB3FF4"
export MZ_KPI_PROFILE_NAME="kpi_common.SalesModel"
export MZ_PLATFORM_URL="http://127.0.0.1:9036"
export ZOOKEEPER_HOSTS="127.0.0.1:2181"
export SPARK_HOME=/opt/spark-3.3.2-bin-hadoop3-scala2.13
export KAFKA_HOME=/opt/kafka_2.13-3.3.1
export PATH=$SPARK_HOME/bin:$KAFKA_HOME/bin:$PATH
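
To verify that the environment is set and that the services are reachable, you can run a quick check. The following is an optional sketch that assumes the Kafka command-line tools are on the PATH as exported above:

$ echo $SPARK_HOME
$ kafka-broker-api-versions.sh --bootstrap-server $KAFKA_BROKERS
$ zookeeper-shell.sh $ZOOKEEPER_HOSTS ls /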

Creating scripts:

1. Set up your preferred KPI configuration, or use the simplified example configuration in kpi_tst.zip, and start up the platform.

2. Locate the kpi_spark*.mzp file among the installation files and copy it to the location where you want to keep your KPI application files.

3. Install the KPI app and extract the app installation:

$ cd release/packages

$ java -jar kpi_spark_9.1.0.0.mzp install

4. You will find a new directory, mz_kpiapp, that contains all the app software.

$ ls -l mz_kpiapp/

This lists:

app # The MZ kpi app
bin # Shell script to handle the app
jars # Extra jar files for the app

5. Move the mz_kpiapp folder to its destination and add its bin directory to the PATH environment variable.

Example:

$ mv mz_kpiapp ~/
$ export PATH=$PATH:/home/user/mz_kpiapp/bin

6. Set the environment variable SPARK_HOME.

$ export SPARK_HOME="your spark home"

7. Edit mz_kpiapp/bin/spark_common_param.sh so that it contains the SPARK_HOME path.
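
A hypothetical excerpt of the edited spark_common_param.sh is shown below; the script ships with the app, and only the SPARK_HOME value needs to point to your own installation (the path here is the one used in the preparation step):

# Hypothetical excerpt of mz_kpiapp/bin/spark_common_param.sh
export SPARK_HOME=/opt/spark-3.3.2-bin-hadoop3-scala2.13
export PATH=$SPARK_HOME/bin:$PATH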

8. In the conf folder of Apache Spark, rename the spark-defaults.conf.template file to spark-defaults.conf and add the following configuration variables and options (example commands follow the listing):

    spark.driver.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.executor.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.master.rest.enabled true
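
For example, on a default Spark installation the rename and edit can be done as follows (the entries to append are the ones listed above):

$ cd $SPARK_HOME/conf
$ mv spark-defaults.conf.template spark-defaults.conf
$ vi spark-defaults.conf    # append the spark.driver/executor.defaultJavaOptions and spark.master.rest.enabled entries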

Starting KPI

Prerequisite

Before you continue: Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to the Spark service. Before you can create the topics, you must start the Kafka and ZooKeeper services.

An example set of topics is the following (see the example commands after the list):

kpi-input - For sending data to Spark

kpi-output - For Spark to write the output to, and thus back to the workflow

kpi-alarm - For errors from Spark
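
For example, the topics can be created with the Kafka command-line tools. This is a sketch that assumes a single-broker setup using the KAFKA_BROKERS address exported earlier; adjust the partition and replication settings to your deployment:

$ kafka-topics.sh --create --topic kpi-input --bootstrap-server $KAFKA_BROKERS --partitions 1 --replication-factor 1
$ kafka-topics.sh --create --topic kpi-output --bootstrap-server $KAFKA_BROKERS --partitions 1 --replication-factor 1
$ kafka-topics.sh --create --topic kpi-alarm --bootstrap-server $KAFKA_BROKERS --partitions 1 --replication-factor 1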

9. Start up the Spark cluster:

$ start_master_workers.sh
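
The start_master_workers.sh script is shipped with the app. As an illustration only, a hypothetical sketch of what such a script could do in a local standalone deployment, starting one master and two worker instances, might look like this (the master URL and the SPARK_WORKER_INSTANCES setting are assumptions):

#!/bin/sh
# Hypothetical sketch: start a standalone Spark master and two local workers.
. "$(dirname "$0")/spark_common_param.sh"     # exports SPARK_HOME
"$SPARK_HOME/sbin/start-master.sh"
export SPARK_WORKER_INSTANCES=2               # start two worker instances
"$SPARK_HOME/sbin/start-worker.sh" spark://$(hostname):7077   # adjust the master URL if the master binds to another host name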

10. Submit the app, where "kpiapp" is a configurable name:

$ submit.sh kpiapp ...

11. You can now see 2 workers and 2 executors:

$ jps

This will give you something like:
pid1 Worker
pid2 Worker
pid3 CoarseGrainedExecutorBackend
pid4 CoarseGrainedExecutorBackend
pid5 DriverWrapper
pid6 CodeServerMain
pid8 Master


