
In preparation for using KPI Management in MediationZone, you need to extract the following scripts:

  • flush.sh

  • kpi_params.sh

  • spark_common_param.sh

  • start_master_workers.sh

  • stop.sh

  • submit.sh

These scripts will be used for different procedures in the KPI Management - Distributed Processing sections.

Preparations before extracting scripts:

A prerequisite is that Spark, ZooKeeper, and Kafka are installed, and that ZooKeeper and Kafka are up and running. For more information about this, see KPI Management - External Software.
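To verify that ZooKeeper and Kafka are running, you can, for example, query ZooKeeper with the srvr command and ask the Kafka broker for its supported API versions. This is a suggested check, assuming the default ports and the installation path used further down; adjust to your environment:

$ echo srvr | nc 127.0.0.1 2181
$ /opt/kafka_2.13-3.3.1/bin/kafka-broker-api-versions.sh --bootstrap-server 127.0.0.1:9092

The first command should return ZooKeeper server statistics, and the second should list the broker's supported API versions.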

Before running the command to extract the scripts, the following parameters need to be set as environment variables, as they will be inserted into some of the scripts:

export KAFKA_BROKERS="127.0.0.1:9092"
export SPARK_UI_PORT=4040 
export MZ_PLATFORM_AUTH="mzadmin:DR-4-1D2E6A059AF8120841E62C87CFDB3FF4"
export MZ_KPI_PROFILE_NAME="kpi_common.SalesModel"
export MZ_PLATFORM_URL="http://127.0.0.1:9036"
export ZOOKEEPER_HOSTS="127.0.0.1:2181"
export SPARK_HOME=/opt/spark-3.3.2-bin-hadoop3-scala2.13
export KAFKA_HOME=/opt/kafka_2.13-3.3.1
export PATH=$SPARK_HOME/bin:$KAFKA_HOME/bin:$PATH
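
As an optional sanity check (not part of the original procedure), you can confirm that the PATH is set correctly by asking the tools for their versions:

$ spark-submit --version
$ kafka-topics.sh --version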

Extracting scripts and KPI app:

1. To extract the KPI app after building it, run the following commands. They extract the software needed by Spark for the KPI app, as well as the scripts needed for starting and configuring Spark.

$ cd release/packages

$ java -jar kpi_spark_9.1.0.0.mzp install

2. You will find a new directory, mz_kpiapp, which contains all the app software.

Running ls -l mz_kpiapp/ will list:

app # The MZ KPI app
bin # Shell scripts to handle the app
jars # Extra jar files for the app

3. Move the mz_kpiapp folder to a suitable location and add its bin folder to the PATH.

Example:

$ mv mz_kpiapp ~/
$ export PATH=$PATH:/home/user/mz_kpiapp/bin

4. Set the environment variable SPARK_HOME.

$ export SPARK_HOME="your spark home"

5. The next step is to modify the scripts under the bin folder according to your specifications and requirements; as extracted, they should be considered examples rather than a finished configuration. The scripts that need to be updated are kpi_params.sh and spark_common_param.sh.

In kpi_params.sh, KAFKA_BROKERS needs to be configured with the hosts and ports of the Kafka brokers. For example:

export KAFKA_BROKERS="192.168.1.100:9092,192.168.1.101:9092,192.168.1.102:9092"

The username and password of a user with access to the profile must be entered in the property MZ_PLATFORM_AUTH, unless the default username and password mzadmin/dr is used. The password is encrypted using the mzsh command encryptpassword, as shown in the example below.
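
For instance, with a hypothetical password value (the encrypted string is produced by the command, and your output will differ):

$ mzsh encryptpassword mypassword
$ export MZ_PLATFORM_AUTH="mzadmin:<encrypted string from the output above>"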

The memory settings may need to be altered depending on the expected load, as well as the UI port for the KPI app inside Spark (default 4040). The addresses and ports of the platform, Kafka, and ZooKeeper may also need to be updated.

In spark_common_param.sh, you may need to change the master host IP address and ports, if applicable. Also make sure that kpiapp/bin/spark_common_param.sh contains the SPARK_HOME path; see the sketch below.
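
As an illustration, an edited spark_common_param.sh could contain entries along these lines. The variable names below are hypothetical; check the extracted script for the actual names it uses:

# Hypothetical sketch of spark_common_param.sh - adapt to the extracted script
export SPARK_HOME=/opt/spark-3.3.2-bin-hadoop3-scala2.13
export SPARK_MASTER_HOST=192.168.1.100   # Spark master host IP
export SPARK_MASTER_PORT=7077            # Spark master port
export SPARK_MASTER_WEBUI_PORT=8080      # Spark master web UI port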

  1. Go to the conf folder of Apache Spark, rename the spark-defaults.conf.template file to spark-defaults.conf, and add the following configuration variables and options:

    spark.driver.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.executor.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.master.rest.enabled true
  2. Add the following to the jvmargs section of the Execution Context definition for the EC that will run the KPI Management workflows. You can open the configuration by running:
    mzsh mzadmin/<password> topo open kpi_ec
    For example:

jvmargs {
    args=[
            "--add-opens", "java.base/java.lang.invoke=ALL-UNNAMED",
            "--add-opens", "java.base/java.lang.reflect=ALL-UNNAMED",
            "--add-opens", "java.base/java.util=ALL-UNNAMED"
    ]
}

NB! The lines "jvmargs {", "args=[", "]", and "}" are not necessarily new; they are included only to clarify where to edit.

Starting KPI

Prerequisite

Before you continue: Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to Spark, and before you can create the topics, Kafka and ZooKeeper must be running.

An example set of topics is the following:

kpi-input - For sending data to Spark

kpi-output - For Spark to write the output to, and thus back to the workflow

kpi-alarm - For errors from Spark
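
The topics can be created with the kafka-topics.sh tool included with Kafka. For example (the partition and replication settings below are illustrative; size them for your expected load):

$ kafka-topics.sh --create --topic kpi-input --bootstrap-server 127.0.0.1:9092 --partitions 3 --replication-factor 1
$ kafka-topics.sh --create --topic kpi-output --bootstrap-server 127.0.0.1:9092 --partitions 3 --replication-factor 1
$ kafka-topics.sh --create --topic kpi-alarm --bootstrap-server 127.0.0.1:9092 --partitions 1 --replication-factor 1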

1. Start up the Spark cluster:

$ start_master_workers.sh

2. Submit the app; here "kpiapp" is a configurable name:

$ submit.sh kpiapp ...

3. You should now be able to see workers and executors:

$ jps

This will give you something like:
pid1 Worker
pid2 Worker
pid3 CoarseGrainedExecutorBackend
pid4 CoarseGrainedExecutorBackend
pid5 DriverWrapper
pid6 CodeServerMain
pid8 Master


