In preparation for using KPI Management in MediationZone, you need to extract the following scripts:
flush.sh
kpi_params.sh
spark_common_param.sh
start_master_workers.sh
stop.sh
submit.sh
These scripts will be used for different procedures in the KPI Management - Distributed Processing sections.
Preparations before extracting scripts:
A prerequisite is that Spark, ZooKeeper, and Kafka are installed, and that ZooKeeper and Kafka are up and running. For more information, see KPI Management - External Software.
Before running the command to extract the scripts, set the following parameters as environment variables, as their values will be inserted into some of the scripts:
export KAFKA_BROKERS="127.0.0.1:9092"
export SPARK_UI_PORT=4040
export MZ_PLATFORM_AUTH="mzadmin:DR-4-1D2E6A059AF8120841E62C87CFDB3FF4"
export MZ_KPI_PROFILE_NAME="kpi_common.SalesModel"
export MZ_PLATFORM_URL="http://127.0.0.1:9036"
export ZOOKEEPER_HOSTS="127.0.0.1:2181"
export SPARK_HOME=/opt/spark-3.3.2-bin-hadoop3-scala2.13
export KAFKA_HOME=/opt/kafka_2.13-3.3.1
export PATH=$SPARK_HOME/bin:$KAFKA_HOME/bin:$PATH
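As a quick sanity check, you can verify that the variables resolve and that the Spark and Kafka binaries are on the PATH before proceeding. This is a minimal sketch; the installation paths above are examples and should match your actual directories:

$ # the paths printed here should match your actual installation directories
$ echo "$SPARK_HOME" "$KAFKA_HOME"
$ command -v spark-submit kafka-topics.sh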
Extracting scripts and KPI app:
Log in with the mzsh command line tool and open the kpi_ec configuration:
$ mzsh mzadmin/<password>
$ topo open kpi_ec
Add the following jvmargs block to the configuration:
jvmargs {
    args=[
        "--add-opens", "java.base/java.lang.invoke=ALL-UNNAMED",
        "--add-opens", "java.base/java.lang.reflect=ALL-UNNAMED",
        "--add-opens", "java.base/java.util=ALL-UNNAMED"
    ]
}
Note: The lines "jvmargs {", "args=[", "]" and "}" are not necessarily new; they are included only to clarify where to edit.
Starting KPI
Prerequisite
Before you continue: Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to Spark, and before you can create the topics, Kafka and ZooKeeper must be running.
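If ZooKeeper and Kafka are not already running, they can be started with the scripts bundled in the Kafka distribution. This is a minimal sketch assuming the default property files under $KAFKA_HOME/config; in your installation, ZooKeeper may instead run as a separately managed service:

$ # assumes the default property files shipped with the Kafka distribution
$ zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
$ kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties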
An example set of topics is the following:
kpi-input - For sending data to Spark
kpi-output - For Spark to write the output to, and thus back to the workflow
kpi-alarm - For errors from Spark
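Since the topics must exist before the application is submitted, you can create them with the kafka-topics.sh tool from the Kafka distribution. A minimal sketch; the partition count and replication factor are placeholder values that should be adjusted for your cluster:

$ # partition count and replication factor are placeholders - adjust for your cluster
$ kafka-topics.sh --create --bootstrap-server $KAFKA_BROKERS --topic kpi-input --partitions 1 --replication-factor 1
$ kafka-topics.sh --create --bootstrap-server $KAFKA_BROKERS --topic kpi-output --partitions 1 --replication-factor 1
$ kafka-topics.sh --create --bootstrap-server $KAFKA_BROKERS --topic kpi-alarm --partitions 1 --replication-factor 1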
9. Start up the Spark cluster:
$ start_master_workers.sh
10. Submit the app, where "kpiapp" is a configurable name:
$ submit.sh kpiapp
...
11. You should now be able to see workers and executors:
$ jps
This will give you something like:
pid1 Worker
pid2 Worker
pid3 CoarseGrainedExecutorBackend
pid4 CoarseGrainedExecutorBackend
pid5 DriverWrapper
pid6 CodeServerMain
pid8 Master
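You can also confirm that the cluster is up through the Spark master web UI. A sketch assuming the standalone master's default port 8080 on localhost; note that the SPARK_UI_PORT set earlier applies to the application UI, not the master:

$ # port 8080 is the Spark standalone master's default web UI port
$ curl -s http://127.0.0.1:8080 | grep -i "alive workers"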