To operate the KPI management system in a Private Container Deployment, such as Kubernetes, you must prepare a number of scripts according to the instructions below. The scripts to create are the following:

...

These scripts are used by the procedures described in the KPI Management - Distributed Processing sections.

Preparations before creating scripts:

A prerequisite is that Spark, ZooKeeper, and Kafka are installed and running.
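To verify the prerequisite, you can probe each service before continuing. The sketch below assumes local installations and default ports (ZooKeeper on 2181, Kafka on 9092); adjust host names and ports to your deployment.

Code Block
# ZooKeeper: the "ruok" four-letter word answers "imok" if the service is
# healthy (on newer ZooKeeper versions it must be whitelisted via
# 4lw.commands.whitelist).
$ echo ruok | nc localhost 2181

# Kafka: check that the broker port accepts connections.
$ nc -z localhost 9092 && echo "Kafka broker reachable"

# Spark: confirm that the installation is in place.
$ $SPARK_HOME/bin/spark-submit --version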

Creating scripts:

1. Set up your preferred KPI configuration, or use the simplified example configuration in kpi_tst.zip, and start up the platform.
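If you go with the simplified example configuration, one way to start is to unpack the attached archive and review its contents before applying them; the target directory below is only an illustration.

Code Block
# List the archive contents, then unpack into a working directory.
$ unzip -l kpi_tst.zip
$ unzip kpi_tst.zip -d ./kpi_tst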

...

7. Edit kpiapp/bin/spark_common_param.sh so that it contains the correct SPARK_HOME path, for example as sketched below.
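As a sketch, the SPARK_HOME line in spark_common_param.sh could look like the following; the installation path is an example and must match your environment.

Code Block
# Example only: point SPARK_HOME at your Spark installation directory.
SPARK_HOME=/opt/spark
export SPARK_HOME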

  1. In the conf folder of Apache Spark, rename the spark-defaults.conf.template file to spark-defaults.conf and add the following configuration variables and options (the --add-opens flags grant Spark the reflective JVM access it needs on newer Java versions, and spark.master.rest.enabled enables the master's REST submission endpoint):

    Code Block
    spark.driver.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.executor.defaultJavaOptions    --add-opens java.base/java.lang=ALL-UNNAMED \
    --add-opens java.base/java.lang.invoke=ALL-UNNAMED \
    --add-opens java.base/java.lang.reflect=ALL-UNNAMED \
    --add-opens java.base/java.util=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent=ALL-UNNAMED \
    --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED \
    --add-opens java.base/java.io=ALL-UNNAMED \
    --add-opens java.base/java.net=ALL-UNNAMED \
    --add-opens java.base/java.nio=ALL-UNNAMED \
    --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
    --add-opens java.base/sun.nio.cs=ALL-UNNAMED \
    --add-opens java.base/sun.util.calendar=ALL-UNNAMED \
    --add-opens java.base/sun.security.action=ALL-UNNAMED
    
    spark.master.rest.enabled true

Starting KPI

Prerequisite

Before you continue: Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to the Spark service, and before you can create the topics you must start the Kafka and ZooKeeper services.

...

An example set of topics is the following (a creation sketch follows the list):

kpi-input - For sending data to Spark

kpi-output - For Spark to write its output to, and thus back to the workflow

kpi-alarm - For errors from Spark
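The topics can be created with the kafka-topics.sh tool that ships with Kafka. The sketch below assumes a single local broker on localhost:9092 and uses one partition and a replication factor of 1 for brevity; size both for your cluster. (Kafka versions before 2.2 take --zookeeper instead of --bootstrap-server.)

Code Block
$ kafka-topics.sh --create --topic kpi-input --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
$ kafka-topics.sh --create --topic kpi-output --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
$ kafka-topics.sh --create --topic kpi-alarm --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Verify that all three topics exist.
$ kafka-topics.sh --list --bootstrap-server localhost:9092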

9. Start up the Spark cluster; here “kpiapp” is a configurable name:

Code Block
$ start_master_workers.sh
...

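To confirm that the master and workers came up, you can query the standalone master's JSON status view; the port below is the Spark default (8080) and may differ in your deployment.

Code Block
# Returns cluster status, including the list of registered workers.
$ curl -s http://localhost:8080/json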

10. Submit the app:

Code Block
$ submit.sh kpiapp ...

11. You can now see 2 workers and 2 executors:

...
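If you prefer a command-line check over the web UI, the master's JSON view and the application's monitoring REST API expose the same information. The sketch assumes the default ports (8080 for the master UI, 4040 for the first application UI) and that jq is installed; note that the executors endpoint lists the driver alongside the executors.

Code Block
# Count registered workers (expecting 2).
$ curl -s http://localhost:8080/json | jq '.workers | length'

# Look up the running application and list its executors
# (the driver appears as one entry in addition to the 2 executors).
$ APP_ID=$(curl -s http://localhost:4040/api/v1/applications | jq -r '.[0].id')
$ curl -s http://localhost:4040/api/v1/applications/$APP_ID/executors | jq 'length'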