Kafka and Zookeeper are required for sending data to and from the Spark cluster.

Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to the Spark service. Before you can create the topics you must start the Kafka and Zookeeper services.

See 5.2 Preparing and Creating Scripts for KPI Management on how to start Spark, Kafka, and Zookeeper.

The topics are used for transferring data to the Spark application, for receiving calculated KPIs from Spark, and, in the case of the third topic, for alarms. The default topic names are kpi-input, kpi-output, and kpi-alarm, but the names can be altered in the KPI Management Profile. Ensure that the number of partitions matches the number of Kafka brokers.
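As a sketch, the three default topics could be created with Kafka's standard kafka-topics.sh tool. The Zookeeper address, installation path, broker count, and replication factor below are illustrative assumptions, not values from this document:

```shell
# Illustrative topic creation: BROKERS is a placeholder for the actual
# number of Kafka brokers in your cluster (partition count must match it).
BROKERS=3
for topic in kpi-input kpi-output kpi-alarm; do
  kafka-topics.sh --create \
    --zookeeper localhost:2181 \
    --topic "$topic" \
    --partitions "$BROKERS" \
    --replication-factor 1
done
```

Newer Kafka releases accept `--bootstrap-server <broker>:9092` in place of `--zookeeper`; use whichever form matches your installed version.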

Retention Settings

The default data retention period in Kafka is one day. You can change the length of this period to conserve disk space.
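A minimal sketch of shortening the cluster-wide retention period, assuming the standard broker configuration file `config/server.properties` (the path and the 12-hour value are examples, not values mandated by this document):

```shell
# Illustrative: shorten cluster-wide log retention from the default.
# Append the setting to the broker configuration, then restart each
# broker for the change to take effect.
echo "log.retention.hours=12" >> /opt/kafka/config/server.properties
```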

...

Hint!

The instruction above changes the retention settings for all topics in the Kafka cluster. You can also override the retention setting for individual topics during creation. For further information, see 5.3.3 Starting Clusters and Creating Topics.
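A per-topic override at creation time could look like the following sketch, using Kafka's topic-level `retention.ms` setting. The topic name, Zookeeper address, and counts are illustrative:

```shell
# Illustrative: create a single topic with its own retention period,
# overriding the cluster-wide default (43200000 ms = 12 hours).
kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --topic kpi-output \
  --partitions 3 \
  --replication-factor 1 \
  --config retention.ms=43200000
```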

For further information about Kafka, see /wiki/spaces/MD82/pages/3785234 in the /wiki/spaces/MD82/pages/3768690.

...