
The Kafka and ZooKeeper services are required for sending data to and from the Spark cluster.

Spark applications must be configured with a set of Kafka topics that are either shared between multiple applications or dedicated to specific applications. The assigned topics must be created before you submit an application to the Spark service, and before you can create the topics you must start the Kafka and ZooKeeper services.

See 5.2 Preparing and Creating Scripts for KPI Management for how to start Spark, Kafka, and ZooKeeper.

Retention Settings

The default data retention period in Kafka is one day. You can change the length of this period to conserve disk space.

Set the following properties in the copied file broker-defaults.properties:

log.retention.bytes - Must be greater than the value of the property log.segment.bytes.

log.segment.bytes - Must exceed the size of the input/output segments to and from Kafka.

log.retention.ms - Must exceed the largest window size in the service model by at least a factor of 3.
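As an illustration only, a broker-defaults.properties fragment that satisfies the three constraints above might look like the following. The specific values are assumptions for a hypothetical deployment with a largest window size of one day, not recommended defaults:

```properties
# Hypothetical example values - size these to your own data volumes.
# log.retention.bytes must be greater than log.segment.bytes:
log.retention.bytes=2147483648
log.segment.bytes=536870912
# Retain data for 3 days (259200000 ms), i.e. at least 3x an
# assumed largest window size of 1 day:
log.retention.ms=259200000
```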

Hint!

The instructions above change the retention settings for all topics in the Kafka cluster. You can also override the retention settings for individual topics when you create them. For further information see 5.3.3 Starting Services and Creating Topics.
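As a sketch of such a per-topic override, Kafka's standard admin tool can set retention.ms for a single topic at creation time. The topic name, bootstrap server address, and values below are placeholder assumptions; adjust them to your environment (on older Kafka versions, --zookeeper is used in place of --bootstrap-server):

```shell
# Create a topic whose retention overrides the broker default.
# Assumes a broker reachable at localhost:9092 (placeholder).
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic my-spark-input \
  --partitions 3 \
  --replication-factor 1 \
  --config retention.ms=259200000
```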

For further information about Kafka, see /wiki/spaces/MD82/pages/3785234 in the /wiki/spaces/MD82/pages/3768690.
