This section describes details of the KPI Management implementation that you should take into account when deploying it in a production environment.

Operating System

The default maximum number of file descriptors and processes in your OS may be insufficient for the Spark cluster. 

If the limit on file descriptors is exceeded, this is indicated by the error message "too many open files" in the Spark driver log. The sketch below outlines one way to increase the limit on a Linux system.

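On most Linux distributions the limit is set in the file /etc/security/limits.conf. A minimal sketch, assuming the Spark processes run as a dedicated user (the user name sparkuser and the values are illustrative; size them to your cluster load):

    # /etc/security/limits.conf
    # Raise the open file limit for the user running the Spark processes.
    sparkuser  soft  nofile  65536
    sparkuser  hard  nofile  65536

Log in again as that user and verify the new soft limit:

    $ ulimit -n
    65536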

The error "java.lang.OutOfMemoryError: unable to create new native thread" indicates that the maximum number of processes is exceeded. To increase this number, you typically need to change the settings in the file /etc/security/limits.conf. For further information about how to update this file, see your operating system documentation.
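
On Linux systems that use limits.conf, the process limit is controlled by the nproc entries. A minimal sketch, again assuming a dedicated sparkuser (names and values are illustrative):

    # /etc/security/limits.conf
    # Raise the process/thread limit for the user running the Spark processes.
    sparkuser  soft  nproc  16384
    sparkuser  hard  nproc  16384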

Spark

To scale out the Spark applications, you can increase the number of slave hosts for the Spark cluster or increase the number of CPUs on existing hosts.

Follow these steps to scale out the Spark applications:

  1. Stop the KPI Management workflows.

  2. Shut down the Spark cluster. 

  3. Shut down all Kafka brokers and ZooKeeper servers.

  4. Update the Spark service configuration. If you are adding new slave hosts in addition to adding CPUs on existing hosts, you must also add the new hosts to the configuration.

  5. Increase the number of Kafka partitions. For the Spark service to work, the number of partitions for each topic must be equal to the value of the property spark.default.parallelism in the Spark application configuration. See the sketch after this list.

  6. Remove the checkpoint directory. 

    $ rm -rf <checkpoint directory>/*

  7. Restart Kafka and ZooKeeper.

  8. Submit the Spark application.

  9. Start the KPI Management workflows.
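
For step 5, the partition count can typically be raised with the kafka-topics tool that ships with Kafka; note that partitions can only be increased, never decreased. A minimal sketch, assuming a topic named kpi-input and a spark.default.parallelism of 8 (the topic name, host, and partition count are illustrative; depending on your Kafka version the tool connects with --zookeeper or --bootstrap-server):

    $ kafka-topics.sh --zookeeper <zookeeper host>:2181 \
        --alter --topic kpi-input --partitions 8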

ZooKeeper

The Spark application connects to the ZooKeeper service, and you must ensure that the maximum number of client connections is not exceeded, since exceeding it causes errors during processing. The default maximum is 60 connections per client host. The required number of connections corresponds directly to the total number of running Spark executors.
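
If you run more than 60 executors per client host, raise the limit in the ZooKeeper configuration. A minimal sketch of the relevant zoo.cfg setting (the value 200 is illustrative; size it to your executor count per host):

    # zoo.cfg
    # Maximum number of concurrent connections a single client host may open.
    maxClientCnxns=200

Restart the ZooKeeper servers for the change to take effect.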
