
Use the provided scripts stop.sh and start_master_workers.sh to redeploy the Spark workers. This is required after you have increased or decreased the number of slave Service Contexts that are available to the Spark service, that is, changed the number of worker nodes in the Spark cluster.

Example - Redeploying the Spark workers

$ stop.sh
$ start_master_workers.sh

When you change the number of Spark slaves, you may also want to update the property spark.default.parallelism in the service configuration to optimize the performance of the Spark cluster. For the change to take effect, you must restart the cluster and resubmit the Spark application. For further information about spark.default.parallelism, see KPI Management - External Software.
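As an illustration only, a larger cluster typically warrants a higher value; the value 16 below is an assumption, not a recommendation, and where the property is set depends on your service configuration (see KPI Management - External Software).

Example - Setting spark.default.parallelism

spark.default.parallelism=16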

If the Spark UI indicates that a worker has status "DEAD", you must restart the worker.

$ mzsh spark worker-restart spark/<service-instance>

Example - Restarting a worker

$ mzsh spark worker-restart spark/spark1

Note!

Running more than one Spark worker per host is not recommended. The command above restarts only one worker per host.
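If your topology contains more than one Spark service instance, for example one per host, run the command once for each instance. The instance names spark1 and spark2 below are placeholders; substitute the instance names from your own topology.

Example - Restarting the workers of several service instances

$ mzsh spark worker-restart spark/spark1
$ mzsh spark worker-restart spark/spark2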
