...

  • Create separate KPI profiles for each KPI model. See KPI Profile.
  • Create separate input, output, and alarm topics in Kafka for each service model. See Starting Clusters and Creating Topics. A sketch of the topic-creation commands follows this list.
  • Create separate Spark application configurations for each service model. You can do this by updating the script kpi_params.sh in the folder mz_kpiapp/bin, making one entry per KPI model. A sketch of a duplicated entry also follows this list.

    Copy the whole if-statement section for "kpiapp" and, at a minimum, alter these parameters:

    if [ "kpiapp" = "$1" ]                            ← one application per KPI model
    export MZ_KPI_PROFILE_NAME="kpisales.SalesModel"  ← the name of the KPI profile to use

    Note!

    Set the Spark application property spark.cores.max to limit the cluster resources that an application can use. If you do not set this property, the first submitted Spark application uses all available resources, and other applications must wait for those resources to be freed.
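For the Kafka topic creation mentioned in the list above, a minimal sketch using the stock kafka-topics.sh tool (the topic names, partition and replication counts, and the ZooKeeper address are placeholders, not taken from this page; on Kafka 2.2 and later, pass --bootstrap-server with a broker address instead of --zookeeper):

    # Placeholder names: one input, one output, and one alarm topic per service model.
    kafka-topics.sh --create --zookeeper localhost:2181 \
      --replication-factor 1 --partitions 4 --topic kpi-sales-input
    kafka-topics.sh --create --zookeeper localhost:2181 \
      --replication-factor 1 --partitions 4 --topic kpi-sales-output
    kafka-topics.sh --create --zookeeper localhost:2181 \
      --replication-factor 1 --partitions 4 --topic kpi-sales-alarm

Repeat with a different set of topic names for each additional service model.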

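As a sketch of the duplicated kpi_params.sh entry described above, assuming a hypothetical second KPI model called TrafficModel (only the if-statement pattern and the MZ_KPI_PROFILE_NAME variable come from this page; the second application and profile names are invented for illustration):

    # First application block, following the pattern above.
    if [ "kpiapp" = "$1" ]; then
        export MZ_KPI_PROFILE_NAME="kpisales.SalesModel"
    fi

    # Hypothetical second block: one application per KPI model.
    if [ "kpiapp2" = "$1" ]; then
        export MZ_KPI_PROFILE_NAME="kpitraffic.TrafficModel"
    fi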
You can use the same instance of the kpi-model service to provision all the service models.

Submit two different Spark applications, one per KPI model.
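The exact submission command depends on your installation. As a generic illustration with plain spark-submit (the master URL, application names, and JAR paths are placeholders; spark.cores.max is the property from the note above):

    # Each KPI model runs as its own application with a capped core count,
    # so neither application can starve the other of cluster resources.
    spark-submit --master spark://sparkmaster:7077 --name kpiapp \
      --conf spark.cores.max=4 kpiapp.jar
    spark-submit --master spark://sparkmaster:7077 --name kpiapp2 \
      --conf spark.cores.max=4 kpiapp2.jar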

You can define different configurations:

...

Two Spark Application blocks, one slave

...

Two Spark Application blocks, two slaves

...

If required, you can specify the amount of memory or the number of cores reserved for each slave individually by setting properties in the deployment-info block. These override any property values that you have set in the spark-environment block.

Example - Overriding properties in spark-environment block

...


