The preparations described below are required before installing on Amazon Web Services (AWS) using Helm charts and Docker images. Read through the steps below and complete each one before installing. These preparations normally only need to be performed once.
Prerequisites
The following applications must be installed before you can proceed with the installation:
| Application | Download from | Comments |
|---|---|---|
| Helm version 3.x | https://helm.sh/docs/intro/install | Mandatory |
| Cert-manager 1.1.0 or later | https://cert-manager.io | Recommended. If you do not want to use cert-manager, you can run the Helm install with the parameter mzOperator.webhook.tls.cert.delegate=internal. See the documented Helm values to understand the implications of this. |
| AWS CLI | https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html | Mandatory, for configuring access to your EKS cluster with kubeconfig |
| kubectl | https://kubernetes.io/docs/tasks/tools/install-kubectl/ | Mandatory, for all administrative tasks with the cluster |
| Terraform v12 | https://learn.hashicorp.com/terraform/getting-started/install.html | Optional, required only if you want to spin up new infrastructure using the example templates in the upcoming sections |
| eksctl | https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html | Optional, required only if you want to spin up new infrastructure using the example templates in the upcoming sections |
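Once the tools are installed, you can confirm that each one is available on your PATH. The commands below are a minimal sketch; the exact version output will differ depending on the releases you have installed:
# Optional sanity check of the prerequisite tools
$ helm version
$ kubectl version --client
$ aws --version
$ terraform version
$ eksctl version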
By default, Derby is used as the platform database. Should you want to run your installation on PostgreSQL, you must set up the following:
- AWS RDS with the PostgreSQL engine, or an existing database instance with the correct connection details and credentials, and with privileges to create the required database structure.
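If you need to create a new RDS instance, a minimal sketch using the AWS CLI is shown below. The instance identifier, instance class, storage size, and credentials are placeholder assumptions; adjust them, and add networking options such as --db-subnet-group-name and --vpc-security-group-ids, to match your VPC setup:
# Example only: values below are illustrative, not mandated by this guide
$ aws rds create-db-instance \
    --db-instance-identifier mz-postgres \
    --engine postgres \
    --db-instance-class db.t3.medium \
    --allocated-storage 20 \
    --master-username <master user> \
    --master-user-password <master password>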
The existing infrastructure should have the following services before you install:
- EKS Cluster or OpenShift Cluster (see the eksctl sketch after this list)
- Worker Nodes
- VPC (public and private subnets)
- NAT Gateways
- Internet Gateways
- Routing tables
- EFS as persistent storage
Optional services, in case they are needed:
- Load Balancers (Application Load Balancers)
- RDS instance, preferably PostgreSQL
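If you do not already have an EKS cluster, one way to create a small cluster with private worker nodes is sketched below using eksctl. The cluster name, region, zones, node type, and node count are assumptions for illustration; align them with the deployment architecture described in the next section:
# Example only: creates an EKS cluster with two worker nodes placed in private subnets
$ eksctl create cluster \
    --name mz-eks \
    --region <region> \
    --zones <zone-1>,<zone-2>,<zone-3> \
    --nodes 2 \
    --node-type m5.large \
    --node-private-networking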
Deployment Architecture
VPC - Virtual Private Cloud
Two or three availability zones (AZs) are supported for the Virtual Private Cloud; using three is the recommendation. Each AZ has one public and one private subnet. Note that the number of nodes is not tied to the number of AZs: you can run two nodes with a VPC in three AZs. The nodes then only occupy two AZs, but can move to another AZ if one goes down. Using three AZs is better for high availability and redundancy. In terms of cost, each AZ adds one NAT Gateway (if the nodes are in private subnets). The recommended configuration is to have all nodes in the private subnets. This is more secure and does not allow direct access to the machines, as they do not have any public IP assigned.
Kubernetes
This is an overview of the different pods and load balancers. A standard setup with load balancers in public subnets and Kubernetes pods in private subnets is used.
If you do not have any existing setup, or you are starting with a fresh AWS account, refer to the Setup AWS Terraform section and modify the templates to match your production environment.
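As a rough outline, the example Terraform templates are applied with the standard Terraform workflow. The directory path is an assumption; use the location of the example templates in your environment:
# Example only: run from the directory containing the example Terraform templates
$ cd <path to terraform templates>
$ terraform init
$ terraform plan -out=tfplan
$ terraform apply tfplan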
Initialization
Use the aws configure command to set up your AWS credentials.
$ aws configure
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
output = json
region = <region where you have your EKS cluster>
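To confirm that the credentials are picked up correctly, you can ask AWS which identity the CLI is using. This is an optional sanity check, not a required step:
# Optional: shows the account and IAM identity the CLI is authenticated as
$ aws sts get-caller-identity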
Update the kubeconfig context to access your EKS cluster.
$ aws eks update-kubeconfig --name <name_of_your_cluster>
To set the default cluster, used as the example in this installation:
$ aws eks update-kubeconfig --name mz-eks
Note!
This step can be omitted if you do not have an existing EKS cluster or if you will be using an OpenShift cluster.
If you do not have an EKS cluster installed, you can create one as described in the Setup AWS Terraform section.
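If you want to double-check that the kubeconfig update pointed kubectl at the intended cluster, the following commands can be used as optional verification:
# Optional: show the active kubeconfig context and the EKS clusters in the region
$ kubectl config current-context
$ aws eks list-clusters --region <region>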
Verify that you are in the right cluster and can list the nodes and other resources:
$ kubectl get node
$ kubectl get all
Note!
This step can be omitted if you do not have an existing EKS cluster or if you will be using an OpenShift cluster.
If you do not have an EKS cluster installed, you can create one as described in the Setup AWS Terraform section.
To verify that the Helm CLI is initialized, run the following command:
$ helm version -c
Output example:
$ helm version -c
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
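If you chose to use cert-manager as recommended in the prerequisites, it can be installed with Helm. The commands below are a minimal sketch based on the public cert-manager chart; the namespace and chart version shown are assumptions, so check the cert-manager documentation for the values appropriate to your cluster:
# Example only: installs cert-manager from the public Jetstack chart repository
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --create-namespace \
    --version v1.1.0 \
    --set installCRDs=true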