Create a Calico Enterprise management cluster
Big picture
Create a Calico Enterprise management cluster to manage multiple clusters from a single management plane using Helm 3.
Value
Helm charts are a way to package up an application for Kubernetes (similar to apt or yum for operating systems). Helm is also used by tools like ArgoCD to manage applications in a cluster, taking care of install, upgrade (and rollback if needed), etc.
Before you begin
Required
- Install Helm 3
- `kubeconfig` is configured to work with your cluster (check by running `kubectl get nodes`)
- Credentials for the Tigera private registry and a license key
Concepts
Operator-based installation
In this guide, you install the Tigera Calico operator and custom resource definitions using the Helm 3 chart. The Tigera operator provides lifecycle management for Calico Enterprise exposed via the Kubernetes API defined as a custom resource definition.
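For context, the operator reconciles an `Installation` custom resource created from the chart values. A minimal sketch of that resource (field values here are illustrative, not a complete spec):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Enterprise variant managed by the Tigera operator
  variant: TigeraSecureEnterprise
```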
How to
Get the Helm chart
```bash
curl -O -L https://downloads.tigera.io/ee/charts/tigera-operator-v3.18.2-0.tgz
```
Customize the Helm chart
If you are installing on a cluster created by EKS, GKE, AKS, or Mirantis Kubernetes Engine (MKE), or you need to customize TLS certificates, you must customize this Helm chart by creating a `values.yaml` file. Otherwise, you can skip this step.

If you are installing on a cluster created by EKS, GKE, AKS, or Mirantis Kubernetes Engine (MKE), set the `kubernetesProvider` as described in the Installation reference. For example:

```bash
echo 'installation: { kubernetesProvider: EKS }' > values.yaml
```
For an Azure AKS cluster with no Kubernetes CNI pre-installed, create `values.yaml` with the following command:

```bash
cat > values.yaml <<EOF
installation:
  kubernetesProvider: AKS
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
      - cidr: 10.244.0.0/16
        encapsulation: VXLAN
EOF
```

Add any other customizations you require to `values.yaml`. To see values that can be customized in the chart, run the following command:

```bash
helm show values ./tigera-operator-v3.18.2-0.tgz
```
Install Calico Enterprise
You can expose the management cluster with either a NodePort or a LoadBalancer service.

To install a Calico Enterprise management cluster with Helm, using a NodePort service:
Export the service node port number:

```bash
export EXT_SERVICE_NODE_PORT=30449
```

Export the public address or host of the management cluster (e.g., "example.com:1234" or "10.0.0.10:1234"):

```bash
export MANAGEMENT_CLUSTER_ADDR=<your-management-cluster-addr>:$EXT_SERVICE_NODE_PORT
```
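As a quick sanity check before continuing, you can verify the chosen port falls in the default Kubernetes NodePort range (30000-32767). A minimal sketch, using a placeholder host `example.com`:

```shell
# Sketch: guard against a port outside the default NodePort range
# before composing the management cluster address.
EXT_SERVICE_NODE_PORT=30449
if [ "$EXT_SERVICE_NODE_PORT" -lt 30000 ] || [ "$EXT_SERVICE_NODE_PORT" -gt 32767 ]; then
  echo "error: $EXT_SERVICE_NODE_PORT is outside the default NodePort range" >&2
  exit 1
fi
MANAGEMENT_CLUSTER_ADDR="example.com:$EXT_SERVICE_NODE_PORT"
echo "$MANAGEMENT_CLUSTER_ADDR"
```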
Export one or more managed clusters.

Generate the base64 encoded CRT and KEY for a managed cluster:

```bash
openssl genrsa 2048 | base64 -w 0 > my-managed-cluster.key.base64
openssl req -new -key <(base64 -d my-managed-cluster.key.base64) -subj "/CN=my-managed-cluster" | \
  openssl x509 -req -signkey <(base64 -d my-managed-cluster.key.base64) -days 365 | base64 -w 0 > my-managed-cluster.crt.base64
```
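If you want to confirm the encoded certificate decodes cleanly and carries the expected CN, the following sketch regenerates a throwaway key/cert pair in a temp directory (so it is self-contained) and prints the certificate subject:

```shell
# Sketch: sanity-check a base64-encoded self-signed cert by decoding it
# and printing its subject. Uses a throwaway pair in a temp directory.
tmpdir=$(mktemp -d)
openssl genrsa 2048 2>/dev/null | base64 -w 0 > "$tmpdir/mc.key.base64"
base64 -d "$tmpdir/mc.key.base64" > "$tmpdir/mc.key"
openssl req -new -key "$tmpdir/mc.key" -subj "/CN=my-managed-cluster" 2>/dev/null |
  openssl x509 -req -signkey "$tmpdir/mc.key" -days 365 2>/dev/null |
  base64 -w 0 > "$tmpdir/mc.crt.base64"
# The subject should carry the CN used when creating the request
CERT_SUBJECT=$(base64 -d "$tmpdir/mc.crt.base64" | openssl x509 -noout -subject)
echo "$CERT_SUBJECT"
rm -rf "$tmpdir"
```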
Export the managed cluster variables:

```bash
export MANAGED_CLUSTER_NAME=my-managed-cluster
export MANAGED_CLUSTER_OPERATOR_NAMESPACE=tigera-operator
export MANAGED_CLUSTER_CERTIFICATE=$(cat my-managed-cluster.crt.base64)
```

Append the management cluster context to your `values.yaml`:

```bash
echo "
managementCluster:
  enabled: true
  address: $MANAGEMENT_CLUSTER_ADDR
  service:
    enabled: true
    annotations:
    type: NodePort
    port: 9449
    targetPort: 9449
    protocol: TCP
    nodePort: $EXT_SERVICE_NODE_PORT
managedClusters:
  enabled: true
  clusters:
  - name: $MANAGED_CLUSTER_NAME
    operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NAMESPACE
    certificate: $MANAGED_CLUSTER_CERTIFICATE" >> values.yaml
```

Install the Tigera Calico Enterprise operator and custom resource definitions using the Helm 3 chart:
```bash
helm install calico-enterprise tigera-operator-v3.18.2-0.tgz -f values.yaml \
  --set-file imagePullSecrets.tigera-pull-secret=<path/to/pull/secret>,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret=<path/to/pull/secret> \
  --set-file licenseKeyContent=<path/to/license/file/yaml> \
  --namespace tigera-operator --create-namespace
```

You can now monitor progress with the following command:

```bash
watch kubectl get tigerastatus
```
To install a Calico Enterprise management cluster with Helm, using a LoadBalancer service:
Meet cloud provider requirements
Ensure that you have met the requirements for your cloud provider to provision a load balancer in your environment.
For example, if you are using EKS, you must meet the requirements defined in Create a network load balancer for AWS.
Install the management cluster
Export one or more managed clusters.

Generate the base64 encoded CRT and KEY for a managed cluster:

```bash
openssl genrsa 2048 | base64 -w 0 > my-managed-cluster.key.base64
openssl req -new -key <(base64 -d my-managed-cluster.key.base64) -subj "/CN=my-managed-cluster" | \
  openssl x509 -req -signkey <(base64 -d my-managed-cluster.key.base64) -days 365 | base64 -w 0 > my-managed-cluster.crt.base64
```
Export the managed cluster variables:

```bash
export MANAGED_CLUSTER_NAME=my-managed-cluster
export MANAGED_CLUSTER_OPERATOR_NAMESPACE=tigera-operator
export MANAGED_CLUSTER_CERTIFICATE=$(cat my-managed-cluster.crt.base64)
```

Append the management cluster context to your `values.yaml`:

```bash
echo "
managementCluster:
  enabled: true
  service:
    enabled: true
    annotations:
    type: LoadBalancer
    port: 9449
    targetPort: 9449
    protocol: TCP
managedClusters:
  enabled: true
  clusters:
  - name: $MANAGED_CLUSTER_NAME
    operatorNamespace: $MANAGED_CLUSTER_OPERATOR_NAMESPACE
    certificate: $MANAGED_CLUSTER_CERTIFICATE" >> values.yaml
```

If you are using EKS, make sure your management cluster has the following annotations:
```yaml
managementCluster:
  service:
    annotations:
    - key: service.beta.kubernetes.io/aws-load-balancer-type
      value: "external"
    - key: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type
      value: "instance"
    - key: service.beta.kubernetes.io/aws-load-balancer-scheme
      value: "internet-facing"
```

Install the Tigera Calico Enterprise operator and custom resource definitions using the Helm 3 chart:
```bash
helm install calico-enterprise tigera-operator-v3.18.2-0.tgz -f values.yaml \
  --set-file imagePullSecrets.tigera-pull-secret=<path/to/pull/secret>,tigera-prometheus-operator.imagePullSecrets.tigera-pull-secret=<path/to/pull/secret> \
  --set-file licenseKeyContent=<path/to/license/file/yaml> \
  --namespace tigera-operator --create-namespace
```

You can now monitor progress with the following command:

```bash
watch kubectl get tigerastatus
```
Update the ManagementCluster address
Export the service port number:

```bash
export EXT_LB_PORT=<your-external-load-balancer-port>
```

Export the public address or host of the management cluster, in this case the load balancer's external IP (e.g., "example.com:1234" or "10.0.0.10:1234"):

```bash
export MANAGEMENT_CLUSTER_ADDR=<your-load-balancer-external-addr>:$EXT_LB_PORT
```

Replace the `address` field in the ManagementCluster resource:

```bash
kubectl patch managementcluster tigera-secure --type merge -p "{\"spec\":{\"address\":\"${MANAGEMENT_CLUSTER_ADDR}\"}}"
```
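To see exactly what that patch sends, the following sketch composes the same merge-patch payload from the exported variable (the address value here is illustrative):

```shell
# Sketch: build the JSON merge-patch body used by the kubectl patch command.
MANAGEMENT_CLUSTER_ADDR="203.0.113.10:443"
PATCH="{\"spec\":{\"address\":\"${MANAGEMENT_CLUSTER_ADDR}\"}}"
echo "$PATCH"
```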
Create an admin user and verify management cluster connection
To access resources in a managed cluster from the Calico Enterprise Manager within the management cluster, the logged-in user must have appropriate permissions defined in that managed cluster (cluster role bindings).

Create an admin user, `mcm-user`, in the default namespace with full permissions, and a token:

```bash
kubectl create sa mcm-user
kubectl create clusterrolebinding mcm-user-admin --clusterrole=tigera-network-admin --serviceaccount=default:mcm-user
kubectl create token mcm-user -n default
```
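The first two commands above correspond roughly to these declarative manifests (a sketch derived from the commands; the short-lived token is still issued imperatively with `kubectl create token`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mcm-user
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mcm-user-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tigera-network-admin
subjects:
- kind: ServiceAccount
  name: mcm-user
  namespace: default
```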
Use the generated token to connect to the UI. In the top right banner in the UI, your management cluster is displayed as the first entry in the cluster selection drop-down menu with the fixed name, management cluster.
Congratulations! You have now installed Calico Enterprise for a management cluster using the Helm 3 chart.
Next steps
Recommended
- Configure access to Calico Enterprise Manager UI
- Authentication quickstart
- Configure your own identity provider
Recommended - Networking
- The default networking is IP in IP encapsulation using BGP routing. For all networking options, see Determine best networking option.
Recommended - Security