KaaS – Deploy workload cluster
Workload cluster deployment
Prerequisites
- A virtual datacenter
- A routed network with internet access (SNAT)
- The LoadBalancing feature enabled (a request must be made)
- A bootstrap cluster deployed
- The command-line tools (see KaaS – Tooling)
- Internet DNS resolution
- For internal IaaS only: source NAT for API access
All of these steps must be performed while connected to the management cluster.
NOTE: If you do not yet have a management cluster in your environment, please follow the KaaS – Information Map page before continuing.
To verify this, you can use this command:
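For example, a quick check could look like the following (the context and node names you see will be your own):

```shell
# Show which context kubectl is currently pointing at
kubectl config current-context

# List the nodes; they should be the management-cluster nodes
kubectl get nodes
```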
Deployment
Preparing the configuration file for the cluster to be created
The clusterctl tool relies on template files (provided by VMware in our case) and on a configuration file containing all the parameters required for creation.
The example configuration file is located at: ~/.cluster-api/clusterctl.yaml
We therefore use it as the basis for our 'wrk01' cluster.
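As a sketch, copy the example file to a per-cluster configuration and edit the copy (the file name wrk01-config.yml is our choice here, not mandated):

```shell
# Start from the example shipped with clusterctl; edit the copy, not the original
cp ~/.cluster-api/clusterctl.yaml ~/wrk01-config.yml
```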
Fill in the ***-config.yml file copied previously
The ***-config.yml file provides all the information Cluster API needs to deploy a cluster.
The other parameters can be ignored.

| Parameter | Value | Example |
|---|---|---|
| VCD_SITE | VCD endpoint in the form https://VCD_HOST, with no trailing '/' | https://console1.cloudavenue.orange-business.com |
| VCD_ORGANIZATION | Organization name associated with the user logged in to deploy clusters | ORG1CUSTOMER1 |
| VCD_ORGANIZATION_VDC | Name of the VDC where the cluster will be deployed | MYPRODVDC1 |
| VCD_ORGANIZATION_VDC_NETWORK | OVDC network to use for the cluster deployment | PRODNETWORK01 |
| VCD_CATALOG | VCD catalog name where the templates are stored | cse-tkgm-template |
| VCD_TEMPLATE_NAME | VM template name to use (can be found in the vCloud Director catalog) | ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933 |
| VCD_REFRESH_TOKEN_B64 | vCloud Director token to use, see below to generate it | The refreshToken must be provided base64-encoded. Command for this: `echo -n ajJdhYghdUgzj \| base64` |
| VCD_CONTROL_PLANE_SIZING_POLICY | Compute policy to use for the control-plane nodes; leave blank to use the default one | Medium |
| VCD_CONTROL_PLANE_STORAGE_PROFILE | Name of the storage policy where the control-plane node(s) will be deployed | gold (this is an example; refer to the storage policies available in your VDC) |
| VCD_CONTROL_PLANE_PLACEMENT_POLICY | Name of the placement policy where the control-plane node(s) will be deployed | |
| VCD_WORKER_SIZING_POLICY | Name of the compute policy to use for the worker nodes | Medium |
| VCD_WORKER_STORAGE_PROFILE | Name of the storage policy where the worker node(s) will be deployed | gold (this is an example; refer to the storage policies available in your VDC) |
| VCD_WORKER_PLACEMENT_POLICY | Name of the placement policy where the worker node(s) will be deployed | |
| VCD_RDE_ID | Cluster ID, only when importing a cluster created outside this environment | Comment the line out with a # or simply delete it |
| VCD_VIP_CIDR | CIDR of the external network for creating load-balancer services. You can modify or configure this parameter later | |
| CLUSTER_NAME | Name of the cluster to create | wrk01 |
| TARGET_NAMESPACE | Namespace of the management cluster where the cluster objects will be created | default |
| CONTROL_PLANE_MACHINE_COUNT | Number of control-plane nodes (must be an odd number) | 1 |
| WORKER_MACHINE_COUNT | Number of worker nodes | 1 |
| KUBERNETES_VERSION | Version of Kubernetes to install; it can be read from the VM template name | v1.22.9+vmware.1 |
| SSH_PUBLIC_KEY | SSH key used to connect to the nodes. Must be 2048 bits and in OpenSSH format (see KaaS – SSHKey for node access). DO NOT FORGET TO ENCLOSE THE VALUE BETWEEN TWO "" | |
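Put together, a filled-in configuration could look like the excerpt below. All values are the illustrative ones from the table above; replace them with your own:

```yaml
# Excerpt of wrk01-config.yml (illustrative values only)
VCD_SITE: "https://console1.cloudavenue.orange-business.com"
VCD_ORGANIZATION: "ORG1CUSTOMER1"
VCD_ORGANIZATION_VDC: "MYPRODVDC1"
VCD_ORGANIZATION_VDC_NETWORK: "PRODNETWORK01"
VCD_CATALOG: "cse-tkgm-template"
VCD_TEMPLATE_NAME: "ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933"
VCD_REFRESH_TOKEN_B64: "YWpKZGhZZ2hkVWd6ag=="   # output of: echo -n <token> | base64
CLUSTER_NAME: "wrk01"
TARGET_NAMESPACE: "default"
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
KUBERNETES_VERSION: "v1.22.9+vmware.1"
SSH_PUBLIC_KEY: "ssh-rsa AAAA... user@host"
```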
Generate the YAML creation file.
Apply the YAML file to initiate cluster creation.
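As a sketch, these two steps can be run as follows. The template file path is an assumption, and depending on your clusterctl version the subcommand may be `clusterctl config cluster` instead of `clusterctl generate cluster`:

```shell
# Render the cluster manifest from the prepared configuration file
clusterctl generate cluster wrk01 \
  --config ~/wrk01-config.yml \
  --from ~/.cluster-api/cluster-template.yaml \
  > wrk01.yaml

# Apply it on the management cluster to start provisioning
kubectl apply -f wrk01.yaml
```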
Then, wait for the cluster to be created:
- Verify that the Cluster object is in the “Provisioned” phase using the command: `kubectl get cluster`
- Wait for all master nodes to be in the “READY” status using the command: `kubectl get kubeadmcontrolplane`
Thanks to the ClusterResourceSet, the CNI, CSI, and CPI pods are deployed automatically. However, they are missing some configuration elements, which we will add now; this is why these pods are not yet running on the newly created cluster.
To do this, you must be connected to the management cluster, not the new cluster.
Modify the first six commands provided on the right and apply them all.
– CLUSTERNAME: name of the cluster you are currently creating.
– MGT: path to the kubeconfig of your management cluster.
– VCDHOST: URL of the vCloud Director console.
– ORG: name of your organization.
– OVCD: name of the virtual datacenter where the cluster is located.
– ONETWORK: name of the VCD network where the cluster is located.
After a few minutes, all PODs should be in the RUNNING phase on the ‘wrk01’ cluster.
Create the configmaps.
Use the two commands provided here to create the required configmaps.
You can customize them, including choosing a vipSubnet.
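As an illustration of what such a configmap contains, here is a sketch of the CPI configuration using the placeholders defined above. The exact keys and layout depend on the CPI version installed on your cluster, so treat the field names as assumptions to verify against your manifests:

```yaml
# vcloud-ccm-config.yaml (sketch; verify keys against your CPI version)
vcd:
  host: https://VCDHOST        # vCloud Director console URL
  org: ORG                     # organization name
  vdc: OVCD                    # virtual datacenter name
loadbalancer:
  network: ONETWORK            # VCD network where the cluster is located
  vipSubnet: ""                # optional external-network CIDR; see the dedicated section
clusterid: CLUSTERNAME         # cluster identifier expected by the CPI (verify for your version)
```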
After a few moments, all pods should be in the "Running" state on the cluster, and the MachineDeployment objects on the management cluster should show the "READY" status and "Running" phase.
Retrieve the kubeconfig file for the new cluster
Example :
- Switch to the management cluster with kctx or any other method you prefer
- Check that you are connected to the correct cluster with the kubectl get nodes command
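A common way to retrieve the kubeconfig is through clusterctl (cluster name and namespace are the ones used earlier on this page; adapt them to yours):

```shell
# From the management-cluster context, export the new cluster's kubeconfig
clusterctl get kubeconfig wrk01 -n default > wrk01.kubeconfig

# Use it to talk to the new cluster
kubectl --kubeconfig wrk01.kubeconfig get nodes
```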
Delete workload cluster
While connected to the management cluster, run the kubectl delete cluster command:
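For example, for the 'wrk01' cluster created above (namespace as used earlier on this page):

```shell
# Deleting the Cluster object tears down the workload cluster and its resources
kubectl delete cluster wrk01 -n default
```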
Create a Storage Class
With KaaS Standard you can automatically consume storage from your virtual datacenter for your pods' Persistent Volumes.
To do so, create a Kubernetes StorageClass linked to a VCD storage policy; this way you can use storage tiers with different performance characteristics.
Create a file named mystorageclass.yaml, copy into it the content provided on the right, and replace the storage-policy placeholder with the storage policy of your choice.
The example provided creates a default storage class; if you do not want this StorageClass to be the default, simply change the annotation storageclass.kubernetes.io/is-default-class to false.
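A sketch of such a file, assuming the VCD named-disk CSI provisioner; the provisioner string and parameter names should be checked against the CSI version installed on your cluster, and 'gold' is an illustrative storage policy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vcd-disk-gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # set to "false" to not make it the default
provisioner: named-disk.csi.cloud-director.vmware.com
reclaimPolicy: Delete
parameters:
  storageProfile: "gold"   # VCD storage policy name; replace with yours
  filesystem: "ext4"
```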
Set the vipSubnet to use for external service
The vipSubnet corresponds to the vCloud Director external network CIDR to use. When a load balancer is created, an IP is taken from this CIDR and points to the LB interface through a NAT rule.
To choose the CIDR to use on a cluster, or simply to change it, follow the steps below.
1. Extract the current configmap in place
2. Get the current CPI version in use
3. Update the vipSubnet in the generated file
4. Delete the configmap and the ccm deployment
5. Recreate the ccm with the correct CPI version

Depending on the version installed on your cluster, the cloud-director-ccm.yaml URL can change, so compare with the version obtained in step 2 and adapt the URL to your version.
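The steps above could look like the following sketch. The configmap and deployment names follow the VMware cloud-provider-for-cloud-director conventions and must be checked against your installation; the CIDR and manifest URL are illustrative:

```shell
# 1. Extract the current CPI configmap
kubectl -n kube-system get configmap vcloud-ccm-configmap -o yaml > ccm-config.yaml

# 2. Note the CPI version used by the ccm deployment (from its container image tag)
kubectl -n kube-system get deployment vmware-cloud-director-ccm \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# 3. Edit vipSubnet in ccm-config.yaml (e.g. vipSubnet: "203.0.113.0/28")

# 4. Delete the old configmap and the ccm deployment
kubectl -n kube-system delete configmap vcloud-ccm-configmap
kubectl -n kube-system delete deployment vmware-cloud-director-ccm

# 5. Recreate them, adapting the manifest URL to the version found in step 2
kubectl apply -f ccm-config.yaml
kubectl apply -f https://raw.githubusercontent.com/vmware/cloud-provider-for-cloud-director/<your-version>/manifests/cloud-director-ccm.yaml
```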
Retrieve refreshToken
- Open the vCloud Director console with the account you want to use for cluster creation (a kind of service account)
- At the top right of the page, click the vertical three-dot button
- Click "User preferences"
- In the "Access Tokens" section, click "New"
- Give the token a client name, then click "Create"
- Copy the generated token and keep it in a safe place
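To produce the base64 value expected by the VCD_REFRESH_TOKEN_B64 parameter, encode the copied token as below. 'ajJdhYghdUgzj' is the placeholder token used earlier on this page; note that the token is printed without a trailing newline, so the newline does not get encoded into the value:

```shell
# Encode the refresh token for VCD_REFRESH_TOKEN_B64
printf '%s' 'ajJdhYghdUgzj' | base64
# → YWpKZGhZZ2hkVWd6ag==
```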