Cloud Container Engine – New Kubernetes 1.19 version and migration tool


Flexible Engine
Release Notes

September, 2021


Flexible Engine’s container management service, Cloud Container Engine (CCE), now supports Kubernetes 1.19.10 clusters. As of today, you can create Kubernetes clusters in version 1.19, or upgrade your version 1.17 clusters to 1.19 with the provided migration tool.

1.1 New Features

1. Kubernetes 1.19.10 support: CCE clusters now support Kubernetes version 1.19.10. Constraints: CentOS 7.6 uses kernel 3.10.0-1127.19.1.el7.x86_64; EulerOS 2.5 uses kernel 3.10.0-862.14.1.5.h470.eulerosv2r7.x86_64.

2. Cluster upgrades from Kubernetes 1.17 to 1.19

As in previous CCE versions, CCE supports cluster version upgrades using the standard “Replace upgrade” feature:

  • The latest worker node image is used to reset the node OS.
  • This is the fastest upgrade mode and requires few manual interventions.
  • Service pods and networks are impacted during the upgrade because nodes are replaced.
  • Data and configurations on the node will be lost, and services will be interrupted for a period of time.

  • Upgraded clusters cannot be rolled back. Therefore, perform the upgrade during off-peak hours to minimize the impact on your services.
  • Before upgrading a cluster, learn about the features and differences of each cluster version in Kubernetes Release Notes to avoid exceptions due to the use of an incompatible cluster version.
  • Do not shut down or restart nodes during cluster upgrade. Otherwise, the upgrade fails.
  • Before upgrading a cluster, disable auto scaling policies to prevent node scaling during the upgrade. Otherwise, the upgrade fails.
  • If you locally modify the configuration of a cluster node, the cluster upgrade may fail or the configuration may be lost after the upgrade. Therefore, modify the configurations on the CCE console (cluster or node pool list page) so that they will be automatically inherited during the upgrade.
  • Before upgrading the cluster, check whether the cluster is healthy. If the cluster is abnormal, you can try to rectify the fault. 
  • To ensure data security, you are advised to back up data before upgrading the cluster. Do not perform any operations on the cluster during the upgrade.
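Because a replace upgrade drains and recreates worker nodes, workloads running multiple replicas can limit how many pods are evicted at once with a standard Kubernetes PodDisruptionBudget. A minimal sketch (the application name, namespace, and threshold are illustrative assumptions, not part of CCE’s tooling):

```yaml
# Illustrative PodDisruptionBudget: keeps at least 2 replicas of a
# hypothetical "my-app" workload available while nodes are drained.
# policy/v1beta1 is the PDB API version served by Kubernetes 1.19.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: default
spec:
  minAvailable: 2          # never evict below 2 running replicas
  selector:
    matchLabels:
      app: my-app          # must match the workload's pod labels
```

This only protects workloads with more than `minAvailable` replicas; single-replica services will still see an interruption when their node is replaced.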
   

This new version of CCE also introduces a brand-new Beta upgrade mode called “In-place Rolling Upgrade”. It allows users to upgrade their clusters without any service interruption:

  • Kubernetes components, network components, and CCE management components will be upgraded on nodes. 
  • During the upgrade, service pods and networks will not be affected, and all existing nodes are labeled SchedulingDisabled. 
  • After the upgrade is complete, you can use these nodes as normal.

Advantage: Users do not need to migrate services, which ensures service continuity.

An in-place upgrade does not upgrade the node OS. If you want to upgrade the OS, clear (drain) the node after the cluster upgrade is complete, then reset the node to upgrade the OS to the new version.

 

NOTE: This feature is currently in Beta. If you would like to test it, please submit a ticket on our Service Desk.

3. m6 ECS flavor support: CCE cluster versions 1.17 and 1.19 now support m6 flavor ECS instances as worker nodes.
4. Agency binding by API: agency binding is now supported when using the API to create nodes.

 
5. Pod security policies enabled in version 1.19: by default, the PSP admission controller is enabled for clusters of v1.19, and a global default PSP named psp-global is created. You can modify the default policy (but not delete it). You can also create a PSP and bind it to the RBAC configuration.
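The shape of a custom PSP plus the RBAC objects that grant it follows standard Kubernetes conventions; a hedged sketch (all names are placeholders, and psp-global itself is created by CCE, not shown here):

```yaml
# Illustrative restricted PodSecurityPolicy (policy/v1beta1 is the
# PSP API version served by Kubernetes 1.19)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
---
# ClusterRole that permits "use" of the policy above
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted-example
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted-example"]
    verbs: ["use"]
---
# Bind the role to all authenticated users (adjust subjects as needed)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted-example
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
```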
6. OIDC authentication support: CCE supports OIDC functionality via IAM. An OIDC protocol-based enterprise IdP is federated with IAM for identity authentication. The detailed OIDC authentication process and configuration procedure can be found at https://docs.prod-cloud-ocb.orange-business.com/usermanual/iam/en-us_topic_0274187177.html
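The CCE/IAM-specific configuration is described in the linked guide. Purely as a generic illustration of upstream Kubernetes OIDC client configuration (the issuer URL, client ID, and token below are placeholders, not Flexible Engine values):

```yaml
# Illustrative kubeconfig user entry using the built-in oidc auth provider
users:
  - name: oidc-user
    user:
      auth-provider:
        name: oidc
        config:
          idp-issuer-url: https://idp.example.com   # placeholder IdP issuer
          client-id: kubernetes                     # placeholder client ID
          id-token: <ID_TOKEN>                      # token obtained from the IdP
          refresh-token: <REFRESH_TOKEN>
```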
7. CoreDNS 1.6.5 upgrade failure fixed: previous CoreDNS versions failed to upgrade to CoreDNS 1.6.5; this CCE version fixes the issue.
8. Console optimization: the job list page now displays the job execution duration, start time, and end time. For CronJobs, users can modify YAML files in the console; configuration changes are synchronized, and real-time tests can be performed.

 
1.2 Kubernetes 1.19 API Changelog
Deprecation
  • apiextensions.k8s.io/v1beta1 has been deprecated and is replaced with apiextensions.k8s.io/v1.
  • apiregistration.k8s.io/v1beta1 has been deprecated and is replaced with apiregistration.k8s.io/v1.
  • authentication.k8s.io/v1beta1 and authorization.k8s.io/v1beta1 have been deprecated and will be removed in version v1.22. You can use version v1.
  • coordination.k8s.io/v1beta1 has been deprecated and will be removed in version v1.22. You can use coordination.k8s.io/v1.
  • For kube-apiserver, the componentstatus API is deprecated. You can use kube-apiserver and kube-scheduler/kube-controller-manager health check APIs.
  • The alpha feature ResourceLimitsPriorityFunction of the scheduler is removed due to lack of application scenarios.
  • storage.k8s.io/v1beta1 has been deprecated and is replaced with storage.k8s.io/v1.
The following kube-apiserver APIs are no longer served:
  • All resources in the apps/v1beta1 and apps/v1beta2 API versions are no longer served. You can use the apps/v1 API version.
  • DaemonSets, Deployments, and ReplicaSets in the extensions/v1beta1 API version are no longer served. You can use the apps/v1 API version.
  • NetworkPolicies in the extensions/v1beta1 API version are no longer served. You can use the networking.k8s.io/v1 API version.
  • PodSecurityPolicies in the extensions/v1beta1 API version are no longer served. Migrate to use the policy/v1beta1 API version.
  • The client label is removed from apiserver_request_total.
  • The kube-apiserver flag --basic-auth-file is replaced by --token-auth-file.
  • kubescheduler.config.k8s.io/v1alpha2 is upgraded to kubescheduler.config.k8s.io/v1beta1.
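For manifests still using the removed API versions, migration usually means switching apiVersion and adding fields the newer schema requires. An illustrative before/after for a Deployment (the name and image are placeholders):

```yaml
# Before (no longer served by the 1.19 API server):
#   apiVersion: extensions/v1beta1
#   kind: Deployment
# After: apps/v1, where spec.selector is mandatory and must match
# the pod template labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example
spec:
  replicas: 2
  selector:                 # required in apps/v1
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example  # must match spec.selector
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
```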
1.3 Fixed Bugs
SN Description
BUG2021011901283 The description of creating a NodePort and LoadBalancer (ELB) Service is incorrect.
BUG2021011201348 After the cluster is upgraded from 1.11 to 1.17, the OS version is incorrectly displayed.
BUG2021010700785 [Experience optimization] When the get interface of the add-on is invoked, the first letter of Reason in the returned status information should be lowercase.
BUG2021010700752 The Labels tab page in the node details should be changed to Tags, the same as that in the ECS.
BUG2020111601317 [Experience optimization] The message “create cluster successfully” is displayed after a node is successfully created.
BUG2021030301522 When the console language is switched to French, some of the strings are not in French.
BUG2020111601299 [Experience optimization] When the put interface of the add-on is invoked, “support” is incorrectly written as “surpport” in the returned exception information.
1.4 Addon Changelog
SN Descriptions
gpu-1.2.2 Added adaptation to the EulerOS kernel.
metrics-server-1.1.2 Updated to community version v0.4.4.
everest-1.2.9 Enhanced the reliability of PV lifecycle maintenance. Allowed clusters of v1.19.10 to use the Attach/Detach Controller to attach and detach volumes. Improved the stability of SFS file system mounting. Changed the default EVS disk type to SAS for newly created clusters.
coredns-1.17.7 Updated to community version v1.8.4.
autoscaler-1.19.6 Fixed the issue that caused scale-out to be triggered repeatedly when taints were asynchronously updated.