CCE Update: Kubernetes 1.15

Flexible Engine
Release Notes

December 15, 2020

Cloud Container Engine is Flexible Engine’s container management service. You can find more information regarding CCE here, and CCE technical documentation here.

Support for Kubernetes 1.15.6 has been added.

Support for CentOS 7.5 in Kubernetes 1.15 clusters
  • The cluster's node image can be created using CentOS 7.5.

Autoscaler feature enhancements
  • A node scaling policy can specify the node pools it operates on.
  • Two different types of rules are supported in a node scaling policy:
  • Metric-based rules are triggered when a metric crosses a threshold.
  • Schedule-based rules are triggered at a given time, either once or periodically.

Support for a new type of secret in Kubernetes 1.15 clusters
  • The secret generated by default switches from AK/SK to AK/SK/security-token.

Support for CSI storage management in Kubernetes 1.15 clusters
  • Newly created CCE clusters support the new Kubernetes 1.15 API, compatible with the community version.
  • Support for the Container Storage Interface (CSI) is added in Kubernetes 1.15, compatible with community APIs.

Support for volume snapshots
  • Volume snapshot functionality is supported in storage management.

VPC DNS used as the node's DNS configuration
  • The VPC DNS address is used as the node's DNS configuration.
  • The CoreDNS entry is removed from the node's /etc/resolv.conf.
  • This speeds up DNS resolution on nodes and avoids global DNS failures caused by CoreDNS outages.

Support for cross-AZ deployment of workloads that mount EVS storage
  • Workloads that mount EVS storage can be deployed across Availability Zones.

Feature limits:
1. The new version is only available when creating new clusters.
2. Data on the node, such as user-installed software or configuration, will be cleared.
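
As an illustration of the CSI and volume snapshot features above, here is a minimal sketch of a CSI-provisioned PersistentVolumeClaim and a snapshot of it. The storage class name "csi-disk" and the snapshot class name are assumptions, not confirmed CCE values; check the classes actually available in your cluster with "kubectl get storageclass". Upstream Kubernetes 1.15 exposes volume snapshots through the alpha snapshot.storage.k8s.io/v1alpha1 API; the exact API version exposed by CCE may differ.

```yaml
# A PersistentVolumeClaim provisioned through the CSI driver.
# "csi-disk" is an assumed storage class name for EVS-backed volumes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: evs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-disk
  resources:
    requests:
      storage: 10Gi
---
# A snapshot of that volume (alpha API in upstream Kubernetes 1.15).
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: evs-pvc-snapshot
spec:
  snapshotClassName: csi-disk-snapshot   # hypothetical snapshot class name
  source:
    name: evs-pvc
    kind: PersistentVolumeClaim
```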

CCE supports cluster upgrades from 1.11 to 1.13

CCE now supports upgrading clusters from version 1.11 to version 1.13.
Upgrade principle:
The upgrade feature brings worker nodes to version 1.13 by reinstalling their operating system. The master node(s) are upgraded to version 1.13 through a component upgrade; their operating system is not reinstalled. This follows the community upgrade path from Kubernetes 1.11 to 1.13.
In addition, if there are multiple nodes in the cluster, users can specify that certain nodes be upgraded in batches.
Warning: The cluster’s kube-dns will be uninstalled and replaced with CoreDNS.
Feature usage:
An Upgrade button has been added to the operation list of Kubernetes 1.11 clusters. Use this option to enter the upgrade process.
After the pre-check completes, you can choose to upgrade specific nodes or all nodes.
You can find all related information in our Help Center, in particular in the dedicated “Cluster upgrade” section.
Feature limit:
1. Upgrading through the API is not supported.
2. Data on the node, such as user-installed software or configuration, will be cleared.
3. If all nodes are upgraded at the same time, workloads will be interrupted during the upgrade.
4. Worker nodes' custom labels are not retained after the upgrade.
5. IP addresses of containers on worker nodes change after the upgrade.
6. A failed upgrade requires separate recovery.
7. Once the upgrade has started, it cannot be canceled or stopped.

Autoscaler enhancement

This release enhances the autoscaler, which now allows you to:

  • Send an alert to AOM when a scale-up fails
  • Get a clear error message when a scale-up fails
  • Scale up with as many nodes as a node pool can still accommodate when the pool does not have enough node quota for the full scale-up

Support Horizontal Pod Autoscaler for workload

Horizontal Pod Autoscaler (HPA) is supported for workloads, allowing you to:

  • Set the minimum and maximum number of pods for the HPA
  • Set the cooldown period of the HPA
  • Set the scale-down and scale-up percentages of the HPA
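
As a sketch, an HPA for a Deployment on Kubernetes 1.15 can be declared with the autoscaling/v2beta2 API as below. The target workload name and metric values are hypothetical; the cooldown period and scale-up/scale-down percentages mentioned above are configured through CCE rather than in this manifest.

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx            # hypothetical target workload
  minReplicas: 2           # minimum number of pods
  maxReplicas: 10          # maximum number of pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale when average CPU usage exceeds 70%
```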

Support for downloading the kubectl configuration file with different validity periods based on user type

With this feature, you can download the kubectl configuration file from the console with different validity periods.

Feature limits:

  • The validity period is limited to 1 month for sub-users
  • The API is not available
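
The downloaded file is a standard kubeconfig. Its structure looks roughly like the sketch below; all names, the endpoint, and the credential fields are placeholders, and the embedded credentials stop working once the validity period expires.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: cce-cluster                    # placeholder cluster name
  cluster:
    server: https://192.0.2.10:5443    # placeholder API server endpoint
    certificate-authority-data: <base64-encoded CA>
users:
- name: cce-user                       # placeholder user name
  user:
    client-certificate-data: <base64-encoded certificate>  # tied to the validity period
    client-key-data: <base64-encoded key>
contexts:
- name: default
  context:
    cluster: cce-cluster
    user: cce-user
current-context: default
```

Save it as $HOME/.kube/config, or point the KUBECONFIG environment variable at it, and kubectl will use it to reach the cluster.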

Support for automatically choosing the master's AZ

The Availability Zone where the master is located is selected automatically based on remaining resources. This choice is transparent: when a cluster is created, a default Availability Zone is recommended for the master. If the default setting is not appropriate, the user can select another Availability Zone.

Support for setting an expansion priority for node pools

When the autoscaler needs to scale up, it preferentially selects the node pool with the highest priority.

Node pools add a reminder for sold-out flavors

Node pools now display a clear reminder when a flavor is sold out.

Support for node migration between node pools

Nodes can be migrated from one node pool to another.

Support for ECS capacity checks when a node pool is scaled up

ECS capacity in an Availability Zone is checked when a node pool is scaled up. Before extending a CCE node pool, CCE checks whether there are enough resources in the Availability Zone; otherwise, an error message is returned.