MachineDeployments
MachineDeployments allow you to manage different groups of Nodes with different configurations. MetaKube performs rolling updates on Nodes when the configuration of a MachineDeployment changes.
Node lifecycle
Cluster API
MetaKube models the lifecycle of Nodes with the Kubernetes Cluster API, which defines the custom resources MachineDeployment, MachineSet, and Machine.
Provisioning
When a new Machine is created (e.g. during rolling update), MetaKube will:
- Create OpenStack resources (port, server, floating IP)
- Wait until the Node comes up and becomes Ready:
    - Server boots and gets its initialization config from the OpenStack metadata service
    - Server runs initialization code and installs Kubelet
    - Kubelet gets a client certificate through the CSR API
    - Kubelet registers the Node for the first time
    - OpenStack cloud controller manager initializes the Node metadata (e.g. addresses)
    - Critical DaemonSet Pods (CNI) start running
- MetaKube removes the node.syseleven.de/not-ready:NoSchedule taint and marks the Node as Ready
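To observe provisioning, you can watch the Cluster API objects alongside the Nodes. A typical inspection (assuming kubectl access to the cluster; the Machine name is illustrative) looks like:

```shell
# List the Cluster API objects that model the node lifecycle
kubectl -n kube-system get machinedeployments,machinesets,machines

# Watch a new Node register and become Ready
kubectl get nodes --watch

# Inspect a specific Machine if provisioning seems stuck
kubectl -n kube-system describe machine my-node-1234
```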
Deletion
MetaKube may delete Machines as part of:
- Rolling updates
- Scaling down of MachineDeployments
- Deletion of MachineDeployments
- Automatic replacement, if a Node hasn't joined the cluster within 2 hours
To safely deprovision a Kubernetes Node, MetaKube will:
- Mark Machine object for deletion
- Start draining the Node, honoring potential PodDisruptionBudgets
- Clean up resources in cloud
- Delete the Machine object
This makes it safe to delete a Machine (kubectl -n kube-system delete machine my-node-1234) in order to gracefully replace a particular Kubernetes Node.
Note: If a server corresponding to a Kubernetes Node is deleted directly in OpenStack, the OpenStack cloud controller manager (node controller) deletes the Node object, and MetaKube cleans up the corresponding Machine and creates a replacement.
Rolling Update
Any change to a MachineDeployment's machine template will trigger a rolling update of the nodes.
MetaKube will gradually scale up the new MachineSet and wait until the Nodes join the cluster and become Ready before scaling down the old MachineSet.
You can manually trigger a rollover with this kubectl command:

```shell
kubectl -n kube-system patch machinedeployment $machinedeployment -p '{"spec": {"template": {"metadata": {"annotations": {"kubectl.kubernetes.io/restartedAt": "'$(date +"%Y-%m-%dT%T.%3N%z")'"}}}}}' --type merge
```
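You can follow the rollout by watching the MachineSets scale (the MachineSet names are generated from the MachineDeployment):

```shell
# The new MachineSet scales up while the old one scales down
kubectl -n kube-system get machinesets --watch
```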
Configuration
Note: Not all of these options are available in all clients (UI, Terraform); clients may instead use default values.
MachineDeployment
| Field | Type | Explanation |
|---|---|---|
| spec.replicas | integer | MetaKube will maintain this number of nodes for this MachineDeployment. See also autoscaling as an alternative to a static replica count. |
| spec.strategy.rollingUpdate.maxSurge | integer or percentage | During a rolling update, MetaKube may provision this many nodes above spec.replicas. |
| spec.strategy.rollingUpdate.maxUnavailable | integer or percentage | During a rolling update, MetaKube may remove this many nodes below spec.replicas before new nodes become Ready. |
| spec.template | MachineTemplate | Template for the machines that get created. Note: Any change to the machine template will trigger a rolling update! |
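The fields above fit together as in this abbreviated manifest (a sketch; the name and values are illustrative, and the apiVersion may differ depending on your MetaKube version):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: my-machinedeployment
  namespace: kube-system
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    # MachineTemplate, see below
    ...
```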
MachineTemplate
| Field | Type | Explanation |
|---|---|---|
| metadata.labels | map(string) | Labels that are added to the nodes |
| spec.versions.kubelet | string (semver) | The version of Kubelet. Should ideally be kept the same as the control plane version. Changing this will replace existing nodes with new ones. You can upgrade the control plane and Kubelet versions independently, as long as it doesn't violate the Kubernetes version skew policy. See supported Kubernetes versions for what Kubelet versions are supported for a given control plane version. |
| spec.taints | list(taint), see docs | Taints that are added to the nodes |
| spec.providerSpec.value.cloudProviderSpec | CloudProviderSpec | Cloud provider specific configuration |
| spec.providerSpec.value.operatingSystem | string | Distro of the operating system. Either ubuntu or flatcar. |
| spec.providerSpec.value.operatingSystemSpec | OperatingSystemSpec | Operating system specific configuration |
| spec.providerSpec.value.sshPublicKeys | list(string) | Authorized SSH public keys initially installed on the machine. Once initialized, the keys are managed by the ssh-key-agent DaemonSet Pod. |
CloudProviderSpec (OpenStack)
| Field | Type | Explanation |
|---|---|---|
| flavor | string | Name of the Flavor to use for the OpenStack server. See the OpenStack cloud documentation for a list of available Flavors and here for limitations. |
| floatingIpPool | string | Name of the external network that floating IPs for servers are allocated from |
| image | string | OpenStack operating system image name or ID |
| rootDiskSizeGB | number | Capacity of the root disk in GB. Not available for localstorage flavors. Note: The replacement disk will incur the standard fee for network storage. |
| securityGroups | list(string) | Names of security groups that are associated with the server port |
| serverGroupID | string | ID of the server group the server belongs to. Defaults to the cluster-wide shared server group, see topology |
| subnet | string | ID of the subnet the server port is allocated in |
| tags | list(string) | Additional tags for the server |
OperatingSystemSpec (Ubuntu)
| Field | Type | Explanation |
|---|---|---|
| distUpgradeOnBoot | bool | Whether to upgrade packages and reboot (if necessary) on first boot |
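Putting the template options together, a machine template for an Ubuntu node on OpenStack might look like this (a sketch; all names, IDs, versions, and sizes are illustrative):

```yaml
template:
  metadata:
    labels:
      role: worker            # illustrative node label
  spec:
    versions:
      kubelet: 1.28.5         # illustrative Kubelet version
    providerSpec:
      value:
        operatingSystem: ubuntu
        operatingSystemSpec:
          distUpgradeOnBoot: false
        cloudProviderSpec:
          flavor: m1.small          # illustrative flavor name
          image: "Ubuntu 22.04"     # illustrative image name
          rootDiskSizeGB: 50
          floatingIpPool: ext-net   # illustrative external network name
          securityGroups:
            - my-security-group
        sshPublicKeys:
          - "ssh-ed25519 AAAA... user@example"
```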