Profiles

Not all of a system's resources are available for Kubernetes Pods. Kubernetes reserves a certain amount of CPU and memory for:

  • the system (kernel & daemons)
  • Kubelet & container runtime
  • an eviction threshold (to be able to react to memory pressure)

The amount of reserved resources is managed through MetaKube Node Profiles.

The resource reservation of a given profile depends on the Machine's cloud flavor. MachineDeployments that don't set a profile explicitly use the default profile.

Profile                                     | CPU formula            | Memory formula
metakube-2025-01, metakube-latest (default) | 20m + MaxPods * 2m/Pod | 190MiB + MaxPods * 6.5MiB/Pod
metakube-legacy (deprecated)                | 200m                   | 300MiB

Additionally, MetaKube reserves 200m CPU and 500MiB of memory for the system regardless of flavor.

The eviction threshold is 100MiB.

Flavors

MetaKube scales the Pod limit with the flavor's available memory, which in turn determines the amount of reserved resources.

Available memory | Example Flavors | Pod limit | Reserved CPU (kube + system) | Reserved Memory (kube + system + eviction)
8GiB             | SCS-2V-8-50n    | 50        | 120m + 200m                  | 515MiB + 500MiB + 100MiB
16GiB            | SCS-4V-16-*     | 70        | 160m + 200m                  | 645MiB + 500MiB + 100MiB
32GiB            | SCS-8V-32-*     | 90        | 200m + 200m                  | 775MiB + 500MiB + 100MiB
> 32GiB          | upon request    | 110       | 240m + 200m                  | 905MiB + 500MiB + 100MiB
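
The reserved values in the table above follow directly from the profile formulas and the flat system and eviction reservations. As an illustration, here is a minimal sketch (Python; the constant and function names are made up for this example and are not part of MetaKube) that reproduces the Reserved CPU and Reserved Memory columns:

# Sketch: recompute the reserved resources for the default (metakube-latest) profile.
KUBE_CPU_BASE_M = 20        # 20m base CPU for Kubelet & container runtime
KUBE_CPU_PER_POD_M = 2      # plus 2m per Pod of the Pod limit (MaxPods)
KUBE_MEM_BASE_MIB = 190     # 190MiB base memory
KUBE_MEM_PER_POD_MIB = 6.5  # plus 6.5MiB per Pod
SYSTEM_CPU_M = 200          # flat system reservation (kernel & daemons)
SYSTEM_MEM_MIB = 500
EVICTION_MEM_MIB = 100      # eviction threshold

def reserved(max_pods: int) -> tuple[int, float]:
    """Return the total reserved CPU (millicores) and memory (MiB) for a given Pod limit."""
    cpu_m = KUBE_CPU_BASE_M + max_pods * KUBE_CPU_PER_POD_M + SYSTEM_CPU_M
    mem_mib = (KUBE_MEM_BASE_MIB + max_pods * KUBE_MEM_PER_POD_MIB
               + SYSTEM_MEM_MIB + EVICTION_MEM_MIB)
    return cpu_m, mem_mib

for pods in (50, 70, 90, 110):
    cpu_m, mem_mib = reserved(pods)
    print(f"MaxPods={pods}: {cpu_m}m CPU, {mem_mib:.0f}MiB memory unavailable to Pods")
# MaxPods=50: 320m CPU, 1115MiB memory unavailable to Pods
# MaxPods=70: 360m CPU, 1245MiB memory unavailable to Pods
# ...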

Tuning

The formulas used for the metakube-latest profile are derived from experiments and are intended to strike a balance between stability and practicality.

The experiments were conducted with the following assumptions:

  • Container to Pod ratio of 1.25

This accounts for typical use of sidecar containers. At higher ratios, the container runtime may require additional memory.

  • Full nodes

The reserved resources should be sufficient when packing the node with as many Pods as the limit allows.

  • Minimal Pod churn

Kubelet and the container runtime are particularly busy (CPU-wise) during Pod create/delete events. Since they are "idle" most of the time and may also use spare CPU time, we decided not to reserve more CPU than necessary. Frequent Pod churn may increase the time Pods take to become Running or to be fully deleted.

These assumptions hold true for most use cases because usually not all these thresholds are crossed at once.

If your use case differs drastically from these assumptions, you may need to adjust the reserved resources to keep your nodes stable.

Change MaxPods

You may achieve more economical node utilization by setting a higher or lower Pod limit.

A higher Pod limit means you can pack more Pods onto a single node.

A lower Pod limit means fewer resources are reserved, leaving more of them allocatable for Pods.

To change the Pod limit, set the following Machine annotation in your MachineDeployment:

kind: MachineDeployment
spec:
  template:
    metadata:
      annotations:
        kubelet-config.machines.metakube.syseleven.de/MaxPods: "30"
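
For example, with MaxPods set to "30" the default profile reserves 20m + 30 * 2m = 80m of CPU and 190MiB + 30 * 6.5MiB = 385MiB of memory for Kubelet and the container runtime, on top of the flat system reservation and eviction threshold described above.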

Change profiles

Warning: You should only change the profile to opt in or out when migrating from legacy profiles.

To configure a different profile, set the following Machine annotation in your MachineDeployment:

kind: MachineDeployment
spec:
  template:
    metadata:
      annotations:
        kubelet-config.machines.metakube.syseleven.de/KubeReservedProfile: "metakube-latest"
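
The profile names are the ones listed in the table above: metakube-2025-01 and metakube-latest (which currently use the same formulas), and the deprecated metakube-legacy with its flat 200m / 300MiB reservation.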

Change reserved resources directly

Warning: Do not change these settings unless absolutely necessary! Reserving too few resources may lead to node instability.

To change the reservation for individual resources, set the following Machine annotation in your MachineDeployment:

kind: MachineDeployment
spec:
  template:
    metadata:
      annotations:
        kubelet-config.machines.metakube.syseleven.de/KubeReserved: "cpu=500m,memory=1Gi"
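
In this example, 500m of CPU and 1Gi of memory are reserved for Kubelet and the container runtime instead of the values derived from the profile formulas; the annotation value is a comma-separated list of resource=quantity pairs.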
