Scheduling Policies
A scheduling Policy can be used to specify the predicates and priorities that the kube-scheduler runs to filter and score nodes, respectively.
You can set a scheduling policy by running `kube-scheduler --policy-config-file <filename>` or `kube-scheduler --policy-configmap <ConfigMap>` and using the Policy type.
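For illustration, here is a minimal sketch of a policy file. It is a JSON document of kind `Policy` (apiVersion `v1`); the predicate and priority names come from the lists below, and the weight shown is an arbitrary example value:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "PodToleratesNodeTaints"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1}
  ]
}
```

Note that the predicates and priorities named in a policy file replace the scheduler's defaults rather than extending them, so list every plugin you want to keep.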
Predicates
The following predicates implement filtering:
PodFitsHostPorts
: Checks if a Node has free ports (the network protocol kind) for the Pod ports the Pod is requesting.

PodFitsHost
: Checks if a Pod specifies a specific Node by its hostname.

PodFitsResources
: Checks if the Node has free resources (e.g., CPU and memory) to meet the requirements of the Pod.

MatchNodeSelector
: Checks if a Pod's Node Selector matches the Node's label(s).

NoVolumeZoneConflict
: Evaluates if the Volumes that a Pod requests are available on the Node, given the failure zone restrictions for that storage.

NoDiskConflict
: Evaluates if a Pod can fit on a Node due to the volumes it requests, and those that are already mounted.

MaxCSIVolumeCount
: Decides how many CSI volumes should be attached, and whether that's over a configured limit.

CheckNodeMemoryPressure
: If a Node is reporting memory pressure, and there's no configured exception, the Pod won't be scheduled there.

CheckNodePIDPressure
: If a Node is reporting that process IDs are scarce, and there's no configured exception, the Pod won't be scheduled there.

CheckNodeDiskPressure
: If a Node is reporting storage pressure (a filesystem that is full or nearly full), and there's no configured exception, the Pod won't be scheduled there.

CheckNodeCondition
: Nodes can report that they have a completely full filesystem, that networking isn't available, or that kubelet is otherwise not ready to run Pods. If such a condition is set for a Node, and there's no configured exception, the Pod won't be scheduled there.

PodToleratesNodeTaints
: Checks if a Pod's tolerations can tolerate the Node's taints.

CheckVolumeBinding
: Evaluates if a Pod can fit due to the volumes it requests. This applies for both bound and unbound PVCs.
Priorities
The following priorities implement scoring:
SelectorSpreadPriority
: Spreads Pods across hosts, considering Pods that belong to the same Service, StatefulSet or ReplicaSet.

InterPodAffinityPriority
: Implements preferred inter-pod affinity and anti-affinity.

LeastRequestedPriority
: Favors nodes with fewer requested resources. In other words, the more Pods that are placed on a Node, and the more resources those Pods use, the lower the ranking this policy will give.

MostRequestedPriority
: Favors nodes with most requested resources. This policy will fit the scheduled Pods onto the smallest number of Nodes needed to run your overall set of workloads.

RequestedToCapacityRatioPriority
: Creates a requestedToCapacity based ResourceAllocationPriority using a default resource scoring function shape (a configuration sketch follows this list).

BalancedResourceAllocation
: Favors nodes with balanced resource usage.

NodePreferAvoidPodsPriority
: Prioritizes nodes according to the node annotation scheduler.alpha.kubernetes.io/preferAvoidPods. You can use this to hint that two different Pods shouldn't run on the same Node.

NodeAffinityPriority
: Prioritizes nodes according to node affinity scheduling preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution. You can read more about this in Assigning Pods to Nodes.

TaintTolerationPriority
: Prepares the priority list for all the nodes, based on the number of intolerable taints on the node. This policy adjusts a node's rank taking that list into account.

ImageLocalityPriority
: Favors nodes that already have the container images for that Pod cached locally.

ServiceSpreadingPriority
: For a given Service, this policy aims to make sure that the Pods for the Service run on different nodes. It favors scheduling onto nodes that don't have Pods for the Service already assigned there. The overall outcome is that the Service becomes more resilient to a single Node failure.

EqualPriority
: Gives an equal weight of one to all nodes.

EvenPodsSpreadPriority
: Implements preferred pod topology spread constraints.
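Some priorities accept arguments. As a sketch (with illustrative, not recommended, values) of how RequestedToCapacityRatioPriority can be tuned, the following policy file replaces the default scoring function shape with an explicit one; treat the exact argument field names as an assumption to verify against your scheduler version:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"}
  ],
  "priorities": [
    {
      "name": "RequestedToCapacityRatioPriority",
      "weight": 2,
      "argument": {
        "requestedToCapacityRatioArguments": {
          "shape": [
            {"utilization": 0, "score": 0},
            {"utilization": 100, "score": 10}
          ],
          "resources": [
            {"name": "cpu", "weight": 1},
            {"name": "memory", "weight": 1}
          ]
        }
      }
    }
  ]
}
```

A rising shape like this one favors bin packing (nodes with higher utilization score higher); reversing the two score values would spread Pods out instead.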
What's next
- Learn about scheduling
- Learn about kube-scheduler Configuration