kube-scheduler uses various algorithms to pick the best node on which to run a Pod. When a new Pod comes up for scheduling, the scheduler makes its decision based on the cluster's resource state at that moment. But a Kubernetes cluster changes over time: when a node is drained for maintenance, its Pods are evicted onto other nodes, and once the maintenance is finished those Pods do not automatically move back. The cluster thus ends up in an unbalanced state, and descheduler can be used to rebalance it.
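descheduler's behavior is driven by a policy file, typically mounted from a ConfigMap. Below is a minimal policy sketch covering the four strategies that show up in the logs later in this section; the 20% underutilization thresholds match what the log prints, while the targetThresholds and podRestartThreshold values are assumed placeholders to tune for your own cluster:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict duplicate Pods of the same owner from one node (duplicates.go in the logs)
  "RemoveDuplicates":
    enabled: true
  # Evict Pods that violate inter-pod anti-affinity (pod_antiaffinity.go)
  "RemovePodsViolatingInterPodAntiAffinity":
    enabled: true
  # Evict Pods that restart too often (toomanyrestarts.go); threshold is a placeholder
  "RemovePodsHavingTooManyRestarts":
    enabled: true
    params:
      podsHavingTooManyRestarts:
        podRestartThreshold: 100
        includingInitContainers: true
  # Move Pods off overutilized nodes toward underutilized ones (lownodeutilization.go);
  # the thresholds of 20 match the "Criteria for a node under utilization" log line,
  # the targetThresholds of 50 are assumed values
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds:
          "cpu": 50
          "memory": 50
          "pods": 50
```

In this cluster descheduler runs as a CronJob, so the policy is evaluated on a fixed schedule rather than continuously: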
➜ descheduler git:(master) k get cronjob -A
NAMESPACE     NAME                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
kube-system   descheduler-cronjob   */2 * * * *   True      0        3d2h            8d
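Note the SUSPEND column: the CronJob is currently suspended, so no new runs are being created despite the */2 schedule. It could be resumed with a patch like the following (name and namespace taken from the output above):

```bash
kubectl patch cronjob descheduler-cronjob -n kube-system -p '{"spec":{"suspend":false}}'
```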
➜ descheduler git:(master) k get pod -A | grep desc
kube-system   descheduler-cronjob-1606710120-np5xc   0/1   Completed   0   3d2h
➜ descheduler git:(master) k logs -f descheduler-cronjob-1606710120-np5xc -n kube-system
I1130 04:22:07.533753 1 node.go:45] node lister returned empty list, now fetch directly
I1130 04:22:07.541476 1 toomanyrestarts.go:73] Processing node: cn-hongkong.10.0.3.20
I1130 04:22:07.565399 1 toomanyrestarts.go:73] Processing node: cn-hongkong.10.0.3.21
I1130 04:22:07.577738 1 duplicates.go:73] Processing node: "cn-hongkong.10.0.3.20"
I1130 04:22:07.593279 1 duplicates.go:73] Processing node: "cn-hongkong.10.0.3.21"
I1130 04:22:07.632923 1 lownodeutilization.go:203] Node "cn-hongkong.10.0.3.20" is appropriately utilized with usage: api.ResourceThresholds{"cpu":32.5, "memory":29.3980074559352, "pods":17.1875}
I1130 04:22:07.632997 1 lownodeutilization.go:203] Node "cn-hongkong.10.0.3.21" is appropriately utilized with usage: api.ResourceThresholds{"cpu":30, "memory":23.303233323623655, "pods":15.625}
I1130 04:22:07.633016 1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 20, Mem: 20, Pods: 20
I1130 04:22:07.633028 1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I1130 04:22:07.633042 1 pod_antiaffinity.go:72] Processing node: "cn-hongkong.10.0.3.20"
I1130 04:22:07.647199 1 pod_antiaffinity.go:72] Processing node: "cn-hongkong.10.0.3.21"
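Both nodes sit above all three 20% criteria (e.g. 32.5% CPU on 10.0.3.20), so LowNodeUtilization finds no underutilized node and evicts nothing; as the log itself suggests, the thresholds would need tuning for this strategy to act. After editing the policy, a one-off run can be triggered from the suspended CronJob for a quick check (the job name here is arbitrary):

```bash
kubectl create job --from=cronjob/descheduler-cronjob descheduler-manual-run -n kube-system
```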