Kubernetes Common Errors Summary

Environment

  • CentOS 7 (kernel 5.4.212-1)
  • Docker 20.10.18
  • containerd.io-1.6.8
  • kubectl-1.25.0
  • kubeadm-1.25.0
  • kubelet-1.25.0

Abnormal Pod Status

CrashLoopBackOff

Symptom

The Pod status shows CrashLoopBackOff

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-centos7-7cc5dc6987-jz486 0/1 CrashLoopBackOff 8 (111s ago) 17m

View the Pod details

$ kubectl describe pod test-centos7-7cc5dc6987-jz486
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/test-centos7-7cc5dc6987-jz486 to ops-kubernetes3
Normal Pulled 16m (x5 over 18m) kubelet Container image "centos:centos7.9.2009" already present on machine
Normal Created 16m (x5 over 18m) kubelet Created container centos7
Normal Started 16m (x5 over 18m) kubelet Started container centos7
Warning BackOff 3m3s (x71 over 18m) kubelet Back-off restarting failed container

The events show that the Reason is BackOff and the Message is Back-off restarting failed container

Possible causes

Back-off restarting failed container usually means that the process with PID 1 inside the container exited (the program launched by the image's CMD normally runs as PID 1) [1]

When the container process exits (the command finishes or the process dies abnormally), the container's lifecycle ends. The Kubernetes controller detects the exit and keeps restarting the container. In this situation, check whether the image lacks a long-running process, or whether that process is failing.

To isolate the problem, run the image directly with the docker client and observe it: if the process in the container finishes or exits immediately after startup, the container exits with it.
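
A minimal sketch of this check, using the image from the events above (the command itself is illustrative):

docker run --name pid1-test centos:centos7.9.2009 echo done
docker ps -a --filter name=pid1-test   # shows Exited (0) almost immediately

Because PID 1 (echo) exits as soon as it finishes, the container exits too; under a Kubernetes controller the same behavior produces CrashLoopBackOff.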

During troubleshooting you can also use kubectl describe pod to check the Pod's exit status code. In Kubernetes, the Pod ExitCode is the status code returned when a container exits; it indicates the result of the container's execution so that Kubernetes and related tooling can act on it. Common ExitCode values:

  • ExitCode 0: the container exited normally, without error. This is usually the desired result.
  • ExitCode 1: the container exited abnormally, typically due to an application error or exception; usually the process with PID 1 in the container failed.
  • Non-zero ExitCode: any non-zero code means the container exited with an error. The specific value is usually application-defined; the application inside the container may return different codes for different failures, so consult its documentation or logs for the exact meaning.
  • ExitCode 137: the container was terminated by the operating system (for example, by the OOM killer) rather than exiting normally, often due to resource problems such as insufficient memory.
  • ExitCode 139: the container exited abnormally on a signal, usually SIGSEGV (segmentation fault), meaning the application tried to access invalid memory.
  • ExitCode 143: the container exited after receiving SIGTERM. This is the signal Kubernetes sends when deleting a Pod; the container is expected to clean up and exit after receiving it.
  • ExitCode 130: the container exited after receiving SIGINT, the signal sent when a user presses Ctrl+C on the command line.
  • ExitCode 255: an unknown error, or the container could not start at all. This usually points to a container-runtime problem, such as a missing image or a broken start command.
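
The last exit code can also be read straight from the Pod status with jsonpath (pod name taken from the example above; assumes a single container):

kubectl get pod test-centos7-7cc5dc6987-jz486 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'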

ImagePullBackOff

After the Harbor certificate expired, the certificate was renewed (for issues that follow a certificate renewal, see the reference "Failed to update Pods in Kubernetes"). The node uses containerd as the CRI. The error is as follows

# kubectl get pods -n ops
NAME READY STATUS RESTARTS AGE
get-cloud-cdn-statistics-pjfsc-6jnzl 0/1 Init:ImagePullBackOff 0 2m28s
get-cloud-cdn-statistics-r67s2-qs8kj 0/1 Init:ImagePullBackOff 0 81m


# kubectl describe pod -n ops get-cloud-cdn-statistics-pjfsc-x9mh7
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s default-scheduler Successfully assigned ops/get-cloud-cdn-statistics-pjfsc-x9mh7 to k8s-worker1
Normal BackOff 17s kubelet Back-off pulling image "harbor1.mydomain.com/ops/all/cloud-server-cdn-statistics-code:master-0.0-20230207143540"
Warning Failed 17s kubelet Error: ImagePullBackOff
Normal Pulling 2s (x2 over 18s) kubelet Pulling image "harbor1.mydomain.com/ops/all/cloud-server-cdn-statistics-code:master-0.0-20230207143540"
Warning Failed 2s (x2 over 18s) kubelet Failed to pull image "harbor1.mydomain.com/ops/all/cloud-server-cdn-statistics-code:master-0.0-20230207143540": rpc error: code = Unknown desc = failed to pull and unpack image "harbor1.mydomain.com/ops/all/cloud-server-cdn-statistics-code:master-0.0-20230207143540": failed to resolve reference "harbor1.mydomain.com/ops/all/cloud-server-cdn-statistics-code:master-0.0-20230207143540": failed to do request: Head "https://harbor1.mydomain.com/v2/ops/all/cloud-server-cdn-statistics-code/manifests/master-0.0-20230207143540": x509: certificate signed by unknown authority
Warning Failed 2s (x2 over 18s) kubelet Error: ErrImagePull

On node k8s-worker1, accessing the Harbor domain with both docker and curl worked, so the problem was judged to be that containerd did not recognize the certificate.

When Kubernetes uses containerd as the container runtime, trusting an extra certificate (such as a self-signed Harbor registry certificate) may require editing or creating the containerd configuration file, usually /etc/containerd/config.toml, and adding a certificate entry there. In the example below, ca_file should point to the Harbor certificate; make sure the path is correct and the certificate is in PEM format

/etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor1.mydomain.com".tls]
    ca_file = "/etc/docker/certs.d/harbor1.mydomain.com/ca.crt"

Restart the containerd service, then redeploy the Pod

systemctl restart containerd
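
To confirm that containerd now trusts the certificate, the image can be pulled manually with crictl (assuming crictl is installed and pointed at the containerd socket):

crictl pull harbor1.mydomain.com/ops/all/cloud-server-cdn-statistics-code:master-0.0-20230207143540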

Pod Status InvalidImageName

Symptom

The Pod status shows InvalidImageName

kubectl get pods -n cs
NAME READY STATUS RESTARTS AGE
54fdc56754-qrlt6 0/2 InvalidImageName 0 14s
8486f49b89-zp25b 0/2 Init:ErrImagePull 0 7s

Possible cause

The image URL in the configuration starts with http:// or https://. Image URLs must not include a protocol prefix (http:// or https://)
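
A minimal deployment fragment illustrating the fix (image name hypothetical):

containers:
- name: app
  # wrong: a protocol prefix produces InvalidImageName
  # image: https://harbor1.mydomain.com/ops/app:v1
  image: harbor1.mydomain.com/ops/app:v1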

Pod Status Error

The node was low on resource: ephemeral-storage

Symptom

Check the Pod status; it shows Error

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
front-7df8ccc4c7-xhp6s 0/1 Error 0 5h42m

Check the Pod details

$ kubectl describe pod front-7df8ccc4c7-xhp6s
...
Status: Failed
Reason: Evicted
Message: The node was low on resource: ephemeral-storage. Container php was using 394, which exceeds its request of 0.
...

The key abnormal information is Status: Failed and Reason: Evicted; the specific cause is The node was low on resource: ephemeral-storage

Check the kubelet logs on the node and search for the keywords evict or disk; they also show that filesystem usage on the node exceeded the threshold

$ journalctl -u kubelet  | grep -i -e disk -e evict
image_gc_manager.go:310] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=85 highThreshold=85 amountToFree=5122092236 lowThreshold=80
eviction_manager.go:349] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
eviction_manager.go:338] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"

Possible cause

From the above, the Pod failure is due to The node was low on resource: ephemeral-storage: insufficient ephemeral storage caused the node to be tainted, and the Pods on it were evicted

See the documentation on local ephemeral storage for details.

In this situation, if a Pod's ephemeral storage usage exceeds what you allow, the kubelet sends it an eviction signal and the Pod is evicted from its node.

If the filesystem used for writable container image layers, node-level logs, or emptyDir volumes runs low on free space, the node taints itself as low on local storage. That taint triggers eviction of Pods that cannot tolerate it.
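
For reference, ephemeral-storage requests and limits can be declared per container so that usage is bounded explicitly (a sketch; the values are illustrative):

resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    ephemeral-storage: "2Gi"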

Solutions

  • Increase disk space

  • Adjust the kubelet threshold for nodefs.available

    Edit the kubelet startup configuration on the node, /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf, and add the following: define the environment variable KUBELET_EVICT_NODEFS_THRESHOLD_ARGS and append it to the startup arguments

    Environment="KUBELET_EVICT_NODEFS_THRESHOLD_ARGS=--eviction-hard=nodefs.available<5%"
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_EVICT_NODEFS_THRESHOLD_ARGS

    After the change, restart the kubelet service and check the logs to confirm that the new nodefs.available value took effect

    $ systemctl daemon-reload
    $ systemctl restart kubelet

    $ journalctl -u kubelet | grep -i nodefs
    17604 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}

    The log contains Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05, confirming that the change took effect. [2]

Pod Status Init

Unable to attach or mount volumes

The Pod fails to start; its status is Init:0/1

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
admin-cbb479556-j9qg2 0/1 Init:0/1 0 3m37s

View the Pod's detailed description

$ kubectl describe pod admin-cbb479556-j9qg2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m41s default-scheduler Successfully assigned admin-cbb479556-j9qg2 to k8s-work2
Warning FailedMount 99s kubelet Unable to attach or mount volumes: unmounted volumes=[logs], unattached volumes=[wwwroot kube-api-access-z8745 logs]: timed out waiting for the condition
Warning FailedMount 42s kubelet MountVolume.SetUp failed for volume "uat-nfs-pv" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 34.230.1.1:/data/NFSDataHome /var/lib/kubelet/pods/9d9a4807-706c-4369-b8be-b5727ee6aa8f/volumes/kubernetes.io~nfs/uat-nfs-pv
Output: mount.nfs: Connection timed out

The Events output MountVolume.SetUp failed for volume "uat-nfs-pv" : mount failed: exit status 32 shows that mounting the volume failed; the output includes the command and arguments used to mount the volume (mount -t nfs 34.230.1.1:/data/NFSDataHome /var/lib/kubelet/pods/9d9a4807-706c-4369-b8be-b5727ee6aa8f/volumes/kubernetes.io~nfs/uat-nfs-pv) and the result of the failed command (mount.nfs: Connection timed out)

Based on the Events, the configuration shows this volume is an NFS-backed PV. Investigation revealed that the NFS server address in the PV was wrong; after correcting the NFS server address in the PV configuration, the Pod started normally.
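
The NFS export can also be verified manually from the node before touching the PV (server address and path taken from the events above; requires nfs-utils):

showmount -e 34.230.1.1   # list the exports offered by the NFS server
mount -t nfs 34.230.1.1:/data/NFSDataHome /mnt   # trial mount; umount /mnt afterwards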

ContainerCreating

dbus: connection closed by user

While updating the node_exporter DaemonSet, the Pod on one node failed to be created and stayed in ContainerCreating. Check the Pod's detailed description

$ kubectl get pods -n prometheus -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-exporter-glnk5 1/1 Running 0 28h 172.31.8.197 work2 <none> <none>
node-exporter-kzs2r 1/1 Running 1 28h 172.31.100.86 work1 <none> <none>
node-exporter-nxz9v 0/1 ContainerCreating 0 5m30s 172.31.100.38 master <none> <none>
node-exporter-vpkwt 1/1 Running 0 31m 172.31.100.69 work4 <none> <none>
node-exporter-wft7v 1/1 Running 0 14m 172.31.14.7 work3 <none> <none>
prometheus-67ccbbd78-zqw9x 1/1 Running 0 46h 10.244.14.75 work2 <none> <none>

$ kubectl describe pod -n prometheus node-exporter-nxz9v
Name: node-exporter-nxz9v
Namespace: prometheus
Priority: 0

Annotations: <none>
Status: Pending
IP: 172.31.100.38
IPs:
IP: 172.31.100.38
Controlled By: DaemonSet/node-exporter
Containers:
node-exporter:
Container ID:
Image: prom/node-exporter
Image ID:
Port: 9100/TCP
Host Port: 9100/TCP
Args:
--path.procfs
/host/proc
--path.sysfs
/host/sys
--collector.disable-defaults
--collector.cpu
--collector.cpufreq
--collector.meminfo
--collector.diskstats
--collector.filesystem
--collector.filefd
--collector.loadavg
--collector.netdev
--collector.netstat
--collector.nfs
--collector.os
--collector.stat
--collector.time
--collector.udp_queues
--collector.uname
--collector.xfs
--collector.netclass
--collector.vmstat
--collector.systemd
--collector.systemd.unit-include
(sshd|crond|iptables|systemd-journald|kubelet|containerd).service
State: Waiting
Reason: ContainerCreating
Ready: False


Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m24s default-scheduler Successfully assigned prometheus/node-exporter-nxz9v to master
Warning FailedCreatePodContainer 1s (x26 over 5m24s) kubelet unable to ensure pod container exists: failed to create container for [kubepods besteffort pode526f19a-57d6-417c-ba5a-fb0f232d31c6] : dbus: connection closed by user

The error message is unable to ensure pod container exists: failed to create container for [kubepods besteffort pode526f19a-57d6-417c-ba5a-fb0f232d31c6] : dbus: connection closed by user

The kubelet logs show the same error

$ journalctl -u kubelet
master kubelet[1160]: E0707 14:40:55.036424 1160 qos_container_manager_linux.go:328] "Failed to update QoS cgroup configuration" err="dbus: connection closed by user"
master kubelet[1160]: I0707 14:40:55.036455 1160 qos_container_manager_linux.go:138] "Failed to reserve QoS requests" err="dbus: connection closed by user"
master kubelet[1160]: E0707 14:41:00.263041 1160 qos_container_manager_linux.go:328] "Failed to update QoS cgroup configuration" err="dbus: connection closed by user"
master kubelet[1160]: E0707 14:41:00.263152 1160 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: 0cdaf660-bb6a-40ee-99ae-21dff3b55411 cgroups exist and are correctly applied: failed to create container for [kubepods besteffort pod0cdaf660-bb6a-40ee-99ae-21dff3b55411] : dbus: connection closed by user" pod="prometheus/node-exporter-rcd8x" podUID=0cdaf660-bb6a-40ee-99ae-21dff3b55411

From the logs above, the cause is broken communication between kubelet and the system dbus service; the problem can be resolved by restarting the kubelet service:
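
systemctl restart kubelet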

“cni0” already has an IP address different from

A Pod created in the cluster stays in ContainerCreating. Check the Pod details

# kubectl describe pod -n cattle-system cattle-cluster-agent-7d766b5476-hsq45
...
FailedCreatePodSandBox 82s (x4 over 85s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d58156e838349a79da91e0a6d8bccdec0e62c5f5c9ca6a1c30af6186d6253b1" network for pod "cattle-cluster-agent-7d766b5476-hsq45": networkPlugin cni failed to set up pod "cattle-cluster-agent-7d766b5476-hsq45_cattle-system" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

The key information is failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

Check the IP configuration on the node: the flannel.1 subnet and the cni0 subnet do not match, probably because flannel read a stale configuration. The node recovered after a reboot

# ip add
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UNKNOWN group default
link/ether b2:b1:12:2d:8c:66 brd ff:ff:ff:ff:ff:ff
inet 10.244.2.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::b0b1:12ff:fe2d:8c66/64 scope link
valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UP group default qlen 1000
link/ether ca:88:b1:51:0f:02 brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 brd 10.244.2.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c888:b1ff:fe51:f02/64 scope link
valid_lft forever preferred_lft forever
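
If a reboot is undesirable, a commonly suggested alternative (an assumption here, not verified in this case) is to delete the stale cni0 bridge and let the CNI plugin recreate it with the correct subnet:

ip link set cni0 down
ip link delete cni0
systemctl restart kubelet   # the bridge is recreated when the next Pod sandbox is set up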

PodInitializing

A newly deployed Pod stays in PodInitializing

# kubectl get pods
ops ops-admin-5656d7bb64-mpqz5 0/2 PodInitializing 0 2m55s

Log in to the node hosting the Pod and check the kubelet service logs

# journalctl -f -u kubelet
Sep 11 17:09:19 ops-k8s-admin kubelet[22700]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.3.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}E0911 17:09:19.245283 22700 kuberuntime_manager.go:864] container &Container{Name:php,Image:54.236.67.117:5000/comm/ops-php:20221205093123-,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:wwwroot,ReadOnly:false,MountPath:/home/www,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:uploads,ReadOnly:false,MountPath:/home/www/public/uploads,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:log-code,ReadOnly:false,MountPath:/home/www/storage/logs,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z4cxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:&Lifecycle{PostStart:&Handler{Exec:&ExecAction{Command:[/bin/sh -c php /home/www/artisan command:apollo.sync >> apollo.log;php /home/www/artisan queue:restart],},HTTPGet:nil,TCPSocket:nil,},PreStop:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod ops-admin-5656d7bb64-kvvmx_ops(a44af28c-3a39-439b-97c1-7e78b03ccd91): PostStartHookError: command '/bin/sh -c php /home/www/artisan command:apollo.sync >> apollo.log;php /home/www/artisan queue:restart' exited with 137: : Exec lifecycle hook ([/bin/sh -c php /home/www/artisan command:apollo.sync >> apollo.log;php /home/www/artisan queue:restart]) for Container "php" in Pod "ops-admin-5656d7bb64-kvvmx_ops(a44af28c-3a39-439b-97c1-7e78b03ccd91)" failed - error: command '/bin/sh -c php /home/www/artisan command:apollo.sync >> apollo.log;php /home/www/artisan queue:restart' exited with 137: , message: "队列延迟启动,因为.env配置不完善,rows=7,等待Apollo获取配置或手动完善

The key error in the logs is: start failed in pod ops-admin-5656d7bb64-kvvmx_ops ... exited with 137: : Exec lifecycle hook ([/bin/sh -c php /home/www/artisan command:apollo.sync >> apollo.log;php /home/www/artisan queue:restart]) for Container "php" in Pod "ops-admin-5656d7bb64-kvvmx_ops(a44af28c-3a39-439b-97c1-7e78b03ccd91)" failed - error: command '/bin/sh -c php /home/www/artisan command:apollo.sync >> apollo.log;php /home/www/artisan queue:restart' exited with 137

From this we can conclude that the Pod cannot start because the command run by the container's PostStart lifecycle hook exits with an error (code 137).

Failed to update QoS cgroup configuration

Pods on one node in the cluster show Init or PodInitializing while other nodes are normal. Log in to the abnormal node and check the kubelet service logs

# journalctl -f -u kubelet
kubelet[26451]: E1109 13:32:04.385251 26451 qos_container_manager_linux.go:328] "Failed to update QoS cgroup configuration" err="dbus: connection closed by user"
kubelet[26451]: E1109 13:32:04.385307 26451 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: e31980b5-849b-4a95-b93d-983c1df31034 cgroups exist and are correctly applied: failed to create container for [kubepods besteffort pode31980b5-849b-4a95-b93d-983c1df31034] : dbus: connection closed by user" pod="6fd86565c6-4wn7k" podUID=e31980b5-849b-4a95-b93d-983c1df31034
kubelet[26451]: E1109 13:32:04.385416 26451 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: f9e342d5-9f69-41bc-bb5e-df46c37b7bcd cgroups exist and are correctly applied: failed to create container for [kubepods besteffort podf9e342d5-9f69-41bc-bb5e-df46c37b7bcd] : dbus: connection closed by user" pod="5bfffd564f-sn82t" podUID=f9e342d5-9f69-41bc-bb5e-df46c37b7bcd
kubelet[26451]: E1109 13:32:04.385777 26451 qos_container_manager_linux.go:328] "Failed to update QoS cgroup configuration" err="dbus: connection closed by user"
kubelet[26451]: E1109 13:32:04.385962 26451 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: 541c88a3-cf05-40ce-b0db-80bf07f542b6 cgroups exist and are correctly applied: failed to create container for [kubepods besteffort pod541c88a3-cf05-40ce-b0db-80bf07f542b6] : dbus: connection closed by user" pod="5994c65989-zkn2w" podUID=541c88a3-cf05-40ce-b0db-80bf07f542b6
kubelet[26451]: E1109 13:32:08.385429 26451 qos_container_manager_linux.go:328] "Failed to update QoS cgroup configuration" err="dbus: connection closed by user"
kubelet[26451]: E1109 13:32:08.385657 26451 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: 255ce122-804c-4bcc-9f12-0a3abce77db5 cgroups exist and are correctly applied: failed to create container for [kubepods besteffort pod255ce122-804c-4bcc-9f12-0a3abce77db5] : dbus: connection closed by user" pod="67d89cf47f-x4wp7" podUID=255ce122-804c-4bcc-9f12-0a3abce77db5

The key log is "Failed to update QoS cgroup configuration" err="dbus: connection closed by user". Based on this, the kubelet failed while communicating with the DBus service to update the containers' QoS cgroup configuration: the update hit dbus: connection closed by user and the containers could not be created correctly.

This is usually caused by an abnormal DBus service on the system. In this example the problem was resolved after restarting the dbus service and then the kubelet service.

systemctl restart dbus

systemctl restart kubelet

Pod Status Pending

The Kubernetes cluster nodes are as follows

# kubectl get nodes
NAME STATUS ROLES AGE VERSION
test-k8s-master1 Ready control-plane 366d v1.24.7
test-k8s-master2 Ready control-plane 366d v1.24.7
test-k8s-master3 Ready control-plane 366d v1.24.7
test-k8s-worker1 Ready <none> 366d v1.24.7
test-k8s-worker2 Ready <none> 366d v1.24.7

The coredns Pods in the cluster are Pending. Usually, a Pod in Pending means the Kubernetes scheduler could not assign the Pod to any node. The Pod status is as follows

# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6d4b75cb6d-7np4c 0/1 Pending 0 68m <none> <none> <none> <none>
coredns-6d4b75cb6d-ckl6f 0/1 Pending 0 68m <none> <none> <none> <none>
etcd-k8s-master1 1/1 Running 1 (65d ago) 68m 172.31.26.116 k8s-master1 <none> <none>
etcd-k8s-master2 1/1 Running 0 68m 172.31.19.164 k8s-master2 <none> <none>
etcd-k8s-master3 1/1 Running 0 68m 172.31.21.3 k8s-master3 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 2 (65d ago) 68m 172.31.26.116 k8s-master1 <none> <none>
kube-apiserver-k8s-master2 1/1 Running 4 (65d ago) 68m 172.31.19.164 k8s-master2 <none> <none>
kube-apiserver-k8s-master3 1/1 Running 4 (65d ago) 68m 172.31.21.3 k8s-master3 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 1 (41h ago) 68m 172.31.26.116 k8s-master1 <none> <none>
kube-controller-manager-k8s-master2 1/1 Running 0 68m 172.31.19.164 k8s-master2 <none> <none>
kube-controller-manager-k8s-master3 1/1 Running 1 (41h ago) 68m 172.31.21.3 k8s-master3 <none> <none>
kube-proxy-84l4v 0/1 Pending 0 68m <none> <none> <none> <none>
kube-proxy-pfwd5 0/1 Pending 0 68m <none> <none> <none> <none>
kube-proxy-qbzq8 0/1 Pending 0 68m <none> <none> <none> <none>
kube-proxy-qfplm 0/1 Pending 0 68m <none> <none> <none> <none>
kube-proxy-w4t62 0/1 Pending 0 68m <none> <none> <none> <none>
kube-scheduler-k8s-master1 1/1 Running 0 68m 172.31.26.116 k8s-master1 <none> <none>
kube-scheduler-k8s-master2 1/1 Running 0 68m 172.31.19.164 k8s-master2 <none> <none>
kube-scheduler-k8s-master3 1/1 Running 1 (41h ago) 68m 172.31.21.3 k8s-master3 <none> <none>
kube-state-metrics-6d44cbdb56-kv8bm 0/1 Pending 0 68m <none> <none> <none> <none>
metrics-server-6cd9f9f4cf-rqlzf 0/2 Pending 0 68m <none> <none> <none> <none>

Everything except kube-controller-manager, kube-scheduler, etcd, and kube-apiserver is Pending, and the NODE column shows <none>, meaning the cluster has not scheduled the newly created Pods onto any node. Taking coredns-6d4b75cb6d-7np4c as an example, its description contains no events at all.

# kubectl describe pod -n kube-system coredns-6d4b75cb6d-7np4c
Name: coredns-6d4b75cb6d-7np4c
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: <none>
Labels: k8s-app=kube-dns
pod-template-hash=6d4b75cb6d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-6d4b75cb6d
Containers:
coredns:
Image: k8s.gcr.io/coredns/coredns:v1.8.6
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4hm48 (ro)
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
kube-api-access-4hm48:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>

Given the above, this looks like a cluster-level problem. Pod scheduling is handled by kube-scheduler, so start by checking the kube-scheduler component logs

# kubectl logs -n kube-system kube-scheduler-fm-k8s-c1-master1 | tail -n 20
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Unauthorized
leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Unauthorized
leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Unauthorized
leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Unauthorized
leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Unauthorized
reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Unauthorized
leaderelection.go:330] error retrieving resource lock kube-system/kube-scheduler: Unauthorized

The logs show that kube-scheduler cannot list cluster resources; the reason is Unauthorized

Unauthorized is commonly caused by RBAC or certificates. Check RBAC first: kube-scheduler runs as the user system:kube-scheduler by default, so review the role bound to that user and its permissions

# kubectl describe clusterrole system:kube-scheduler
Name: system:kube-scheduler
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
events [] [] [create patch update]
events.events.k8s.io [] [] [create patch update]
bindings [] [] [create]
endpoints [] [] [create]
pods/binding [] [] [create]
tokenreviews.authentication.k8s.io [] [] [create]
subjectaccessreviews.authorization.k8s.io [] [] [create]
leases.coordination.k8s.io [] [] [create]
pods [] [] [delete get list watch]
namespaces [] [] [get list watch]
nodes [] [] [get list watch]
persistentvolumeclaims [] [] [get list watch]
persistentvolumes [] [] [get list watch]
replicationcontrollers [] [] [get list watch]
services [] [] [get list watch]
replicasets.apps [] [] [get list watch]
statefulsets.apps [] [] [get list watch]
replicasets.extensions [] [] [get list watch]
poddisruptionbudgets.policy [] [] [get list watch]
csidrivers.storage.k8s.io [] [] [get list watch]
csinodes.storage.k8s.io [] [] [get list watch]
csistoragecapacities.storage.k8s.io [] [] [get list watch]
endpoints [] [kube-scheduler] [get update]
leases.coordination.k8s.io [] [kube-scheduler] [get update]
pods/status [] [] [patch update]

# kubectl describe clusterrolebinding system:kube-scheduler
Name: system:kube-scheduler
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: system:kube-scheduler
Subjects:
Kind Name Namespace
---- ---- ---------
User system:kube-scheduler
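
These permissions can also be spot-checked by impersonating the scheduler user (assumes admin credentials):

kubectl auth can-i list pods --as system:kube-scheduler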

The RBAC permissions look normal. Check the cluster certificates next; they have already been renewed and show as valid.

# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Dec 07, 2024 06:05 UTC 364d ca no
apiserver Dec 07, 2024 07:17 UTC 364d ca no
apiserver-etcd-client Dec 07, 2024 07:15 UTC 364d etcd-ca no
apiserver-kubelet-client Dec 07, 2024 07:15 UTC 364d ca no
controller-manager.conf Dec 07, 2024 06:05 UTC 364d ca no
etcd-healthcheck-client Dec 07, 2024 07:15 UTC 364d etcd-ca no
etcd-peer Dec 07, 2024 07:15 UTC 364d etcd-ca no
etcd-server Dec 07, 2024 07:15 UTC 364d etcd-ca no
front-proxy-client Dec 07, 2024 07:15 UTC 364d front-proxy-ca no
scheduler.conf Dec 07, 2024 06:05 UTC 364d ca no

CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Dec 03, 2032 09:50 UTC 8y no
etcd-ca Dec 05, 2033 07:15 UTC 9y no
front-proxy-ca Dec 05, 2033 07:15 UTC 9y no

Check the kube-apiserver logs: they show failures connecting to etcd (127.0.0.1:2379), mainly certificate problems, and indicate that the certificate in use was never actually refreshed.

clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: internal error". Reconnecting...
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-08T08:40:26Z is after 2023-12-06T09:58:58Z, verifying certificate SN=4790061324473323615, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-08T08:40:26Z is after 2023-12-06T09:58:58Z]"

Check the etcd logs; they show the certificate cannot be found: open /etc/kubernetes/pki/etcd/peer.crt: no such file or directory

{"level":"warn","ts":"2023-12-08T07:54:23.780Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.31.21.3:30426","server-name":"","error":"open /etc/kubernetes/pki/etcd/peer.crt: no such file or directory"}
{"level":"warn","ts":"2023-12-08T07:54:24.195Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"172.31.19.164:28650","server-name":"","error":"open /etc/kubernetes/pki/etcd/peer.crt: no such file or directory"}

In stacked high-availability mode, the etcd component mounts the master node's /etc/kubernetes/pki/etcd/ directory for its certificate files at startup; see the static Pod manifest under /etc/kubernetes/manifests/

/etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.31.19.164:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --initial-advertise-peer-urls=https://172.31.19.164:2380
    - --initial-cluster=k8s-master1=https://172.31.26.116:2380,k8s-master3=https://172.31.21.3:2380,k8s-master2=https://172.31.19.164:2380
    - --initial-cluster-state=existing
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://172.31.19.164:2379
    - --listen-metrics-urls=http://0.0.0.0:2381
    - --listen-peer-urls=https://172.31.19.164:2380
    - --name=k8s-master2
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.5.3-0
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data

Log in to the etcd container and check the directory /etc/kubernetes/pki/etcd: it is empty, with no files in it. The root cause was not found; after rebooting the system the mount was normal again.

Network Problems

Pods on the Same Node Cannot Reach Each Other

Symptom

Pods on the same node cannot communicate with each other

Troubleshooting approach

  • Check whether IP forwarding is enabled in the kernel
    $ sysctl -a | grep net.ipv4.ip_forward
    net.ipv4.ip_forward = 1
  • Check whether iptables forbids forwarding; see the iptables firewall configuration reference
  • To determine whether iptables is the cause, turn iptables off and test again; if communication works with the firewall off, the firewall rules are responsible and need to be reviewed.
  • For deeper analysis, deploy a netshoot container and capture packets, as sketched below.
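
A minimal capture on the node while one Pod pings another (the interface name cni0 assumes a flannel-style bridge CNI):

tcpdump -i cni0 -nn icmp   # run on the node, then ping from one Pod to the other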

Pods Cannot Reach the External Internet

On one node, Pods cannot access a service on an external host (port 6603/tcp). Capture packets in the Pod, on the node's cni0 interface, on the node's egress interface eth0, and on the target server. In this example the Pod IP is 10.244.4.173 and the target service IP is 50.18.6.225

Look at the packet capture taken in the Pod

The source IP is the Pod address, and after the request to the service's 6603/tcp is sent, no response establishing the TCP connection comes back. Next look at the capture on the node's cni0 interface

It shows the same thing: requests from the Pod address to the service's 6603/tcp get no TCP handshake response. Next check the capture on the node's egress interface eth0.

Here the source IP is still the Pod's IP address, and this is the problem. On a cloud VM, if the packet leaves in this form, the Internet gateway will reject it, because the gateway's NAT (which translates the VM's IP to a public IP) only knows the IP addresses attached to the VM.

Normally, before Pod traffic reaches the node's egress interface, iptables should apply source NAT, rewriting the packet's source so it appears to come from the VM rather than the Pod. Only with the correct source IP can the packet leave the VM for the Internet.

In this case the packet does leave the node's egress interface, but it is dropped at the Internet gateway, so the target service never receives the request; the capture on the target server indeed shows no requests from this Pod.

The source NAT here is performed by iptables. Packets flowing to the node's egress interface were not correctly SNATed, possibly because the network rules maintained by kube-proxy are wrong, or because the iptables rules are misconfigured. Restarting kube-proxy (managed via the kubelet service) and the iptables service may restore it.

systemctl restart kubelet
systemctl restart iptables

In this example, the Pod recovered after restarting these two services.
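
For diagnosis, the presence of the expected masquerade rule in the nat table can also be checked directly (a generic check; exact rules vary by CNI plugin and kube-proxy mode):

iptables -t nat -L POSTROUTING -n -v | grep -i masquerade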

Pods Intermittently Fail to Connect to an External Database

Pods in the cluster frequently time out when connecting to a database service outside the cluster

See the reference article.

Cross-Node Pod Access Fails

Environment

  • CentOS 7 (kernel 5.4.242-1)
  • Kubernetes v1.25.4
  • kubernetes-cni-1.2.0-0
  • flannel v0.21.4

The cluster has 1 master node and 2 worker nodes, all reporting normal status. From the master, Pods on worker1 cannot be pinged, while Pods on worker2 can

$ kubectl get nodes -A -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master1 Ready control-plane 23h v1.25.4 192.168.142.10 <none> CentOS Linux 7 (Core) 5.4.242-1.el7.elrepo.x86_64 docker://20.10.9
k8s-worker1 Ready <none> 23h v1.25.4 192.168.142.11 <none> CentOS Linux 7 (Core) 5.4.242-1.el7.elrepo.x86_64 docker://20.10.9
k8s-worker2 Ready <none> 22h v1.25.4 192.168.142.12 <none> CentOS Linux 7 (Core) 5.4.242-1.el7.elrepo.x86_64 docker://20.10.9

$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default tet-deployment-fbc96cc5d-hlqkg 1/1 Running 1 (4m17s ago) 28m 10.244.1.4 k8s-worker1 <none> <none>
default tet-deployment-fbc96cc5d-mcjzg 1/1 Running 0 50m 10.244.2.3 k8s-worker2 <none> <none>

$ ping 10.244.1.4
PING 10.244.1.4 (10.244.1.4) 56(84) bytes of data.
^C
--- 10.244.1.4 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1001ms

$ ping 10.244.2.3
PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
64 bytes from 10.244.2.3: icmp_seq=1 ttl=63 time=4.27 ms
64 bytes from 10.244.2.3: icmp_seq=2 ttl=63 time=0.468 ms
64 bytes from 10.244.2.3: icmp_seq=3 ttl=63 time=0.443 ms
^C
--- 10.244.2.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2056ms
rtt min/avg/max/mdev = 0.443/1.729/4.277/1.801 ms

This suggests the problem is most likely on the worker1 node. First check whether the flannel container on worker1 is running normally

$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default tet-deployment-fbc96cc5d-hlqkg 1/1 Running 0 9m8s 10.244.1.4 k8s-worker1 <none> <none>
default tet-deployment-fbc96cc5d-mcjzg 1/1 Running 0 31m 10.244.2.3 k8s-worker2 <none> <none>
kube-flannel kube-flannel-ds-d42lm 1/1 Running 0 22h 192.168.142.11 k8s-worker1 <none> <none>
kube-flannel kube-flannel-ds-lqp5v 1/1 Running 0 22h 192.168.142.10 k8s-master1 <none> <none>
kube-flannel kube-flannel-ds-w675f 1/1 Running 0 70m 192.168.142.12 k8s-worker2 <none> <none>

The flannel container on worker1 is running. On worker1, check the flannel process and port information

$ netstat -anutp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1817/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2224/kube-proxy
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1039/sshd
tcp 0 0 192.168.142.11:47468 192.168.142.10:6443 ESTABLISHED 2224/kube-proxy
tcp 0 0 192.168.142.11:56584 192.168.142.10:6443 ESTABLISHED 1817/kubelet
tcp 0 44 192.168.142.11:22 192.168.142.1:62099 ESTABLISHED 1108/sshd: root@pts
tcp 0 0 192.168.142.11:40574 10.96.0.1:443 ESTABLISHED 2566/flanneld
tcp6 0 0 :::34939 :::* LISTEN 1433/cri-dockerd
tcp6 0 0 :::10250 :::* LISTEN 1817/kubelet
tcp6 0 0 :::10256 :::* LISTEN 2224/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 1039/sshd

$ ps -elf | grep flannel
4 S root 2566 2539 0 80 0 - 353654 futex_ 14:44 ? 00:00:02 /opt/bin/flanneld --ip-masq --kube-subnet-mgr

The flanneld process exists, but its port is not listening. Check the flannel container's log output

$ docker ps -a | grep flannel | grep -v "Exited"
92fab879c75b 11ae74319a21 "/opt/bin/flanneld -…" 34 minutes ago Up 34 minutes k8s_kube-flannel_kube-flannel-ds-77bwd_kube-flannel_078dde8c-573b-4db4-939e-d3dd353477f7_1
237a82c1378a registry.k8s.io/pause:3.6 "/pause" 34 minutes ago Up 34 minutes k8s_POD_kube-flannel-ds-77bwd_kube-flannel_078dde8c-573b-4db4-939e-d3dd353477f7_1

$ docker logs 92fab879c75b
failed to add vxlanRoute

network is down

The key errors are failed to add vxlanRoute and network is down. Following the referenced case, the server was rebooted and the network recovered.

coredns Cannot Resolve Domain Names

Domain names cannot be resolved inside Pods.

The relevant cluster information is as follows

$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 25h

Test DNS-related connectivity from inside a container: access to an external IP and to the Kubernetes API Server's Service address both work

$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=127 time=37.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=127 time=36.9 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 36.946/37.085/37.224/0.139 ms


$ curl -v 10.96.0.1:443
* About to connect() to 10.96.0.1 port 443 (#0)
* Trying 10.96.0.1...
* Connected to 10.96.0.1 (10.96.0.1) port 443 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.96.0.1:443
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad Request
<
Client sent an HTTP request to an HTTPS server.
* Closing connection 0

The DNS server configured in the container is the kube-dns Service IP. Testing its port returns Connection refused, and internal cluster domain names cannot be resolved.

$ cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

$ curl 10.96.0.10:53
curl: (7) Failed connect to 10.96.0.10:53; Connection refused

$ ping svc.cluster.local
ping: svc.cluster.local: Name or service not known

From the steps above we can roughly conclude that the Pod's network is fine and kube-dns is at fault, leaving Pods unable to resolve names.

A Service serves traffic by associating with its backend Pods through Endpoints, so first check whether the Endpoints of the kube-dns Service are normal.

$ kubectl get ep -A
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 192.168.142.10:6443 25h
kube-system kube-dns 25h

The ENDPOINTS list for kube-dns is empty. Delete the coredns Pods so they are recreated; on re-checking, the Endpoints of the kube-dns Service are back to normal.

$ kubectl delete pod -n kube-system coredns-565d847f94-bzr62 coredns-565d847f94-vmddh

$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default tet-deployment-fbc96cc5d-hlqkg 1/1 Running 1 (115m ago) 139m
default tet-deployment-fbc96cc5d-mcjzg 1/1 Running 0 162m
kube-flannel kube-flannel-ds-77bwd 1/1 Running 1 (115m ago) 129m
kube-flannel kube-flannel-ds-lqp5v 1/1 Running 0 25h
kube-flannel kube-flannel-ds-w675f 1/1 Running 0 3h21m
kube-system coredns-565d847f94-8wmg7 1/1 Running 0 9s
kube-system coredns-565d847f94-csc9f 0/1 Running 0 9s

$ kubectl get ep -A
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 192.168.142.10:6443 25h
kube-system kube-dns 10.244.1.7:53,10.244.1.7:53,10.244.1.7:9153 25h

Re-test resolution in the Pod; it now works

$ curl -v 10.96.0.10:53
* About to connect() to 10.96.0.10 port 53 (#0)
* Trying 10.96.0.10...
* Connected to 10.96.0.10 (10.96.0.10) port 53 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.96.0.10:53
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 10.96.0.10 left intact
curl: (52) Empty reply from server

$ ping qq.com
PING qq.com (61.129.7.47) 56(84) bytes of data.
64 bytes from 61.129.7.47 (61.129.7.47): icmp_seq=1 ttl=127 time=308 ms
64 bytes from 61.129.7.47 (61.129.7.47): icmp_seq=2 ttl=127 time=312 ms
64 bytes from 61.129.7.47 (61.129.7.47): icmp_seq=3 ttl=127 time=312 ms
^C
--- qq.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 308.493/310.873/312.106/1.743 ms

For DNS troubleshooting, see the reference document:

Troubleshooting: Pods in Kubernetes Cannot Resolve Domain Names

Abnormal Cluster Status

Node Status NotReady

PLEG is not healthy: pleg was last seen active 10m13.755045415s ago

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane 14d v1.24.7
k8s-master2 Ready control-plane 14d v1.24.7
k8s-master3 Ready control-plane 14d v1.24.7
k8s-work1 NotReady <none> 14d v1.24.7
k8s-work2 Ready <none> 14d v1.24.7

View the node details

$ kubectl describe node k8s-work1
...
Conditions:
Ready False Tue, 15 Nov 2022 10:14:49 +0800 Tue, 15 Nov 2022 10:07:39 +0800 KubeletNotReady PLEG is not healthy: pleg was last seen active 10m13.755045415s ago; threshold is 3m0s

Cause

When a node goes NotReady for this reason (PLEG is not healthy: pleg was last seen active ***h**m***s ago;), it is usually because the node is overloaded.
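
A quick first check in this situation is the node's load and the responsiveness of the container runtime (generic commands; a docker runtime is assumed, as in this environment):

uptime          # sustained high load averages starve the PLEG relist loop
time docker ps  # a slow response here points at the runtime itself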

container runtime is down, container runtime not ready

Troubleshooting

While checking the distribution of Pods across the cluster, it was found that almost all Pods on one node had been rescheduled onto other nodes, although by the time of the check the node's status was already Ready. The following analyzes this situation.

  1. Determine the approximate time window of the problem

    The time when the Pods were started on other nodes roughly marks when the node went abnormal; this narrows the time range to examine. In this example the problem occurred around Nov 25 04:49:00.

  2. Check the kubelet logs

    Using the inferred time window, check the kubelet logs on the problem node

    $ journalctl -u kubelet --since "2022-11-25 4:40" | grep -v -e "failed to get fsstats" -e "invalid bearer token" | more
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.153132 17604 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded"
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.375524 17604 remote_runtime.go:356] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.375559 17604 kuberuntime_sandbox.go:292] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded"
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.375578 17604 kubelet_pods.go:1153] "Error listing containers" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded"
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.375589 17604 kubelet.go:2162] "Failed cleaning pods" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded"
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.375603 17604 kubelet.go:2166] "Housekeeping took longer than 15s" err="housekeeping took too long" seconds=119.005290203
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.476011 17604 kubelet.go:2010] "Skipping pod synchronization" err="container runtime is down"
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.507861 17604 remote_runtime.go:680] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded" containerID="5cd867ce2a52311e79a20a113c7cedd2a233b3a52b556065b479f2dd11a14eac" cmd=[wget --no-check-certificate --spider -q http://localhost:8088/health]
    Nov 25 04:49:00 k8s-work2 kubelet[17604]: E1125 04:49:00.676271 17604 kubelet.go:2010] "Skipping pod synchronization" err="container runtime is down"

    Nov 25 04:49:01 k8s-work2 kubelet[17604]: E1125 04:49:01.076918 17604 kubelet.go:2010] "Skipping pod synchronization" err="container runtime is down"
    Nov 25 04:49:01 k8s-work2 kubelet[17604]: E1125 04:49:01.178942 17604 kubelet.go:2359] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: operation timeout: context deadline exceeded"
    Nov 25 04:49:01 k8s-work2 kubelet[17604]: E1125 04:49:01.878007 17604 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime is down, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: operation timeout: context deadline exceeded]"
    Nov 25 04:49:03 k8s-work2 kubelet[17604]: E1125 04:49:03.329558 17604 remote_runtime.go:536] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
    Nov 25 04:49:03 k8s-work2 kubelet[17604]: E1125 04:49:03.329585 17604 container_log_manager.go:183] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = operation timeout: context deadline exceeded"

    Nov 25 04:49:09 k8s-work2 kubelet[17604]: E1125 04:49:09.485356 17604 remote_runtime.go:168] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version: operation timeout: context deadline exceeded"
    Nov 25 04:49:09 k8s-work2 kubelet[17604]: I1125 04:49:09.485486 17604 setters.go:532] "Node became not ready" node="k8s-work2" condition={Type:Ready Status:False LastHeartbeatTime:2022-11-25 04:49:09.485445614 +0800 CST m=+227600.229789769 LastTransitionTime:2022-11-25 04:49:09.485445614 +0800 CST m=+227600.229789769 Reason:KubeletNotReady Message:[container runtime is down, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: operation timeout: context deadline exceeded]}

    The key messages in these logs are:

    "Skipping pod synchronization" err="container runtime is down"

    setters.go:532] "Node became not ready" ... Reason:KubeletNotReady Message:[container runtime is down, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: operation timeout: context deadline exceeded]}

    They show that the node became not ready because container runtime is down, container runtime not ready; in this example the container runtime is docker.

  3. Check the docker service logs

    Check the docker service logs around the same time

    journalctl -u docker --since "2022-11-25 04:0" | more
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: time="2022-11-25T04:49:06.410127201+08:00" level=error msg="Handler for GET /v1.40/containers/5cd867ce2a52311e79a20a113c7cedd2a233b3a52b556065b479f2dd11a14eac/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: time="2022-11-25T04:49:06.410342223+08:00" level=error msg="Handler for GET /v1.40/containers/41e0dfe97b87c2b8ae941653fa8adbf93bf9358d91e967646e4549ab71b2f004/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: time="2022-11-25T04:49:06.414773158+08:00" level=error msg="Handler for GET /v1.40/containers/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: time="2022-11-25T04:49:06.416474238+08:00" level=error msg="Handler for GET /v1.40/containers/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
    Nov 25 04:49:06 k8s-work2 dockerd[15611]: time="2022-11-25T04:49:06.422844592+08:00" level=error msg="Handler for GET /v1.40/containers/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"

    The key entry is write unix /var/run/docker.sock->@: write: broken pipe

  4. Check the messages log

    Review the system logs for the corresponding time window

    Nov 25 04:49:00 k8s-work2 kubelet: E1125 04:49:00.153089   17604 remote_runtime.go:356] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = operation timeout: context deadline exceeded" filter="nil"
    Nov 25 04:49:00 k8s-work2 kubelet: E1125 04:49:00.375603 17604 kubelet.go:2166] "Housekeeping took longer than 15s" err="housekeeping took too long" seconds=119.005290203
    Nov 25 04:49:00 k8s-work2 kubelet: E1125 04:49:00.375614 17604 kubelet.go:2010] "Skipping pod synchronization" err="container runtime is down"
    Nov 25 04:49:01 k8s-work2 kubelet: E1125 04:49:01.178942 17604 kubelet.go:2359] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: operation timeout: context deadline exceeded"
    Nov 25 04:49:01 k8s-work2 kubelet: E1125 04:49:01.878007 17604 kubelet.go:2010] "Skipping pod synchronization" err="[container runtime is down, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: operation timeout: context deadline exceeded]"
    Nov 25 04:49:06 k8s-work2 dockerd: time="2022-11-25T04:49:06.410127201+08:00" level=error msg="Handler for GET /v1.40/containers/5cd867ce2a52311e79a20a113c7cedd2a233b3a52b556065b479f2dd11a14eac/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"

According to the kubelet logs, the node went Not Ready because docker was down; the docker service logs confirm docker anomalies, yet running docker commands at inspection time showed nothing wrong. The problem recurred several times on docker engine 19.03.15-3; after upgrading the docker engine to the latest version 20.10.9, it did not reappear. See the docker engine upgrade reference.

"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

Environment

  • Kubernetes v1.21.2

After adding a new node, its status is NotReady

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
work2 Ready <none> 17d v1.21.2
work3 Ready <none> 17d v1.21.2
work4 Ready <none> 10d v1.21.2
work5 NotReady <none> 8m36s v1.21.2
master Ready control-plane,master 191d v1.21.2

View the node's description from the master

$ kubectl describe node k8s-api-work5 
Name: work5
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=work5
kubernetes.io/os=linux

Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: work5
AcquireTime: <unset>
RenewTime: Wed, 05 Apr 2023 13:50:16 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 05 Apr 2023 13:48:19 +0800 Wed, 05 Apr 2023 13:48:19 +0800 FlannelIsUp Flannel is running on this node
MemoryPressure False Wed, 05 Apr 2023 13:48:33 +0800 Wed, 05 Apr 2023 13:48:03 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 05 Apr 2023 13:48:33 +0800 Wed, 05 Apr 2023 13:48:03 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 05 Apr 2023 13:48:33 +0800 Wed, 05 Apr 2023 13:48:03 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 05 Apr 2023 13:48:33 +0800 Wed, 05 Apr 2023 13:48:03 +0800 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The reason shown is container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Check the kubelet logs on the work5 node

$ journalctl -u kubelet -f
Apr 05 13:52:03 work5 kubelet[19520]: E0405 13:52:03.952395 19520 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

Apr 05 13:52:08 work5 kubelet[19520]: I0405 13:52:08.498481 19520 cni.go:204] "Error validating CNI config list" configList="{\n \"name\": \"cbr0\",\n \"cniVersion\": \"0.3.1\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n" err="[failed to find plugin \"flannel\" in path [/opt/cni/bin]]"
Apr 05 13:52:08 work5 kubelet[19520]: I0405 13:52:08.498501 19520 cni.go:239] "Unable to update cni config" err="no valid networks found in /etc/cni/net.d"

From the master node, the kube-flannel Pod on the abnormal node looks normal

$ kubectl get pods -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-558bd4d5db-6wf7m 1/1 Running 0 18d 10.244.4.132 admin <none> <none>
coredns-558bd4d5db-zh9mw 1/1 Running 0 18d 10.244.4.144 admin <none> <none>
etcd-master 1/1 Running 1 191d 192.168.100.38 master <none> <none>
kube-apiserver-master 1/1 Running 1 191d 192.168.100.38 master <none> <none>
kube-controller-manager-master 1/1 Running 1 191d 192.168.100.38 master <none> <none>
kube-flannel-ds-2lg9x 1/1 Running 0 18d 192.168.100.38 master <none> <none>
kube-flannel-ds-5fpn8 1/1 Running 0 10d 192.168.100.69 work4 <none> <none>
kube-flannel-ds-7ln98 1/1 Running 0 30m 192.168.100.59 work5 <none> <none>
kube-flannel-ds-kvhhq 1/1 Running 0 17d 192.168.14.7 work3 <none> <none>
kube-flannel-ds-vz4th 1/1 Running 0 17d 192.168.8.197 work2 <none> <none>
kube-flannel-ds-xr84k 1/1 Running 0 18d 192.168.100.86 admin <none> <none>
kube-proxy-9b7kt 1/1 Running 0 30m 192.168.100.59 work5 <none> <none>
kube-proxy-c6ggk 1/1 Running 1 191d 192.168.100.38 master <none> <none>
kube-proxy-gtlqt 1/1 Running 0 17d 192.168.14.7 work3 <none> <none>
kube-proxy-n6s7p 1/1 Running 0 10d 192.168.100.69 work4 <none> <none>
kube-proxy-p8m9d 1/1 Running 0 17d 192.168.8.197 work2 <none> <none>
kube-proxy-qvks4 1/1 Running 2 191d 192.168.100.86 admin <none> <none>
kube-scheduler-master 1/1 Running 1 191d 192.168.100.38 master <none> <none>

Next, check the package that provides the flannel CNI plugin on the new node, and whether anything is wrong with the files in the relevant directories

$ rpm -qa | grep kubernetes
kubernetes-cni-1.2.0-0.x86_64

$ ls /opt/cni/bin/
bandwidth dhcp firewall host-local loopback portmap sbr tuning vrf
bridge dummy host-device ipvlan macvlan ptp static vlan

Comparing against the existing healthy nodes, their kubernetes-cni version is kubernetes-cni-0.8.7-0, suggesting a version problem: as the directory listing above shows, /opt/cni/bin from kubernetes-cni-1.2.0-0 contains no flannel binary, which matches the kubelet error failed to find plugin "flannel". Uninstall kubernetes-cni-1.2.0-0 on the problem node and reinstall kubernetes-cni-0.8.7-0. Note that removing kubernetes-cni also removes the previously installed kubeadm and kubelet, which must be reinstalled as well.

$ yum remove kubernetes-cni-1.2.0-0
...
Removed:
kubernetes-cni.x86_64 0:1.2.0-0

Dependency Removed:
kubeadm.x86_64 0:1.21.2-0 kubelet.x86_64 0:1.21.2-0

$ yum install -y kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2 kubernetes-cni-0.8.7-0

After installing kubernetes-cni-0.8.7-0, check the node status again; it becomes Ready

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
work2 Ready <none> 17d v1.21.2
work3 Ready <none> 17d v1.21.2
work4 Ready <none> 10d v1.21.2
work5 Ready <none> 8m36s v1.21.2
master Ready control-plane,master 191d v1.21.2

"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" (missing CNI config)

A node is NotReady. The kubelet logs on the node show Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master1 Ready control-plane 35m v1.25.4 192.168.142.10 <none> CentOS Linux 7 (Core) 5.4.242-1.el7.elrepo.x86_64 docker://20.10.9
k8s-worker1 Ready <none> 30m v1.25.4 192.168.142.11 <none> CentOS Linux 7 (Core) 5.4.242-1.el7.elrepo.x86_64 docker://20.10.9
k8s-worker2 NotReady <none> 21m v1.25.4 192.168.142.12 <none> CentOS Linux 7 (Core) 5.4.242-1.el7.elrepo.x86_64 docker://20.10.9

Inspection shows the node is missing the CNI configuration file /etc/cni/net.d/10-flannel.conflist; after copying the configuration from a healthy node, the status returned to normal.
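
A sketch of the copy step, run from a healthy node (target hostname taken from the node list above):

scp /etc/cni/net.d/10-flannel.conflist k8s-worker2:/etc/cni/net.d/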

api-server Fails to Start

No such file or directory

The API server fails to start; kubectl commands output:

$ kubectl get nodes
The connection to the server kube-apiserver:6443 was refused - did you specify the right host or port?

Check port 6443, which the API Server listens on; it is not listening.

Check the status of the API Server container

$ docker ps -a | grep api
81688b9cbe45 1f38c0b6a9d1 "kube-apiserver --ad…" 14 seconds ago Exited (1) 13 seconds ago k8s_kube-apiserver_kube-apiserver-k8s-uat-master1.kube-system_c8a87f4921623c7bff57f5662ea486cc_25

The container status is Exited; check the container logs

$ docker logs 81688b9cbe45
I1116 07:43:53.775588 1 server.go:558] external host was not specified, using 172.31.30.123
I1116 07:43:53.776035 1 server.go:158] Version: v1.24.7
I1116 07:43:53.776057 1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E1116 07:43:53.776298 1 run.go:74] "command failed" err="open /etc/kubernetes/pki/apiserver.crt: no such file or directory"

The log shows err="open /etc/kubernetes/pki/apiserver.crt: no such file or directory"; check the file /etc/kubernetes/pki/apiserver.crt

$ ls /etc/kubernetes/pki/apiserver.crt
ls: cannot access /etc/kubernetes/pki/apiserver.crt: No such file or directory

Possible fixes:

  • The file is indeed missing. If a backup exists, restore the file from it; if not, refer to the documentation on restoring certificates.
    cp /k8s/backup/pki/apiserver.key /etc/kubernetes/pki/
    cp /k8s/backup/pki/apiserver.crt /etc/kubernetes/pki/
    Restart kubelet and check the API server again; the service starts normally
    systemctl restart kubelet

  • If only the apiserver.key and apiserver.crt certificate files are missing, they can be regenerated with the following command (see the referenced documentation for how the generation works)
    $ kubeadm init phase certs apiserver \
    --apiserver-advertise-address 10.150.0.21 \
    --apiserver-cert-extra-sans 10.96.0.1 \
    --apiserver-cert-extra-sans 34.150.1.1

    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.150.0.21 34.150.1.1]

context deadline exceeded

kube-apiserver fails to start; check the logs of the kube-apiserver container

# kubectl get nodes
The connection to the server kube-apiserver:6443 was refused - did you specify the right host or port?

# docker logs -f f39205f67e71
server.go:558] external host was not specified, using 172.31.29.250
server.go:158] Version: v1.24.7
server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
shared_informer.go:255] Waiting for caches to sync for node_authorizer
plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
run.go:74] "command failed" err="context deadline exceeded"
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...

The key errors in the log are "command failed" err="context deadline exceeded" and [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting.... They show the real problem is the connection to etcd: the TLS handshake fails (transport: authentication handshake failed), so kube-apiserver cannot establish a connection. Because kube-apiserver authenticates to etcd with certificates, check the cluster certificates; they turn out to be expired. The relevant commands and output follow

# cd /etc/kubernetes/pki/

# openssl x509 -text -in apiserver.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2154708302505735210 (0x1de70fc8f570742a)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Nov 1 01:11:01 2022 GMT
Not After : Nov 1 01:24:48 2023 GMT
Subject: CN=kube-apiserver

# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Nov 01, 2023 01:24 UTC <invalid> ca no
apiserver Nov 01, 2023 01:24 UTC <invalid> ca no
apiserver-etcd-client Nov 01, 2023 01:24 UTC <invalid> etcd-ca no
apiserver-kubelet-client Nov 01, 2023 01:24 UTC <invalid> ca no
controller-manager.conf Nov 01, 2023 01:24 UTC <invalid> ca no
etcd-healthcheck-client Nov 01, 2023 01:24 UTC <invalid> etcd-ca no
etcd-peer Nov 01, 2023 01:24 UTC <invalid> etcd-ca no
etcd-server Nov 01, 2023 01:24 UTC <invalid> etcd-ca no
front-proxy-client Nov 01, 2023 01:24 UTC <invalid> front-proxy-ca no
scheduler.conf Nov 01, 2023 01:24 UTC <invalid> ca no

CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Oct 29, 2032 01:11 UTC 8y no
etcd-ca Oct 29, 2032 01:11 UTC 8y no
front-proxy-ca Oct 29, 2032 01:11 UTC 8y no


Back up and renew the cluster certificates with the following commands, then restart kubelet; kube-apiserver returns to normal

# tar -cf /etc/kubernetes.20231115.tar /etc/kubernetes/

# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

# systemctl restart kubelet

# kubectl get nodes
error: You must be logged in to the server (Unauthorized)

# export KUBECONFIG=/etc/kubernetes/admin.conf

# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane 379d v1.24.7
k8s-master2 Ready control-plane 379d v1.24.7
k8s-master3 Ready control-plane 379d v1.24.7
k8s-work1 NotReady <none> 379d v1.24.7
k8s-work2 Ready <none> 379d v1.24.7
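
If restarting kubelet alone does not recycle the static control-plane Pods so that they pick up the renewed certificates, a common workaround (a sketch, not part of the original procedure; assumes the default static Pod manifest directory) is to move the manifests away briefly:

# kubelet stops a static Pod when its manifest disappears and recreates it when it returns
mkdir -p /tmp/k8s-manifests
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests/
sleep 20
mv /tmp/k8s-manifests/*.yaml /etc/kubernetes/manifests/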

kubelet fails to start

failed to parse kubelet flag

kubelet fails to restart; check the kubelet service logs

$ journalctl -f -u kubelet
"command failed" err="failed to parse kubelet flag: unknown flag: --network-plugin"

The cause is a version mismatch. The cluster version is 1.21.2, but the component versions on the problem node show kubelet has become kubelet-1.27.3, presumably because the kubelet package was upgraded (the --network-plugin flag belonged to dockershim and was removed in newer kubelet releases, so the newer kubelet no longer recognizes it).

$ rpm -qa | grep kube
kubernetes-cni-1.2.0-0.x86_64
kubectl-1.25.2-0.x86_64
kubelet-1.27.3-0.x86_64

查看正常节点上的组件版本

$ rpm -qa | grep kube
kubectl-1.21.2-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
kubelet-1.21.2-0.x86_64
kubeadm-1.21.2-0.x86_64

The fix is to restore the component versions on the problem node to match the cluster version.

  1. On a healthy node, download the packages and copy them to the problem node
    yumdownloader kubernetes-cni-0.8.7-0 kubectl-1.21.2-0 kubeadm-1.21.2-0 kubelet-1.21.2-0
  2. On the problem node, remove the mismatched packages and install the versions matching the cluster
    yum remove kubelet-1.27.3-0 kubectl-1.25.2-0.x86_64 kubernetes-cni-1.2.0-0.x86_64

    yum localinstall kubectl-1.21.2-0.x86_64.rpm kubeadm-1.21.2-0.x86_64.rpm kubelet-1.21.2-0.x86_64.rpm kubernetes-cni-0.8.7-0.x86_64.rpm

  3. Restart the service; everything returns to normal
    systemctl daemon-reload
    systemctl restart kubelet

misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

The kubelet service fails to start

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2023-07-13 14:18:33 CST; 2s ago
Docs: https://kubernetes.io/docs/
Process: 28476 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 28476 (code=exited, status=1/FAILURE)

Jul 13 14:18:33 k8s-node-5 systemd[1]: Unit kubelet.service entered failed state.
Jul 13 14:18:33 k8s-node-5 systemd[1]: kubelet.service failed.

Check the kubelet service logs

$ journalctl -f -u kubelet
Jul 13 14:18:43 k8s-node-5 kubelet[28572]: E0713 14:18:43.328660 28572 server.go:292] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

The log shows that kubelet fails to start because kubelet uses the systemd cgroup driver while the docker service uses cgroupfs; the mismatch prevents kubelet from starting.

Change the cgroup driver used by the docker service to systemd

$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

After restarting the docker service, the kubelet service runs normally; a quick verification is sketched after the restart command.

systemctl restart docker
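
To confirm the two drivers now match, the following standard checks can be used (not from the original notes):

docker info --format '{{.CgroupDriver}}'        # docker's driver, should print systemd
grep cgroupDriver /var/lib/kubelet/config.yaml  # kubelet's driver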

misconfiguration

The kubelet service fails to start; check the logs

# journalctl -f -u kubelet
kubelet[22771]: E0908 14:10:08.325316 22771 server.go:292] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"cgroupfs\" is different from docker cgroup driver: \"systemd\""

The log shows the cause is again a cgroup driver mismatch between kubelet and docker: this time kubelet uses cgroupfs while docker uses systemd

Follow these steps to switch kubelet to the systemd driver (a compact scripted version is sketched after the list)

  1. Edit cgroupDriver in the kubelet configuration /var/lib/kubelet/config.yaml
    /var/lib/kubelet/config.yaml
    cgroupDriver: systemd
  2. Restart the kubelet service
    systemctl restart kubelet
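
A compact version of those two steps, as a sketch (assumes a cgroupDriver line already exists in the file):

# switch kubelet to the systemd cgroup driver and restart it
sed -i 's/^cgroupDriver:.*/cgroupDriver: systemd/' /var/lib/kubelet/config.yaml
systemctl restart kubelet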

Other control-plane failures

One abnormal master node in an HA cluster makes all application services in the cluster unavailable

This HA cluster was installed following the referenced document. Per [创建高可用控制平面集群](https://csms.tech/202209121102/#创建高可用控制平面的集群), a load balancer should be created for `kube-apiserver`. In the environment involved in this incident no load balancer was created; instead, the domain-to-IP mapping for `kube-apiserver` was written into each host's hosts file (`/etc/hosts`), as shown below
/etc/hosts
172.31.26.116 k8s-master1 kube-api-svr-c1.mydomain.com
172.31.19.164 k8s-master2 kube-api-svr-c1.mydomain.com
172.31.21.3 k8s-master3 kube-api-svr-c1.mydomain.com
172.31.16.124 k8s-worker1
172.31.22.159 k8s-worker2

Every node in the cluster carries this content, so each node resolves the kube-apiserver domain (kube-api-svr-c1.mydomain.com) through /etc/hosts.

Under normal conditions, every node resolves the kube-apiserver name to 172.31.26.116, the first entry for kube-api-svr-c1.mydomain.com (the hosts-file resolver uses the first matching line).

In this incident, k8s-master1 saturated its CPU and memory and went down, so it could no longer serve kube-apiserver. At the same time, every application in the cluster began responding abnormally with 503 Service Temporarily Unavailable.

In theory this is a highly available Kubernetes cluster, and the failure of a single master node should not affect normal cluster operation; yet the failure of just the k8s-master1 node took the whole cluster down.

Reproducing and analyzing the problem showed that after k8s-master1 failed, kubectl stopped working on the other master nodes, because per each node's /etc/hosts the kube-apiserver name resolved to the failed k8s-master1. After editing /etc/hosts on k8s-master2 to put 172.31.19.164 k8s-master2 kube-api-svr-c1.mydomain.com on the first line, the kube-apiserver name resolves to k8s-master2, kubectl connects to k8s-master2, and the cluster state can be queried normally

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane 224d v1.24.7
k8s-master2 Ready control-plane 224d v1.24.7
k8s-master3 NotReady control-plane 224d v1.24.7
k8s-worker1 NotReady <none> 224d v1.24.7
k8s-worker2 NotReady <none> 224d v1.24.7

As the output shows, every node except k8s-master2 is abnormal, because all nodes other than k8s-master2 still resolve the apiserver name to the failed k8s-master1. After editing /etc/hosts on every node so that kube-api-svr-c1.mydomain.com resolves to a master other than k8s-master1, the cluster recovers.

The root cause is that the environment never actually achieved high availability: an HA topology was deployed, but the apiserver domain name was not load balanced, so all apiserver requests went to the failed node. The fundamental fix is to put a load balancer in front of the apiserver request domain to achieve real high availability; resolving a domain to multiple IPs via /etc/hosts does not provide it. A sketch of such a load balancer follows.
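
As a sketch of that fix (not from the original environment), an HAProxy instance in TCP mode can balance the three masters; kube-api-svr-c1.mydomain.com would then resolve to the load balancer. The backend IPs below reuse this document's example addresses:

# /etc/haproxy/haproxy.cfg (sketch)
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server k8s-master1 172.31.26.116:6443 check   # health checks drop failed masters
    server k8s-master2 172.31.19.164:6443 check
    server k8s-master3 172.31.21.3:6443 check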

No Pods created after a deployment is applied

The deployment is created successfully, but checking for its Pods shows none were created. The deployment description is shown below; the Events section is empty.

# kubectl describe deployment ops-admin -n ops
Name: ops-admin
Namespace: ops
Labels: env=prod
project=ops-admin
Selector: env=prod,project=ops-admin
Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
Pod Template:
Labels: env=prod
project=ops-admin
...
OldReplicaSets: <none>
NewReplicaSet: <none>
Events: <none>

Check the replicaset; no related replicaset resource is found

# kubectl describe replicaset -l env=prod,project=ops-admin -n ops
No resources found in ops namespace.

Check the event list; it is empty as well

# kubectl get events -n ops
No resources found in ops namespace.

Continuing, check the status of the control-plane components: controller-manager and scheduler are Unhealthy (this particular status can also be caused by configuration; see the section "controller-manager and scheduler component status Unhealthy" below)

# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}

Next, check the logs of the Pods for controller-manager and scheduler

# kubectl logs -n kube-system -l component=kube-controller-manager
E0925 04:44:40.190504 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:44:43.861798 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:44:48.017384 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:44:52.260888 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:44:54.825907 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:44:58.502754 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:45:02.539845 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:45:06.623081 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:45:09.166492 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized
E0925 04:45:12.507228 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Unauthorized


# kubectl logs -n kube-system -l component=kube-scheduler
E0925 04:45:30.669008 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Unauthorized
E0925 04:45:30.986953 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Unauthorized
E0925 04:45:31.297694 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Unauthorized
E0925 04:45:36.275267 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Unauthorized
E0925 04:45:36.839025 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
E0925 04:45:37.365357 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Unauthorized
E0925 04:45:45.008682 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Unauthorized
E0925 04:45:52.514736 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Unauthorized
E0925 04:45:52.698173 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Unauthorized
E0925 04:45:54.558346 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Unauthorized

The logs show an authentication problem (Unauthorized); in this example it was caused by expired cluster certificates.

controller-manager and scheduler component status Unhealthy

Environment information

  • Kubernetes 1.21

Checking the cluster component status shows controller-manager and scheduler as Unhealthy, while the cluster otherwise functions normally

# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}

This happens because the default port is set to 0 in /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-scheduler.yaml. Comment out - --port=0 in those manifests (after the files are modified, the configuration is reloaded automatically, no restart needed; see the sketch below) and check again after about a minute; the status is healthy
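
The edit can also be scripted; a sketch using GNU sed against the default manifest paths:

# comment out "- --port=0" in both manifests; kubelet reloads static Pods automatically
sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml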

# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}

Kubernetes cluster certificate issues

Certificates used by Kubernetes

A Kubernetes cluster uses several kinds of certificates to secure communication between its components. The main ones are listed below:

The default certificate directory is /etc/kubernetes/pki; the relative file paths below are based on this default directory

Kubernetes cluster certificates

  • Kubernetes CA certificate

    The Kubernetes CA certificate is used to sign the other certificates in the cluster, such as the kube-apiserver server certificate, the kubelet certificates, the controller-manager certificate, and so on.

    • ca.crt — the CA's public certificate.
    • ca.key — the CA's private key.

    When a Kubernetes cluster is initialized with kubeadm, the CA certificate is normally generated automatically.

  • kube-apiserver component certificate

    Encrypts the HTTPS endpoint of the Kubernetes API server.

    • apiserver.crt
    • apiserver.key

    The certificate's SANs include the API server's IP addresses, DNS names, and so on. To update the SAN values, use the command kubeadm init phase certs apiserver (see the example earlier in this document).

    To renew only the kube-apiserver certificate, see the command kubeadm certs renew.

  • Kube Proxy, Controller Manager, and Scheduler do not authenticate to the API server with TLS by default; they use Kubernetes Service Accounts instead.

  • kubelet component certificates

    Encrypt communication between the kubelet and the API server. The kubelet uses these certificates to talk to the API server securely, covering node status reporting, Pod creation and management, and more. When a kubelet first joins the cluster, it performs TLS authentication with these certificates; the API server uses them to verify the node's identity and grants permissions accordingly.

    Kubernetes supports automatic certificate rotation, meaning the kubelet automatically requests a new certificate as the current one approaches expiry. The process requires no manual intervention and keeps long-running clusters secure.

    Typically the node's /var/lib/kubelet/pki/kubelet-client-current.pem and the corresponding key file.

  • Service Account Keys

    These keys are normally generated automatically by kubeadm or a similar tool during cluster initialization.

    • sa.key — used to sign new Service Account JWT tokens. Normally generated and managed securely by the cluster administrator or automation (such as kubeadm).
    • sa.pub (a key pair, not a TLS certificate) — used by the Kubernetes API server to verify token validity; paired with the private key.

    In Kubernetes, the Service Account keys are a public/private key pair used to sign and verify Service Account tokens; those tokens let Pods authenticate and be authorized against the API server as a specific Service Account.

  • apiserver-kubelet-client

    This certificate/key pair secures requests from the API server to the kubelet on each node, for operations such as starting Pods and fetching node status.

    • apiserver-kubelet-client.crt
    • apiserver-kubelet-client.key

etcd cluster certificates

  • etcd CA certificate

    The etcd CA certificate is used to sign the etcd-related certificates, such as the etcd server certificate, etcd client certificates, and the peer certificates.

    • etcd/ca.crt
    • etcd/ca.key
  • apiserver-etcd-client

    This certificate/key pair encrypts requests from the Kubernetes API server to the etcd database

    • apiserver-etcd-client.crt
    • apiserver-etcd-client.key
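
To inspect any of these certificates, a generic openssl one-liner works (standard openssl usage, not specific to this environment):

# print the subject, issuer, and validity window of a certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -subject -issuer -dates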

Found multiple CRI endpoints on the host

Resetting the cluster certificates with kubeadm fails; the exact operation is as follows

# sudo kubeadm init phase certs all
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

The host has multiple usable CRI endpoints, so you must configure which one to use, normally by setting the criSocket field in a kubeadm configuration file. If the cluster uses containerd as its container runtime, the following configuration resolves the error

  1. Create a kubeadm configuration file (e.g. kubeadm-config.yaml) if one does not exist yet, and set the criSocket field in it. For example, with containerd the file looks like this:
    kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    nodeRegistration:
      criSocket: /var/run/containerd/containerd.sock

  2. Run the kubeadm command
    # sudo kubeadm init phase certs all --config=./kubeadm-config.yaml 
    W1208 14:19:51.376721 13737 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
    W1208 14:19:51.376972 13737 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
    I1208 14:19:51.517814 13737 version.go:255] remote version is much newer: v1.28.4; falling back to: stable-1.24
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Using existing ca certificate authority
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [fm-k8s-c1-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.26.116]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [fm-k8s-c1-master1 localhost] and IPs [172.31.26.116 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [fm-k8s-c1-master1 localhost] and IPs [172.31.26.116 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Using the existing "sa" key

kubeadm errors when renewing the apiserver certificate

Renewing the kube-apiserver certificate as follows fails

# kubeadm init phase certs apiserver --apiserver-advertise-address --apiserver-cert-extra-sans kubernetes --apiserver-cert-extra-sans kubernetes.default --apiserver-cert-extra-sans kubernetes.default.svc  --config=/home/username/kubeadm-config.yaml
can not mix '--config' with arguments [apiserver-advertise-address apiserver-cert-extra-sans]
To see the stack trace of this error execute with --v=5 or higher

Per the error message, --config cannot be mixed with the apiserver-advertise-address and apiserver-cert-extra-sans arguments

In this example, --config was only being used to resolve Found multiple CRI endpoints on the host. The simplest workaround is to make sure only one usable CRI endpoint exists on the host

If the kubeadm init command is given a configuration file via --config, all options must be written into that file. See the reference for the available configuration items; different Kubernetes versions require different versions of this configuration API, so consult the official documentation. A sketch of the combined configuration file follows.
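
As a sketch of moving those flags into the configuration file instead (field names per the kubeadm v1beta2 API used earlier in this document; the advertise address reuses the earlier example value):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.150.0.21
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc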

Broken etcd cluster certificates make etcd and the Kubernetes cluster unavailable

A Kubernetes cluster with a stacked etcd topology is unreachable, and the cluster state cannot be fetched

# kubectl get nodes
The connection to the server kube-apiserver.uat.148962587001:6443 was refused - did you specify the right host or port?

Check container status through the CRI (docker in this example); the kube-apiserver container shows Exited

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf4bf9f2c4a3 9efa6dff568f "kube-scheduler --au…" About a minute ago Exited (1) About a minute ago k8s_kube-scheduler_kube-scheduler-k8s-master3_kube-system_a3a06a8f4bb3d9a7a753421061337314_808
27e06d290dbb 9e2bfc195de6 "kube-controller-man…" 2 minutes ago Exited (1) 2 minutes ago k8s_kube-controller-manager_kube-controller-manager-k8s-master3_kube-system_1d62164acfdda6946d09aa8255b4b191_808
b0aa1a2e24ee c7cbaca6e63b "kube-apiserver --ad…" 3 minutes ago Exited (1) 3 minutes ago k8s_kube-apiserver_kube-apiserver-k8s-master3_kube-system_fd413ffd28d3bcce4b1330c38307ebe2_791

Check the kube-apiserver container logs. The key error is failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: internal error". Reconnecting..., which shows that kube-apiserver cannot connect to etcd because of a TLS certificate problem

# docker logs b0aa1a2e24ee
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: internal error". Reconnecting...
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-08T07:21:21Z is after 2023-12-06T09:58:58Z, verifying certificate SN=1505375741374655454, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-08T07:21:21Z is after 2023-12-06T09:58:58Z]"
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: internal error". Reconnecting...
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-08T07:21:22Z is after 2023-12-06T09:58:58Z, verifying certificate SN=1505375741374655454, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-08T07:21:22Z is after 2023-12-06T09:58:58Z]"

Following the logs, first determine the state of etcd in the cluster. etcd exposes a health-check endpoint, which can be queried to check cluster health. The result shows the cluster is unhealthy because no leader has been elected (an etcdctl variant of the check is sketched after the output)

# curl 127.0.0.1:2381/health
{"health":"false","reason":"RAFT NO LEADER"}

At this point it is clear the etcd cluster is broken; etcd is the configuration core of a Kubernetes cluster, and its failure makes the whole cluster unavailable. To find the cause, inspect the logs of the etcd cluster members for useful information

# docker logs etcd
{"logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"651e0623614c0f76 is starting a new election at term 13"}
{"logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"651e0623614c0f76 became pre-candidate at term 13"}
{"logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"651e0623614c0f76 received MsgPreVoteResp from 651e0623614c0f76 at term 13"}
{"logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"651e0623614c0f76 [logterm: 13, index: 216516485] sent MsgPreVote request to 71e91a3cb0d95be8 at term 13"}
{"logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"651e0623614c0f76 [logterm: 13, index: 216516485] sent MsgPreVote request to d0ca64fcbfb25318 at term 13"}
{"caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d0ca64fcbfb25318","rtt":"0s","error":"x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd-ca\")"}
{"caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"d0ca64fcbfb25318","rtt":"0s","error":"x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd-ca\")"}
{"caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"71e91a3cb0d95be8","rtt":"0s","error":"x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd-ca\")"}
{"caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"71e91a3cb0d95be8","rtt":"0s","error":"x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd-ca\")"}

The etcd container logs report a certificate problem: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"etcd-ca\"). This indicates the etcd peers run into trouble while trying to verify each other's certificates, which typically happens when the nodes use different CA certificates or are misconfigured.

Normally, in an etcd cluster started with --client-cert-auth=true and --peer-client-cert-auth=true, peers communicate over HTTPS and must verify each other's client certificates, which requires the same CA certificate for signature validation on all members. Per the log hints, the problem should be in the etcd cluster's certificates.

Following this lead, first check whether the etcd CA certificate is identical across the nodes; it turns out the 3 etcd nodes carry different CA certificates

[root@k8s-master1 ~]# md5sum /etc/kubernetes/pki/etcd/ca.crt 
65fec4a08b77132febabfef3ca4eaafa /etc/kubernetes/pki/etcd/ca.crt

[root@k8s-master2 ~]# md5sum /etc/kubernetes/pki/etcd/ca.crt
1b6330a0acacd09dabeff2ff0c97451f /etc/kubernetes/pki/etcd/ca.crt

[root@k8s-master3 ~]# md5sum /etc/kubernetes/pki/etcd/ca.crt
3f398e445a4844e6c2a3fee6f24203aa /etc/kubernetes/pki/etcd/ca.crt

In a correctly configured etcd cluster, all nodes should use certificates issued by the same root CA so that they can verify and trust one another. Inter-node communication (including Raft protocol traffic and snapshot transfers) is TLS-encrypted, and each node must be able to validate the other nodes' certificates; if every node carries a different CA certificate, peer verification fails and communication breaks.

To fix the inconsistent CA certificates across the etcd nodes, follow these steps

  1. Back up the cluster certificate directory /etc/kubernetes/pki/. Run on all master nodes
    tar -czf /ops/kubernetes_backup/pki.tar /etc/kubernetes/pki/
  2. Delete all certificates on the etcd nodes. Run on all master nodes
    rm -rf /etc/kubernetes/pki/etcd/*
  3. Regenerate the etcd CA certificate with kubeadm. Run on one master node only
    kubeadm init phase certs etcd-ca
  4. Copy the regenerated etcd CA pair (certificate /etc/kubernetes/pki/etcd/ca.crt and private key /etc/kubernetes/pki/etcd/ca.key) to the same path (/etc/kubernetes/pki/etcd/) on the other 2 etcd nodes
  5. Regenerate the remaining etcd certificates. Run on all master nodes; this signs the other certificates the etcd cluster needs with the CA regenerated above
    kubeadm init phase certs etcd-healthcheck-client
    kubeadm init phase certs etcd-peer
    kubeadm init phase certs etcd-server
  6. Recheck the status of the etcd cluster and the Kubernetes cluster
    # curl 127.0.0.1:2381/health
    {"health":"true","reason":""}

    # kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8s-master1 Ready control-plane 369d v1.24.7
    k8s-master2 Ready control-plane 369d v1.24.7
    k8s-master3 Ready control-plane 369d v1.24.7
    k8s-worker1 Ready <none> 369d v1.24.7
    k8s-worker2 Ready <none> 369d v1.24.7

After completing the steps above, checking the control-plane Pods shows kube-apiserver, kube-scheduler, and kube-controller-manager on the master3 node stuck in CrashLoopBackOff

# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d4b75cb6d-57jhk 0/1 Pending 0 2d23h
coredns-6d4b75cb6d-w6wgg 0/1 Pending 0 2d23h
etcd-k8s-master1 1/1 Running 1 (68d ago) 2d22h
etcd-k8s-master2 1/1 Running 0 2d23h
etcd-k8s-master3 1/1 Running 0 2d23h
kube-apiserver-k8s-master1 1/1 Running 792 (12m ago) 2d23h
kube-apiserver-k8s-master2 1/1 Running 4 (68d ago) 2d23h
kube-apiserver-k8s-master3 0/1 CrashLoopBackOff 807 (18s ago) 2d23h
kube-controller-manager-k8s-master1 1/1 Running 825 (14m ago) 2d23h
kube-controller-manager-k8s-master2 1/1 Running 0 2d23h
kube-controller-manager-k8s-master3 0/1 CrashLoopBackOff 823 (4m35s ago) 2d23h
kube-scheduler-k8s-master3 0/1 CrashLoopBackOff 823 (3m53s ago) 2d23h

The kube-apiserver-k8s-master3 logs show it still cannot connect to etcd because of certificate problems; the likely cause is the client certificates these components use to connect to etcd

# kubectl logs -n kube-system kube-apiserver-k8s-master3
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...
clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate". Reconnecting...

master3节点上执行以下命令更新 kube-apiserver 连接 etcd 时使用的客户端证书 /etc/kubernetes/pki/apiserver-etcd-client.crt,需要先删除 /etc/kubernetes/pki/apiserver-etcd-client.crt/etc/kubernetes/pki/apiserver-etcd-client.key,否则更新时会报错:error execution phase certs/apiserver-etcd-client: [certs] certificate apiserver-etcd-client not signed by CA certificate etcd/ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "etcd-ca")

# kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadm-config.yaml 
W1211 15:06:57.911002 1401 common.go:84] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1211 15:06:57.911751 1401 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
I1211 15:06:58.238799 1401 version.go:255] remote version is much newer: v1.28.4; falling back to: stable-1.24
[certs] Generating "apiserver-etcd-client" certificate and key

The kube-controller-manager-k8s-master3 and kube-scheduler-k8s-master3 logs show their listen ports are already bound; kill the processes occupying the ports and retry (a generic way to locate them follows the logs)

# kubectl logs -n kube-system kube-controller-manager-k8s-master3
I1211 07:04:37.643782 1 serving.go:348] Generated self-signed cert in-memory
failed to create listener: failed to listen on 0.0.0.0:10257: listen tcp 0.0.0.0:10257: bind: address already in use

# kubectl logs -n kube-system kube-scheduler-k8s-master3
I1211 07:10:21.304329 1 serving.go:348] Generated self-signed cert in-memory
E1211 07:10:21.304587 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:10259: listen tcp 0.0.0.0:10259: bind: address already in use"
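
A generic way to find and stop whatever holds those ports (standard ss/kill usage, not from the original notes; <PID> is a placeholder for the PID that ss reports):

# identify the processes bound to the controller-manager and scheduler ports
ss -lntp | grep -E ':10257|:10259'
kill <PID>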

kube-apiserver reports certificate-expired errors after the certificates are renewed

After the cluster certificates were renewed, kube-apiserver misbehaves; its logs contain certificate-expired errors

authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z, verifying certificate SN=2750116196247444292, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z]"
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z, verifying certificate SN=2750116196247444292, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z]"
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z, verifying certificate SN=2750116196247444292, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z]"
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z, verifying certificate SN=2750116196247444292, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z]"
authentication.go:63] "Unable to authenticate the request" err="[x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z, verifying certificate SN=2750116196247444292, SKID=, AKID=08:39:2B:D0:14:00:F4:7F:3F:58:26:36:32:BA:F8:0E:0E:B4:D4:83 failed: x509: certificate has expired or is not yet valid: current time 2023-12-11T07:35:22Z is after 2023-12-06T09:50:35Z]"
authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has been invalidated]"
authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has been invalidated]"

This is likely because, after the renewal, other nodes are still using the old certificates and tokens to talk to kube-apiserver; restarting the kubelet service on all nodes resolves it (a sketch follows).
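
A sketch of restarting kubelet on every node (assumes SSH access from the operator's host; the hostnames are the ones from the outputs above):

for node in k8s-master1 k8s-master2 k8s-master3 k8s-work1 k8s-work2; do
    ssh "$node" systemctl restart kubelet
done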

Ingress access errors

503 Service Temporarily Unavailable

After the Deployment, Service, and Ingress are deployed, accessing the domain configured in the Ingress returns 503 Service Temporarily Unavailable

Troubleshooting steps

Check the Ingress-Nginx Pod logs and filter for the domain; the response code is 503

52.77.198.154 - - [15/Dec/2022:02:10:59 +0000] "GET /graph HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" 507 0.000 [prometheus-prometheus-service-8080] [] - - - - 00b07fe234401054153fdbd0ffafb158

Check the Service the Ingress points at; the output below shows the backend Service is prometheus-service on port 8080

$ kubectl get ingress -n prometheus -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
prometheus-ui nginx prometheus.example.com 172.31.23.72,172.31.27.193 80 19h

$ kubectl describe ingress prometheus-ui -n prometheus
Name: prometheus-ui
Labels: <none>
Namespace: prometheus
Address: 172.31.23.72,172.31.27.193
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
prometheus.example.com
/ prometheus-service:8080 ()
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":["172.31.23.72","172.31.27.193"],"port":80,"protocol":"HTTP","serviceName":"prometheus:prometheus-service","ingressName":"pr...
Events: <none>

Check the Service

$ kubectl get services -n prometheus -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
prometheus-service ClusterIP 10.99.75.232 <none> 8090/TCP 19h app=prometheus-server

$ kubectl describe service -n prometheus prometheus-service
Name: prometheus-service
Namespace: prometheus
Labels: <none>
Annotations: <none>
Selector: app=prometheus-server
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.75.232
IPs: 10.99.75.232
Port: prometheus-port 8090/TCP
TargetPort: 9090/TCP
Endpoints: 10.244.3.95:9090
Session Affinity: None
Events: <none>

The output shows the Service actually exposes Port: prometheus-port 8090/TCP, while the Ingress is configured with service port 8080. Correct the service port in the Ingress configuration; access then works (a corrected manifest is sketched below).
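
A sketch of the corrected Ingress (networking.k8s.io/v1 schema; the names, namespace, and host are taken from the outputs above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ui
  namespace: prometheus
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-service
            port:
              number: 8090   # must match the Service port (was incorrectly 8080)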

Other errors

invalid Host header

Running the following command on a master node fails

# kubectl exec -it -n kube-system coredns-6d4b75cb6d-kdlqg -- sh
error: Internal error occurred: error executing command in container: http: invalid Host header

This error is mainly caused by a version mismatch between Kubernetes and cri-docker [3]

Reference links

Related references

Footnotes