[K8S] poststarthook/rbac/bootstrap-roles failed: not finished

1. Querying CSR status returns no results

[root@K8S1 work]# kubectl get csr
No resources found

-- Export the service logs for inspection
journalctl -u kube-apiserver.service --no-pager > 1.log
journalctl -u kubelet.service --no-pager > 2.log
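A small convenience (not part of the original steps): rather than opening the full exports, the two log files can first be narrowed to the lines that matter for this failure:

```shell
# Show only the lines related to the two failures we care about:
# the rejected kubelet CSR and the unfinished bootstrap-roles hook
grep -hE 'Unauthorized|bootstrap-roles' 1.log 2.log
```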

vi 2.log 
Jul 20 17:26:39 K8S1 kubelet[59786]: I0720 17:26:39.360809   59786 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jul 20 17:26:39 K8S1 kubelet[59786]: E0720 17:26:39.364491   59786 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Unauthorized
Jul 20 17:26:40 K8S1 kubelet[59786]: E0720 17:26:40.167353   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:45 K8S1 kubelet[59786]: E0720 17:26:45.169147   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:50 K8S1 kubelet[59786]: E0720 17:26:50.170266   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:55 K8S1 kubelet[59786]: E0720 17:26:55.171148   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

The kubelet's certificate signing request is rejected with Unauthorized, which means the bootstrap token it presents is not accepted by the apiserver; the "cni plugin not initialized" errors are a downstream symptom.

vi 1.log
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.687771   49383 healthz.go:257] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.729020   49383 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.736063   49383 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.736091   49383 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.783838   49383 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/rbac/bootstrap-roles failed: not finished



2. Root cause analysis

vi bootstrap.secret.yaml

The Unauthorized error above means the bootstrap token defined in bootstrap.secret.yaml is not valid. The fix is to generate a fresh token-id and token-secret at random and write them into bootstrap.secret.yaml:

# token-id must match [a-z0-9]{6}; token-secret must match [a-z0-9]{16}
TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)

echo $TOKEN_ID $TOKEN_SECRET
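Bootstrap tokens must have the form `<token-id>.<token-secret>`, with token-id matching [a-z0-9]{6} and token-secret matching [a-z0-9]{16}; a malformed pair is rejected by the apiserver and surfaces exactly as the Unauthorized error seen in the kubelet log. A quick self-contained sanity check (an optional addition that regenerates the pair the same way as above):

```shell
# Regenerate the pair and verify it matches the required bootstrap
# token format before writing it into bootstrap.secret.yaml.
TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)

echo "$TOKEN_ID"     | grep -Eq '^[a-z0-9]{6}$'  && echo "token-id ok"
echo "$TOKEN_SECRET" | grep -Eq '^[a-z0-9]{16}$' && echo "token-secret ok"
```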

3. Recreate the bootstrap secret

kubectl delete -f bootstrap.secret.yaml

kubectl create -f bootstrap.secret.yaml
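The post does not show the contents of bootstrap.secret.yaml; for reference, such a file conventionally defines a Secret of type bootstrap.kubernetes.io/token in the kube-system namespace. A minimal sketch following the standard bootstrap-token format, with placeholders where the generated values go:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # the name must be bootstrap-token-<token-id>
  name: bootstrap-token-<token-id>
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "<token-id>"
  token-secret: "<token-secret>"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
```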

4. Re-check the cluster state

[root@K8S1 work]# kubectl get csr 
NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-lsj5m   0s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bab759   <none>              Approved,Issued

The kubelet's CSR is now created, approved, and issued, confirming the new bootstrap token is accepted.
