Kubernetes Cluster Deployment Guide

OS: CentOS 7
Node plan

  • Master: 192.168.0.113 (hostname: master)
  • Node1: 192.168.0.114 (hostname: node01)
  • Node2: 192.168.0.115 (hostname: node02)

I. Basic environment setup (run on all nodes)

# Disable the firewall & SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
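kubelet refuses to start while swap is active, so it is worth confirming the two commands above worked. A quick check, reading /proc/swaps (which lists active swap devices after its header line):

```shell
# Count active swap devices; after swapoff -a this should print 0.
active_swaps=$(awk 'NR>1' /proc/swaps | wc -l)
echo "active swap devices: ${active_swaps}"
```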

# Load the required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF
modprobe br_netfilter && modprobe overlay

# Configure kernel parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system

2. Cluster preparation (run on all nodes)

# Set the hostname to match the node plan: master, node01, or node02
hostnamectl set-hostname master   # on the master; use node01/node02 on the workers
bash   # start a new shell so the prompt reflects the new hostname

# Configure local name resolution between the cluster nodes
echo "192.168.0.113 master
192.168.0.114 node01
192.168.0.115 node02" >> /etc/hosts
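A plain >> append adds duplicate lines every time it is re-run. A guarded variant is safer on nodes you may re-provision; this is a sketch that writes to a temporary file for demonstration (on the real nodes, point HOSTS_FILE at /etc/hosts; the IPs come from the node plan above):

```shell
# Idempotent append: only add an entry if it is not already present verbatim.
HOSTS_FILE=$(mktemp)   # use /etc/hosts on the real nodes
for entry in "192.168.0.113 master" "192.168.0.114 node01" "192.168.0.115 node02"; do
  grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Running the loop a second time adds nothing, because grep -qxF matches each whole line exactly.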

3. Enable bridge filtering

The bridge settings written to /etc/sysctl.d/k8s.conf above only take effect once the br_netfilter module is loaded; it is what allows packets crossing a bridge device to pass through the iptables firewall.

modprobe br_netfilter && lsmod | grep br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf   # reload the file so the settings take effect

II. Install Kubernetes and containerd (run on all nodes)

1. Configure the yum repository and install Kubernetes and containerd

# Configure the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Clear the yum cache
yum clean all

# Install the Kubernetes components
sudo yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0

# Enable kubelet at boot (it will keep restarting until the cluster is initialized)
systemctl enable kubelet

# Have kubelet use the systemd cgroup driver to limit process resource usage
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

# Install containerd
yum install -y containerd

# Start containerd and enable it at boot
systemctl enable containerd --now

2. Configure registry mirrors (run on all nodes)

cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://abde64ba3c6d4242b9d12854789018c6.mirror.swr.myhuaweicloud.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
      endpoint = ["https://registry.aliyuncs.com/google_containers"]
EOF

# Restart containerd so the new configuration takes effect
systemctl restart containerd

III. Initialize the Kubernetes cluster (master node only)

1. List the cluster images

# On the master node, list the images the cluster needs
kubeadm config images list

# Output: the component images required for cluster initialization
registry.k8s.io/kube-apiserver:v1.28.0
registry.k8s.io/kube-controller-manager:v1.28.0
registry.k8s.io/kube-scheduler:v1.28.0
registry.k8s.io/kube-proxy:v1.28.0
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

2. Generate the cluster init configuration file on the master node

kubeadm config print init-defaults > kubeadm-config.yaml

Edit the newly generated kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.113   # change to your master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master  # change to the node's name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # changed to the Aliyun mirror repository
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # added: pod CIDR; must not overlap the node network and must match the CNI plugin's pool
  serviceSubnet: 10.96.0.0/12
scheduler: {}
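The edits called out in the comments above can also be applied non-interactively with sed. A sketch against a minimal stand-in file (on the master you would run the same sed expressions against kubeadm-config.yaml; the field names come from the v1beta3 defaults shown above, and the values from this guide's node plan):

```shell
# Stand-in for the generated file; on the master, skip this and edit
# kubeadm-config.yaml directly.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
nodeRegistration:
  name: node
imageRepository: registry.k8s.io
EOF

# Patch the master IP, node name, and image repository in place.
sed -i \
  -e 's/advertiseAddress: .*/advertiseAddress: 192.168.0.113/' \
  -e 's/^  name: node$/  name: master/' \
  -e 's#^imageRepository: .*#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#' \
  "$cfg"
cat "$cfg"
```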

3. Initialize the cluster with the configuration file

# Initialize the cluster
sudo kubeadm init --config kubeadm-config.yaml

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# After initialization succeeds, kubeadm prints a join command like this one:
kubeadm join 192.168.0.113:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
# Copy the command from your own output and run it on each worker node
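The token in the join command expires after 24 hours; on the master, kubeadm token create --print-join-command prints a fresh one. The --discovery-token-ca-cert-hash value is simply the SHA-256 of the cluster CA's public key, recomputable from /etc/kubernetes/pki/ca.crt. A self-contained sketch of that recipe using a throwaway certificate (on the master, point the second command at the real ca.crt instead):

```shell
# Generate a throwaway CA cert purely to demonstrate the hash recipe;
# on the master, replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# SHA-256 of the DER-encoded public key = the discovery-token-ca-cert-hash
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print "sha256:" $NF}'
```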

# Check node status (may take 1-2 minutes)
kubectl get nodes

# The output should list the newly joined nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   10m   v1.28.0
node01   Ready    <none>          1m    v1.28.0
node02   Ready    <none>          1m    v1.28.0

IV. Install the network plugin (run on the master node)

# Run on the master node
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
# Create the Calico network
kubectl apply -f calico.yaml

V. Verify the cluster

kubectl get nodes -o wide
kubectl get pods -A -o wide
kubectl cluster-info

VI. Deploy Nginx on the master node

# Allow the master to schedule Pods (remove the control-plane taint;
# the node name must match the name shown by kubectl get nodes)
kubectl taint nodes master node-role.kubernetes.io/control-plane:NoSchedule-

# Deploy Nginx
kubectl create deployment nginx-master --image=nginx:alpine
kubectl expose deployment nginx-master --port=80 --type=NodePort

# Get the access port
kubectl get svc nginx-master  
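In the PORT(S) column the service shows as 80:3XXXX/TCP, where the second number is the NodePort (30000-32767 range) reachable on every node. A sketch of pulling that number out with awk; the sample line and the 31745 value are made up for illustration:

```shell
# Sample line from `kubectl get svc nginx-master` (values are illustrative):
sample='nginx-master   NodePort   10.102.11.7   <none>   80:31745/TCP   1m'

# Field 5 is "80:31745/TCP"; the middle token is the NodePort.
node_port=$(echo "$sample" | awk '{split($5, a, "[:/]"); print a[2]}')
echo "Open http://<node-ip>:${node_port} in a browser"
```

On a live cluster the same value is available directly with kubectl get svc nginx-master -o jsonpath='{.spec.ports[0].nodePort}'.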

# Visit the Nginx service on a worker node with a browser (e.g. Chrome) to confirm the deployment succeeded
