Kubernetes

1. Introduction

2. Setting up a Kubernetes cluster

2.1 Prepare a virtual machine

2.1.1 Clone centos-7-1908

  • Clone centos-7-1908 as k1
  • Set CPU and memory
    • CPU - 2
    • Memory - 2G

2.1.2 Set the IP address

./ip-static
ip: 192.168.64.191

2.1.3 Upload files

  • Upload the two files easzup and images.gz to /root/
  • Upload the ansible folder to /etc/


2.2 Prepare the offline installation environment

cd ~/

# Download the kubeasz automated install script easzup; skip this step if the file was already uploaded
export release=2.2.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup

# Start here
# Make the easzup file executable
chmod +x ./easzup

# Download the offline install files and install/configure Docker
# Offline files that already exist are not downloaded again
# The offline install files are stored under /etc/ansible
./easzup -D

# Start the temporary container used by the kubeasz tool
./easzup -S

# Enter the container
docker exec -it kubeasz sh

# The following commands run inside the container
# Configure offline installation
cd /etc/ansible
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/chrony/defaults/main.yml
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/ex-lb/defaults/main.yml
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/kube-node/defaults/main.yml
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/prepare/defaults/main.yml
exit

# Install python; skip this step if it is already installed
yum install python -y


2.3 Import images

docker load -i images.gz

2.4 Prepare three servers

  • Take a snapshot of k1
  • Clone k1 to create two more servers: k2 and k3
    Set the IPs of k2 and k3:
    • k2: 192.168.64.192
    • k3: 192.168.64.193

2.4.1 Continue configuring the install environment on the master (k1)

# Install pip; skip this step if it is already installed
wget -O /etc/yum.repos.d/epel-7.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum install git python-pip -y

# Install ansible with pip (use the Aliyun mirror if downloads are slow); skip if already installed
pip install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
pip install ansible==2.6.12 netaddr==0.7.19 -i https://mirrors.aliyun.com/pypi/simple/

# Start here
# On the ansible control node, set up passwordless SSH login to the other node servers
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

# Copy the public key to all nodes, including the master itself
# Answer yes and enter the root password when prompted
ssh-copy-id 192.168.64.191
ssh-copy-id 192.168.64.192
ssh-copy-id 192.168.64.193
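
A quick way to confirm that passwordless login works (running hostname remotely here is just an arbitrary test command):

# should print the remote hostname without asking for a password
ssh 192.168.64.192 hostname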




2.4.2 Configure the cluster servers' IPs

cd /etc/ansible && cp example/hosts.multi-node hosts && vim hosts

# Check the status of the cluster hosts
ansible all -m ping


  • Take snapshots again, of k1, k2, and k3, so that you can roll back if something goes wrong later.

2.5 One-command k8s cluster install

The installation has many steps and takes a while; wait patiently for it to finish.

cd /etc/ansible
ansible-playbook 90.setup.yml

2.6 Set a kubectl command alias

# Alias kubectl to k
echo "alias k='kubectl'" >> ~/.bashrc

# Apply the setting
source ~/.bashrc

2.7 Configure auto-completion

yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
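
Bash completion can also be wired to the k alias; a minimal sketch, assuming the __start_kubectl function defined by kubectl completion bash:

# make completion work for the alias 'k' too
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc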

2.8 Verify the installation

k get cs

k get node

3. First steps with Kubernetes

  • A Pod is the object that wraps Docker containers. It has its own virtual environment (ports, environment variables, etc.), and one Pod can wrap multiple Docker containers.
  • ReplicationController (RC):
    An RC automatically manages the deployment of Pods; it can start and stop Pods automatically, and scale them up and down.

3.1 Deploy an RC controller

k run \
    --image=luksa/kubia \
    --port=8080 \
    --generator=run/v1 kubia

k get rc

k get pods


The kubectl run parameters mean:

  • --image=luksa/kubia
    the image name
  • --port=8080
    the port the pod exposes
  • --generator=run/v1
    create a ReplicationController (the trailing kubia is the resource name, not part of this flag)

3.2 Create a service

k expose \
    rc kubia \
    --type=NodePort \
    --name kubia-http

k get svc
------------------------------------------------------------------------------
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.68.0.1      <none>        443/TCP          59m
kubia-http   NodePort    10.68.126.97   <none>        8080:28217/TCP   9s
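
Since the service type is NodePort, the app should now be reachable through any node's IP at the mapped port (28217 in the output above); a quick check, using a node IP from this setup:

curl http://192.168.64.191:28217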

3.3 Pod auto-scaling

k8s makes scaling a deployed application easy: just specify how many pods should run, and k8s scales the pods to match.

# Scale the pod count up to 3
k scale rc kubia --replicas=3

k get po -o wide
----------------------------------------------------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
kubia-8sb9t   1/1     Running   0          32m   172.20.2.4   192.168.64.193   <none>           <none>
kubia-9k2mq   1/1     Running   0          36s   172.20.2.9   192.168.64.193   <none>           <none>
kubia-qv4zs   1/1     Running   0          36s   172.20.1.9   192.168.64.192   <none>           <none>

# Scale the pod count down to 1
k scale rc kubia --replicas=1

# k8s automatically stops two pods; the pod list will end up with just one
k get po -o wide
---------------------------------------------------------------------------------------------------------------------
NAME          READY   STATUS        RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
kubia-8sb9t   1/1     Running       0          33m   172.20.2.4   192.168.64.193   <none>           <none>
kubia-9k2mq   1/1     Terminating   0          65s   172.20.2.9   192.168.64.193   <none>           <none>
kubia-qv4zs   1/1     Terminating   0          65s   172.20.1.9   192.168.64.192   <none>           <none>

4. Pod

4.1 Manually deploy a pod with a manifest file

Create the manifest file kubia-manual.yml:

cat <<EOF > kubia-manual.yml 
apiVersion: v1               # k8s API version
kind: Pod                    # this manifest creates a Pod resource
metadata:                
  name: kubia-manual         # pod name (used as-is; no random suffix is added)
spec:
  containers:                # configuration of the pod's containers
  - image: luksa/kubia       # image name
    imagePullPolicy: Never
    name: kubia              # container name
    ports:
    - containerPort: 8080    # port exposed by the container
      protocol: TCP
EOF


Create the pod from the manifest:

k create -f kubia-manual.yml

k get po
-----------------------------------------------
NAME           READY   STATUS    RESTARTS   AGE
kubia-8sb9t    1/1     Running   0          48m
kubia-manual   1/1     Running   0          7s

k get po -o wide
# test
curl http://172.20.1.10:8080
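
If the pod IP is not reachable from your machine, kubectl can forward a local port to the pod instead; a small sketch (the local port 8888 is an arbitrary choice):

k port-forward kubia-manual 8888:8080

# in another terminal:
curl http://localhost:8888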

4.2 Pod labels

4.2.1 Specify labels when creating a pod

Deploy a pod with the manifest file kubia-manual-with-labels.yml;

the manifest sets two custom labels on the pod: creation_method and env.

cat <<EOF > kubia-manual-with-labels.yml
apiVersion: v1                  # API version
kind: Pod                       # resource type to deploy
metadata:
  name: kubia-manual-v2         # pod name
  labels:                       # labels, as key-value pairs
    creation_method: manual     
    env: prod
spec:
  containers:                   # container settings
  - image: luksa/kubia          # image
    name: kubia                 # container name
    imagePullPolicy: Never
    ports:                      # ports exposed by the container
    - containerPort: 8080
      protocol: TCP
EOF

Create the resource from the manifest:

k create -f kubia-manual-with-labels.yml

4.2.2 View pod labels

  • List all pods and show their labels:
k get po --show-labels
------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   LABELS
kubia-8sb9t       1/1     Running   0          76m   run=kubia
kubia-manual      1/1     Running   0          27m   <none>
kubia-manual-v2   1/1     Running   0          34s   creation_method=manual,env=prod

  • List pod labels as columns:
k get po -L creation_method,env
-----------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE     CREATION_METHOD   ENV
kubia-8sb9t       1/1     Running   0          79m
kubia-manual      1/1     Running   0          30m
kubia-manual-v2   1/1     Running   0          3m50s   manual            prod

4.2.3 Modify pod labels

The env label of pod kubia-manual-v2 currently has the value prod; let's change it to debug.
When changing an existing label's value, the --overwrite flag must be given; this guards against accidentally modifying a label.

k label po kubia-manual-v2 env=debug --overwrite

k get po --show-labels
-------------------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   LABELS
kubia-8sb9t       1/1     Running   0          87m   run=kubia
kubia-manual      1/1     Running   0          38m   <none>
kubia-manual-v2   1/1     Running   0          11m   creation_method=manual,env=debug


Set labels on pod kubia-manual:

k label po kubia-manual creation_method=manual env=debug


Set a label on pod kubia-8sb9t:

kubectl label po kubia-8sb9t env=debug

Check the result of the label changes:

k get po --show-labels
-------------------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   LABELS
kubia-8sb9t       1/1     Running   0          92m   env=debug,run=kubia
kubia-manual      1/1     Running   0          43m   creation_method=manual,env=debug
kubia-manual-v2   1/1     Running   0          16m   creation_method=manual,env=debug

4.2.4 Query pods by label

Query pods with creation_method=manual:

# query with -l
k get po \
    -l creation_method=manual \
    -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-manual      1/1     Running   0          28m   manual            debug
kubia-manual-v2   1/1     Running   0          27m   manual            debug

Query pods that have an env label:

# query with -l
k get po \
    -l env \
    -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-5rz9h       1/1     Running   0          31m                     debug
kubia-manual      1/1     Running   0          30m   manual            debug
kubia-manual-v2   1/1     Running   0          29m   manual            debug

Query pods with creation_method=manual and env=debug:

# query with -l
k get po \
    -l creation_method=manual,env=debug \
    -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-manual      1/1     Running   0          33m   manual            debug
kubia-manual-v2   1/1     Running   0          32m   manual            debug

Query pods that do not have a creation_method label:

# query with -l
k get po \
    -l '!creation_method' \
    -L creation_method,env
-----------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-5rz9h   1/1     Running   0          36m                     debug

Other selector examples (runnable versions follow this list):

  • creation_method!=manual
  • env in (prod,debug)
  • env notin (prod,debug)
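
These expressions can be passed to -l directly; a quick sketch (the quotes keep the shell from interpreting them):

k get po -l 'creation_method!=manual'
k get po -l 'env in (prod,debug)'
k get po -l 'env notin (prod,debug)'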

4.3 Deploy a pod to a specific node

A manifest whose nodeSelector selects nodes by the label gpu=true:

cat <<EOF > kubia-gpu.yml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-gpu          # pod name
spec:
  nodeSelector:            # node selector: deploy the pod to a matching node
    gpu: "true"            # select nodes carrying the label gpu=true
  containers:              # container settings
  - image: luksa/kubia     # image
    name: kubia            # container name
    imagePullPolicy: Never
EOF


Create the pod kubia-gpu and check which node it was scheduled on:

k create -f kubia-gpu.yml

k get po -o wide
----------------------------------------------------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
kubia-8sb9t       1/1     Running   0          2d17h   172.20.2.4    192.168.64.193   <none>           <none>
kubia-gpu         0/1     Pending   0          71s     <none>        <none>           <none>           <none>
kubia-manual      1/1     Running   0          2d16h   172.20.1.10   192.168.64.192   <none>           <none>
kubia-manual-v2   1/1     Running   0          2d16h   172.20.2.10   192.168.64.193   <none>           <none>


By labeling nodes and using a node selector in the pod manifest, you choose which node a pod is deployed to. The pod above is Pending because no node carries the required label yet.
Below we add the label gpu=true to the node named 192.168.64.192:

k label no 192.168.64.192 gpu=true

k get po
-----------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE
kubia-8sb9t       1/1     Running   0          2d17h
kubia-gpu         1/1     Running   0          4m12s
kubia-manual      1/1     Running   0          2d16h
kubia-manual-v2   1/1     Running   0          2d16h
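
To double-check the label and the pod's placement, something like this should work:

# show the gpu label of each node as a column
k get no -L gpu

# show which node the pod was scheduled on
k get po kubia-gpu -o wide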

5. namespace

Namespaces can be used to organize and manage resources;
resources in different namespaces are not fully isolated, and can still reach each other over the network.

5.1 View namespaces

# namespace
k get ns

k get po --namespace kube-system
k get po -n kube-system

5.2 Create a namespace

Create a new manifest custom-namespace.yml that defines a namespace named custom-namespace:

cat <<EOF > custom-namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
EOF

# Create the namespace
k create -f custom-namespace.yml 

k get ns
--------------------------------
NAME               STATUS   AGE
custom-namespace   Active   2s
default            Active   2d18h
kube-node-lease    Active   2d18h
kube-public        Active   2d18h
kube-system        Active   2d18h
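
Equivalently, a simple namespace like this can be created without a manifest:

k create namespace custom-namespace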

5.3 Deploy a pod into a specific namespace

Create a pod and deploy it into the namespace custom-namespace:

# Specify the namespace when creating the Pod
k create \
    -f kubia-manual.yml \
    -n custom-namespace

# By default we query the default namespace, which has no pod kubia-manual
k get po kubia-manual

# Query pods in the custom-namespace namespace
k get po -n custom-namespace
----------------------------------------------------------
NAME           READY   STATUS    RESTARTS   AGE
kubia-manual   1/1     Running   0          15s
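
To avoid adding -n to every command, the default namespace of the current context can be switched; a sketch:

kubectl config set-context --current --namespace=custom-namespace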

6. Deleting resources

# Delete by name; multiple names can be given
# e.g.: k delete po po1 po2 po3
k delete po kubia-gpu

# Delete by label
k delete po -l creation_method=manual

# Delete a namespace and all pods in it
k delete ns custom-namespace

# Delete all pods in the current namespace
k delete po --all

# Because a ReplicationController exists, new pods are created automatically
[root@master1 ~]# k get po
NAME          READY   STATUS    RESTARTS   AGE
kubia-m6k4d   1/1     Running   0          2m20s
kubia-rkm58   1/1     Running   0          2m15s
kubia-v4cmh   1/1     Running   0          2m15s

# Delete all resources of all types in the current namespace
# This also deletes the system Service kubernetes, which is automatically recreated right away
k delete all --all

7. Liveness probes

There are three kinds of liveness probe (sketches of the last two follow this list):

  • HTTP GET
    a 2xx or 3xx response code counts as a successful probe
  • TCP
    a TCP connection is opened to the specified port; the probe succeeds if the connection is established
  • Exec
    an arbitrary command is executed inside the container and its exit code is checked; exit code 0 counts as success
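
The next section demonstrates only the HTTP GET probe; for reference, minimal sketches of the other two kinds (the port and the command here are illustrative placeholders):

    livenessProbe:         # TCP liveness probe
      tcpSocket:
        port: 8080

    livenessProbe:         # Exec liveness probe
      exec:
        command:
        - cat
        - /tmp/healthy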

7.1 HTTP GET liveness probe

The luksa/kubia-unhealthy image:
the application in this image is written so that from the 6th request onward it returns a 500 error.
In the manifest we add a probe to check the container's health.
By default the probe runs every 10 seconds; after three consecutive failures the container is restarted.

cat <<EOF > kubia-liveness-probe.yml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness               # pod name
spec:
  containers:
  - image: luksa/kubia-unhealthy     # image
    name: kubia                      # container name
    imagePullPolicy: Never
    livenessProbe:                   # liveness probe configuration
      httpGet:                       # HTTP GET liveness probe
        path: /                      # probe path
        port: 8080                   # probe port
EOF


Create the pod:

k create -f kubia-liveness-probe.yml

# The pod's RESTARTS count increases by 1 about every minute and a half
k get po kubia-liveness
--------------------------------------------------
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   0          13s


View the log of the previous container instance: the first 5 probes succeed, the next 3 fail, and the container is then restarted.

k logs kubia-liveness --previous
-----------------------------------------
Kubia server starting...
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1
Received request from ::ffff:172.20.2.1


View the pod description:

k describe po kubia-liveness
---------------------------------
......
     Ready:          True
   Restart Count:  6
    Liveness:       http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
......
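
The Liveness line above shows the probe's tunable parameters (delay, timeout, period, failure threshold); a sketch of setting them explicitly in the probe spec (the values here are illustrative):

    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 15    # wait 15s before the first probe
      timeoutSeconds: 1          # each probe times out after 1s
      periodSeconds: 10          # probe every 10s
      failureThreshold: 3        # restart after 3 consecutive failures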

8. Controllers

8.1 ReplicationController (legacy)

An RC automatically maintains a set of pods: just specify the desired number of pod replicas, and scaling up or down is easy;
when a pod dies, the RC automatically removes it and starts a new pod in its place.
Below is an RC manifest that starts three kubia pods:

cat <<EOF > kubia-rc.yml
apiVersion: v1
kind: ReplicationController        # resource type
metadata:   
  name: kubia                      # RC name
spec:
  replicas: 3                      # number of pod replicas
  selector:                        # selector: chooses the pods this RC manages
    app: kubia                     # pods labeled 'app=kubia' are managed by this RC
  template:                        # pod template, used to create new pods
    metadata:
      labels:
        app: kubia                 # label for the pods
    spec:
      containers:                  # container settings
      - name: kubia                # container name
        image: luksa/kubia         # image
        imagePullPolicy: Never
        ports:
        - containerPort: 8080      # port exposed by the container
EOF


Create the RC:
once created, it automatically creates 3 pods to match the specified replica count of 3.

k create -f kubia-rc.yml

k get rc
----------------------------------------
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       42s

k get po --show-labels
------------------------------------------------------------------------------------------------------------------------------
NAME             READY   STATUS             RESTARTS   AGE    LABELS
kubia-gs7r5      1/1     Running            0          117s   app=kubia
kubia-klxz5      1/1     Running            0          117s   app=kubia
kubia-liveness   0/1     CrashLoopBackOff   9          26m    <none>
kubia-pjs8b      1/1     Running            0          117s   app=kubia



The RC manages matching pods through the specified label app=kubia;
any other labels can be added to a pod without affecting its association with the RC:

kubectl label po kubia-pjs8b type=special

k get po --show-labels
----------------------------------------------------------------------
NAME             READY   STATUS    RESTARTS   AGE   LABELS
kubia-gs7r5      1/1     Running   0          35m   app=kubia
kubia-klxz5      1/1     Running   0          35m   app=kubia
kubia-liveness   1/1     Running   17         60m   <none>
kubia-pjs8b      1/1     Running   0          35m   app=kubia,type=special

However, changing the value of a pod's app label detaches that pod from the RC's management. The RC then sees one pod missing and immediately creates a new one, to satisfy the required count of 3 pods.

kubectl label po kubia-pjs8b app=foo --overwrite

k get po --show-labels
-------------------------------------------------------------------
NAME             READY   STATUS             RESTARTS   AGE   LABELS
kubia-gs7r5      1/1     Running            0          38m   app=kubia
kubia-klxz5      1/1     Running            0          38m   app=kubia
kubia-liveness   0/1     CrashLoopBackOff   17         62m   <none>
kubia-nhd8f      1/1     Running            0          3s    app=kubia
kubia-pjs8b      1/1     Running            0          38m   app=foo,type=special

8.1.1 Modify the pod template

Changes to the pod template only affect pods created afterwards; existing pods are not modified;
you can delete the old pods and let new ones replace them.

# Edit the ReplicationController and add a new label: foo=bar
kubectl edit rc kubia
------------------------------------------------
......
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kubia
        foo: bar                 # an arbitrary added label
    spec:
......


# The labels of the existing pods are unchanged
k get pods --show-labels
----------------------------------------------------------------------
NAME             READY   STATUS    RESTARTS   AGE     LABELS
kubia-gs7r5      1/1     Running   0          44m     app=kubia
kubia-klxz5      1/1     Running   0          44m     app=kubia
kubia-liveness   1/1     Running   19         69m     <none>
kubia-nhd8f      1/1     Running   0          6m49s   app=kubia
kubia-pjs8b      1/1     Running   0          44m     app=foo,type=special


# Scale the pods to 4 via the RC
# You can use the scale command shown earlier
# k scale rc kubia --replicas=4

# Or edit the RC's replicas field, changing it to 4
k edit rc kubia
---------------------
spec:
  replicas: 4       # changed from 3 to 4, scaling up to 4 pods
  selector:
    app: kubia


# The newly added pod carries the new label; the old pods do not
k get pods --show-labels
----------------------------------------------------------------------
NAME             READY   STATUS    RESTARTS   AGE    LABELS
kubia-gs7r5      1/1     Running   0          79m    app=kubia
kubia-klxz5      1/1     Running   0          79m    app=kubia
kubia-liveness   1/1     Running   27         103m   <none>
kubia-nhd8f      1/1     Running   0          40m    app=kubia
kubia-pjs8b      1/1     Running   0          79m    app=foo,type=special
kubia-z4ztn      1/1     Running   0          32m    app=kubia,foo=bar


# Delete the RC without cascading to the pods, leaving the pods unmanaged
k delete rc kubia --cascade=false


8.2 ReplicaSet (newer)

ReplicaSet is designed to replace ReplicationController and provides richer pod selection;
from now on we should always use RS instead of RC, though RC still appears in older systems.

cat <<EOF > kubia-replicaset.yml
apiVersion: apps/v1              # the RS resource type is provided in apps/v1
kind: ReplicaSet                 # resource type
metadata:
  name: kubia                    # name the RS kubia
spec:
  replicas: 3                    # number of pod replicas
  selector:
    matchLabels:                 # label selector
      app: kubia                 # select pods labeled "app=kubia"
  template:
    metadata:
      labels:
        app: kubia               # label created pods with "app=kubia"
    spec:
      containers:
      - name: kubia              # container name
        image: luksa/kubia       # image
        imagePullPolicy: Never
EOF

  • First delete the RC controller:
kubectl delete rc kubia --cascade=false
-----------------------------------------
replicationcontroller "kubia" deleted

k get rc
-----------------------------------------
No resources found in default namespace.

k get po
-------------------------------------------------------------
NAME             READY   STATUS             RESTARTS   AGE
kubia-gs7r5      1/1     Running            0          3h8m
kubia-klxz5      1/1     Running            0          3h8m
kubia-liveness   0/1     CrashLoopBackOff   51         3h33m
kubia-nhd8f      1/1     Running            0          150m
kubia-pjs8b      1/1     Running            0          3h8m
kubia-z4ztn      1/1     Running            0          141m
  • Create the ReplicaSet:
k create -f kubia-replicaset.yml

k get po
------------------------------------------------------------
NAME             READY   STATUS             RESTARTS   AGE
kubia-gs7r5      1/1     Running            0          3h15m
kubia-klxz5      1/1     Running            0          3h15m
kubia-liveness   0/1     CrashLoopBackOff   53         3h40m
kubia-nhd8f      1/1     Running            0          157m
kubia-pjs8b      1/1     Running            0          3h15m

# Pods that had become unmanaged are picked up by the RS
# The desired pod count is 3, so extra pods are shut down
k get rs
----------------------------------------
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       78s

# The 3 extra pods are shut down
k get pods --show-labels
----------------------------------------------------------------------
NAME             READY   STATUS        RESTARTS   AGE     LABELS
kubia-8d9jj      1/1     Pending       0          2m23s   app=kubia,foo=bar
kubia-lc5qv      1/1     Terminating   0          3d5h    app=kubia
kubia-lhj4q      1/1     Terminating   0          2d22h   app=kubia
kubia-pjs9n      1/1     Running       0          3d5h    app=kubia
kubia-wb8sv      1/1     Pending       0          2m17s   app=kubia,foo=bar
kubia-xp4jv      1/1     Terminating   0          2m17s   app=kubia,foo=bar

# View the RS description; it is almost identical to an RC's
k describe rs kubia

  • Clean up:
kubectl delete rs kubia --cascade=false
--------------------------------------
replicaset.apps "kubia" deleted
  • Use a more powerful label selector:
cat <<EOF > kubia-replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 4
  selector:
    matchExpressions:       # expression-based selector
      - key: app            # the label key is app
        operator: In        # In operator
        values:             # list of label values
          - kubia
          - foo
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        imagePullPolicy: Never
EOF
  • Create:
# Create the RS
k create -f kubia-replicaset.yml

# View the RS
k get rs
---------------------------------------
NAME    DESIRED   CURRENT   READY   AGE
kubia   4         4         4       9s

# View the pods
k get po
--------------------------------------------
NAME             READY   STATUS    RESTARTS   AGE
kubia-gs7r5      1/1     Running   0          3h27m
kubia-klxz5      1/1     Running   0          3h27m
kubia-liveness   1/1     Running   56         3h52m
kubia-nhd8f      1/1     Running   0          169m
kubia-pjs8b      1/1     Running   0          3h27m
  • Available operators (see the sketch after this list):
    • In: the label's value matches one of the listed values
    • NotIn: the label's value matches none of the listed values
    • Exists: the pod has a label with the given key (any value)
    • DoesNotExist: the pod does not have a label with the given key
  • Clean up:
k delete all --all
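
A minimal sketch of the Exists operator from the list above (note that Exists and DoesNotExist take no values list):

  selector:
    matchExpressions:
      - key: app
        operator: Exists    # matches any pod that has an 'app' label, whatever its value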


8.3 DaemonSet

A DaemonSet deploys a pod on every selected node;
nodes are selected by their labels.

cat <<EOF > ssd-monitor-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet                       
metadata:
  name: ssd-monitor                   
spec:
  selector:
    matchLabels:                      
      app: ssd-monitor                
  template:
    metadata:
      labels:
        app: ssd-monitor              
    spec:
      nodeSelector:                   # node selector
        disk: ssd                     # select nodes carrying the label 'disk=ssd'
      containers:                     
        - name: main                  
          image: luksa/ssd-monitor   
          imagePullPolicy: Never
EOF
# Create
k create -f ssd-monitor-daemonset.yml

View the DS and pods: no pod has been created, because no node carries the disk=ssd label.

k get ds

Label the node 192.168.64.191 with disk=ssd;
the DS then immediately creates a pod on that node.

k label node 192.168.64.191 disk=ssd

k get ds
---------------------------------------------------------------------------------------
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   1         1         1       1            1           disk=ssd        32s

k label no 192.168.64.192 disk=ssd

k get ds
----------------------------------------------------------------------------------------------------------------------------------
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   2         2         1       2            2           disk=ssd        40s

k get po
-----------------------------------------------------
NAME                READY   STATUS    RESTARTS   AGE
ssd-monitor-b28pd   1/1     Running   0          20s
ssd-monitor-sdf4v   1/1     Running   0          31s

Remove the disk label from node 192.168.64.191, and the pod deployed on that node is destroyed immediately.

# Note the label removal syntax: disk-
k label no 192.168.64.191 disk-
k get ds
k get po
-----------------------------------------------------------
NAME                READY   STATUS        RESTARTS   AGE
ssd-monitor-b28pd   1/1     Running       0          2m59s
ssd-monitor-sdf4v   1/1     Terminating   0          3m10s

8.4 Job

A Job runs a one-off task; when the task finishes, the pod is not restarted.

cat <<EOF > exporter.yml
apiVersion: batch/v1                 # the Job resource is provided in batch/v1
kind: Job                            # resource type
metadata: 
  name: batch-job                    # resource name
spec:
  template: 
    metadata:
      labels:
        app: batch-job               # pod label
    spec:
      restartPolicy: OnFailure       # restart on task failure
      containers:
        - name: main                 # container name
          image: luksa/batch-job     # image
          imagePullPolicy: Never
EOF

Create the job:
the process in the batch-job image exits on its own after running for 120 seconds.

k create -f exporter.yml

k get job
-----------------------------------------
NAME        COMPLETIONS   DURATION   AGE
batch-job   0/1           17s        18s

k get po
-------------------------------------------------------------
NAME                READY   STATUS    RESTARTS   AGE
batch-job-7sskl     1/1     Running   0          7s
ssd-monitor-b28pd   1/1     Running   0          32m

After two minutes, the task running in the pod exits; check the job and pod again.

k get job
-----------------------------------------
NAME        COMPLETIONS   DURATION   AGE
batch-job   1/1           2m2s       2m18s

k get po
-----------------------------------------------------
NAME                READY   STATUS      RESTARTS   AGE
batch-job-7sskl     0/1     Completed   0          2m9s
ssd-monitor-b28pd   1/1     Running     0          34m

Use a Job to run a pod 5 times in sequence;
the first pod is created, and once it completes the second is created, and so on, until 5 pods have completed in order.

cat <<EOF > multi-completion-batch-job.yml
apiVersion: batch/v1
kind: Job
metadata: 
  name: multi-completion-batch-job
spec:
  completions: 5                    # number of completions required
  template: 
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
        - name: main
          image: luksa/batch-job
          imagePullPolicy: Never
EOF
k create -f multi-completion-batch-job.yml 

Complete 5 pods in total, with up to two pods running at the same time:

cat <<EOF > multi-completion-parallel-batch-job.yml
apiVersion: batch/v1
kind: Job
metadata: 
  name: multi-completion-parallel-batch-job
spec:
  completions: 5                    # 5 completions in total
  parallelism: 2                    # up to two pods can run at the same time
  template: 
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
        - name: main
          image: luksa/batch-job
          imagePullPolicy: Never
EOF
k create -f multi-completion-parallel-batch-job.yml

8.5 CronJob

Scheduled, recurring tasks;
the cron schedule format is (examples follow below):
minute hour day-of-month month day-of-week
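
A few illustrative schedule strings (standard cron semantics):

# "0,15,30,45 * * * *"  - at minutes 0, 15, 30 and 45 of every hour
# "0 3 * * *"           - every day at 03:00
# "30 2 * * 1"          - every Monday at 02:30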

cat <<EOF > cronjob.yml
apiVersion: batch/v1beta1                # API version
kind: CronJob                            # resource type
metadata:
  name: batch-job-every-fifteen-minutes
spec:
  # 0,15,30,45 - the minutes
  # first  *   - every hour
  # second *   - every day of the month
  # third  *   - every month
  # fourth *   - every day of the week
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
        spec:
          restartPolicy: OnFailure
          containers: 
          - name: main
            image: luksa/batch-job
            imagePullPolicy: Never
EOF

Create the CronJob:

k create -f cronjob.yml

# Check the CronJob right away; no pod has been created yet
k get cj
----------------------------------------------------------------------------------------------
NAME                              SCHEDULE             SUSPEND   ACTIVE   LAST SCHEDULE   AGE
batch-job-every-fifteen-minutes   0,15,30,45 * * * *   False     0        <none>          12s

k get job
------------------------------------------------------------------
NAME                                  COMPLETIONS   DURATION   AGE
batch-job                             1/1           2m2s       15m
multi-completion-batch-job            5/5           10m        10m
multi-completion-parallel-batch-job   5/5           6m7s       10m

# At minutes 0, 15, 30, and 45, a pod is created
k get po
--------------------------------------------------------------------------------------
NAME                                               READY   STATUS    RESTARTS   AGE
batch-job-every-fifteen-minutes-1607327100-tdchj   1/1     Running   0          19s

k get cj
-------------------------------------------------------------------------------------------------
NAME                              SCHEDULE             SUSPEND   ACTIVE   LAST SCHEDULE   AGE
batch-job-every-fifteen-minutes   0,15,30,45 * * * *   False     1        42s             5m11s

Reference: https://blog.csdn.net/weixin_38305440/article/details/102810548
