[Ops Knowledge: Master Tier] Kubernetes, the Super Tool of Operations, Tutorial 17 (Collecting resource metrics with metrics-server + HPA horizontal scaling explained + HPA test example + Helm explained in depth + Using the mainstream Helm repositories)

This article continues the Kubernetes series. It covers collecting resource metrics with metrics-server, a detailed look at HPA horizontal scaling, two ways to create an HPA, load-testing Pods, an HPA test example, an in-depth walkthrough of Helm (managing the chart lifecycle, custom charts, and upgrading/rolling back with helm), and using the mainstream Helm repositories. The series is nearing its end; I hope you all stick with it to the finish!

Table of Contents

metrics-server
I. Deploying metrics-server

HPA
I. Writing the resource manifest
II. Creating HPA rules
1. Creating an HPA declaratively
2. Creating an HPA imperatively
III. Testing the HPA

Helm in Detail
I. Common Helm terms
II. Why Helm is needed
III. Helm version notes
IV. Quick Helm deployment
V. Configuring shell completion for the helm command
VI. Managing the Chart lifecycle
VII. Custom Charts
1. Without values.yaml
2. With values.yaml
VIII. Upgrading/rolling back with helm
1. Deploying with helm
2. Upgrading/rolling back via the values.yaml file
3. Upgrading/rolling back by changing the image on the command line
4. Rolling back without specifying a revision
5. Rolling back to a specified revision
IX. Mainstream Chart repositories
1. Adding a Chart repository
2. Searching for a Chart
3. Pulling a third-party Chart
4. Private Chart repositories for helm


metrics-server

Reference: https://github.com/kubernetes-sigs/metrics-server

Metrics Server is a Kubernetes component that collects, aggregates, and serves resource-utilization metrics for the cluster, primarily CPU and memory usage of nodes and Pods; it keeps only the most recent values rather than acting as a long-term metrics store.

Metrics Server collects resource metrics from the kubelets and exposes them through the Kubernetes apiserver via the Metrics API, for use by the HPA (Horizontal Pod Autoscaler) and VPA (Vertical Pod Autoscaler).

The Metrics API can also be accessed with kubectl top, which makes it easier to debug autoscaling pipelines.
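If you want to see the raw data behind kubectl top, the aggregated Metrics API can also be queried directly once the deployment below is running. A minimal sketch; both paths belong to the metrics.k8s.io/v1beta1 API group:

# Node metrics, the same data "kubectl top node" prints
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
# Pod metrics for one namespace, the data behind "kubectl top pod -n kube-system"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"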

I. Deploying metrics-server

1. Download the resource manifest

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

2. Edit the manifest. It needs to pull an image hosted abroad; if you cannot pull it, switch to the Aliyun mirror image. I pulled the image and pushed it to my private registry, so the manifest below uses the private image; adjust the other places according to the hints in the reference link above.
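If you only need to point the image at the Aliyun mirror, a one-line sed over the downloaded file is enough. This is a sketch rather than something taken from my session, so check the resulting image line before applying it:

# Swap the registry.k8s.io image for the Aliyun mirror listed in the comments below
sed -i 's#registry.k8s.io/metrics-server/metrics-server#registry.aliyuncs.com/google_containers/metrics-server#' high-availability-1.21+.yaml
grep 'image:' high-availability-1.21+.yaml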

[root@Master231 metrics-server]# cat high-availability-1.21+.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: class
        operator: Exists
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: metrics-server
            namespaces:
            - kube-system
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        # image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
        # image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/metrics-server:v0.6.3
        # image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.3
        image: harbor.koten.com/add-ons/metrics-server:v0.6.3
        # imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

3. Apply the manifest

[root@Master231 metrics-server]# kubectl apply -f high-availability-1.21+.yaml 

4. Check the results

[root@Master231 metrics-server]# kubectl get pods,svc -o wide -n kube-system |grep metrics-server
pod/metrics-server-644f56494b-cb8kx     1/1     Running   0                 74s     10.100.0.75   master231   <none>           <none>
pod/metrics-server-644f56494b-h2d8z     1/1     Running   0                 74s     10.100.1.21   worker232   <none>           <none>
service/metrics-server   ClusterIP   10.200.95.130   <none>        443/TCP                  3m20s   k8s-app=metrics-server
[root@Master231 metrics-server]# kubectl top pod 
NAME                         CPU(cores)   MEMORY(bytes)   
linux86-secrets-harbor-001   0m           3Mi             
[root@Master231 metrics-server]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master231   196m         9%     933Mi           24%       
worker232   91m          4%     395Mi           10%       
worker233   77m          3%     603Mi           15%   

HPA

HPA stands for Horizontal Pod Autoscaler, a mechanism Kubernetes provides to automatically adjust the number of application replicas. By watching the CPU utilization (or custom metrics) of Pods in the cluster, the HPA scales the number of Pod replicas up or down to match the application's load. In short, it is automatic horizontal scaling (horizontal scaling changes the number of Pods, vertical scaling changes their resource allocation).
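For reference, the scaling decision follows a simple formula from the Kubernetes documentation. A small worked example against the 80% target used later in this section (the awk line is just an illustration of the arithmetic):

# desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
# e.g. 2 replicas averaging 133% CPU against an 80% target:
awk 'BEGIN { r = 2 * 133 / 80; d = int(r); if (r > d) d++; print d }'    # prints 4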

I. Writing the resource manifest

Write a Deployment manifest that creates the Pod we will use for load testing.

[root@Master231 metrics-server]# cat 02-deploy-stress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: koten-linux-stress
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: apps
      operator: Exists
  template:
    metadata:
      labels:
        apps: stress
    spec:
      containers:
      - name: web
        image: harbor.koten.com/koten-tools/stress:v0.1
        command:
        - tail
        - -f
        - /etc/hosts
        resources:
          requests:
             cpu: 500m
             memory: 200M
          limits:
             cpu: 1
             memory: 500M

II. Creating HPA rules

1. Creating an HPA declaratively

[root@Master231 metrics-server]# cat 03-hpa.yaml
# API version
apiVersion: autoscaling/v2
# Resource type
kind: HorizontalPodAutoscaler
# HPA metadata
metadata:
  # Name
  name: koten-linux-stress-hpa
  # Namespace
  namespace: default
# Desired state
spec:
  # Maximum number of Pod replicas
  maxReplicas: 5
  # Metrics to watch
  metrics:
    # A resource metric
  - resource:
      # Name of the resource
      name: cpu
      # Target threshold
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  # Minimum number of Pod replicas
  minReplicas: 2
  # The workload this HPA rule applies to
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: koten-linux-stress

2. Creating an HPA imperatively

In theory this should work as well, but in my test the utilization percentage never showed up; the target kept displaying <unknown>. Feel free to test it yourself; a quick debugging sketch follows the command below.

[root@Master231 metrics-server]# kubectl autoscale deployment koten-linux-stress --min=2 --max=10 --cpu-percent=90
horizontalpodautoscaler.autoscaling/koten-linux-stress autoscaled
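If the imperatively created HPA keeps showing <unknown>, describing it usually reveals the cause (metrics-server unreachable, or the target Pods missing CPU requests). A quick check, assuming the HPA name created above:

kubectl describe hpa koten-linux-stress
kubectl get apiservice v1beta1.metrics.k8s.io    # should report Available=True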

III. Testing the HPA

1. Apply the Deployment manifest above together with the declarative HPA manifest. Since the HPA sets the minimum replica count to 2, one extra Pod is created; at this point the HPA shows 0% CPU utilization.

[root@Master231 metrics-server]# kubectl apply -f 02-deploy-stress.yaml 
deployment.apps/koten-linux-stress created

[root@Master231 metrics-server]# kubectl apply -f 03-hpa.yaml
horizontalpodautoscaler.autoscaling/koten-linux-stress-hpa created

[root@Master231 metrics-server]# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
koten-linux-stress-f9998bdd7-8587r   1/1     Running   0          21s
koten-linux-stress-f9998bdd7-hgw2m   1/1     Running   0          28s

[root@Master231 pod]# kubectl get hpa
NAME                     REFERENCE                       TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
koten-linux-stress-hpa   Deployment/koten-linux-stress   0%/80%    2         5         2          5m59s

2. Load-test one of the Pods; shortly after the stress run starts, a new Pod is spun up.

[root@Master231 metrics-server]# kubectl exec -it koten-linux-stress-f9998bdd7-8587r -- stress -c 4 --verbose --timeout 10m

[root@Master231 pod]# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
koten-linux-stress-f9998bdd7-8587r   1/1     Running   0          5m47s
koten-linux-stress-f9998bdd7-hgw2m   1/1     Running   0          5m54s
koten-linux-stress-f9998bdd7-jvzfb   1/1     Running   0          16s

3. Now also stress the other original Pod and the newly created one; CPU utilization reaches 133% and the Deployment is scaled out to 5 Pods.

[root@Master231 ~]# kubectl exec koten-linux-stress-f9998bdd7-hgw2m -- stress -c 4 --verbose --timeout 10m

[root@Master231 ~]# kubectl exec koten-linux-stress-f9998bdd7-jvzfb -- stress -c 4 --verbose --timeout 10m

[root@Master231 pod]# kubectl get hpa
NAME                     REFERENCE                       TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
koten-linux-stress-hpa   Deployment/koten-linux-stress   133%/80%   2         5         5          12m

[root@Master231 pod]# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
koten-linux-stress-f9998bdd7-272k4   1/1     Running   0          25s
koten-linux-stress-f9998bdd7-8587r   1/1     Running   0          10m
koten-linux-stress-f9998bdd7-hgw2m   1/1     Running   0          10m
koten-linux-stress-f9998bdd7-jvzfb   1/1     Running   0          4m56s
koten-linux-stress-f9998bdd7-sk2w9   1/1     Running   0          25s

4. Picking yet another of the new Pods and stressing it adds no more Pods, because the HPA has already reached its upper limit.

[root@Master231 ~]# kubectl exec koten-linux-stress-f9998bdd7-jvzfb -- stress -c 4 --verbose --timeout 10m

5. Once the load drops back below the 80% target, it still takes roughly five minutes before the extra Pods disappear; this prevents Pods from being removed right before the load comes back.
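That delay is the HPA's downscale stabilization window, which defaults to 300 seconds. With the autoscaling/v2 API it can be tuned explicitly through spec.behavior; a sketch of a fragment that could be added under spec in 03-hpa.yaml:

spec:
  behavior:
    scaleDown:
      # Wait this long at low load before removing Pods (the default is 300s)
      stabilizationWindowSeconds: 300
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60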

Helm in Detail

Helm is an open-source package manager for Kubernetes, used to deploy, manage, and share Kubernetes applications. It lets you bundle an application, its dependencies, and its configuration into a single, repeatably deployable package (called a chart), and provides command-line tooling (plus, in Helm v2, a server-side component) for installing and managing these packages.

Helm is similar to yum on CentOS: it lets you fetch and install resources quickly.

I. Common Helm terms

helm       The command-line tool, used mainly to create, package, publish, and manage Kubernetes charts.

chart      An application description: a collection of files describing the related Kubernetes resources; you can also think of it as where the variables are defined.

release    A deployment entity based on a chart. When helm runs a chart it produces a release, and that release creates the corresponding resource objects in the Kubernetes cluster.

II. Why Helm is needed

Deploying services comes with several challenges:

1. There are too many resource manifests to manage easily; we want to manage them as one whole service.

2. We want version management for applications, for example releasing and rolling back to a specified version.

3. We want to reuse resource manifests efficiently.

III. Helm version notes

Helm currently has two major versions, v2 and v3.

The Helm team released v3 in November 2019. Compared with v2, the biggest change is the removal of Tiller, along with a large-scale refactoring of the codebase.

Helm v3 also brings many other improvements over v2; for example, v3 allows releases with the same name in different namespaces. For production use we recommend v3, not only because it is more capable, but also because it is comparatively more stable.


Official docs: https://helm.sh/docs/intro/install/

GitHub releases: https://github.com/helm/helm/releases

IV. Quick Helm deployment

We will deploy the v3 version.

1. Download the package

wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz

2. Extract the package, copy the helm binary into a directory on PATH, and clean up

[root@Master231 helm]# tar xf helm-v3.9.4-linux-amd64.tar.gz && mv linux-amd64/helm /usr/local/sbin/ && rm -rf linux-amd64/

3. Verify that helm installed successfully

[root@Master231 helm]# helm -h

4. Overview of the helm commands

Overview of the available commands:

	completion      Generate shell autocompletion scripts (enable with "source <(helm completion bash)").
	create          Create a new chart with the given name.
	dependency      Manage a chart's dependencies.
	env             Show the helm client's environment information.
	get             Download extended information of a named release.
	help            Show help about any command.
	history         Fetch release history.
	install         Install a chart.
	lint            Examine a chart for possible issues.
	list            List releases.
	package         Package a chart directory into a chart archive.
	plugin          Install, list, or uninstall Helm plugins.
	pull            Download a chart from a repository and unpack it into a local directory.
	repo            Add, list, remove, update, and index chart repositories.
	rollback        Roll back a release to a previous revision.
	search          Search for a keyword in charts.
	show            Show detailed information about a chart.
	status          Display the status of the named RELEASE_NAME.
	template        Render chart templates locally.
	test            Run the tests for a release.
	uninstall       Uninstall a release.
	upgrade         Upgrade a release.
	verify          Verify that a chart at the given path has been signed and is valid.
	version         Print the client version information.

5. Check the helm version

[root@Master231 helm]# helm version
version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.17.13"}

V. Configuring shell completion for the helm command

1. Enable helm auto-completion in the current shell session

[root@Master231 helm]# source <(helm completion bash)
[root@Master231 helm]# helm 
completion  (generate autocompletion scripts for th…)
create      (create a new chart with the given name)
dependency  (manage a chart's dependencies)
env         (helm client environment information)
get         (download extended information of a nam…)
help        (Help about any command)
history     (fetch release history)
install     (install a chart)
lint        (examine a chart for possible issues)
--More--

2. Enable auto-completion for newly opened sessions (Linux)

[root@Master231 helm]# helm completion bash > /etc/bash_completion.d/helm

3. Enable auto-completion for newly opened sessions (macOS)

[root@Master231 helm]# helm completion bash > /usr/local/etc/bash_completion.d/helm

VI. Managing the Chart lifecycle

1. Create a chart

[root@Master231 helm]# helm create linux
Creating linux
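helm create scaffolds a standard chart layout; the exact file list can vary slightly between Helm versions, but it looks roughly like this:

linux/
|-- Chart.yaml          # chart metadata: name, version, appVersion
|-- values.yaml         # default values consumed by the templates
|-- charts/             # packaged sub-chart dependencies
`-- templates/          # manifests rendered by the template engine
    |-- _helpers.tpl    # named template helpers
    |-- deployment.yaml
    |-- service.yaml
    |-- serviceaccount.yaml
    |-- hpa.yaml
    |-- ingress.yaml
    |-- NOTES.txt       # the message printed after "helm install"
    `-- tests/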

2. Create a namespace imperatively

[root@Master231 helm]# kubectl create ns koten
namespace/koten created

3. Install the chart

[root@Master231 helm]# helm install myhelm linux -n koten
NAME: myhelm
LAST DEPLOYED: Mon Jun 26 22:57:26 2023
NAMESPACE: koten
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace koten -l "app.kubernetes.io/name=linux,app.kubernetes.io/instance=myhelm" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace koten $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace koten port-forward $POD_NAME 8080:$CONTAINER_PORT
[root@Master231 helm]# 

4. Check the release information and the Kubernetes resources it created (in my test, helm 3.9's default chart deploys nginx:1.16)

[root@Master231 helm]# helm list -n koten
NAME  	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART      	APP VERSION
myhelm	koten    	1       	2023-06-26 22:57:26.258041226 +0800 CST	deployed	linux-0.1.0	1.16.0   

[root@Master231 helm]# kubectl get all -n koten
NAME                                READY   STATUS    RESTARTS   AGE
pod/myhelm-linux-746ddbbd9b-2lzg5   1/1     Running   0          2m40s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/myhelm-linux   ClusterIP   10.200.240.194   <none>        80/TCP    2m40s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myhelm-linux   1/1     1            1           2m40s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/myhelm-linux-746ddbbd9b   1         1         1       2m40s

[root@Master231 helm]# kubectl describe pod myhelm-linux-746ddbbd9b-2lzg5 -n koten | grep nginx
    Image:          nginx:1.16.0
    Image ID:       docker-pullable://nginx@sha256:3e373fd5b8d41baeddc24be311c5c6929425c04cabf893b874ac09b72a798010
  Normal  Pulling    3m36s  kubelet            Pulling image "nginx:1.16.0"
  Normal  Pulled     85s    kubelet            Successfully pulled image "nginx:1.16.0" in 2m11.850457913s (2m11.85047456s including waiting)

5. Modify the values to customize the image, then try installing again

The image tag is actually read from values.yaml first; only if it is empty does the chart fall back to the appVersion value in Chart.yaml.
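This fallback comes from the scaffolded templates/deployment.yaml, which typically renders the image with a line like the one below; "| default" substitutes .Chart.AppVersion when the tag is empty:

          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"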

# values.yaml before the change
[root@Master231 helm]# head -15 linux/values.yaml
# Default values for linux.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

# values.yaml sets no tag, so by default the value from Chart.yaml is used
[root@Master231 helm]# grep -r 1.16.0 linux/*
linux/Chart.yaml:appVersion: "1.16.0"

# values.yaml after the change
[root@Master231 helm]# head -15 linux/values.yaml
# Default values for linux.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: harbor.koten.com/koten-web/nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "1.24.0-alpine"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

6. Install the chart again; note that the release name must not be the same as an existing one

[root@Master231 helm]# helm install myhelm02 linux/ -n koten
NAME: myhelm02
LAST DEPLOYED: Mon Jun 26 23:09:26 2023
NAMESPACE: koten
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace koten -l "app.kubernetes.io/name=linux,app.kubernetes.io/instance=myhelm02" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace koten $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace koten port-forward $POD_NAME 8080:$CONTAINER_PORT

7. Uninstall a release

[root@Master231 helm]# helm list -n koten
NAME    	NAMESPACE	REVISION	UPDATED                                	STATUS      CHART      	APP VERSION
myhelm  	koten    	1       	2023-06-26 22:57:26.258041226 +0800 CST	deployed    linux-0.1.0	1.16.0     
myhelm02	koten    	1       	2023-06-26 23:09:26.021303289 +0800 CST	deployed    linux-0.1.0	1.16.0  

[root@Master231 helm]# helm -n koten uninstall myhelm
release "myhelm" uninstalled

[root@Master231 helm]# helm list -n koten
NAME    	NAMESPACE	REVISION	UPDATED                                	STATUS      CHART      	APP VERSION
myhelm02	koten    	1       	2023-06-26 23:09:26.021303289 +0800 CST	deployed    linux-0.1.0	1.16.0 

VII. Custom Charts

1. Without values.yaml

1. Clear out the chart's templates so we can define our own

[root@Master231 helm]# rm -rf linux/templates/*

2. Empty the values file

[root@Master231 helm]# > linux/values.yaml

3. Customize the chart metadata

[root@Master231 helm]# cat > linux/Chart.yaml <<'EOF'
apiVersion: v2
name: linux
description: linux k8s tomcat demo deploy
type: application
version: "v0.1"
appVersion: "1.0"
EOF

4. Create the resource manifests and put them under templates/

[root@Master231 helm]# cat linux/templates/01-deploy-tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: app
      operator: Exists
  template:
    metadata:
      labels:
        app: koten-mysql
    spec:
      volumes:
       - name: data
         nfs:
           server: master231
           path: /koten/data/kubernetes/tomcat-db
      containers:
        - name: mysql
          image: harbor.koten.com/koten-db/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'
          volumeMounts:
          - name: data
            mountPath: /var/lib/mysql

---

apiVersion: v1
kind: Service
metadata:
  name: koten-mysql
spec:
  selector:
     app: koten-mysql
  ports:
  - port: 3306
    targetPort: 3306

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: koten-tomcat-app
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: app
      operator: Exists
  template:
    metadata:
      labels:
        app: koten-tomcat-app
    spec:
      containers:
        - name: tomcat
          image: harbor.koten.com/koten-web/tomcat:v1
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: koten-mysql
          - name: MYSQL_SERVICE_PORT
            value: '3306'

---

apiVersion: v1
kind: Service
metadata:
  name: koten-tomcat-app
spec:
  type: NodePort
  selector:
     app: koten-tomcat-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 8080

5. Install the custom chart

[root@Master231 helm]# helm install linux-tomcat linux -n koten
NAME: linux-tomcat
LAST DEPLOYED: Mon Jun 26 23:42:28 2023
NAMESPACE: koten
STATUS: deployed
REVISION: 1
TEST SUITE: None

6. Check whether the service installed successfully by viewing the resources in the cluster

[root@Master231 helm]# kubectl get all -n koten
[root@Master231 helm]# kubectl get pv,pvc -n koten

7. Customize the post-install notes

The message helm prints after a successful install can be changed. The default NOTES.txt is written in Helm's template language, which is implemented on top of Go's template library and is used when deploying applications into a Kubernetes cluster. Its syntax is similar to Go templates, with extra keywords and functions added for defining and rendering Kubernetes resources.

[root@Master231 helm]# cat linux/templates/NOTES.txt
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "linux-test.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "linux-test.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "linux-test.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "linux-test.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

[root@Master231 helm]# cat linux/templates/NOTES.txt
恭喜你,创建完成!

[root@Master231 helm]# helm install linux-test linux/ -n koten
NAME: linux-test
LAST DEPLOYED: Mon Jun 26 23:26:17 2023
NAMESPACE: koten
STATUS: deployed
REVISION: 1
NOTES:
恭喜你,创建完成!

8. Delete the resources

[root@Master231 helm]# helm -n koten uninstall linux-tomcat 
release "linux-tomcat" uninstalled

2. With values.yaml

1. Write the values.yaml file

[root@Master231 helm]# cat > linux/values.yaml <<'EOF'
image:
  repository: harbor.koten.com/koten-web/tomcat
  tag: v1

storage:
  pvc: koten-tomcat-pvc
  sc: managed-nfs-storage

apps:
  author: koten
  hobby: linux

name: tomcat
version: v0.1
EOF

2. Customize the post-install message file

[root@Master231 helm]# cat > linux/templates/NOTES.txt   <<'EOF'
welcome to use koten tomcat apps ...

本次您部署的服务是[{{ .Values.image.repository }}:{{ .Values.image.tag }}]

作者 --->【{{ .Values.apps.author }}】
爱好 --->【{{ .Values.apps.hobby }}】


Successful deploy {{ .Values.name }}:{{ .Values.version }} !!!
EOF

3. Write the resource manifests, referencing the variables predefined in values.yaml

[root@Master231 helm]# rm -rf linux/templates/01-deploy-tomcat.yaml

cat > linux/templates/koten-deploy-mysql.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: app
      operator: Exists
  template:
    metadata:
      labels:
        app: koten-mysql
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
           claimName: {{ .Values.storage.pvc }}
      containers:
        - name: mysql
          image: harbor.koten.com/koten-db/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'
          volumeMounts:
          - name: data
            mountPath: /var/lib/mysql
EOF

cat > linux/templates/koten-deploy-tomcat.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: koten-tomcat-app
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: app
      operator: Exists
  template:
    metadata:
      labels:
        app: koten-tomcat-app
    spec:
      containers:
        - name: myweb
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: koten-mysql
          - name: MYSQL_SERVICE_PORT
            value: '3306'
EOF

cat > linux/templates/koten-mysql-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: koten-mysql
spec:
  selector:
     app: koten-mysql
  ports:
  - port: 3306
    targetPort: 3306
EOF

cat > linux/templates/koten-sc-pvc.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.storage.pvc }}
  annotations:
    # Name of the dynamic StorageClass to use; change it to fit your own cluster, and the StorageClass must already exist!
    volume.beta.kubernetes.io/storage-class: {{ .Values.storage.sc }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

cat > linux/templates/koten-tomcat-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: koten-tomcat-app
spec:
  type: NodePort
  selector:
     app: koten-tomcat-app
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 8080
EOF
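Before installing in the next step, it can be worth validating and rendering the chart locally; a sketch using the chart directory and release name from this section:

# Static checks on Chart.yaml, values.yaml, and the templates
helm lint linux
# Render the manifests locally without touching the cluster, to confirm the values are substituted
helm template linux-tomcat linux -n koten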

4. Install the chart

[root@Master231 helm]# ls linux/templates/
koten-deploy-mysql.yaml   koten-sc-pvc.yaml
koten-deploy-tomcat.yaml  koten-tomcat-svc.yaml
koten-mysql-svc.yaml      NOTES.txt

[root@Master231 helm]# helm -n koten install linux-tomcat linux
NAME: linux-tomcat
LAST DEPLOYED: Mon Jun 26 23:48:06 2023
NAMESPACE: koten
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
welcome to use koten tomcat apps ...

本次您部署的服务是[harbor.koten.com/koten-web/tomcat:v1]

作者 --->【koten】
爱好 --->【linux】


Successful deploy tomcat:v0.1 !!!

5. Verify the installation by viewing the resources in the cluster

In my case the MySQL Pod did not come up; the Pod details showed there was nothing left on the node for it to mount, probably left over from an earlier PVC. A couple of quick checks are sketched after the output below.

[root@Master231 helm]# kubectl get all -n koten
NAME                                    READY   STATUS    RESTARTS   AGE
pod/koten-tomcat-app-7f7455849d-4v9q7   1/1     Running   0          54s
pod/mysql-94c66994f-bhtsv               0/1     Pending   0          54s

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/koten-mysql        ClusterIP   10.200.103.120   <none>        3306/TCP        54s
service/koten-tomcat-app   NodePort    10.200.8.250     <none>        8080:8080/TCP   54s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/koten-tomcat-app   1/1     1            1           54s
deployment.apps/mysql              0/1     1            0           54s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/koten-tomcat-app-7f7455849d   1         1         1       54s
replicaset.apps/mysql-94c66994f               1         1         0       54s


[root@Master231 helm]# kubectl get pv,pvc -n koten
NAME                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                     STORAGECLASS   REASON   AGE
persistentvolume/koten-linux-pv01   2Gi        RWX            Retain           Bound       default/test-claim                                2d3h
persistentvolume/koten-linux-pv02   5Gi        RWX            Retain           Bound       default/koten-linux-pvc                           2d3h
persistentvolume/koten-linux-pv03   10Gi       RWX            Recycle          Available                                                     2d3h

NAME                                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/koten-tomcat-pvc   Pending                                      managed-nfs-storage   66s
[root@Master231 helm]# 
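A couple of quick checks when a PVC stays Pending like this (the names are taken from the values.yaml above):

# The Events at the bottom usually say why the claim cannot be bound or provisioned
kubectl -n koten describe pvc koten-tomcat-pvc
# Confirm the StorageClass referenced in values.yaml actually exists
kubectl get sc managed-nfs-storage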

6. Clean up the resources

[root@Master231 helm]# helm -n koten uninstall linux-tomcat 
release "linux-tomcat" uninstalled

VIII. Upgrading/rolling back with helm

1. Deploying with helm

1. Create a chart

[root@Master231 helm]# helm create koten-web
Creating koten-web

2. Edit the chart's values.yaml to install nginx:1.24.0

[root@Master231 helm]# sed -i '/repository/s#nginx#harbor.koten.com/koten-web/nginx#' koten-web/values.yaml
[root@Master231 helm]# sed -i '/tag:/s#""#1.24.0-alpine#' koten-web/values.yaml

3. Install the chart

[root@Master231 helm]# helm install linux-web koten-web/
NAME: linux-web
LAST DEPLOYED: Tue Jun 27 00:26:25 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=koten-web,app.kubernetes.io/instance=linux-web" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

4. Verify the deployed nginx version

[root@Master231 helm]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
linux-web-koten-web-858b5ff9dd-nhjck   1/1     Running   0          20s   10.100.1.43   worker232   <none>           <none>
[root@Master231 helm]# curl -sI 10.100.1.43|grep nginx
Server: nginx/1.24.0

5. Check the release revisions

[root@Master231 helm]# helm list 
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS      CHART          	APP VERSION
linux-web	default  	1       	2023-06-27 00:26:25.718077442 +0800 CST	deployed    koten-web-0.1.0	1.16.0     

2. Upgrading/rolling back via the values.yaml file

This really just means changing the image; here I only test the upgrade.

1. Edit the file

[root@Master231 helm]# sed -i 's#1.24.0#1.25.1#' koten-web/values.yaml

2. Upgrade based on the file

[root@Master231 helm]# helm upgrade -f koten-web/values.yaml linux-web koten-web/
Release "linux-web" has been upgraded. Happy Helming!
NAME: linux-web
LAST DEPLOYED: Tue Jun 27 00:30:15 2023
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=koten-web,app.kubernetes.io/instance=linux-web" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

3. Check the release revisions

[root@Master231 helm]# helm list
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS      CHART          	APP VERSION
linux-web	default  	2       	2023-06-27 00:30:15.291450689 +0800 CST	deployed    koten-web-0.1.0	1.16.0   
[root@Master231 helm]# helm history linux-web 
REVISION	UPDATED                 	STATUS    	CHART          	APP VERSION	DESCRIPTION     
1       	Tue Jun 27 00:26:25 2023	superseded	koten-web-0.1.0	1.16.0     	Install complete
2       	Tue Jun 27 00:30:15 2023	deployed  	koten-web-0.1.0	1.16.0     	Upgrade complete

3. Upgrading/rolling back by changing the image on the command line

This is similar to a Deployment rollback, but more complete, and helm history clearly shows much more information. Here I only demonstrate reverting the image by changing it on the command line.

1. Revert the image by setting it on the command line

[root@Master231 helm]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
linux-web-koten-web-7c59cdf7f6-2l7rw   1/1     Running   0          3m30s   10.100.2.104   worker233   <none>           <none>

[root@Master231 helm]# curl -sI 10.100.2.104 | grep nginx
Server: nginx/1.25.1

[root@Master231 helm]# helm upgrade --set image.tag=1.24.0-alpine,replicaCount=3 linux-web koten-web/
Release "linux-web" has been upgraded. Happy Helming!
NAME: linux-web
LAST DEPLOYED: Tue Jun 27 00:36:05 2023
NAMESPACE: default
STATUS: deployed
REVISION: 3
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=koten-web,app.kubernetes.io/instance=linux-web" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT

[root@Master231 helm]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
linux-web-koten-web-7c59cdf7f6-2l7rw   1/1     Running   0          6m37s   10.100.2.104   worker233   <none>           <none>
linux-web-koten-web-7c59cdf7f6-7x9qj   1/1     Running   0          47s     10.100.1.45    worker232   <none>           <none>
linux-web-koten-web-7c59cdf7f6-stgc7   1/1     Running   0          47s     10.100.1.44    worker232   <none>           <none>

[root@Master231 helm]# curl -sI 10.100.1.44 | grep nginx
Server: nginx/1.24.0

2. Check the revisions

[root@Master231 helm]# helm history linux-web 
REVISION	UPDATED                 	STATUS    	CHART          	APP VERSION	DESCRIPTION     
1       	Tue Jun 27 00:26:25 2023	superseded	koten-web-0.1.0	1.16.0     	Install complete
2       	Tue Jun 27 00:30:15 2023	superseded	koten-web-0.1.0	1.16.0     	Upgrade complete
3       	Tue Jun 27 00:36:05 2023	deployed  	koten-web-0.1.0	1.16.0     	Upgrade complete
[root@Master231 helm]# helm list 
NAME     	NAMESPACE	REVISION	UPDATED                                	STATUS      CHART          	APP VERSION
linux-web	default  	3       	2023-06-27 00:36:05.335731569 +0800 CST	deployed    koten-web-0.1.0	1.16.0  

4. Rolling back without specifying a revision

By default this rolls back to the last revision that actually changed something: a normal rollback would go from revision 4 to 3, but if your revision 3 made no changes and is identical to 2, it rolls back to 2.

[root@Master231 helm]# helm rollback linux-web
Rollback was a success! Happy Helming!
[root@Master231 helm]# helm history linux-web 
REVISION	UPDATED                 	STATUS    	CHART          	APP VERSION	DESCRIPTION     
1       	Tue Jun 27 00:26:25 2023	superseded	koten-web-0.1.0	1.16.0     	Install complete
2       	Tue Jun 27 00:30:15 2023	superseded	koten-web-0.1.0	1.16.0     	Upgrade complete
3       	Tue Jun 27 00:36:05 2023	superseded	koten-web-0.1.0	1.16.0     	Upgrade complete
4       	Tue Jun 27 00:40:13 2023	deployed  	koten-web-0.1.0	1.16.0     	Rollback to 3   

5. Rolling back to a specified revision

Roll back directly to the specified historical revision.

[root@Master231 helm]# helm rollback linux-web 1
Rollback was a success! Happy Helming!
[root@Master231 helm]# helm history linux-web 
REVISION	UPDATED                 	STATUS    	CHART          	APP VERSION	DESCRIPTION     
1       	Tue Jun 27 00:26:25 2023	superseded	koten-web-0.1.0	1.16.0     	Install complete
2       	Tue Jun 27 00:30:15 2023	superseded	koten-web-0.1.0	1.16.0     	Upgrade complete
3       	Tue Jun 27 00:36:05 2023	superseded	koten-web-0.1.0	1.16.0     	Upgrade complete
4       	Tue Jun 27 00:40:13 2023	superseded	koten-web-0.1.0	1.16.0     	Rollback to 2   
5       	Tue Jun 27 00:42:19 2023	deployed  	koten-web-0.1.0	1.16.0     	Rollback to 1   

IX. Mainstream Chart repositories

These repositories let you pull charts remotely and install them directly; I find it quite similar to a Docker registry.

Public chart repositories on the internet whose ready-made packages can be used directly:
Microsoft Azure mirror: http://mirror.azure.cn/kubernetes/charts/
Aliyun repository: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

1. Adding a Chart repository

Check the existing repository list; by default no repository address is configured.

[root@Master231 helm]# helm repo list

Add the Microsoft Azure mirror to the helm client's repositories.

[root@Master231 helm]# helm repo add azure http://mirror.azure.cn/kubernetes/charts/ 
"azure" has been added to your repositories
[root@Master231 helm]# helm repo list 
NAME   	URL                                             
azure  	http://mirror.azure.cn/kubernetes/charts/

Update the repository index.

[root@Master231 helm]# helm repo update

2. Searching for a Chart

[root@Master231 helm]# helm search repo kafka
NAME                    	CHART VERSION	APP VERSION	DESCRIPTION                                       
azure/kafka-manager     	2.3.5        	1.3.3.22   	DEPRECATED - A tool for managing Apache Kafka.    
azure/schema-registry-ui	0.4.4        	v0.9.5     	DEPRECATED - This is a web tool for the conflue...
[root@Master231 helm]# 

3. Pulling a third-party Chart

1. Download the chart

[root@Master231 helm]# helm pull azure/kafka-manager

2. Extract the chart

[root@Master231 helm]# tar xf kafka-manager-2.3.5.tgz

3. Check whether the resources' apiVersion is still valid; otherwise the release may fail to run

[root@Master231 helm]# cat kafka-manager/templates/deployment.yaml

4. Install the chart

[root@Master231 helm]# helm install koten-kafka kafka-manager
WARNING: This chart is deprecated
NAME: koten-kafka
LAST DEPLOYED: Tue Jun 27 00:51:51 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=kafka-manager,release=koten-kafka" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:9000

5. Open port forwarding as suggested by the NOTES (check in advance that the port is not already in use); the forward command itself is sketched below

[root@Master231 helm]# export POD_NAME=$(kubectl get pods --namespace default -l "app=kafka-manager,release=koten-kafka" -o jsonpath="{.items[0].metadata.name}")
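The forward itself follows the NOTES printed above; adding --address 0.0.0.0 is my assumption here so that the node IP used in the next step is reachable from outside the host:

kubectl port-forward --address 0.0.0.0 $POD_NAME 8080:9000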

6. Test whether the service was deployed successfully

[root@Master231 helm]# curl http://10.0.0.231:8080

4. Private Chart repositories for helm

References: https://github.com/helm/chartmuseum and https://hub.docker.com/r/chartmuseum/chartmuseum

If you are interested, follow the links above and deploy one yourself; I will not walk through it in detail here. It is similar in spirit to a private Harbor registry. A minimal sketch follows below.
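A minimal ChartMuseum sketch under assumptions (the image tag, host port 8081, and the /opt/charts path are mine, not from a real session); check the links above before relying on it:

# Run ChartMuseum with local storage
docker run -d --name chartmuseum -p 8081:8080 \
  -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts \
  -v /opt/charts:/charts chartmuseum/chartmuseum:latest

# Package a chart and upload it through ChartMuseum's HTTP API
helm package linux
curl --data-binary "@linux-0.1.0.tgz" http://127.0.0.1:8081/api/charts   # filename depends on the version in Chart.yaml

# Consume it like any other repository
helm repo add myrepo http://127.0.0.1:8081
helm repo update
helm search repo myrepo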


I am koten, with 10 years of ops experience, continuously sharing practical ops content. Thank you for reading and following!
