Elasticsearch is a distributed, scalable, real-time search and analytics engine with a RESTful API. It is designed for cloud environments and offers real-time search, stability, reliability, speed, and straightforward installation. This post records the process of deploying an Elasticsearch cluster on Kubernetes.
1. Preparation
1.1 Lab environment
The Kubernetes cluster consists of two nodes and was built on virtual machines. The master has 2 vCPUs and 2 GB RAM; the worker has 2 vCPUs and 4 GB RAM. The node details are as follows:
# View the k8s cluster node information
root@k8s-master:~# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane 5d v1.24.3 192.168.5.248 <none> Ubuntu 22.04.4 LTS 5.15.0-119-generic containerd://1.7.27
k8s-node Ready <none> 5d v1.24.3 192.168.5.249 <none> Ubuntu 22.04.4 LTS 5.15.0-119-generic containerd://1.7.27
root@k8s-master:~#
1.2 Deploying NFS in advance
Because the Elasticsearch data will be stored on NFS, an nfs-server was installed in advance for this experiment. It runs on the master node at 192.168.5.248, and the shared directory is /k8s.
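For reference, a minimal sketch of how such an NFS server could be prepared on Ubuntu; the package name, export options, and subnet below are assumptions and may differ from the actual environment:
# install the NFS server (assumed package name on Ubuntu 22.04)
apt-get install -y nfs-kernel-server
# create the shared directory
mkdir -p /k8s
# export /k8s to the cluster subnet (options are illustrative)
echo '/k8s 192.168.5.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
# reload the export table
exportfs -ra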
2. Deploying Elasticsearch
In this experiment, Elasticsearch's data storage is provisioned dynamically: a StorageClass creates volumes on NFS as needed and mounts them into the pods, so the nfs-provisioner must be installed first.
2.1 Installing the nfs-provisioner
1) Create the ServiceAccount needed by the nfs-provisioner
root@k8s-master:~/kubernetes/Elasticsearch# cat sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
root@k8s-master:~/kubernetes/Elasticsearch# kubectl apply -f sa.yaml
serviceaccount/nfs-provisioner created
root@k8s-master:~/kubernetes/Elasticsearch# kubectl get sa
NAME SECRETS AGE
default 1 7d2h
nfs-provisioner 0 5s
root@k8s-master:~/kubernetes/Elasticsearch#
2) Grant permissions to the ServiceAccount
root@k8s-master:~/kubernetes/Elasticsearch# kubectl create clusterrolebinding nfs-provisioner-clusterrolebinding --clusterrole=cluster-admin --serviceaccount=default:nfs-provisioner
clusterrolebinding.rbac.authorization.k8s.io/nfs-provisioner-clusterrolebinding created
root@k8s-master:~/kubernetes/Elasticsearch#
root@k8s-master:~/kubernetes/Elasticsearch# kubectl get clusterrolebinding -n default |grep nfs
nfs-provisioner-clusterrolebinding ClusterRole/cluster-admin 68s
3) Install the nfs-provisioner
The NFS shared directory is /k8s.
The nfs-provisioner manifest nfs-provisioner.yaml:
root@k8s-master:~/kubernetes/Elasticsearch# cat nfs-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.5.248
            - name: NFS_PATH
              value: /k8s/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.5.248
            path: /k8s/
In nfs-provisioner.yaml, the value of PROVISIONER_NAME is the provisioner's name and must match the provisioner field of the StorageClass; NFS_SERVER and NFS_PATH must match the NFS server's IP address and shared directory.
Apply/update the manifest nfs-provisioner.yaml:
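The captured output of this step is not reproduced here; the apply command follows the same pattern as the manifests above:
kubectl apply -f nfs-provisioner.yaml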
4) Check the nfs-provisioner
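The original output is also missing here; a quick way to check it, assuming the Deployment was created in the default namespace as above:
kubectl get deployment,pods -o wide | grep nfs-provisioner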
2.2 Installing Elasticsearch in the k8s cluster
Create the namespace es:
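The command itself is not shown in the capture; creating the namespace is a single step:
kubectl create namespace es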
Create the Elasticsearch configuration file:
The Elasticsearch configuration is stored in a ConfigMap, configmap-es-config.yaml:
root@k8s-master:~/kubernetes/Elasticsearch# cat configmap-es-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-es-config
  namespace: es
  labels:
    app: elasticsearch
data:
  elasticsearch.yml: |+
    cluster.name: rshine
    node.name: ${MY_POD_NAME}
    path.data: /usr/share/elasticsearch/data
    path.logs: /usr/share/elasticsearch/logs
    network.host: 0.0.0.0
    http.port: 9200
    transport.tcp.port: 9300
    discovery.zen.ping.unicast.hosts: ["elasticsearch-0.elasticsearch-cluster.es:9300","elasticsearch-1.elasticsearch-cluster.es:9300","elasticsearch-2.elasticsearch-cluster.es:9300"]
    node.master: true
    node.data: true
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    bootstrap.system_call_filter: false
    xpack.security.enabled: false
    indices.fielddata.cache.size: 60%
    indices.queries.cache.size: 40%
root@k8s-master:~/kubernetes/Elasticsearch# kubectl apply -f configmap-es-config.yaml
configmap/configmap-es-config created
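To confirm the ConfigMap exists in the es namespace, a quick check (not part of the original output):
kubectl get configmap -n es
kubectl describe configmap configmap-es-config -n es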
Create the StorageClass
The StorageClass automatically carves out a directory on NFS on demand, creates an NFS-backed PV from that directory along with a PVC bound to it, and the PVC is then mounted into the pods.
The StorageClass manifest storageclass.yaml is as follows:
root@k8s-master:~/kubernetes/Elasticsearch# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: example.com/nfs # this must match the PROVISIONER_NAME value in nfs-provisioner.yaml
root@k8s-master:~/kubernetes/Elasticsearch# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/nfs created
root@k8s-master:~/kubernetes/Elasticsearch#
root@k8s-master:~/kubernetes/Elasticsearch# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs example.com/nfs Delete Immediate false 23h
Deploy a StatefulSet to install Elasticsearch
The StatefulSet manifest statefulset-es.yaml:
root@k8s-master:~/kubernetes/Elasticsearch# cat statefulset-es.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: es
spec:
  serviceName: "elasticsearch-cluster"
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: fix-permissions
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9200
              name: elasticsearch
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m" # the VM resources are limited and there are three replicas, so the heap is kept small; adjust to your machine's actual memory
          volumeMounts:
            - name: elasticsearch-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: elasticsearch-config
          configMap:
            name: configmap-es-config
  volumeClaimTemplates:
    - metadata:
        name: es-data
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: nfs
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-cluster
  namespace: es
  labels:
    app: elasticsearch
spec:
  ports:
    - port: 9200
      name: elasticsearch
  clusterIP: None
  selector:
    app: elasticsearch
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: es
  labels:
    app: elasticsearch
spec:
  ports:
    - port: 9200
      name: elasticsearch
  type: NodePort
  selector:
    app: elasticsearch
The manifest defines two Services: a headless ClusterIP Service and a NodePort Service. The former is for internal communication between the Elasticsearch nodes, while the latter exposes Elasticsearch outside the cluster.
Apply/update the manifest:
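As before, the captured output is omitted; the apply step itself is:
kubectl apply -f statefulset-es.yaml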
Check the StatefulSet and pod status
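A sketch of the checks, with the Services included so the assigned NodePort can be read off:
kubectl get statefulset -n es
kubectl get pods -n es -o wide
kubectl get svc -n es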
Check the PV and PVC automatically created by the StorageClass, and the actual storage paths on NFS:
root@k8s-master:/k8s# kubectl get pv |column -t
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mongodb-pv-1 2Gi RWO Retain Bound default/mongo-dir-mongodb-1-0 6d22h
mongodb-pv-2 2Gi RWO Retain Bound default/mongo-dir-mongodb-2-0 6d22h
pvc-11a86872-3eb4-4e04-b438-73215c110653 1Gi RWX Delete Bound es/es-data-elasticsearch-1 nfs 40h
pvc-5d87dcc2-ea35-4185-a363-04d254967cfe 1Gi RWX Delete Bound es/es-data-elasticsearch-2 nfs 40h
pvc-f64f1a54-e131-45fe-9a77-306074fa1e16 1Gi RWX Delete Bound es/es-data-elasticsearch-0 nfs 40h
root@k8s-master:/k8s#
root@k8s-master:/k8s# kubectl get pvc -n es
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
es-data-elasticsearch-0 Bound pvc-f64f1a54-e131-45fe-9a77-306074fa1e16 1Gi RWX nfs 41h
es-data-elasticsearch-1 Bound pvc-11a86872-3eb4-4e04-b438-73215c110653 1Gi RWX nfs 40h
es-data-elasticsearch-2 Bound pvc-5d87dcc2-ea35-4185-a363-04d254967cfe 1Gi RWX nfs 40h
root@k8s-master:/k8s# ls
es-es-data-elasticsearch-0-pvc-f64f1a54-e131-45fe-9a77-306074fa1e16 mongodb-1
es-es-data-elasticsearch-1-pvc-11a86872-3eb4-4e04-b438-73215c110653 mongodb-2
es-es-data-elasticsearch-2-pvc-5d87dcc2-ea35-4185-a363-04d254967cfe
root@k8s-master:/k8s#
3. Verifying the Elasticsearch installation
Open 192.168.5.248:32377 in a browser.
If Elasticsearch's cluster information is displayed, the installation was successful.
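The same check can also be done from the command line; 32377 is the NodePort observed in this deployment, so substitute whatever port kubectl get svc -n es reports:
# basic banner: name, cluster_name, version, tagline
curl http://192.168.5.248:32377
# cluster health: with three nodes the expected status is "green"
curl http://192.168.5.248:32377/_cluster/health?pretty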