Writing the startup script elasticsearch.sh
- A convenience shell script for starting, stopping and restarting the Elasticsearch service
- The script sets the required environment variables and uses `su - es -c` to run commands as the es user
- Once the script is made executable, Elasticsearch can be controlled with a single command
#!/bin/sh
# Environment for the JVM and the Elasticsearch installation
export JAVA_HOME=/opt/software/java11/jdk-11.0.11
export PATH=$JAVA_HOME/bin:$PATH
export ES_HOME=/opt/software/elasticsearch/elasticsearch-7.6.1
export PATH=$ES_HOME/bin:$PATH
if [ $# -lt 1 ]
then
echo "No Args Input..."
exit 1
fi
case $1 in
start)
# Start as the es user, daemonized (-d), writing the PID to the file "pid"
su - es -c "$ES_HOME/bin/elasticsearch -d -p pid"
pid=`ps aux|grep elasticsearch | grep -v grep | awk '{print $2}'`
echo "elasticsearch is started, pid: $pid"
;;
stop)
pid=`ps aux|grep elasticsearch | grep -v grep | awk '{print $2}'`
kill -9 $pid
echo "elasticsearch is stopped"
;;
restart)
pid=`ps aux|grep elasticsearch | grep -v grep | awk '{print $2}'`
kill -9 $pid
echo "elasticsearch is stopped"
sleep 1
su - es -c "$ES_HOME/bin/elasticsearch -d -p pid"
pid=`ps aux|grep elasticsearch | grep -v grep | awk '{print $2}'`
echo ">>>> elasticsearch is restarted... pid: $pid"
;;
*)
echo "start|stop|restart"
;;
esac
exit 0
Grant execute permission:
chmod +x elasticsearch.sh
Parameter notes:
`su - es -c "command"` switches to the es user once inside the script, runs the command after -c, and then drops back from the es user to the invoking user.
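With the script executable, it is driven like this (a quick usage sketch; it assumes only one Elasticsearch process runs on the host, since the grep-based PID lookup matches any process whose command line contains "elasticsearch"):
./elasticsearch.sh start    # start as daemon under the es user
./elasticsearch.sh stop     # kill the running instance
./elasticsearch.sh restart  # stop, wait 1s, start again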
Startup may fail with: bootstrap checks failed (seen on Elasticsearch 7.x and 8.x alike)
Symptom 1
Installing and running ES as the root user fails as follows:
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
[root@localhost elasticsearch-8.17]# ./bin/elasticsearch
[2022-09-18T16:40:41,166][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [localhost.localdomain] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116) ~[elasticsearch-cli-8.17.jar:8.17]
at org.elasticsearch.cli.Command.main(Command.java:79) ~[elasticsearch-cli-8.17.jar:8.17]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81) ~[elasticsearch-8.17.jar:8.17]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:399) ~[elasticsearch-8.17.jar:8.17]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-8.17.jar:8.17]
... 6 more
uncaught exception in thread [main]
java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:399)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
at org.elasticsearch.cli.Command.main(Command.java:79)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81)
For complete error details, refer to the log at /opt/ES/elasticsearch-8.17/logs/elasticsearch.log
Fix
For security reasons, Elastic forbids starting the ES service as root, so a regular user has to be created before ES will start. Create a new regular user and give it ownership of the ES installation directory.
[root@localhost ES]# useradd es
Error messages:
ERROR: [1] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max number of threads [3795] for user [es] is too low, increase to at least [4096]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Cause:
Newer ES versions have resource requirements the default Linux settings cannot satisfy, so these limits must be raised explicitly.
Fixes:
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
Edit /etc/security/limits.conf and add the following (`*` applies to all users; a specific user such as the ES startup user can be given instead):
* hard nofile 65535
* soft nofile 65535
[2]: max number of threads [3795] for user [es] is too low, increase to at least [4096]
Edit /etc/security/limits.conf and add the following (es is the startup user here):
es - nproc 4096
Note: log out of the shell and log back in — the two limits.conf changes above only take effect once the ES startup user (es or root) logs in again.
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit the /etc/sysctl.conf file and add the setting below:
vi /etc/sysctl.conf
vm.max_map_count=262144
Run this command to apply it immediately:
/sbin/sysctl -p
vm.max_map_count = 262144
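After logging back in as the startup user, the new limits can be verified before starting ES (a quick sanity check; values should match what was configured above):
ulimit -n                  # max open files, expect 65535
ulimit -u                  # max user processes, expect >= 4096
sysctl vm.max_map_count    # expect vm.max_map_count = 262144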
Startup then succeeds.
Other errors:
nested: BindException[Address already in use];
Likely root cause: java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:833)
For complete error details, refer to the log at /data/db/elastic/node-1/logs/ltkj-search.log
Explanation:
The application tried to bind to port 19700 on the local machine and failed because the port is already occupied by another process.
Fixes:
- Find and stop the process occupying the port:
  - On Windows, use `netstat -ano | findstr :19700` to find the PID, then end the process via Task Manager or `taskkill /PID <pid> /F`.
  - On Linux or macOS, use `lsof -i :19700` or `netstat -tulnp | grep :19700` to find the process, then end it with `kill`.
- Change the application's configuration:
  If possible, configure the application to use a different port.
- Handle the conflict automatically:
  If the application supports it, let it pick an unused port by itself.
Before making any change, make sure you know which service owns the port and what it is for, so you do not disrupt something that is supposed to be running.
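On Linux this boils down to two commands (a sketch; port 19700 is the one from this deployment's log, and the kill assumes the listener really is a stale leftover process):
# Print the PID(s) listening on the conflicting port
pid=$(lsof -t -i :19700)
# Ask the process to exit; escalate to kill -9 only if it ignores SIGTERM
kill $pid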
ELK: a near-real-time log search system
ELK overview
ELK is the acronym of three open-source projects: Elasticsearch, Logstash and Kibana, also marketed as the Elastic Stack. Elasticsearch is a distributed, near-real-time search platform built on Lucene and accessed over a RESTful API. Large-scale full-text search scenarios of the Baidu/Google kind can use Elasticsearch as the underlying engine, which says a lot about its search capabilities; it is commonly abbreviated to ES. Logstash is the central data-flow engine of ELK: it collects data in different formats from different sources (files, data stores, message queues), filters it, and ships it to different destinations (files, MQ, Redis, Elasticsearch, Kafka and so on). Kibana presents the data in Elasticsearch through a friendly web UI and provides real-time analysis. In short, ELK is a near-real-time log analysis and search system covering collection, transport, storage, analysis and alerting.
elasticsearch
E – Elasticsearch (storage and query): a distributed search engine built on Apache Lucene. It can be clustered and provides centralized storage, analysis, and powerful search and aggregation over the data.
logstash
L – Logstash (collection): a data-collection engine. It is heavier than Filebeat, but ships with a large set of plugins, supports a rich range of data sources, and can filter, analyze and reformat the collected logs.
kibana
K – Kibana (visualization): a web platform for viewing the data in Elasticsearch in real time, with rich charting and statistics.
I. ELK architecture
nginx + filebeat --> kafka (message buffer queue) --> ELK, or (database) --> mysql
(The original figure did not include the Kafka buffer queue; a buffer queue is mainly there for data safety and to even out the load between Logstash and Elasticsearch.)
The architecture works roughly as follows: Filebeat is used as the log collector on the application side because it is lightweight and uses few server resources. Filebeat is normally paired with Logstash: Filebeat collects the data, Logstash filters it and pushes it to Elasticsearch, and Elasticsearch stores the incoming data and makes it queryable. This is currently the most common deployment.
II. ELK use cases
- Scenario:
  Searching and analyzing logs at scale.
- Example:
  Suppose the front-end nginx proxy cluster has 5 machines, each with its own nginx logs. Users suddenly report that the site is unreachable, so we need to filter the nginx logs for HTTP status codes 404, 4xx and 5xx. Doing that with awk and grep means logging in to all 5 machines, running the commands, and merging the results — clumsy, inefficient, and slow once the log volume grows. In such cases the logs should be managed and analyzed centrally, and building an ELK near-real-time log search system is a good way to do that.
III. Installing ELK
- Server preparation and planning
Prepare four virtual machines:
Hostname | IP address | Role
---|---|---
elasticsearch-1 | 192.168.21.3 | ES cluster
elasticsearch-2 | 192.168.21.4 | ES cluster
elasticsearch-3 | 192.168.21.18 | ES cluster
kibana | 192.168.220.21 | logstash + kibana
web | 192.168.220.21 | nginx + filebeat
Download (official site): https://www.elastic.co/cn/downloads
- Base environment setup
All ELK hosts need the following.
1. Deploy the JDK
# Upload the JDK package
[root@elasticsearch-1 ~]# mkdir -p /usr/local/soft/java/
[root@localhost node-3]# cd /usr/local/soft/java/
[root@elasticsearch-1 jdk]# ls
jdk-8u381-linux-i586.tar.gz
# Extract into the current directory
[root@elasticsearch-1 jdk]# tar -zxvf jdk-xxxxxx.tar.gz
[root@elasticsearch-1 jdk]# ls
jdk1.8.0_381 jdk-8u381-linux-i586.tar.gz
# Configure /etc/profile (JAVA_HOME must point at the directory the JDK was actually extracted to)
[root@elasticsearch-1 ~]# vim /etc/profile
JAVA_HOME=/jdk/jdk1.8.0_381
JRE_HOME=/jdk/jdk1.8.0_381/jre
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH JAVA_HOME CLASSPATH JRE_HOME
# Reload the file
[root@elasticsearch-1 ~]# source /etc/profile
# Check whether it worked
[root@elasticsearch-1 jdk]# java -version
-bash: /jdk/jdk1.8.0_381/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
# This happens because the 32-bit glibc dependency is missing, so install it
[root@elasticsearch-1 jdk]# yum install glibc.i686 -y
[root@kibana jdk]# java -version
java version "1.8.0_381"
Java(TM) SE Runtime Environment (build 1.8.0_381-b09)
Java HotSpot(TM) Client VM (build 25.381-b09, mixed mode)
2. If necessary: disable the firewall and SELinux
# Stop the firewall
[root@elasticsearch-1 ~]# systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
[root@elasticsearch-1 ~]# setenforce 0
[root@elasticsearch-1 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
3. OS parameter tuning
(See the common errors section above.)
# Process count, open-file and memory-lock limits
[root@elasticsearch-1 ~]# vim /etc/security/limits.conf
elastic soft memlock unlimited   # no memory-lock limit
elastic hard memlock unlimited
elastic soft nofile 65535        # open-file limit
elastic hard nofile 65535
elastic soft nproc 65535         # process-count limit
elastic hard nproc 65535
# Virtual memory
[root@elasticsearch-1 ~]# vim /etc/sysctl.conf
vm.max_map_count=655350   # max_map_count caps the number of VMAs (virtual memory areas) a process may own
# Run sysctl -p to apply
[root@elasticsearch-1 ~]# sysctl -p
IV. Deploying ELK
1. elasticsearch
(1) Create the es user
[root@elasticsearch-1 ~]# useradd es
[root@elasticsearch-1 ~]# echo "elastic.com" | passwd --stdin es
(Some commands later in this guide use elastic as the user name instead of es; pick one name and keep it consistent.)
(2) Extract the elasticsearch archive. (2025-02-17: redeployed with es8.17 — mind the differences from the 7.14 settings.)
[root@elasticsearch-1 elk]# ls
elasticsearch-8.17.2-linux-x86_64.tar.gz
# Extract elasticsearch-8.17.2-linux-x86_64.tar.gz into the / directory
[root@elasticsearch-1 elk]# tar zxvf elasticsearch-8.17.2-linux-x86_64.tar.gz -C /
# Grant ownership
[root@elasticsearch-1 ~]# chown -R elastic:elastic /es8.17/
[root@elasticsearch-1 ~]# ll -d /es8.17/
drwxr-xr-x 9 elastic elastic 172 Aug 30 16:01 /es8.17/
(3) Create the directories for elasticsearch data and logs
[root@elasticsearch-1 ~]# mkdir -p /data/db/elastic/{data,logs}
# Grant ownership
[root@elasticsearch-1 ~]# chown -R elastic:elastic /data/db/elastic
[root@elasticsearch-1 ~]# ll /data/db/elastic/
drwxr-xr-x 5 elastic elastic 87 Sep 4 17:27 data
drwxr-xr-x 2 elastic elastic 4096 Aug 31 09:45 logs
Possible error:
After the ES cluster is deployed, check its health.
Error message:
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
Fix
vim /usr/local/soft/es8.17/config/elasticsearch.yml
Set the cluster IPs to your own cluster's IPs and restart:
discovery.seed_hosts: ["10.0.0.X","10.0.0.X","10.0.0.X"]
cluster.initial_master_nodes: ["10.0.0.X","10.0.0.X","10.0.0.X"]
systemctl restart elasticsearch.service
Query the health status again; it is back to normal.
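A quick way to confirm from the shell (host, port and credentials follow this deployment; adjust to yours):
curl -u elastic:elastic.com http://192.168.21.18:19200/_cat/health?v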
2. Enabling X-Pack in ElasticSearch
(1) Generate the certificate and key for secure inter-node communication (only on the elasticsearch-1 node; the other nodes get copies via scp)
# Issue the CA certificate
[root@elasticsearch-1 ~]# ./es8.17/bin/elasticsearch-certutil ca
# Just press Enter at every prompt
# Issue the node certificate
[root@elasticsearch-1 ~]# ./es8.17/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Just press Enter at every prompt
(2) Move the certificates to the /usr/local/soft/es8.17/config/cert directory (only on the elasticsearch-1 node)
(3) Set permissions on the generated files
chmod 660 elastic-certificates.p12
chown es:es elastic-certificates.p12
# If the cert directory does not exist, create it (be sure to create it as the ES user)
[es@m elasticsearch-8.17]$ mkdir config/cert
[es@m elasticsearch-8.17]$ mv elastic-stack-ca.p12 config/cert/
[es@m elasticsearch-8.17]$ mv elastic-certificates.p12 config/cert/
Copy them to the other two machines:
# 192.168.21.4
[root@localhost node-2]# scp -r root@192.168.21.3:/usr/local/soft/es8.17/elasticsearch-8.17/config/cert /usr/local/soft/es8.17/elasticsearch-8.17/config
# 192.168.21.18
[root@localhost node-3]# scp -r root@192.168.21.3:/usr/local/soft/es8.17/elasticsearch-8.17/config/cert /usr/local/soft/es8.17/elasticsearch-8.17/config
(4) Create the elasticsearch.keystore file (only on the elasticsearch-1 node; the other nodes get copies via scp)
[es@m elasticsearch-8.17]$ /usr/local/soft/es8.17/elasticsearch-8.17/bin/elasticsearch-keystore create
The first node is now configured.
ES 8.x startup: ERROR: Elasticsearch died while starting up, with exit code 137
When Elasticsearch 8.x dies at startup with exit code 137, the process was almost certainly terminated by the operating system's Out-Of-Memory (OOM) killer — the machine ran out of memory. Analysis and fixes:
- Check system memory usage
Use the following command to inspect memory usage:
free -h
It shows total, used and free memory plus swap usage. If physical memory is short, consider shutting down unneeded processes or adding RAM to the server.
- Adjust the Elasticsearch heap settings
Open Elasticsearch's jvm.options file, normally in the config directory, and set the initial and maximum heap sizes:
[root@m es8.17]# vim config/jvm.options
-Xms2g
-Xmx2g
Set -Xms and -Xmx to values that fit the server's memory. As a rule of thumb, give the heap half of physical memory, but never more than 32 GB. Save the file and restart Elasticsearch.
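On ES 7.7 and later a cleaner alternative is to leave jvm.options untouched and drop an override file into config/jvm.options.d/ instead (a sketch; the file name heap.options is arbitrary):
cat > config/jvm.options.d/heap.options <<'EOF'
-Xms2g
-Xmx2g
EOF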
Copy it to the other two machines:
# 192.168.21.4
[root@localhost node-2]# scp -r root@192.168.21.3:/usr/local/soft/es8.17/elasticsearch-8.17/config/elasticsearch.keystore /usr/local/soft/es8.17/elasticsearch-8.17/config
# 192.168.21.18
[root@localhost node-3]# scp -r root@192.168.21.3:/usr/local/soft/es8.17/elasticsearch-8.17/config/elasticsearch.keystore /usr/local/soft/es8.17/elasticsearch-8.17/config
(5) Update ownership of the certificates (all certificates must belong to the ES user)
[root@localhost config]# chown -R es:es cert/
[root@localhost config]# ll /usr/local/soft/es8.17/elasticsearch-8.17/config/
total 44
drwxrwxr-x. 2 es es 66 Aug 17 14:16 cert
-rw-rw----. 1 es es 199 Aug 17 14:22 elasticsearch.keystore
-rw-r-----. 1 es es 3232 Aug 14 17:59 elasticsearch.yml
-rw-r-----. 1 es es 2739 Aug 6 18:23 elasticsearch.yml.back
-rw-r-----. 1 es es 3110 Aug 6 18:23 jvm.options
drwxr-x---. 2 es es 6 Aug 6 18:23 jvm.options.d
-rw-r-----. 1 es es 19089 Aug 6 18:23 log4j2.properties
-rw-r-----. 1 es es 473 Aug 6 18:23 role_mapping.yml
-rw-r-----. 1 es es 197 Aug 6 18:23 roles.yml
-rw-r-----. 1 es es 0 Aug 6 18:23 users
-rw-r-----. 1 es es 0 Aug 6 18:23 users_roles
[root@localhost config]#
(6) Edit the elasticsearch config file
Mind the differences between es7.x and es8.x settings, e.g.:
1. unknown setting [node.master] please check that any required plugins are installed, or check
2. unknown setting [transport.tcp.port] did you mean any of [transport.port, transport.tcp.keep_count]?
[es@m elasticsearch-8.17]$ vim /usr/local/soft/es8.17/elasticsearch-8.17/config/elasticsearch.yml (add the following)
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ltkj-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
# Master-eligible? es8.17 expresses this as node.roles: [ master ]
#### node.master: true
node.roles: [ master, data ]
# Store data? As above, es8.17 expresses this via node.roles: [ master, data ]
#### node.data: true
# Maximum number of local storage nodes
# node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
http.port: 19200
# Custom inter-node transport IP, to avoid the docker0 address being picked
transport.host: 192.168.21.3
# Inter-node transport port; on es8.17 all node-to-node transport settings use transport.port
#### transport.tcp.port: 19230
transport.port: 19230
# Node discovery
discovery.seed_hosts: ["192.168.21.3","192.168.21.4", "192.168.21.18"]
# Needed to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["192.168.21.18"]
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/db/elastic/node/data
#
# Path to log files:
#
path.logs: /data/db/elastic/node/logs
# Settings for the elasticsearch-head plugin and other browser clients (CORS)
http.cors.enabled: true
# "*" or a specific domain
http.cors.allow-origin: "*"
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Authorization,Content-Type,Content-Length
# http.cors.allow-credentials: true
# 4.4.6 Set ES passwords and enable x-pack authentication (optional; recommended for production)
# Enable x-pack in the config: add the following to elasticsearch.yml under config and restart ES.
# Enables x-pack authentication (username/password login)
# ES security 配置
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/local/soft/es8.17/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.keystore.password: elastic.cx.
xpack.security.transport.ssl.truststore.type: PKCS12
# xpack.security.transport.ssl.truststore.path: /usr/local/soft/es8.17/config/certs/elastic-stack-ca.p12
xpack.security.transport.ssl.truststore.path: /usr/local/soft/es8.17/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.password: elastic.cx.
# GeoIP features (IP-to-location)
ingest.geoip.downloader.enabled: false
node-2
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ltkj-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-2
# Master-eligible? es8.17 expresses this via node.roles
#### node.master: false
node.roles: [ data ]
# Store data?
#### node.data: true
# Maximum number of local storage nodes
# node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
http.port: 19200
# Custom inter-node transport IP, to avoid the docker0 address being picked
transport.host: 192.168.21.4
# Inter-node transport port (transport.port on es8.17)
#### transport.tcp.port: 19230
transport.port: 19230
# Node discovery
discovery.seed_hosts: ["192.168.21.3","192.168.21.4", "192.168.21.18"]
# Needed to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["192.168.21.18"]
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/db/elastic/node/data
#
# Path to log files:
#
path.logs: /data/db/elastic/node/logs
# Settings for the elasticsearch-head plugin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*" # or a specific domain
http.cors.allow-headers: Authorization,Content-Type
# ES security 配置
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/local/soft/es8.17/elasticsearch-8.17/config/cert/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/local/soft/es8.17/elasticsearch-8.17/config/cert/elastic-certificates.p12
#
node-3
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ltkj-search
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-3
# Master-eligible? es8.17 expresses this via node.roles
#### node.master: true
node.roles: [ master, data ]
# Store data?
#### node.data: true
# Maximum number of local storage nodes
# node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
http.port: 19200
# Custom inter-node transport IP, to avoid the docker0 address being picked
transport.host: 192.168.21.18
# Inter-node transport port (transport.port on es8.17)
#### transport.tcp.port: 19230
transport.port: 19230
# Node discovery
discovery.seed_hosts: ["192.168.21.3", "192.168.21.4", "192.168.21.18"]
# Needed to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["192.168.21.18"]
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/db/elastic/node/data
#
# Path to log files:
#
path.logs: /data/db/elastic/node/logs
# Settings for the elasticsearch-head plugin (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*" # or a specific domain
http.cors.allow-headers: Authorization,Content-Type
# ES security 配置
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/local/soft/es8.17/elasticsearch-8.17/config/cert/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/local/soft/es8.17/elasticsearch-8.17/config/cert/elastic-certificates.p12
#
(7) Optional: elasticsearch JVM settings
# Set -Xms and -Xmx to match the VM's memory; the VMs here have 2 GB, so both are set to 1 GB (half)
[root@elasticsearch-1 ~]# vim /usr/local/soft/es8.17/elasticsearch-8.17/config/jvm.options
-Xms1g
-Xmx1g
(8) Optional: register elasticsearch with systemd (alternatively, start it with the /usr/local/soft/es8.17/elasticsearch-8.17/startes.sh script above)
[root@localhost node-3]# vim /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=elasticsearch
After=network.target
[Service]
Type=forking
User=es
ExecStart=/usr/local/soft/es8.17/elasticsearch-8.17/bin/elasticsearch -d
PrivateTmp=true
# Maximum number of open files for this process
LimitNOFILE=65535
# Maximum number of processes for this process
LimitNPROC=65535
# Maximum virtual memory
LimitAS=infinity
# Maximum file size
LimitFSIZE=infinity
# Stop timeout; 0 = never time out
TimeoutStopSec=0
# SIGTERM is the signal used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM
KillMode=process
# Do not SIGKILL the Java process
SendSIGKILL=no
# Normal exit status
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
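After writing the unit file, make systemd pick it up before starting the service in step (9) (standard systemctl steps):
systemctl daemon-reload              # re-read unit files
systemctl status elasticsearch       # after starting, confirm active (running)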
(9) Start elasticsearch
[root@localhost node-3]# systemctl start elasticsearch && systemctl enable elasticsearch
(10) Recommended for production: set elasticsearch passwords
[root@localhost node-3]# /usr/local/soft/es8.17/elasticsearch-8.17/bin/elasticsearch-setup-passwords interactive
Shut down all nodes, then restart the node you intend as master and run this on that single node only; by default it creates an index named .security-6.
Note: check the cluster log before running it. Right after the restart the cluster state may still be red rather than yellow or green;
wait until the state turns yellow — the command must be able to create an index, or it will fail.
To generate passwords automatically, replace interactive with auto.
/opt/dtstack/es-6.8.23/es/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
After setting the passwords, restart ES.
Open the head plugin page to view the cluster info.
If you forget the password you can reset it, but not by simply repeating step (10) — that would error out.
When the setup-passwords command runs, ES creates a .security-6 index that stores the password data, so that index must be deleted first, after which step (10) can be repeated.
With authentication now enabled, deleting the index directly will fail, so temporarily disable authentication in elasticsearch.yml:
xpack.security.enabled: false
xpack.security.transport.ssl.enabled: false
Then delete the .security-6 index:
curl -XDELETE http://localhost:9200/.security-6
Finally, repeat step (10) to reset the passwords.
Resetting a password for browser login: elasticsearch-reset-password
Three options are available:
a. Auto-generate a new random password for the elastic account and print it to the console
bin/elasticsearch-reset-password -u elastic
b. Set a new password for elastic interactively
bin/elasticsearch-reset-password --username elastic -i
c. Specify the service URL and account name
bin/elasticsearch-reset-password --url "https://172.0.0.3:9200" --username elastic -i
Verifying the service
Accessing the service
On 7.x the ES service is reachable at: http://localhost:19200/
On 8.x, however, that address shows an error page.
Why
This is expected: Elastic 8 enables SSL by default. Setting xpack.security.http.ssl.enabled from the default `true` to `false` restores plain HTTP.
Recommended approach
Disabling SSL makes the service reachable, but it sidesteps the problem rather than solving it. The better approach is to access the service over HTTPS:
https://localhost:9200/. A reasonably recent browser will then warn about the self-signed certificate.
Workaround
On that Chrome warning page, type the 11 characters thisisunsafe (click anywhere on the page first so it has focus; nothing is echoed while typing) and press Enter — the page then loads.
(11) Check the cluster status
# 1. Cluster health
[root@localhost node-3]# curl -u es:elastic.com http://192.168.21.18:19200/_cat/health?v
C:\Users\wang> curl -u elastic:elastic.com http://192.168.21.18:19200/_cat/health?v
# 2. Per-node status; all three nodes should show up
curl -k -u elastic:[your ES password] https://127.0.0.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
[ip1] 8 22 4 0.81 0.37 0.21 dm * node-1
[ip1] 8 22 5 0.56 0.27 0.14 dm - node-3
[ip1] 7 26 5 0.53 0.23 0.12 dm - node-2
File content search
Goal: users upload PDF, Word or TXT documents, can then fuzzy-search files by attachment name or by file content, and can view the content online.
The Attachment content-extraction plugin
Step 1: to have ES extract the text of attachments, install a plugin: the Ingest Attachment Processor Plugin.
This plugin only does content extraction; there are others, such as OCR plugins, worth looking into if you are interested.
The Ingest Attachment Processor Plugin is a text-extraction plugin; under the hood it uses Elasticsearch's ingest-node capability and provides the key attachment preprocessor. Install it by running the following from the installation directory.
In the bin directory of the ES installation, run:
elasticsearch-plugin install ingest-attachment
Because ES here was installed with Docker, enter the ES container and install the plugin from its bin directory:
[root@iZuf63d0pqnjrga4pi18udZ plugins]# docker exec -it es bash
[root@elasticsearch elasticsearch]# ls
LICENSE.txt NOTICE.txt README.asciidoc bin config data jdk lib logs modules plugins
[root@elasticsearch elasticsearch]# cd bin/
[root@elasticsearch bin]# ls
elasticsearch elasticsearch-certutil elasticsearch-croneval elasticsearch-env-from-file elasticsearch-migrate elasticsearch-plugin elasticsearch-setup-passwords elasticsearch-sql-cli elasticsearch-syskeygen x-pack-env x-pack-watcher-env
elasticsearch-certgen elasticsearch-cli elasticsearch-env elasticsearch-keystore elasticsearch-node elasticsearch-saml-metadata elasticsearch-shard elasticsearch-sql-cli-7.9.3.jar elasticsearch-users x-pack-security-env
[root@elasticsearch bin]# elasticsearch-plugin install ingest-attachment
When it prints "installed", the installation is complete. Restart ES, otherwise step 2 will fail.
In a cluster, the Attachment plugin must be installed on all 3 nodes, or it will likewise fail.
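On nodes without internet access, the plugin can also be installed from a locally downloaded zip (a sketch; the zip path is an assumption, and the plugin version must match the ES version exactly):
# Install from a local file instead of the online repository
bin/elasticsearch-plugin install file:///tmp/ingest-attachment-7.9.3.zip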
Step 2: create a text-extraction pipeline
It converts uploaded attachments to text; Word, PDF and txt are supported (Excel untested, but probably works too).
PUT _ingest/pipeline/attachment
{
"description": "Extract attachment information",
"processors": [
{
"attachment": {
"field": "content",
"indexed_chars": 1000000,
"ignore_missing": true
}
},
{
"remove": {
"field": "content"
}
}
]
}
The pipeline is created successfully.
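The pipeline can also be exercised from the shell: base64-encode a file and index it through the pipeline with curl (a sketch; sample.pdf, the credentials and the doc id are placeholders):
# Encode the file as one base64 line (-w 0 disables line wrapping)
b64=$(base64 -w 0 sample.pdf)
# Index it through the attachment pipeline
curl -u elastic:your_pwd -H 'Content-Type: application/json' \
  -X PUT "http://192.168.21.18:19200/pdftest/_doc/2?pipeline=attachment" \
  -d "{\"content\": \"$b64\"}"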
Example: exact-phrase search
Scenario: searching for "我爱中国". Suppose two documents exist, one whose content contains "我爱中国" and another containing only "中国" — a plain query returns both.
Because the content is analyzed into tokens, an exact search requires match_phrase.
Note: in ES, wildcard is the fuzzy query; match is still an analyzed term query, so with tokenized content it is match_phrase that gives exact phrase matching.
What is match_phrase, and how does it differ from match?
match_phrase matches phrases and can be used to match a multi-word phrase exactly: it analyzes the query string into terms and then requires the document to contain those terms in exactly the same order before it matches.
match, by contrast, only needs to match one or more of the query's terms, in any order. For a query "abc", match will match documents containing "a", "b" or "c" regardless of order, while match_phrase only matches documents containing the complete phrase "abc".
match_phrase therefore suits exact phrase matching; match suits loose term matching.
PUT /pdftest/pdf/1?pipeline=attachment
{
"content":"QmFzZTY057yW56CB6K+05piOCuOAgOOAgEJhc2U2NOe8lueggeimgeaxguaKijPkuKo45L2N5a2X6IqC77yIMyo4PTI077yJ6L2s5YyW5Li6NOS4qjbkvY3nmoTlrZfoioLvvIg0KjY9MjTvvInvvIzkuYvlkI7lnKg25L2N55qE5YmN6Z2i6KGl5Lik5LiqMO+8jOW9ouaIkDjkvY3kuIDkuKrlrZfoioLnmoTlvaLlvI/jgIIg5aaC5p6c5Ymp5LiL55qE5a2X56ym5LiN6LazM+S4quWtl+iKgu+8jOWImeeUqDDloavlhYXvvIzovpPlh7rlrZfnrKbkvb/nlKgnPSfvvIzlm6DmraTnvJbnoIHlkI7ovpPlh7rnmoTmlofmnKzmnKvlsL7lj6/og73kvJrlh7rnjrAx5oiWMuS4qic9J+OAggoK44CA44CA5Li65LqG5L+d6K+B5omA6L6T5Ye655qE57yW56CB5L2N5Y+v6K+75a2X56ym77yMQmFzZTY05Yi25a6a5LqG5LiA5Liq57yW56CB6KGo77yM5Lul5L6/6L+b6KGM57uf5LiA6L2s5o2i44CC57yW56CB6KGo55qE5aSn5bCP5Li6Ml42PTY077yM6L+Z5Lmf5pivQmFzZTY05ZCN56ew55qE55Sx5p2l44CC"
}
GET /pdftest/pdf/1
GET /pdftest/_doc/1
GET pdftest/_search
{
"query": {
"match": {
"attachment.content": {
"query": "编码表ee"
}
}
}
}
GET /pdftest/_search
{
"query": {
"match_phrase": {
"attachment.content": {
"query": "编码表"
}
}
}
}
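The same search works outside Kibana's dev tools via curl (a sketch using this deployment's host and port; substitute real credentials):
curl -u elastic:your_pwd -H 'Content-Type: application/json' \
  "http://192.168.21.18:19200/pdftest/_search" \
  -d '{"query":{"match_phrase":{"attachment.content":{"query":"编码表"}}}}'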
2. Kibana
Appendix: fixing "Cannot find module '../lib/utils/unsupported.js'" when installing Node on CentOS 7
Delete the existing npm link and create a new one.
1. cd into the bin folder of the Node installation (node-v16 here):
cd /usr/local/soft/node-v16/bin
2. Move the npm file in that directory aside (answer yes if prompted):
mv npm npm.bk
3. Re-create the link (relative to the bin directory):
ln -s ../lib/node_modules/npm/bin/npm-cli.js ./npm
Running npm -v again now works.
npm cache clean --force
=================== Kibana setup begins ====================================
Step 1: download
# es7.14
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.14.0-linux-x86_64.tar.gz
# es8.17
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.17.2-linux-x86_64.tar.gz
Step 3: configure the yum source (only relevant for RPM installs); edit kibana.yml
Configure the yum source:
Create kibana.repo under /etc/yum.repos.d/:
cat >> kibana.repo
Once created, add the following to it:
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Edit the kibana.yml config file:
Go into the config folder under kibana-7.14 and edit kibana.yml:
vim kibana.yml
Add the following:
server.port: 15601
# IP addresses allowed to access Kibana; use 0.0.0.0 to allow any IP
server.host: "192.168.21.18"
#server.basePath: ""
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
# Server name
server.name: "ltkj-kibana"
# The URLs of the Elasticsearch instances to use for all your queries.
# ES service addresses
elasticsearch.hosts: ["http://192.168.21.18:19200","http://192.168.21.3:19200","http://192.168.21.4:19200"]
xpack.reporting.capture.browser.chromium.disableSandbox: false
#elasticsearch.preserveHost: true
#
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
# Index Kibana creates
#kibana.index: ".kibana"
#
# # The default application to load.
# #kibana.defaultAppId: "home"
# If ES has x-pack authentication enabled, configure a regular ES user and password here.
## Note: it must be a regular user, not the elastic superuser
elasticsearch.username: "bj_wangl"
elasticsearch.password: "kibana.xxx"
server.publicBaseUrl: "http://192.168.21.18:15601"
# Use Chinese for the Kibana UI
i18n.locale: "zh-CN"
When done, save and quit (Esc, :wq).
Step 4: open port 15601
If the firewall is stopped, start it first (otherwise the port rule cannot be added):
systemctl start firewalld.service
With the firewall running, add port 15601 (the server.port configured above):
firewall-cmd --permanent --zone=public --add-port=15601/tcp
"success" means the port was opened.
Reload the firewall:
firewall-cmd --reload
"success" means the reload worked.
Port 15601 is now open.
Step 5: start kibana
ElasticSearch must be running before Kibana is started.
Start ElasticSearch: go to /usr/local/soft/elk/es8.17/bin and run:
sh startes.sh start
or
systemctl start elasticsearch
Then start Kibana: go to the bin directory of the Kibana installation and run:
./kibana
If this errors out because Kibana refuses to run as root, run it as a non-root user.
Create the user es (see the ES cluster install section):
# Create the group
[root@localhost es8.17]# groupadd es
# Create the user in the group
## The ES user es/elastic.cxx is reused here; no separate kibana user was created
## [root@localhost es8.17]# useradd -m -g kibana es
# Set the user's password
## [root@localhost es8.17]# passwd kibana
## elastic.cxx
Run it in the foreground first to verify; once it works, run it in the background.
[es@localhost kibana-8.17]$ ./bin/kibana
If startup fails with:
Error: [config validation of [elasticsearch].username]: value of "elastic" is forbidden. This is a superuser account that cannot write to syste
this is the "if ES has x-pack authentication enabled" note in kibana.yml above: the elastic superuser was used for Kibana's internal system writes, which is not allowed.
Fix
- Create a dedicated regular user account
- Log in to ES with the superuser first, then create the regular user
Create a regular account with appropriate privileges for the required writes. The steps to create the new user:
curl -u elastic:elastic.com -X POST "192.168.21.18:19200/_security/user/your_username?pretty" -H 'Content-Type: application/json' -d'
{
"password" : "your_pwd",
"email" : "w@163.com",
"roles" : [ "monitoring_user" ]
} '
In the command above:
- your_username is the name of the new user (bj_wangl in this deployment); change it as needed.
- password must be replaced with the actual password you set for the user.
- roles is the list of roles granted to the user, giving it the permissions needed to access Kibana.
The password may need to be changed again afterwards:
curl -u elastic:elastic.cxx -X POST "192.168.21.18:19200/_security/user/kibana/_password?pretty" -H 'Content-Type: application/json' -d'
{
"password" : "kibana.cxx"
}
'
3. Assign the new user appropriate roles and permissions
If this error appears:
Root causes:
security_exception: action [cluster:monitor/nodes/info] is unauthorized for user [bj_wangl] with effective roles [monitoring_user], this action is granted by the cluster privileges [monitor,manage,all]
make sure the new user has sufficient privileges for the operations it needs. Roles can be assigned in Kibana under Security -> Roles,
or managed through the Elasticsearch API:
curl -u elastic:elastic.cxx -X POST "192.168.21.18:19200/_security/user/bj_wangl?pretty" -H 'Content-Type: application/json' -d'
{
"roles": ["monitor"]
}
'
These steps avoid using the elastic superuser account for system writes, which resolves the error.
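To confirm the new account authenticates, query the _authenticate endpoint (a sketch; substitute the real password):
curl -u bj_wangl:your_pwd "http://192.168.21.18:19200/_security/_authenticate?pretty"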
Then start it in the background:
[es@localhost kibana-8.17]$ nohup ./bin/kibana &
Write the script startk.sh:
#!/bin/bash
# Environment for Kibana; ES_JAVA_HOME is assumed to be set in the es user's profile
export PATH=$ES_JAVA_HOME/bin:$PATH
export K_HOME=/usr/local/soft/elk/kibana-8.17
export PATH=$K_HOME/bin:$PATH
PORT=15601   # Kibana's server.port (informational)
if [ $# -lt 1 ]
then
echo ">>> No Args Input..."
exit 1
fi
case $1 in
start)
# Start Kibana in the background as the es user
su - es -c "nohup $K_HOME/bin/kibana &"
pid=`ps aux|grep kibana | grep -v grep | awk '{print $2}'`
echo "kibana is started, pid: $pid"
;;
stop)
pid=`ps aux|grep kibana | grep -v grep | awk '{print $2}'`
kill -9 $pid
echo ">>>>> kibana is stopped ... "
;;
restart)
pid=`ps aux|grep kibana | grep -v grep | awk '{print $2}'`
kill -9 $pid
echo "kibana is stopped"
sleep 1
su - es -c "nohup $K_HOME/bin/kibana &"
pid=`ps aux|grep kibana | grep -v grep | awk '{print $2}'`
echo ">>>>>>> kibana is restarted ... pid: $pid"
;;
*)
echo "start|stop|restart"
;;
esac
exit 0
log [22:10:43.185] [info][server][Kibana][http] http server running at http://192.168.21.18:15601
Enabling X-Pack for Kibana
1. Generate a Kibana certificate from the p12 CA generated for ES
/opt/dtstack/es-6.8.23/es/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --dns kibana --name kibana
2. Obtain a PEM certificate
Kibana cannot use a PKCS#12 certificate directly, so a PEM certificate must be derived from the p12:
openssl pkcs12 -in kibana.p12 -clcerts -nokeys -chain -out ca.pem
This yields the PEM certificate Kibana will use, named ca.pem.
Extension: the following command splits the Kibana node's key and crt out of elastic-stack-ca.p12:
bin/elasticsearch-certutil cert --pem -ca elastic-stack-ca.p12 --dns kibana
If you go with the key/crt approach, add the following to the config file:
# Serve Kibana over HTTPS, using the obtained crt and key files
server.ssl.enabled: true
server.ssl.certificate: /opt/kibana-6.8.0-linux-x86_64/config/xp/instance/instance.crt
server.ssl.key: /opt/kibana-6.8.0-linux-x86_64/config/xp/instance/instance.key
3. Edit the kibana.yml config file
server.port: 5601
server.host: hostip
server.name: hostname
# ES cluster
elasticsearch.hosts: [http://hostip1:9200,http://hostip2:9200,http://hostip3:9200]
kibana.index: .kibana
elasticsearch.username: "kibana"
elasticsearch.password: "the kibana user's password"
# PEM certificate mode
elasticsearch.ssl.certificateAuthorities: [ "/opt/dtstack/kibana-6.8.23/kibana/config/ca.pem" ]
xpack.reporting.encryptionKey: "something_at_least_32_characters"
elasticsearch.ssl.verificationMode: certificate
i18n.locale: zh-CN
Full version:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: hostip
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
server.name: hostname
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: [http://hostip1:9200,http://hostip2:9200,http://hostip3:9200]
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: .kibana
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
elasticsearch.password: "the kibana user's password"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "/opt/dtstack/kibana-6.8.23/kibana/config/ca.pem" ]
xpack.reporting.encryptionKey: "something_at_least_32_characters"
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
elasticsearch.ssl.verificationMode: certificate
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid
# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
i18n.locale: zh-CN
Further note:
Which certificate type Kibana uses to talk to ES depends on what the ES cluster uses: if the cluster configured its keystore and truststore with .p12 certificates, Kibana uses a .pem certificate; if the cluster enabled x-pack with .key and .crt files, Kibana uses the .crt certificate for the elasticsearch.ssl.certificateAuthorities setting.
Installing and verifying the ik analyzer: https://blog.csdn.net/nalanxiaoxiao2011/article/details/141400643
Note: unzip the ik analyzer into the plugins directory of elasticsearch-8.17.0, then restart ES. Once the restart succeeds the analyzer is available.
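The unzip step looks roughly like this (a sketch; the zip file name is an assumption — the plugin version must match the ES version):
cd /usr/local/soft/es8.17/elasticsearch-8.17
mkdir -p plugins/ik
unzip elasticsearch-analysis-ik-8.17.0.zip -d plugins/ik
# restart ES afterwards so the plugin is loaded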
Verify: in Kibana, open Dev Tools (top right of the page) and test the analyzers:
POST _analyze
{
"analyzer": "ik_smart",
"text": "我是中国人"
}
POST _analyze
{
"analyzer": "ik_max_word",
"text": "我是中国人"
}