To quickly stand up an ELK stack in a local environment, Docker deployment is recommended. Note that this single-node ES deployment is intended for test environments only: auto-start on boot is not configured, and none of the containers are started with --restart=always. Add that flag as needed; for production, use ECK to deploy and manage the ES cluster.
ES and Kibana Deployment
Preparation
Create a custom network
A container's IP changes every time it restarts, so the host IP baked into the SSL certificate would no longer match and other services would fail to connect to ES. We therefore create a custom network and pin the ES IP address.
[root@elk ]# docker network create elk --subnet=172.19.0.0/16
Create directories for the ES service
[root@elk ]# mkdir es-log es-data
Start the ES container
[root@elk ]# docker run --name elasticsearch -d --net elk --ip 172.19.0.100 -p 9200:9200 -p 9300:9300 -v $PWD/es-data:/usr/share/elasticsearch/data -v $PWD/es-log:/usr/share/elasticsearch/logs -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" docker.elastic.co/elasticsearch/elasticsearch:8.8.2
Start the Kibana container
[root@elk ]# docker run --name kibana -d --net elk -p 5601:5601 docker.elastic.co/kibana/kibana:8.8.2
Initial configuration
Generate an enrollment token in ES
[root@elk ]# docker exec -it elasticsearch bash
elasticsearch@a15632b27a31:~$ elasticsearch-create-enrollment-token --scope kibana
eyJ2ZXIiOiI4LjguMiIsImFkciI6WyIxNzIuMTkuMC4xMDA6OTIwMCJdLCJmZ3IiOiI4NzNjYjMwNTQzODBkODRhNDcwMTcxMzQ0Mzc4ZDFmYmYxNmI2OTc2ODlkZjM1MGU1ZWM3OWJjYWIxNDgwNTI5Iiwia2V5IjoiLWp4MWtJa0I5T25GdkhMN2NsNGk6VHRoYWYxVTlUQS1BVVcwbGlwbEplUSJ9
Log in to Kibana and fill in the token to complete the ES connection.
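As an aside, the enrollment token is just base64-encoded JSON carrying the ES address, the CA certificate fingerprint, and an API key. A quick sketch to inspect the token generated above:

```python
import base64
import json

token = "eyJ2ZXIiOiI4LjguMiIsImFkciI6WyIxNzIuMTkuMC4xMDA6OTIwMCJdLCJmZ3IiOiI4NzNjYjMwNTQzODBkODRhNDcwMTcxMzQ0Mzc4ZDFmYmYxNmI2OTc2ODlkZjM1MGU1ZWM3OWJjYWIxNDgwNTI5Iiwia2V5IjoiLWp4MWtJa0I5T25GdkhMN2NsNGk6VHRoYWYxVTlUQS1BVVcwbGlwbEplUSJ9"

# Pad to a multiple of 4 before decoding, since the CLI may strip padding.
payload = json.loads(base64.b64decode(token + "=" * (-len(token) % 4)))
print(payload["ver"])  # ES version that issued the token
print(payload["adr"])  # ES node address(es)
print(payload["fgr"])  # SHA-256 fingerprint of the CA certificate
```

This can help debug "token invalid" errors, e.g. when the address embedded in the token no longer matches the container's IP.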
Generate a verification code in Kibana
[root@elk ]# docker exec -it kibana bash
kibana@1b35739cdeed:~$ kibana-verification-code
Your verification code is: 471 593
Enter the verification code in Kibana
Generate an initial password in ES
[root@elk ]# docker exec -it elasticsearch bash
elasticsearch@a15632b27a31:~$ elasticsearch-reset-password -u elastic
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: ANrjs2w+upw0O1a1Yuka
Enter the ES username and password in Kibana
Kibana login succeeds
Other common operations
Access test with curl
After installation, ES automatically configures a user password and TLS certificates for us, so we can test access with curl.
[root@elk ]# curl -k -u 'elastic:ANrjs2w+upw0O1a1Yuka' https://127.0.0.1:9200/_cluster/health
{"cluster_name":"docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":26,"active_shards":26,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}
[root@elk ]# curl -k -u 'elastic:ANrjs2w+upw0O1a1Yuka' https://127.0.0.1:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.19.0.100 29 45 2 0.04 0.21 0.18 cdfhilmrstw * 630b895ac617
Export the ES certificate
When connecting to ES from Logstash, Filebeat, or an SDK, an x509 error is raised if no CA certificate is specified, so export the CA certificate in advance.
[root@elk ]# docker cp elasticsearch:/usr/share/elasticsearch/config/certs .
[root@elk ]# ls certs
http.p12 http_ca.crt transport.p12
Write ES logs to files
By default ES logs to the console and does not write to log files; the logs directory contains only gc.log. We can change the log4j2 configuration so logs are written to files.
[root@elk ]# docker cp elasticsearch:/usr/share/elasticsearch/config/log4j2.properties .
[root@elk ]# vim log4j2.properties
# Change appender.rolling.type = Console to the following
appender.rolling.type = RollingFile
### Append the following
######## Server JSON ############################
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
[root@elk ]# docker cp log4j2.properties elasticsearch:/usr/share/elasticsearch/config/
[root@elk ]# docker restart elasticsearch
[root@elk ]# docker exec -it elasticsearch bash
elasticsearch@f70457f06c3c:~$ cd /usr/share/elasticsearch/logs/
elasticsearch@f70457f06c3c:~/logs$ ls
docker-cluster_server.json gc.log gc.log.00 gc.log.01
Manually import the GeoIP database
Elasticsearch automatically downloads the IP geolocation database from Elastic GeoIP and by default checks for updated database files every three days. In some environments the download fails; in that case, download GeoLite2-City.mmdb yourself and place it in the designated path.
Disable automatic database updates
PUT /_cluster/settings
{
"persistent" : {
"ingest.geoip.downloader.enabled" : false
}
}
Copy the file
# Create the directory
[root@elk ]# docker exec -it --user root elasticsearch bash
root@630b895ac617:/# mkdir /usr/share/elasticsearch/config/ingest-geoip
# Copy the file
[root@elk ]# docker cp GeoLite2-City.mmdb elasticsearch:/usr/share/elasticsearch/config/ingest-geoip/my.mmdb
# Change ownership
[root@elk ]# docker exec -it --user root elasticsearch bash
root@630b895ac617:/# chown -R elasticsearch:root /usr/share/elasticsearch/config/ingest-geoip
Usage
In an ingest pipeline, simply reference my.mmdb as the database file.
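For example, a minimal ingest pipeline using the imported database might look like this (the pipeline name geoip-demo and the source field remote_address are illustrative, not from the original setup):

```
PUT /_ingest/pipeline/geoip-demo
{
  "processors": [
    {
      "geoip": {
        "field": "remote_address",
        "database_file": "my.mmdb"
      }
    }
  ]
}
```

The database_file parameter of the geoip processor resolves against the config/ingest-geoip directory we created above.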
Kibana Chinese locale configuration
[root@elk ]# docker cp kibana:/usr/share/kibana/config/kibana.yml .
[root@elk ]# ls
kibana.yml
[root@elk ]# vim kibana.yml
i18n.locale: "zh-CN"
[root@elk ]# docker cp kibana.yml kibana:/usr/share/kibana/config/
[root@elk ]# docker restart kibana
Add Kibana encryption keys
[root@elk ]# docker exec -it kibana bash
kibana@68148c992019:~$ kibana-encryption-keys generate
## Kibana Encryption Key Generation Utility
The 'generate' command guides you through the process of setting encryption keys for:
xpack.encryptedSavedObjects.encryptionKey
Used to encrypt stored objects such as dashboards and visualizations
https://www.elastic.co/guide/en/kibana/current/xpack-security-secure-saved-objects.html#xpack-security-secure-saved-objects
xpack.reporting.encryptionKey
Used to encrypt saved reports
https://www.elastic.co/guide/en/kibana/current/reporting-settings-kb.html#general-reporting-settings
xpack.security.encryptionKey
Used to encrypt session information
https://www.elastic.co/guide/en/kibana/current/security-settings-kb.html#security-session-and-cookie-settings
Already defined settings are ignored and can be regenerated using the --force flag. Check the documentation links for instructions on how to rotate encryption keys.
Definitions should be set in the kibana.yml used to configure Kibana.
Settings:
xpack.encryptedSavedObjects.encryptionKey: a60108485b5ea9f08178cda313394180
xpack.reporting.encryptionKey: bd52a351b5805a2ccc6d0a43bb4b749c
xpack.security.encryptionKey: cefa8180c88eec6ff70cbed24966c232
[root@elk ]# docker cp kibana:/usr/share/kibana/config/kibana.yml .
[root@elk ]# vim kibana.yml
xpack.encryptedSavedObjects.encryptionKey: a60108485b5ea9f08178cda313394180
xpack.reporting.encryptionKey: bd52a351b5805a2ccc6d0a43bb4b749c
xpack.security.encryptionKey: cefa8180c88eec6ff70cbed24966c232
[root@elk ]# docker cp kibana.yml kibana:/usr/share/kibana/config/
[root@elk ]# docker restart kibana
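The values emitted by kibana-encryption-keys are 32-hex-character random strings; Kibana only requires each key to be at least 32 characters long. If you prefer, equivalent keys can be generated with a short script (a sketch, not the official tool):

```python
import secrets

# Kibana requires each encryption key to be >= 32 characters;
# the official tool emits 32 hex characters, which we mirror here.
settings = [
    "xpack.encryptedSavedObjects.encryptionKey",
    "xpack.reporting.encryptionKey",
    "xpack.security.encryptionKey",
]
for name in settings:
    print(f"{name}: {secrets.token_hex(16)}")  # 16 random bytes -> 32 hex chars
```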
Elastic Agent Deployment
Generate the install command
In Kibana, open the Fleet menu, click Add Fleet Server, and fill in https://fleet-server:8220 as the Fleet Server address; the installation parameters are then generated.
Next we pass those parameters as environment variables instead. The container start command is below; remember to replace the token and fingerprint values. The ES log directory is mounted so the agent can collect ES service logs, and the ES certificates are mounted so it can query the ES API for monitoring metrics.
[root@elk ]# docker run --name fleet-server -d -p 8200:8200 --net elk -v $PWD/es-log:/var/log/elasticsearch -v $PWD/certs:/certs -e FLEET_SERVER_ENABLE=true -e FLEET_SERVER_ELASTICSEARCH_HOST=https://172.19.0.100:9200 -e FLEET_SERVER_SERVICE_TOKEN=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2OTAzNTYwOTM0MzA6Qm5TOWQ0WFBUSS1fUGNuR3JRM2RfUQ -e FLEET_SERVER_POLICY_ID=fleet-server-policy -e FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT=6c2c5caf569d98cb98da8f756dd1a31aa480f98434d4b0f17f7264615cb954fb docker.elastic.co/beats/elastic-agent:8.8.2
After the container is running, open the Fleet page: the Fleet Server has registered successfully.
Add Elasticsearch monitoring
In Kibana, open the Integrations menu, search for Elasticsearch, and install it.
Fill in the integration policy name and description, and select the existing Fleet Server Policy.
In the metrics settings, fill in the ES address, username, and password, and point to the CA certificate path. The logs settings need no changes; the defaults are fine.
Verification
Finally, the ES cluster logs are visible under Monitoring.
Deploy the log simulator
Clone the source and build the image
[root@elk ]# git clone https://gitee.com/cuiliang0302/log_demo.git
[root@elk ]# cd log_demo/
[root@elk log_demo]# ls
Dockerfile log.py main.py readme.md requirements.txt
[root@elk log_demo]# docker build -t log_demo:1.0 .
Run the program
[root@elk ]# docker run --name log_demo -d -v $PWD/log:/opt/logDemo/log log_demo:1.0
Filebeat Deployment
Create the Filebeat configuration file
[root@elk ]# cat > filebeat.yml << EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /log/info.log
output.logstash:
hosts: ["logstash:5044"]
EOF
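Note that in Filebeat 8.x the log input type is deprecated in favor of filestream; an equivalent configuration would be (the id value is arbitrary but required for filestream inputs):

```yaml
filebeat.inputs:
  - type: filestream
    id: log-demo        # any unique id
    enabled: true
    paths:
      - /log/info.log
output.logstash:
  hosts: ["logstash:5044"]
```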
Start Filebeat
The Filebeat configuration file and the log directory to be collected are mounted at startup.
[root@elk ]# docker run --name filebeat -d --net elk -v $PWD/filebeat.yml:/usr/share/filebeat/filebeat.yml -v $PWD/log:/log docker.elastic.co/beats/filebeat:8.8.2
Logstash Deployment
Prepare files and directories
[root@elk ]# tree .
.
├── GeoLite2-City.mmdb # GeoIP database file
├── certs # ES certificates
│ ├── http.p12
│ ├── http_ca.crt
│ └── transport.p12
└── logstash.conf # logstash pipeline
[root@elk ]# ls
GeoLite2-City.mmdb es-data filebeat.yml log log4j2.properties logstash.conf
certs elasticsearch.yml es-log kibana.yml
Edit the Logstash pipeline
[root@elk ]# cat > logstash.conf << EOF
input {
beats {
port => 5044
}
}
filter{
grok{
match => {"message" => "%{TIMESTAMP_ISO8601:log_timestamp} \| %{LOGLEVEL:level} %{SPACE}* \| (?<class>__main__:[\w]*:\d*) \- %{GREEDYDATA:content}"}
}
mutate {
gsub =>[
"content", "'", '"'
]
lowercase => [ "level" ]
}
json {
source => "content"
}
geoip {
source => "remote_address"
database => "/usr/share/logstash/GeoLite2-City.mmdb"
ecs_compatibility => disabled
}
mutate {
remove_field => ["content"]
}
}
output {
elasticsearch {
hosts => ["https://172.19.0.100:9200"]
data_stream => "true"
data_stream_type => "logs"
data_stream_dataset => "myapp"
data_stream_namespace => "default"
timeout => 120
pool_max => 800
validate_after_inactivity => 7000
user => elastic
password => "ANrjs2w+upw0O1a1Yuka"
ssl => true
cacert => "/certs/http_ca.crt"
}
}
EOF
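To sanity-check the filter logic before wiring up containers, the grok/mutate/json steps can be approximated in a few lines of Python (the sample log line is an assumption about log_demo's loguru-style output format, and Logstash's grok engine differs in details):

```python
import json
import re

# Rough Python equivalent of the grok pattern in the pipeline above.
pattern = re.compile(
    r"(?P<log_timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\S*)"
    r" \| (?P<level>[A-Z]+)\s+\| "
    r"(?P<cls>__main__:\w+:\d+) - (?P<content>.*)"
)

# Hypothetical log line in the format the pattern targets.
sample = ("2023-07-26 10:00:00.123 | INFO     | "
          "__main__:main:25 - {'remote_address': '8.8.8.8'}")

m = pattern.match(sample)
level = m.group("level").lower()                # mutate: lowercase level
content = m.group("content").replace("'", '"')  # mutate: gsub ' -> "
doc = json.loads(content)                       # json filter: parse content
print(level, doc["remote_address"])
```

The gsub step exists because the demo app logs Python dict literals with single quotes, which are not valid JSON until rewritten.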
Start Logstash
The GeoIP database file, the Logstash pipeline configuration, and the CA certificate are mounted at startup.
[root@elk ]# docker run --name logstash -d -p 5044:5044 --net elk -v $PWD/certs:/certs -v $PWD/GeoLite2-City.mmdb:/usr/share/logstash/GeoLite2-City.mmdb -v $PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:8.8.2
Verification
Check the data stream
In Kibana's index management, check the data stream information: the data stream has been created and data is being written to it continuously.
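The data stream can also be inspected from Dev Tools; with the output settings above its name follows the {type}-{dataset}-{namespace} convention, i.e. logs-myapp-default:

```
GET /_data_stream/logs-myapp-default
```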
Check the fields
In Discover you can see that the data written to ES has already been processed by Logstash.
References
Run Elasticsearch with Docker: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Elasticsearch logging configuration: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/logging.html#loggin-configuration
Run Kibana with Docker: https://www.elastic.co/guide/en/kibana/current/docker.html
Run Logstash with Docker: https://www.elastic.co/guide/en/logstash/current/docker.html
Run Filebeat with Docker: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html
Logstash beats input plugin: https://www.elastic.co/guide/en/logstash/master/plugins-inputs-beats.html
Logstash grok filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-grok.html
Logstash mutate filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-mutate.html
Logstash date filter plugin: https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
Logstash json filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-json.html
Logstash geoip filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-geoip.html
Logstash elasticsearch output plugin: https://www.elastic.co/guide/en/logstash/master/plugins-outputs-elasticsearch.html
See more
WeChat official account
The WeChat official account is updated in sync; follow it to get new articles first.
Blog
Cui Liang's blog focuses on DevOps and automated operations and shares quality IT operations articles. For more original ops and development articles, visit https://www.cuiliangblog.cn