ES8 in Production: Deploying ELK 8.8 with Docker and Collecting Logs

If you want to quickly stand up an ELK stack in a local environment, deploying with Docker is recommended. Note that this single-node ES deployment is intended for experimentation and testing only: start-on-boot is not configured and none of the containers use the --restart=always flag. Add that flag as your situation requires. For production, use ECK to deploy and manage the ES container cluster.

Deploying ES and Kibana

Preparation

Create a custom network
Because a container's IP changes on every restart, the host IP baked into the SSL certificate no longer matches and other services fail to connect to ES. We therefore create a custom network and pin the ES container's IP address.

[root@elk ]# docker network create elk --subnet=172.19.0.0/16
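
To confirm the subnet before pinning the ES address, you can inspect the network (an optional sanity check, not part of the original steps):

[root@elk ]# docker network inspect elk | grep Subnet
                    "Subnet": "172.19.0.0/16",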

Create the ES service directories

[root@elk ]# mkdir es-log es-data
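
The official Elasticsearch image runs as uid 1000, so if the container later exits with permission errors on the mounted data or log directory, granting that uid write access usually fixes it (adjust ownership to your own environment):

[root@elk ]# chown -R 1000:0 es-data es-log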

Start the ES container

[root@elk ]# docker run --name elasticsearch -d --net elk --ip 172.19.0.100 -p 9200:9200 -p 9300:9300 -v $PWD/es-data:/usr/share/elasticsearch/data -v $PWD/es-log:/usr/share/elasticsearch/logs -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" docker.elastic.co/elasticsearch/elasticsearch:8.8.2
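
If the container exits during startup with a bootstrap-check failure about vm.max_map_count, raise the kernel setting on the host first; 262144 is the value recommended in the official Docker deployment docs:

[root@elk ]# sysctl -w vm.max_map_count=262144
[root@elk ]# docker start elasticsearch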

Start the Kibana container

[root@elk ]# docker run --name kibana -d  --net elk -p 5601:5601 docker.elastic.co/kibana/kibana:8.8.2

Initial configuration

Generate an enrollment token on ES

[root@elk ]# docker exec -it elasticsearch bash
elasticsearch@a15632b27a31:~$ elasticsearch-create-enrollment-token --scope kibana
eyJ2ZXIiOiI4LjguMiIsImFkciI6WyIxNzIuMTkuMC4xMDA6OTIwMCJdLCJmZ3IiOiI4NzNjYjMwNTQzODBkODRhNDcwMTcxMzQ0Mzc4ZDFmYmYxNmI2OTc2ODlkZjM1MGU1ZWM3OWJjYWIxNDgwNTI5Iiwia2V5IjoiLWp4MWtJa0I5T25GdkhMN2NsNGk6VHRoYWYxVTlUQS1BVVcwbGlwbEplUSJ9

Log in to Kibana and paste the token to complete the ES connection.
image.png
Generate the Kibana verification code

[root@elk ]# docker exec -it kibana bash       
kibana@1b35739cdeed:~$ kibana-verification-code
Your verification code is:  471 593

Enter the verification code in Kibana
image.png
Generate the initial password on ES

[root@elk ]# docker exec -it elasticsearch bash
elasticsearch@a15632b27a31:~$ elasticsearch-reset-password -u elastic
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y


Password for the [elastic] user successfully reset.
New value: ANrjs2w+upw0O1a1Yuka
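
If you prefer to choose the password yourself instead of accepting an autogenerated one, the same tool has an interactive mode (optional):

elasticsearch@a15632b27a31:~$ elasticsearch-reset-password -u elastic -i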

Enter the elastic credentials in Kibana
image.png
Kibana login succeeds
image.png

Other common operations

Testing access with curl

After ES is installed, user passwords and TLS certificates are configured automatically, so we can test access with a curl command:

[root@elk ]# curl -k -u 'elastic:ANrjs2w+upw0O1a1Yuka' https://127.0.0.1:9200/_cluster/health
{"cluster_name":"docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":26,"active_shards":26,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

[root@elk ]# curl -k -u 'elastic:ANrjs2w+upw0O1a1Yuka' https://127.0.0.1:9200/_cat/nodes?v
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
172.19.0.100           29          45   2    0.04    0.21     0.18 cdfhilmrstw *      630b895ac617

Exporting the ES certificates

When Logstash, Filebeat, or an SDK connects to ES without specifying the CA certificate, the connection fails with an x509 error, so export the CA certificate in advance.

[root@elk ]# docker cp elasticsearch:/usr/share/elasticsearch/config/certs .                                                 
[root@elk ]# ls certs                                            
http.p12      http_ca.crt   transport.p12
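
With the CA exported, curl can verify TLS properly instead of skipping verification with -k, and openssl can compute the SHA-256 fingerprint that Fleet Server asks for later (colons stripped, lowercased). A small convenience sketch; if the auto-generated certificate's SANs do not cover 127.0.0.1, connect via the container IP instead:

[root@elk ]# curl --cacert certs/http_ca.crt -u 'elastic:ANrjs2w+upw0O1a1Yuka' https://127.0.0.1:9200
[root@elk ]# openssl x509 -in certs/http_ca.crt -noout -fingerprint -sha256 | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'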

Writing ES logs to files

By default ES logs to the console and writes nothing to log files; the logs directory contains only gc.log. We can adjust the log4j2 configuration to write the logs to files.

[root@elk ]# docker cp elasticsearch:/usr/share/elasticsearch/config/log4j2.properties .

[root@elk ]# vim log4j2.properties
# change appender.rolling.type = Console to the line below
appender.rolling.type = RollingFile
### append the lines below
######## Server JSON ############################
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB

[root@elk ]# docker cp log4j2.properties elasticsearch:/usr/share/elasticsearch/config/
[root@elk ]# docker restart elasticsearch
[root@elk ]# docker exec -it elasticsearch bash
elasticsearch@f70457f06c3c:~$ cd /usr/share/elasticsearch/logs/
elasticsearch@f70457f06c3c:~/logs$ ls
docker-cluster_server.json  gc.log  gc.log.00  gc.log.01

Manually importing the GeoIP database

Elasticsearch automatically downloads the IP geolocation database files from Elastic's GeoIP endpoint and, by default, checks for updates every three days. In some environments the download fails, in which case you need to download GeoLite2-City.mmdb in advance and place it in the expected path before it can be used.
Disable automatic database updates

PUT /_cluster/settings
{
  "persistent" : {
    "ingest.geoip.downloader.enabled" : false
  }
}
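
The same setting can be applied from the shell instead of Dev Tools, reusing the elastic credentials from earlier:

[root@elk ]# curl -k -u 'elastic:ANrjs2w+upw0O1a1Yuka' -X PUT https://127.0.0.1:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"persistent":{"ingest.geoip.downloader.enabled":false}}'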

Copy the file

# create the directory
[root@elk ]# docker exec -it --user root elasticsearch bash
root@630b895ac617:/# mkdir /usr/share/elasticsearch/config/ingest-geoip

# copy the file
[root@elk ]# docker cp GeoLite2-City.mmdb elasticsearch:/usr/share/elasticsearch/config/ingest-geoip/my.mmdb

# change ownership
[root@elk ]# docker exec -it --user root elasticsearch bash
root@630b895ac617:/# chown -R elasticsearch:root /usr/share/elasticsearch/config/ingest-geoip

Usage
In the ingest pipeline, simply set the database file to my.mmdb.
image.png
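
For reference, a minimal ingest pipeline that points the geoip processor at the manually imported file could look like the following; the pipeline name and source field are illustrative, while database_file is the documented processor option:

PUT _ingest/pipeline/geoip-demo
{
  "processors": [
    {
      "geoip": {
        "field": "remote_address",
        "database_file": "my.mmdb"
      }
    }
  ]
}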

Configuring Kibana's Chinese locale

[root@elk ]# docker cp kibana:/usr/share/kibana/config/kibana.yml .
[root@elk ]# ls
kibana.yml
[root@elk ]# vim kibana.yml
i18n.locale: "zh-CN"
[root@elk ]# docker cp kibana.yml kibana:/usr/share/kibana/config/ 
[root@elk ]# docker restart kibana

Adding Kibana encryption keys

[root@elk ]# docker exec -it kibana bash
kibana@68148c992019:~$ kibana-encryption-keys generate
## Kibana Encryption Key Generation Utility

The 'generate' command guides you through the process of setting encryption keys for:

xpack.encryptedSavedObjects.encryptionKey
    Used to encrypt stored objects such as dashboards and visualizations
    https://www.elastic.co/guide/en/kibana/current/xpack-security-secure-saved-objects.html#xpack-security-secure-saved-objects

xpack.reporting.encryptionKey
    Used to encrypt saved reports
    https://www.elastic.co/guide/en/kibana/current/reporting-settings-kb.html#general-reporting-settings

xpack.security.encryptionKey
    Used to encrypt session information
    https://www.elastic.co/guide/en/kibana/current/security-settings-kb.html#security-session-and-cookie-settings


Already defined settings are ignored and can be regenerated using the --force flag.  Check the documentation links for instructions on how to rotate encryption keys.
Definitions should be set in the kibana.yml used configure Kibana.

Settings:
xpack.encryptedSavedObjects.encryptionKey: a60108485b5ea9f08178cda313394180
xpack.reporting.encryptionKey: bd52a351b5805a2ccc6d0a43bb4b749c
xpack.security.encryptionKey: cefa8180c88eec6ff70cbed24966c232


[root@elk ]# docker cp kibana:/usr/share/kibana/config/kibana.yml .                    
[root@elk ]# vim kibana.yml   
xpack.encryptedSavedObjects.encryptionKey: a60108485b5ea9f08178cda313394180
xpack.reporting.encryptionKey: bd52a351b5805a2ccc6d0a43bb4b749c
xpack.security.encryptionKey: cefa8180c88eec6ff70cbed24966c232
[root@elk ]# docker cp kibana.yml kibana:/usr/share/kibana/config/ 
[root@elk ]# docker restart kibana                                 

Deploying Elastic Agent

Generate the install command

In Kibana, open the Fleet menu, click Add Fleet Server, and enter https://fleet-server:8220 as the Fleet Server address. The generated install parameters look like this:
image.png
Next we pass those parameters in as environment variables instead. The container start command is below; remember to substitute your own token and fingerprint values. The ES log directory is mounted so the agent can collect ES service logs, and the ES certificates are mounted so it can call the ES API to gather monitoring metrics.

[root@elk ]# docker run --name fleet-server -d -p 8200:8200 --net elk -v $PWD/es-log:/var/log/elasticsearch -v $PWD/certs:/certs -e FLEET_SERVER_ENABLE=true -e FLEET_SERVER_ELASTICSEARCH_HOST=https://172.19.0.100:9200 -e FLEET_SERVER_SERVICE_TOKEN=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2OTAzNTYwOTM0MzA6Qm5TOWQ0WFBUSS1fUGNuR3JRM2RfUQ -e FLEET_SERVER_POLICY_ID=fleet-server-policy -e FLEET_SERVER_ELASTICSEARCH_CA_TRUSTED_FINGERPRINT=6c2c5caf569d98cb98da8f756dd1a31aa480f98434d4b0f17f7264615cb954fb docker.elastic.co/beats/elastic-agent:8.8.2
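
You can tail the agent's logs while it enrolls to catch token or fingerprint mistakes early:

[root@elk ]# docker logs -f fleet-server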

Once the container is running, check the Fleet page: the Fleet Server has registered successfully.
image.png

Adding Elasticsearch monitoring collection

Open Kibana's Integrations menu, search for Elasticsearch, and install it.
image.png
Fill in the integration policy name and description, and select the existing Fleet Server Policy.
image.png
In the metrics configuration, fill in the ES address, username, and password, and specify the CA certificate path. The logs configuration needs no changes; the defaults are fine.
image.png

Verification

Finally, under Monitoring you can see that the ES cluster logs have been loaded.
image.png

Deploying the log simulator

Clone the source and build the image

[root@elk ]# git clone https://gitee.com/cuiliang0302/log_demo.git
[root@elk ]# cd log_demo/
[root@elk log_demo]# ls
Dockerfile  log.py  main.py  readme.md  requirements.txt
[root@elk log_demo]# docker build -t log_demo:1.0 .

Run the program

[root@elk ]# docker run --name log_demo -d -v $PWD/log:/opt/logDemo/log log_demo:1.0
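
Before wiring up Filebeat, it is worth confirming that the simulator is actually writing to the mounted directory (info.log is the file the Filebeat config below will tail):

[root@elk ]# tail -f log/info.log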

Deploying Filebeat

Create the Filebeat configuration file

[root@elk ]# cat > filebeat.yml << EOF
filebeat.inputs:
- type: log 
  enabled: true
  paths:
    - /log/info.log
output.logstash:
  hosts: ["logstash:5044"]
EOF

Start Filebeat

The start command mounts the Filebeat configuration file and the directory containing the logs to be collected.

[root@elk ]# docker run --name filebeat -d --net elk -v $PWD/filebeat.yml:/usr/share/filebeat/filebeat.yml -v $PWD/log:/log docker.elastic.co/beats/filebeat:8.8.2
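
Filebeat ships a built-in connectivity check; once Logstash (deployed in the next section) is up, running it inside the container verifies that the output endpoint is reachable:

[root@elk ]# docker exec -it filebeat filebeat test output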

Deploying Logstash

Prepare the directory layout

[root@elk ]# tree .                  
.
├── GeoLite2-City.mmdb # GeoIP database file
├── certs # ES certificates
│   ├── http.p12
│   ├── http_ca.crt
│   └── transport.p12
└── logstash.conf # Logstash pipeline
[root@elk ]# ls
GeoLite2-City.mmdb     es-data                filebeat.yml           log                    log4j2.properties      logstash.conf 
certs                  elasticsearch.yml      es-log                 kibana.yml             

Edit the Logstash pipeline

[root@elk ]# cat > logstash.conf << EOF
input {
  beats {
    port => 5044
  }
}
filter{
    grok{
      match => {"message" => "%{TIMESTAMP_ISO8601:log_timestamp} \| %{LOGLEVEL:level} %{SPACE}* \| (?<class>[__main__:[\w]*:\d*]+) \- %{GREEDYDATA:content}"}
    }
    mutate {
        gsub =>[
            "content", "'", '"'
        ]
        lowercase => [ "level" ]
    }
    json {
        source => "content"
    }
    geoip {
        source => "remote_address"
        database => "/usr/share/logstash/GeoLite2-City.mmdb"
        ecs_compatibility => disabled
    }
    mutate {
        remove_field => ["content"]
    }
}
output {
    elasticsearch {
        hosts => ["https://172.19.0.100:9200"]
        data_stream => "true"
        data_stream_type => "logs"
        data_stream_dataset => "myapp"
        data_stream_namespace => "default"
        timeout => 120
        pool_max => 800
        validate_after_inactivity => 7000
        user => elastic
        password => "ANrjs2w+upw0O1a1Yuka"
        ssl => true
        cacert => "/certs/http_ca.crt"
    }  
}
EOF
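
To make the filter chain concrete, assume the simulator emits loguru-style lines shaped like the one below (illustrative, not captured from the real program):

# assumed input line:
2023-07-26 15:04:05.123 | INFO     | __main__:visit:21 - {'remote_address': '1.2.3.4', 'status': 200}
# grok is intended to split this into log_timestamp, level, class, and content;
# mutate gsub swaps single quotes for double quotes so the json filter can parse content,
# mutate lowercase normalizes the level field, and geoip then enriches the
# parsed remote_address field with location data from the mounted mmdb file.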

Start Logstash

The start command mounts the GeoIP database file, the Logstash pipeline configuration, and the CA certificate.

[root@elk ]# docker run --name logstash -d -p 5044:5044 --net elk -v $PWD/certs:/certs -v $PWD/GeoLite2-City.mmdb:/usr/share/logstash/GeoLite2-City.mmdb -v $PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:8.8.2

Verification

View the data stream

In Kibana's index management, looking at the data stream information, the data stream has been created successfully and data is being written to it continuously.
image.png
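
The same check works from the shell; the data stream name is composed of the type, dataset, and namespace set in the Logstash output:

[root@elk ]# curl -k -u 'elastic:ANrjs2w+upw0O1a1Yuka' https://127.0.0.1:9200/_data_stream/logs-myapp-default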

View the fields

In Discover you can see that the data written to ES is the content already processed by Logstash.
image.png

References

Elasticsearch Docker deployment: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Elasticsearch logging configuration: https://www.elastic.co/guide/en/elasticsearch/reference/8.8/logging.html#loggin-configuration
Kibana Docker deployment: https://www.elastic.co/guide/en/kibana/current/docker.html
Logstash Docker deployment: https://www.elastic.co/guide/en/logstash/current/docker.html
Filebeat Docker deployment: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html
Logstash beats input plugin: https://www.elastic.co/guide/en/logstash/master/plugins-inputs-beats.html
Logstash grok filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-grok.html
Logstash mutate filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-mutate.html
Logstash date filter plugin: https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
Logstash json filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-json.html
Logstash geoip filter plugin: https://www.elastic.co/guide/en/logstash/master/plugins-filters-geoip.html
Logstash elasticsearch output plugin: https://www.elastic.co/guide/en/logstash/master/plugins-outputs-elasticsearch.html

More

WeChat official account

The WeChat official account is updated in sync with the blog; follow it to get the latest articles first.

Blog

Cui Liang's blog focuses on DevOps and automated operations and shares quality IT operations articles. For more original operations and development content, visit https://www.cuiliangblog.cn
