Installing ELK on Ubuntu

This document walks through installing the ELK stack on Ubuntu: downloading, extracting, configuring, and starting Elasticsearch (including common startup errors); configuring and starting Logstash and pairing it with Filebeat; and downloading, configuring, and starting Kibana. It also covers how to configure a Spring Boot project to send its logs to Logstash.


ELK stands for Elasticsearch, Logstash, and Kibana, a complete solution for collecting and visualizing logs.
Elasticsearch is a real-time distributed search and analytics engine that supports full-text search, structured search, and analytics. It is built on top of the Apache Lucene full-text search library and written in Java.
Logstash is a data collection engine with real-time pipelining capabilities; it collects and parses data and forwards it to Elasticsearch.
Kibana provides a web platform for analyzing and visualizing Elasticsearch data. It can query and interact with data in Elasticsearch indices and generate tables and charts across various dimensions.
In a Spring Boot project, Logstash collects the log data and forwards it to Elasticsearch, and Kibana finally presents it in the browser.

I. Environment

Java: JDK 8 or later (Elasticsearch 6.x requires at least Java 8).
Elasticsearch, Logstash, and Kibana must be the same version; this guide uses 6.4.3.

II. Download and Installation

1. Elasticsearch

1.1 Download

Download the package from the official site https://www.elastic.co/downloads/elasticsearch. This guide uses elasticsearch-6.4.3.tar.gz. Upload the downloaded package to the server.

1.2 Extract to the target directory
tar zxvf elasticsearch-6.4.3.tar.gz -C /usr/local
1.3 Run

(1) Change to the installation directory:

cd /usr/local/elasticsearch-6.4.3/

(2) Run:

./bin/elasticsearch

To run in the background:

bin/elasticsearch -d
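
To confirm that the node came up, query its HTTP endpoint from the same host (a quick sanity check, assuming the default port 9200; adjust if http.port was changed):

# Should return a small JSON document with node name and version
curl http://localhost:9200/
# If the request fails, check the logs
tail -f logs/*.log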
1.4 Troubleshooting
  • Insufficient JVM memory or insufficient file permissions:
Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elasticsearch-6.4.3/config/jvm.options
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.newByteChannel(Files.java:407)
        at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
        at java.nio.file.Files.newInputStream(Files.java:152)
        at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:60)

Fix:
(1) Switch to the root account and adjust the Xms and Xmx settings in /usr/local/elasticsearch-6.4.3/config/jvm.options:

-Xms512m
-Xmx512m

(2) Adjust the file permissions:

sudo chmod 644 jvm.options
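
If the error persists, the usual cause is that the files are owned by root (e.g. the archive was extracted with sudo) while Elasticsearch runs as a normal user. A minimal sketch of checking and fixing ownership, with esuser as a hypothetical user name:

# See who owns the configuration file
ls -l /usr/local/elasticsearch-6.4.3/config/jvm.options
# Hand the whole installation to the user that will run Elasticsearch
sudo chown -R esuser:esuser /usr/local/elasticsearch-6.4.3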
  • Startup error when starting as the root account:
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:140) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:127) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.4.3.jar:6.4.3]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.4.3.jar:6.4.3]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:86) ~[elasticsearch-6.4.3.jar:6.4.3]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:104) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:171) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:326) ~[elasticsearch-6.4.3.jar:6.4.3]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-6.4.3.jar:6.4.3]
        ... 6 more

Fix: switch to a non-root user and start Elasticsearch again.
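
If no suitable non-root user exists yet, one can be created and given ownership of the installation; a minimal sketch, with esuser again as a hypothetical user name:

# Create a dedicated user for Elasticsearch
sudo useradd -m esuser
# Give it ownership of the installation directory
sudo chown -R esuser:esuser /usr/local/elasticsearch-6.4.3
# Start Elasticsearch as that user, in the background
sudo -u esuser /usr/local/elasticsearch-6.4.3/bin/elasticsearch -d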

  • The following error appears during startup:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-11-23T10:00:28,057][INFO ][o.e.n.Node               ] [mQQdX2p] stopping ...
[2018-11-23T10:00:28,100][INFO ][o.e.n.Node               ] [mQQdX2p] stopped
[2018-11-23T10:00:28,101][INFO ][o.e.n.Node               ] [mQQdX2p] closing ...
[2018-11-23T10:00:28,111][INFO ][o.e.n.Node               ] [mQQdX2p] closed

Fix:
(1) Edit /etc/sysctl.conf:

vim /etc/sysctl.conf

(2) Add the following setting:

vm.max_map_count=655360

(3) Apply the change:

sysctl -p
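
To verify that the new value is in effect (or to apply it to the running kernel without editing the file):

# Should print vm.max_map_count = 655360
sysctl vm.max_map_count
# Alternatively, set it temporarily for the current boot only
sudo sysctl -w vm.max_map_count=655360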
1.5 Configuration file changes

(1) Go to the /usr/local/elasticsearch-6.4.3/config directory and edit elasticsearch.yml:

  • Allow external access: network.host: 0.0.0.0
  • Specify the HTTP port: http.port: 9202 (a quick check after restarting is sketched below)
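
After editing elasticsearch.yml and restarting the node, the change can be verified from another machine; a small sketch, with SERVER_IP as a placeholder for the server's address:

# The request should now succeed from outside the server, on the new port
curl http://SERVER_IP:9202/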
1.6 Multi-node configuration

Suppose we want three nodes forming a cluster named es-colony, with node names master, node-2, and node-3, where master is the master node.
First make two additional copies of the previously extracted Elasticsearch directory; the three elasticsearch.yml configurations are as follows.

  • elasticsearch.yml for the master node:
network.host: 0.0.0.0

cluster.name: es-colony 
node.name: master
node.master: true
  • elasticsearch.yml for node-2:
network.host: 0.0.0.0
http.port: 9201

cluster.name: es-colony
node.name: node-2
node.master: false

discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
  • elasticsearch.yml for node-3:
network.host: 0.0.0.0
http.port: 9202

cluster.name: es-colony
node.name: node-3
node.master: false

discovery.zen.ping.unicast.hosts: ["127.0.0.1"]

After configuring, start the three nodes. Once they are up, open http://IP:9200/, http://IP:9201/, and http://IP:9202/ in a browser; a response like the following indicates a successful installation.

{
  "name" : "master",
  "cluster_name" : "es-colony",
  "cluster_uuid" : "3_ZFfdGHS0SHKmulr4PWeQ",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
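
Beyond the per-node banner above, the cluster APIs confirm that the three nodes actually joined the same cluster; a quick check against any of the nodes, here the master on port 9200:

# Cluster name, node count and health (green/yellow/red)
curl http://IP:9200/_cluster/health?pretty
# One line per node; the elected master is marked with *
curl http://IP:9200/_cat/nodes?v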
1.7 Installing the elasticsearch-head plugin

(1) Download the plugin from GitHub, or clone it with git:

git clone git://github.com/mobz/elasticsearch-head.git

(2) Install the dependencies: node, npm, and grunt

  • Install npm: sudo apt-get install npm
  • Install grunt and grunt-cli:
sudo npm install -g grunt
sudo npm install -g grunt-cli

(3) Change the Elasticsearch address that head connects to

sudo vim elasticsearch-head/_site/app.js

In the following statement, change http://localhost:9200 to http://IP:9200:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";  

(4) Change the listen address of the head server

sudo vim elasticsearch-head/Gruntfile.js
connect: {
                        server: {
                                options: {
                                        port: 9100,
                                        base: '.',
                                        keepalive: true
                                }
                        }
                }

Add hostname: '*' inside options.
(5) Because head runs as a separate service, Elasticsearch's configuration file elasticsearch.yml also needs the corresponding address and cross-origin settings.
Add the following to elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"

(6) Start

sudo npm start

The following error may appear on startup:

> elasticsearch-head@0.0.0 start /home/elasticsearch-head
> grunt server

grunt-cli: The grunt command line interface (v1.3.2)

Fatal error: Unable to find local grunt.

If you're seeing this message, grunt hasn't been installed locally to
your project. For more information about installing and configuring grunt,
please see the Getting Started guide:

https://gruntjs.com/getting-started

npm ERR! Linux 4.15.0-20-generic
npm ERR! argv "/usr/bin/node" "/usr/bin/npm" "start"
npm ERR! node v8.10.0
npm ERR! npm  v3.5.2
npm ERR! code ELIFECYCLE
npm ERR! elasticsearch-head@0.0.0 start: `grunt server`
npm ERR! Exit status 99
npm ERR! 
npm ERR! Failed at the elasticsearch-head@0.0.0 start script 'grunt server'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the elasticsearch-head package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     grunt server
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs elasticsearch-head
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls elasticsearch-head
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /home/camellia-test145/elasticsearch-head/npm-debug.log

After some investigation, this turned out to be caused by outdated versions; reinstall npm, node, grunt, and the related packages at their latest versions:

sudo  npm cache clean -f
sudo npm install -g n
sudo n stable
sudo npm install npm@latest -g
sudo npm install grunt-cli@latest
sudo npm install grunt-contrib-copy@latest
sudo npm install grunt-contrib-concat@latest
sudo npm install grunt-contrib-uglify@latest
sudo npm install grunt-contrib-clean@latest
sudo npm install grunt-contrib-watch@latest
sudo npm install grunt-contrib-connect@latest
sudo npm install grunt-contrib-jasmine@latest

Run grunt server again in the elasticsearch-head directory; the following output indicates success:

Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100

To run in the background: nohup npm run start &
Then visit http://IP:9100.

1.8 Delete all indices
 curl -X DELETE http://ip:9200/_all
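
Deleting _all removes every index and is rarely what you want outside of testing. To see what exists and delete a single index instead (INDEX_NAME is a placeholder):

# List all indices with document counts and sizes
curl http://ip:9200/_cat/indices?v
# Delete one specific index only
curl -X DELETE http://ip:9200/INDEX_NAME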

2. Logstash

2.1 Download logstash-6.4.3
sudo wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.3.tar.gz

Extract to the target directory:

sudo tar zxvf logstash-6.4.3.tar.gz -C /usr/local
2.2 Configure Logstash to output to Elasticsearch

Edit logstash.conf in the config directory of the Logstash installation (create it if it does not exist) with the following content:

input {
    tcp {
        mode => "server"
        host => "0.0.0.0"
        port => 4567
        codec => json
    }
}
filter{
        grok{
                match => {
                        "message" => "(?<datetime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3})  INFO %{NUMBER:thread} --- %{SYSLOG5424SD:task} %{JAVACLASS:javaclass}\s*: %{SYSLOG5424SD:module}\s*%{GREEDYDATA:msg}"
                 }
        } ### match the message content with grok

        date{
                match => ["datetime","yyyy-MM-dd HH:mm:ss.SSS","yyyy-MM-dd HH:mm:ss.SSSZ"]
                target => "@timestamp"
        } ### parse the timestamp into @timestamp
}

output {
    elasticsearch {
       hosts => ["ip:9200","ip:9201","ip:9202"]
       index => "serverlog-%{+YYYY.MM.dd}"
    }
}

In the configuration above, the input is a TCP socket that receives JSON events directly from the Spring Boot application (Filebeat, covered in the next section, is an alternative input), and the output is Elasticsearch. See the official documentation for more input and output options: https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html.

The host and port configured in the tcp input must match the destination configured in the Spring Boot project.

2.3 Start
sudo bin/logstash -f config/logstash.conf >logstash.log  2>&1 &
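
Before leaving it running in the background, the pipeline file can be validated and the tcp input exercised with a hand-written JSON event; a small sketch, assuming Logstash runs locally and netcat (nc) is installed:

# Check the configuration syntax without starting the pipeline
bin/logstash -f config/logstash.conf --config.test_and_exit
# Send a test JSON event to the tcp input on port 4567
echo '{"message":"test event from nc"}' | nc localhost 4567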
2.4 Install plugins
bin/logstash-plugin install logstash-filter-multiline

The default gem mirror can be very slow, so switch it to a domestic one:

sudo vim Gemfile

Change the default source https://rubygems.org to https://gems.ruby-china.com or https://ruby.taobao.org.
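
A one-line way to make that change, run from the Logstash installation directory (a sketch; back up the Gemfile first if unsure):

# Replace the default gem source with the ruby-china mirror
sudo sed -i 's|https://rubygems.org|https://gems.ruby-china.com|' Gemfile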

3. Logstash with Filebeat

See also https://blog.csdn.net/forezp/article/details/98322521.

In a distributed system, one host may run several applications, each writing its logs to a directory on that host; Logstash could then ship and parse those logs and forward them to Elasticsearch.
However, Logstash is a Java application and parsing logs is quite CPU- and memory-intensive, so installing Logstash on every application host is heavyweight.
The most common approach is to deploy Filebeat on the application hosts and run Logstash separately: Filebeat ships the logs to Logstash, Logstash parses them, and then forwards the results to Elasticsearch.
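
For that setup, Logstash needs a beats input listening on the port that Filebeat sends to (5044 in the filebeat.yml below). A minimal sketch of such a pipeline, written to a hypothetical file config/beats.conf; the output index mirrors section 2.2, and the grok/date filter from there could be added as well:

# Create a pipeline that accepts events from Filebeat on port 5044
cat > /usr/local/logstash-6.4.3/config/beats.conf <<'EOF'
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["ip:9200"]
    index => "serverlog-%{+YYYY.MM.dd}"
  }
}
EOF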

3.1 Download Filebeat
sudo wget  https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz

Extract to the target directory:

sudo tar zxvf filebeat-7.2.0-linux-x86_64.tar.gz -C /usr/local
3.2 Modify filebeat.yml

Change the two enabled settings to true (and make sure the paths entry under the log input points to the log files you want to ship):

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
   hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

To ship logs directly to Elasticsearch, uncomment the corresponding lines:

output.elasticsearch:
  hosts: ["localhost:9200"]

To ship logs to Logstash, uncomment the corresponding lines:

output.logstash:
   hosts: ["localhost:5044"]
3.3 Start
sudo ./filebeat -e >filebeat.log 2>&1 &
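
Filebeat ships with built-in self-tests that are worth running before starting it for real; a quick sketch from the Filebeat installation directory:

# Validate filebeat.yml
sudo ./filebeat test config
# Check that the configured Logstash (or Elasticsearch) output is reachable
sudo ./filebeat test output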

4. Kibana

4.1 Download
sudo wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.3-linux-x86_64.tar.gz

Extract to the target directory:

sudo tar zxvf kibana-6.4.3-linux-x86_64.tar.gz -C /usr/local
4.2 Modify the configuration file
cd /usr/local/kibana-6.4.3-linux-x86_64/config
sudo vim  kibana.yml 


server.host: "0.0.0.0"
elasticsearch.url: "http://IP:9200"

4.3 Start
sudo bin/kibana &
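
Kibana listens on port 5601 by default; once it has started, a quick check from the server (or a browser pointed at http://IP:5601) confirms it is up:

# Kibana should answer on its default port 5601
curl -I http://localhost:5601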

That completes the installation and configuration of ELK. Next, modify the Spring Boot project so that it can send its logs to Logstash.

III. Spring Boot project configuration

1. Add the dependency

<dependency>
	<groupId>net.logstash.logback</groupId>
	<artifactId>logstash-logback-encoder</artifactId>
	<version>5.3</version>
</dependency>
2. Modify the logging configuration file
<!-- Appender that ships logs to Logstash -->
	<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
		<destination>ip:port</destination>
	<!-- Log output encoder -->
		<encoder charset="UTF-8"
		         class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
			<providers>
				<timestamp>
					<timeZone>UTC</timeZone>
				</timestamp>
				<pattern>
					<pattern>
						{
						"serviceName": "sales-manage",
						"class": "%logger{36}",
						"rest": "%message"
						}
					</pattern>
				</pattern>
			</providers>
		</encoder>
	</appender>
  • destination: the IP and port configured in the tcp input of Logstash's logstash.conf;
  • pattern: the fields you want to see in Kibana.
    The appender also needs to be referenced from a logger (e.g. <appender-ref ref="logstash"/> under <root>), otherwise no events are sent. With that, the Spring Boot project configuration is complete.
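
Once the application has logged a few lines, the simplest end-to-end check is whether the daily index configured in logstash.conf has appeared in Elasticsearch; a small sketch, with ip as the placeholder used throughout this guide:

# Indices named serverlog-YYYY.MM.dd should show up here
curl "http://ip:9200/_cat/indices/serverlog-*?v"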

IV. Summary

Start Elasticsearch first, then start Logstash and Kibana:
cd /usr/local/elasticsearch-6.4.3
bin/elasticsearch -d
cd /usr/local/logstash-6.4.3/
bin/logstash -f config/logstash.conf >logstash.log 2>&1 &
cd /usr/local/kibana-6.4.3-linux-x86_64/
bin/kibana &
