Building a Centralized Log Analysis Platform with ELK (Elasticsearch 1.7+, Logstash 1.5+, Kibana 4.1): A Practical Guide

Posted by Yancy on 2016-01-21


This article installs Elasticsearch-1.7.2, Logstash-1.5.5, and Kibana-4.1.5. Pay attention to version requirements: some components only work with matching versions of the others.
Logstash collects and forwards logs, Elasticsearch stores and searches them, and Kibana presents them in a web UI. The three run independently: they can be deployed on one machine or spread across several.

Detailed documentation is available on my GitHub: https://github.com/yangcvo/ELK

Component overview

Prerequisites:

Server: CentOS 6.7, IP 192.168.1.234, JDK 1.8, Elasticsearch-1.7.2, Kibana-4.1.2
Client: CentOS 6.7, IP 192.168.1.235, JDK 1.8, Logstash-1.5.4

Basic configuration: set the FQDN

The FQDN is required later when creating the SSL certificate.
# set the hostname
cat /etc/hostname
elk
# update the hosts file
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.234 elk.ooxx.com elk
# apply the new hostname
hostname -F /etc/hostname
# verify the result
hostname -f
elk.ooxx.com
hostname
elk
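Since certificate generation later depends on a well-formed FQDN, a quick sanity check can save debugging. A minimal sketch; `check_fqdn` is my own helper, not part of the standard setup:

```shell
# check_fqdn: succeed only if the argument looks like a fully qualified
# domain name (dotted labels of alphanumerics/hyphens, alphabetic TLD).
check_fqdn() {
  echo "$1" | grep -Eq '^([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}$'
}

# a bare hostname like "elk" should fail; "elk.ooxx.com" should pass
check_fqdn elk.ooxx.com && echo "FQDN looks good"
```

Run it against `hostname -f` after editing /etc/hosts to confirm the change took effect.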

Disable the firewall

#service iptables stop
#setenforce 0
In my case, though, the firewall stays enabled; I just open the needed ports afterwards.
Alternatively, instead of disabling the firewall, open the relevant ports in iptables (note that Kibana 4 listens on 5601; 9292 was Kibana 3's port):
# vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5601 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9200 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9292 -j ACCEPT
# service iptables restart
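When several ports follow the same pattern, generating the rules reduces typos. A small sketch; `emit_rules` is my own helper:

```shell
# emit_rules: print an iptables ACCEPT rule for each TCP port given, ready
# to paste into /etc/sysconfig/iptables above the final REJECT lines.
emit_rules() {
  for port in "$@"; do
    echo "-A INPUT -m state --state NEW -m tcp -p tcp --dport $port -j ACCEPT"
  done
}

emit_rules 80 5601 9200 9292
```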

Install Java:

Elasticsearch and Logstash both depend on a JDK, so install one first:
# yum -y install java-1.8.0-openjdk*
# java -version
I install via yum here; you can also download a tarball yourself, in which case remember to set the Java path.
Java can also be downloaded from https://www.reucon.com/cdn/java/
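To confirm the right JDK is active before going further, the version banner can be parsed in a script. A sketch; `java_major` is my own helper and the sample banner is illustrative:

```shell
# java_major: pull the major version out of a pre-9 "java -version" banner,
# e.g. 'java version "1.8.0_151"' -> 8, so scripts can assert on it.
java_major() {
  echo "$1" | sed -n 's/.*version "1\.\([0-9]*\)\..*/\1/p'
}

# sample banner, as printed by: java -version 2>&1 | head -n1
banner='openjdk version "1.8.0_65"'
java_major "$banner"   # prints 8
```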

Install Elasticsearch:

RPM install
Download Elasticsearch. By default Elasticsearch serves HTTP on port 9200 and uses TCP port 9300 for inter-node transport.
On CentOS, install from the RPM package:
# mkdir -p /opt/software && cd /opt/software
# wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.noarch.rpm
# rpm -ivh elasticsearch-1.7.2.noarch.rpm
With the RPM install you can customize the storage directories:
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: graylog-development
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
path.data: /home/data/es-data    # custom data directory
path.work: /home/data/es-work
network.host: 192.168.1.234

Start the Elasticsearch service

service elasticsearch start
service elasticsearch status
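Elasticsearch takes a few seconds to come up after `service elasticsearch start`, so scripts that query it immediately can race it. A small polling sketch, assuming curl is installed; `wait_for_es` is my own helper:

```shell
# wait_for_es URL TRIES: poll the Elasticsearch HTTP endpoint once per
# second until it answers; return nonzero if it never comes up.
wait_for_es() {
  url=$1; tries=${2:-30}; i=0
  while [ "$i" -lt "$tries" ]; do
    curl -s -o /dev/null "$url" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# typical use:
#   wait_for_es http://localhost:9200 30 && echo "elasticsearch is up"
```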

Tarball install

Here I installed from the tarball instead.
Download Elasticsearch from https://www.elastic.co/downloads/elasticsearch (the default ports are the same: HTTP on 9200, inter-node transport on 9300).
tar -zxvf elasticsearch-1.7.1.tar.gz -C /usr/local/
Then give the directory a symlink:
cd /usr/local && ln -s elasticsearch-1.7.1 elasticsearch
cd /usr/local/elasticsearch/config/
The config file needs some changes; before editing it, create the directories it will reference:
mkdir -p /data/es-data
mkdir -p /data/es-work
mkdir /usr/local/elasticsearch/logs
mkdir /usr/local/elasticsearch/plugins
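The directory creation can be wrapped in one idempotent step. A sketch with a configurable prefix for dry runs; `prepare_es_dirs` and its prefix argument are my own, and the paths match the data, work, log, and plugin settings used in the elasticsearch.yml below:

```shell
# prepare_es_dirs PREFIX: create every directory the elasticsearch.yml
# references, under an optional prefix (empty prefix = the real paths).
prepare_es_dirs() {
  prefix=${1:-}
  mkdir -p "$prefix/data/es-data" \
           "$prefix/data/es-work" \
           "$prefix/usr/local/elasticsearch/logs" \
           "$prefix/usr/local/elasticsearch/plugins"
}

# prepare_es_dirs              # real run (as root)
# prepare_es_dirs /tmp/esdry   # dry run under /tmp
```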

Configure Elasticsearch:

vim elasticsearch.yml
cluster.name: elasticsearch    # cluster name
#################################### Node #####################################
# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: "linux_es"    # I run a two-node cluster, so each node gets a name
# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
node.master: true    # eligible as cluster master
#
# Allow this node to store data (enabled by default):
#
node.data: true    # this node stores data
# Set the number of shards (splits) of an index (5 by default):
#
index.number_of_shards: 5
# Set the number of replicas (additional copies) of an index (1 by default):
#
index.number_of_replicas: 1
#################################### Paths ####################################
# Path to directory containing configuration (this file and logging.yml):
#
path.conf: /usr/local/elasticsearch/config    # config directory
# Path to directory where to store index data allocated for this node.
#
path.data: /data/es-data    # data directory (create it beforehand)
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
#path.data: /path/to/data1,/path/to/data2
# Path to temporary files:
#
path.work: /data/es-work
# Path to log files:
#
path.logs: /usr/local/elasticsearch/logs    # log directory (create it beforehand)
# Path to where plugins are installed:
#
path.plugins: /usr/local/elasticsearch/plugins    # plugin directory
#
# Set this property to true to lock the memory:
#
bootstrap.mlockall: true

With the tarball install, start Elasticsearch by running:
/usr/local/elasticsearch/bin/elasticsearch
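bootstrap.mlockall only takes effect if the user running Elasticsearch is allowed to lock memory; on a stock CentOS 6 that usually means memlock entries in /etc/security/limits.conf. A sketch that generates them; `memlock_limits` is my own helper and the user name is an assumption:

```shell
# memlock_limits USER: print the /etc/security/limits.conf entries that let
# the given user lock memory, which bootstrap.mlockall needs to work.
memlock_limits() {
  user=${1:-elasticsearch}
  printf '%s soft memlock unlimited\n%s hard memlock unlimited\n' "$user" "$user"
}

memlock_limits elasticsearch   # append the output to /etc/security/limits.conf
```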

To run it as a service you need an init script in /etc/init.d/; you can download the elasticsearch-servicewrapper from my GitHub.
[root@ELK elasticsearch-servicewrapper]# mv service/ /usr/local/elasticsearch/bin/
[root@ELK elasticsearch-servicewrapper]# cd /usr/local/elasticsearch
[root@ELK elasticsearch]# /usr/local/elasticsearch/bin/service/elasticsearch install    # install the service
Detected RHEL or Fedora:
Installing the Elasticsearch daemon..
[root@ELK elasticsearch]# vim /etc/init.d/elasticsearch    # inspect the generated init script
[root@ELK elasticsearch]# service elasticsearch start    # start Elasticsearch
Starting Elasticsearch...
Waiting for Elasticsearch......
running: PID:31360    # the service is up
Start and check the service:
service elasticsearch start
service elasticsearch status
Make the elasticsearch service start automatically at boot:
# chkconfig --add elasticsearch
Test that the service responds normally (expect an HTTP 200 response):
# curl -X GET http://localhost:9200
{
"name" : "elk",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "-4Rqn4IzS1GfnsodqZD8Tg",
"version" : {
"number" : "1.7.2",
"build_hash" : "d38a34e7b75af4e17ead16f156feffa432b22be3",
"build_timestamp" : "2016-01-03T16:28:56Z",
"build_snapshot" : false,
"lucene_version" : "5.5.2"
},
"tagline" : "You Know, for Search"
}
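Beyond eyeballing that JSON, cluster health can be checked programmatically via the _cluster/health endpoint. A sketch that extracts the status field without needing jq; `es_status` is my own helper and the sample JSON is illustrative:

```shell
# es_status: extract the "status" field (green/yellow/red) from a
# _cluster/health JSON response.
es_status() {
  echo "$1" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# typical use:
#   es_status "$(curl -s http://localhost:9200/_cluster/health)"
es_status '{"cluster_name":"elasticsearch","status":"yellow","number_of_nodes":1}'   # prints "yellow"
```

A single-node cluster with replicas configured typically reports yellow, since the replica shards have nowhere to go.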

Install the head, Marvel, and bigdesk plugins:

head plugin
Install method 1:
/usr/local/elasticsearch/bin/plugin -install mobz/elasticsearch-head
Restart Elasticsearch, then open http://localhost:9200/_plugin/head/
Install method 2:
1. Download the zip from https://github.com/mobz/elasticsearch-head and unpack it
2. Create the directory /usr/local/elasticsearch/plugins/head/
3. Copy the contents of the unpacked elasticsearch-head-master folder into /usr/local/elasticsearch/plugins/head/
Restart Elasticsearch, then open http://localhost:9200/_plugin/head/

Marvel plugin
Marvel, Elasticsearch's cluster and data management UI, is excellent; unfortunately it is free only for development environments.
Reference: https://www.elastic.co/guide/en/marvel/current/configuration.html
Install the plugin:
/usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
Restart Elasticsearch, then visit http://192.168.1.234:9200/_plugin/marvel/
If the page doesn't load, check that this parameter is set in elasticsearch.yml:
network.host: 192.168.1.234
Then restart Elasticsearch and check again; data should appear.

bigdesk plugin

Install as needed.
Features: monitoring of CPU and memory usage, indexing and search activity, HTTP connection counts, and so on.
Install:
# /usr/local/elasticsearch/bin/plugin -i lukas-vlcek/bigdesk
Restart Elasticsearch, then visit http://192.168.1.234:9200/_plugin/bigdesk
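All three plugins follow the same `plugin -i` pattern, so the installs can be scripted in one pass. A sketch that prints the commands for review before running them (pipe to sh to execute); `plugin_cmds` is my own helper:

```shell
# plugin_cmds: print the install command for each plugin name given,
# so the list can be reviewed (or piped to sh) instead of typed by hand.
plugin_cmds() {
  for p in "$@"; do
    echo "/usr/local/elasticsearch/bin/plugin -i $p"
  done
}

plugin_cmds mobz/elasticsearch-head elasticsearch/marvel/latest lukas-vlcek/bigdesk
```

Remember to restart Elasticsearch once after installing, rather than after each plugin.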

Install Kibana:

Install Kibana on the Elasticsearch machine.

Find a suitable version at https://www.elastic.co/downloads/kibana.
Each version lists its compatibility; pay close attention to lines like: Compatible with Elasticsearch 1.4.4 - 1.7
cd /opt/software/ && wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
# unpack
tar zxvf kibana-4.1.2-linux-x64.tar.gz -C /usr/local
cd /usr/local/ && mv kibana-4.1.2-linux-x64 kibana
# create an init script for the Kibana service
vi /etc/rc.d/init.d/kibana
#!/bin/bash
### BEGIN INIT INFO
# Provides: kibana
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Runs kibana daemon
# Description: Runs the kibana daemon as a non-root user
### END INIT INFO
# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"
# Configure location of Kibana bin
KIBANA_BIN=/usr/local/kibana/bin
# PID Info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME
# Configure User to run daemon process
DAEMON_USER=root
# Configure logging location
KIBANA_LOG=/var/log/kibana.log
# Begin Script
RETVAL=0
if [ `id -u` -ne 0 ]; then
echo "You need root privileges to run this script"
exit 1
fi
# Function library
. /etc/init.d/functions
start() {
echo -n "Starting $DESC : "
pid=`pidofproc -p $PID_FILE kibana`
if [ -n "$pid" ] ; then
echo "Already running."
exit 0
else
# Start Daemon
if [ ! -d "$PID_FOLDER" ] ; then
mkdir $PID_FOLDER
fi
daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
sleep 2
pidofproc node > $PID_FILE
RETVAL=$?
[ $RETVAL -eq 0 ] && success || failure
echo
[ $RETVAL = 0 ] && touch $LOCK_FILE
return $RETVAL
fi
}
reload()
{
echo "Reload command is not implemented for this service."
return $RETVAL
}
stop() {
echo -n "Stopping $DESC : "
killproc -p $PID_FILE $DAEMON
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p $PID_FILE $DAEMON
RETVAL=$?
;;
restart)
stop
start
;;
reload)
reload
;;
*)
# Invalid Arguments, print the following message.
echo "Usage: $0 {start|stop|status|restart}" >&2
exit 2
;;
esac
# make the script executable
chmod +x /etc/rc.d/init.d/kibana
# start the Kibana service
service kibana start
service kibana status
# check the listening ports
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN 1765/java
tcp 0 0 0.0.0.0:9300 0.0.0.0:* LISTEN 1765/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1509/sshd
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 1876/node
tcp 0 0 :::22 :::* LISTEN 1509/sshd
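Rather than scanning the netstat table by eye, a small filter can confirm that Kibana's listener is up; `listening` is my own helper:

```shell
# listening PORT: succeed if netstat-style output on stdin shows a TCP
# listener on that port; use as: netstat -nlt | listening 5601
listening() {
  grep -Eq "[:.]$1[[:space:]].*LISTEN"
}

# canned line matching the netstat output above
echo 'tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 1876/node' | listening 5601 && echo "kibana port open"
```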

Configure Kibana:

# Edit kibana.yml: change the port if needed and set the host (you can bind to the server's own IP)
vi /usr/local/kibana/config/kibana.yml
# Kibana is served by a back end server. This controls which port to use.
port: 5601
# The host to bind the server to.
host: "0.0.0.0"
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"
# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"
# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass
# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key
# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem
# The default application to load.
default_app_id: "discover"
# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# ping_timeout: 1500
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# startup_timeout: 5000
# Set to false to have a complete disregard for the validity of the SSL
# certificate.
verify_ssl: true
# SSL for outgoing requests from the Kibana Server (PEM formatted)
# ssl_key_file: /path/to/your/server.key
# ssl_cert_file: /path/to/your/server.crt
# Set the path to where you would like the process id file to be created.
# pid_file: /var/run/kibana.pid
# If you would like to send the log output to a file you can set the path below.
# This will also turn off the STDOUT log output.
# log_file: ./kibana.log
# Plugins that are included in the build, and no longer found in the plugins/ folder
bundled_plugin_ids:
- plugins/dashboard/index
- plugins/discover/index
- plugins/doc/index
- plugins/kibana/index
- plugins/markdown_vis/index
- plugins/metric_vis/index
- plugins/settings/index
- plugins/table_vis/index
- plugins/vis_types/index
- plugins/visualize/index
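If Elasticsearch runs on a different host than Kibana, only elasticsearch_url needs to change. A sketch that rewrites it in place; `set_kibana_es` is my own helper and the URL is an example:

```shell
# set_kibana_es FILE URL: point an existing kibana.yml at a different
# Elasticsearch instance by rewriting its elasticsearch_url line in place
# (a .bak copy of the original file is kept).
set_kibana_es() {
  file=$1; url=$2
  sed -i.bak "s|^elasticsearch_url:.*|elasticsearch_url: \"$url\"|" "$file"
}

# set_kibana_es /usr/local/kibana/config/kibana.yml "http://192.168.1.234:9200"
```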

Install Logstash

Client: CentOS 6.7, IP 192.168.1.235

RPM install
# download the rpm package
wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.4-1.noarch.rpm
# install
yum localinstall logstash-1.5.4-1.noarch.rpm
Also update the hosts file: vim /etc/hosts
127.0.0.1 tomcat_A1
Tarball install
Here I install from the tarball.
# wget https://download.elasticsearch.org/logstash/logstash/logstash-1.5.1.tar.gz
# curl -O https://download.elastic.co/logstash/logstash/logstash-1.5.4.tar.gz
# tar -zxvf logstash-1.5.1.tar.gz
# mv logstash-1.5.1 /usr/local/
# ln -s /usr/local/logstash-1.5.1/ /usr/local/logstash
Download an init script
In production Logstash runs in the background, and the tarball install ships no init script, so grab one from GitHub: https://github.com/benet1006/ELK_config.git
# cp logstash.init /etc/init.d/logstash
# chmod +x /etc/init.d/logstash
I have modified this script; see below.
# start the logstash service
service logstash start
service logstash status
# check port 5000
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN 1765/java
tcp 0 0 0.0.0.0:9300 0.0.0.0:* LISTEN 1765/java
tcp 0 0 0.0.0.0:9301 0.0.0.0:* LISTEN 2309/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1509/sshd
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 1876/node
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 2309/java
tcp 0 0 :::22 :::* LISTEN 1509/sshd
Edit the init script
vim /etc/init.d/logstash
Point the directories at your own tarball install path:
name=logstash
pidfile="/var/run/$name.pid"
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/usr/local/logstash    # install path
LS_HEAP_SIZE="1000m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}"
LS_LOG_DIR=/usr/local/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_FILE=/etc/logstash.conf    # the log-collection rules
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html
is the official documentation for Logstash's configuration options.
Following that documentation, I'll first revise my earlier config file.

Test Logstash:

Replace logstash.ooxx.com with your own domain, and add an A record for elk.ooxx.com at your DNS provider.
Either approach works, but note that if the Logstash server's IP address changes, the certificate becomes invalid.
When you inspect the log output you should see a real hostname returned; otherwise it shows up like the example below, as host: 0.0.0.0
#/usr/local/logstash/bin/logstash -e 'input { stdin{ } } output { stdout{codec => rubydebug} }'

Configure Logstash: collect system logs (input)
vim /etc/logstash.conf    # create the .conf under /etc/ first; once written, Logstash will load it
input {
  stdin { }
}
output {
  elasticsearch {
    host => "192.168.1.234"
    protocol => "http"
  }
  stdout {
    codec => rubydebug
  }
}
Then invoke Logstash with it:
/usr/local/logstash/bin/logstash -f /etc/logstash.conf


vim /etc/logstash.conf

The official documentation describes the file input's options and types.
It also documents reading a file from the beginning (start_position), which is very handy; I adjust my config accordingly.
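Based on that file-input documentation, the revised config might look like the sketch below. The watched file, type, and start_position are illustrative choices, not the author's exact config, and `write_logstash_conf` is my own helper so the target path can be overridden:

```shell
# write_logstash_conf PATH: write a file-input version of the Logstash
# config (default target /etc/logstash.conf).
write_logstash_conf() {
  cat > "${1:-/etc/logstash.conf}" <<'EOF'
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    host => "192.168.1.234"
    protocol => "http"
  }
}
EOF
}

# write_logstash_conf   # overwrite /etc/logstash.conf (run as root)
```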

Then start the Logstash init script:
# /etc/init.d/logstash start
# ps -ef | grep logstash
Once it's running, check /var/log/messages,
then log in to http://192.168.1.234:9200/_plugin/head/ to see the collected data.

Further reading

Installing ELK (Elasticsearch + Logstash + Kibana) on CentOS 7.x

An nginx log analysis system on CentOS 6.5: elasticsearch + logstash + redis + kibana

logstash-forwarder and grok examples
