How to write logstash collection rules for tomcat, redis, and nginx logs

Posted by Yancy on 2016-01-29
How to collect and analyze tomcat logs with logstash (you can refer to the template I wrote below)
To collect tomcat logs, you normally create a file input that specifies the files or directories to watch, then a filter (if there are no special requirements, you can skip parsing at first and store the raw log as-is), and finally an elasticsearch output plugin that sends the events to ES.
input {
  stdin {}
  file {
    type => "tomcat_card"
    path => "/srv/tomcat/logs/card/*.log"
    start_position => "beginning"
  }
  file {
    type => "tomcat_family"
    path => "/srv/tomcat/logs/family/*.log"
    start_position => "beginning"
  }
  file {
    type => "tomcat_mission"
    path => "/srv/tomcat/logs/mission/*.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => [
      "message", "%{TIMESTAMP_ISO8601:logdate} %{WORD:loglevel}%{SPACE}%{NOTSPACE:loggername} - %{GREEDYDATA:msg}",
      "message", "%{GREEDYDATA:msg}"
    ]
  }
}
output {
  if [type] == "tomcat_card" {
    elasticsearch {
      host => "192.168.1.234"
      protocol => "http"
      index => "tomcat_card-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "tomcat_family" {
    elasticsearch {
      host => "192.168.1.234"
      protocol => "http"
      index => "tomcat_family-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "tomcat_mission" {
    elasticsearch {
      host => "192.168.1.234"
      protocol => "http"
      index => "tomcat_mission-%{+YYYY.MM.dd}"
    }
  }
}
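To see what the first grok pattern above extracts, here is a rough Python sketch that approximates those grok patterns with plain regexes (the regexes and the sample log line are my assumptions, not the exact pattern definitions shipped with logstash):

```python
import re

# Rough regex equivalents of the grok patterns used in the filter
# (approximations, not the official grok definitions):
#   TIMESTAMP_ISO8601 -> \d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?
#   WORD              -> \w+
#   NOTSPACE          -> \S+
#   GREEDYDATA        -> .*
PATTERN = re.compile(
    r"(?P<logdate>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?)\s"
    r"(?P<loglevel>\w+)\s+"
    r"(?P<loggername>\S+) - "
    r"(?P<msg>.*)"
)

# A hypothetical log4j-style tomcat line for illustration.
line = "2016-01-29 12:00:01,123 INFO com.example.CardService - card created"
m = PATTERN.match(line)
print(m.groupdict())
```

Lines that do not match this shape fall through to the second pattern, `%{GREEDYDATA:msg}`, which captures the whole line into `msg` so nothing is dropped.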
Logstash configuration for collecting nginx logs

/etc/logstash/conf.d/nginx.conf

input {
  file {
    path => "/data/wwwlogs/nginx_json.log"
    codec => "json"
  }
}
filter {
  mutate {
    split => [ "upstreamtime", "," ]
  }
  mutate {
    convert => [ "upstreamtime", "float" ]
  }
}
output {
  kafka {
    bootstrap_servers => "10.0.0.11:9092"
    topic_id => "logstash"
    compression_type => "gzip"
  }
}
Create a configuration file that reads from a log file and writes to redis (this file uses the default input and output settings)
# cat agent.conf
input {
  file {
    path => "/var/log/httpd/access_log"   # path of the log file to read
    sincedb_path => "../.sincedb"
    type => "httpd"
    start_position => "beginning"
  }
}
output {
  redis {
    host => ["127.0.0.1"]
    port => 6379
    batch => true
    batch_events => 5
    data_type => "list"
    key => "logstash:redis"
  }
}
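With `batch => true` and `batch_events => 5`, the redis output buffers events and pushes each batch with a single RPUSH command instead of one command per event, cutting round trips to redis. A Python sketch of that batching behaviour, using a stand-in class instead of a real redis connection (the class and helper function are illustrative assumptions):

```python
# FakeRedis is a stand-in for a real redis connection, just to count
# how many RPUSH commands the batching strategy issues.
class FakeRedis:
    def __init__(self):
        self.lists = {}
        self.rpush_calls = 0

    def rpush(self, key, *values):
        self.rpush_calls += 1
        self.lists.setdefault(key, []).extend(values)

def ship(events, conn, key="logstash:redis", batch_events=5):
    """Buffer events and push them batch_events at a time."""
    buffer = []
    for ev in events:
        buffer.append(ev)
        if len(buffer) >= batch_events:
            conn.rpush(key, *buffer)  # one command for the whole batch
            buffer = []
    if buffer:                        # flush any remainder
        conn.rpush(key, *buffer)

conn = FakeRedis()
ship([f"event-{i}" for i in range(12)], conn)
print(conn.rpush_calls)                   # -> 3 (batches of 5, 5, 2)
print(len(conn.lists["logstash:redis"]))  # -> 12
```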
Configure a file that reads logs from redis and outputs them to ES
# cat index.conf
input {
  redis {
    host => ["127.0.0.1"]
    port => 6379
    data_type => "list"
    key => "logstash:redis"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
    protocol => "http"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    index_type => "%{type}"
  }
}
Start logstash

Community learning:

🐧 Linux shell senior operations group: QQ group 459096184 (system ops, application ops, automation ops, virtualization research; welcome to join)
🐧 BigData-Exchange School: QQ group 521621407 (big data ops, Hadoop developers, big data enthusiasts; welcome to join)

There is also an internal WeChat exchange group for big data; the link is available in the QQ group.