Software downloads: official download links for version 7.17.6
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.6-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.6-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.6-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.6-linux-x86_64.tar.gz
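Each tarball has a matching .sha512 file on the same download path, so the archives can optionally be verified before unpacking; a minimal sketch for the Elasticsearch archive (the other three follow the same pattern):

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.6-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.17.6-linux-x86_64.tar.gz.sha512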
Installing Elasticsearch
1. Installation
tar -zxvf elasticsearch-7.17.6-linux-x86_64.tar.gz -C /data/
cd /data/elasticsearch-7.17.6
useradd -M efk -s /bin/false
echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
sysctl -p

Raise the open-file and process limits for ordinary users:

vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
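The limits.conf changes only apply to new sessions; a quick way to confirm the kernel parameter and the per-user limits took effect (assuming pam_limits is applied to sudo sessions, as on most distributions; the efk user has no login shell, hence the explicit bash):

sysctl vm.max_map_count
sudo -u efk bash -c 'ulimit -n; ulimit -u'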
Configure config/elasticsearch.yml. First create the log and data directories:

mkdir -p /data/efk/elasticsearch
mkdir -p /data/logs/efk
chown -R efk. /data/efk/elasticsearch
chown -R efk. /data/logs/
Per-node configuration
172.16.90.210
cluster.name: my-application
node.name: 172.16.90.210
path.logs: /data/logs/elasticsearch
path.data: /data/efk/elasticsearch
network.host: 172.16.90.210
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["172.16.90.210", "172.16.90.211", "172.16.90.212"]
cluster.initial_master_nodes: ["172.16.90.210", "172.16.90.211", "172.16.90.212"]
xpack.security.enabled: false
172.16.90.211
cluster.name: my-application
node.name: 172.16.90.211
path.logs: /data/logs/elasticsearch
path.data: /data/efk/elasticsearch
network.host: 172.16.90.211
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["172.16.90.210", "172.16.90.211", "172.16.90.212"]
cluster.initial_master_nodes: ["172.16.90.210", "172.16.90.211", "172.16.90.212"]
xpack.security.enabled: false
172.16.90.212
cluster.name: my-application
node.name: 172.16.90.212
path.logs: /data/logs/elasticsearch
path.data: /data/efk/elasticsearch
network.host: 172.16.90.212
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["172.16.90.210", "172.16.90.211", "172.16.90.212"]
cluster.initial_master_nodes: ["172.16.90.210", "172.16.90.211", "172.16.90.212"]
xpack.security.enabled: false
http.cors.enabled: true
http.cors.allow-origin: "*"
Start Elasticsearch on each node as the efk user:

sudo -u efk /data/elasticsearch-7.17.6/bin/elasticsearch &
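Starting with & does not survive a reboot; if the nodes run systemd, a unit file along these lines can be used instead (the unit name, location, and limits below are assumptions, not part of the original setup):

# /etc/systemd/system/elasticsearch.service  (assumed name and location)
[Unit]
Description=Elasticsearch 7.17.6
After=network.target

[Service]
User=efk
Group=efk
ExecStart=/data/elasticsearch-7.17.6/bin/elasticsearch
LimitNOFILE=65536
LimitNPROC=65536
Restart=on-failure

[Install]
WantedBy=multi-user.target

systemctl daemon-reload && systemctl enable --now elasticsearch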
Install the elasticsearch-head plugin (a web UI for the cluster; it needs the http.cors settings shown for 172.16.90.212 above):

curl -fsSL https://rpm.nodesource.com/setup_lts.x | bash -
yum install -y nodejs
# https://github.com/mobz/elasticsearch-head
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install -g grunt-cli
npm install
grunt server
# head UI: http://172.16.90.210:9100/

# install PhantomJS and symlink it onto the PATH
cd /usr/local/share
wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
tar xjf phantomjs-2.1.1-linux-x86_64.tar.bz2
ln -s /usr/local/share/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/share/phantomjs
ln -s /usr/local/share/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
ln -s /usr/local/share/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/bin/phantomjs
Verify:

http://172.16.90.210:9200
http://172.16.90.210:9200/_plugin/head
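The same checks can be made from the shell; these Elasticsearch APIs show whether all three nodes have joined and whether the cluster is green:

curl http://172.16.90.210:9200/_cat/nodes?v
curl http://172.16.90.210:9200/_cluster/health?pretty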
Installing Kibana
1. Installation

tar -zxvf kibana-7.17.6-linux-x86_64.tar.gz -C /data/
chown -R efk:efk /data/kibana-7.17.6-linux-x86_64/
2. Configure config/kibana.yml
server.port: 5601
server.host: "172.16.90.210"
server.name: "172.16.90.210"
elasticsearch.hosts: ["http://172.16.90.210:9200", "http://172.16.90.211:9200", "http://172.16.90.212:9200"]
Start Kibana:

sudo -u efk /data/kibana-7.17.6-linux-x86_64/bin/kibana &
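Once Kibana is up (the first start can take a minute), its status endpoint should answer and the UI is reachable at http://172.16.90.210:5601:

curl -s http://172.16.90.210:5601/api/status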
Installing Filebeat. Filebeat is installed on the servers whose logs are to be collected.
Installation

tar -zxvf filebeat-7.17.6-linux-x86_64.tar.gz -C /data/
Configure filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx_access

output.elasticsearch:
  hosts: ["http://172.16.90.210:9200", "http://172.16.90.211:9200", "http://172.16.90.212:9200"]
  indices:
    - index: "nginx_access_%{+yyyy.MM}"
      when.equals:
        fields.type: "nginx_access"

setup.template.enabled: false

logging.to_files: true
logging.level: info
logging.files:
  path: /opt/logs/filebeat/
  name: filebeat
  keepfiles: 7
  permissions: 0600
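Because the output writes to a custom index, Filebeat 7.x also needs ILM turned off, otherwise index lifecycle management overrides output.elasticsearch.indices and keeps writing to the default filebeat-* index. A minimal addition, assuming the index prefix used above (the template name/pattern are only required if the template setup is re-enabled):

setup.ilm.enabled: false
# only needed if setup.template.enabled is turned back on:
# setup.template.name: "nginx_access"
# setup.template.pattern: "nginx_access_*"

Filebeat can then be started from its directory with ./filebeat -e -c filebeat.yml (add & or a service unit to keep it running in the background).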
Reference for common filebeat.inputs options (values shown are examples):

type: log
enabled: true
paths:
  - /var/log/*/*.log
recursive_glob.enabled
encoding
exclude_lines: ['^DBG']
include_lines: ['^ERR', '^WARN']
harvester_buffer_size: 16384
max_bytes: 10485760
exclude_files: ['.gz$']
ignore_older: 0
  Files modified earlier than this span are ignored, even if a file has never been collected by a harvester; 0 disables the check.
close_*
  If a file is updated after its harvester was closed, it is picked up again after the next scan_frequency. However, if the file is moved or deleted while the harvester is closed, Filebeat cannot pick it up again and any data the harvester had not yet read is lost.
close_inactive
  The last log line read, not the file's modification time, defines the starting point for the next read. If a closed file changes, a new harvester is started after the next scan_frequency run. Set it to a value larger than the frequency at which the log is written, and configure multiple inputs for log files with different update rates. Filebeat uses an internal timestamp to track the reads; the countdown restarts every time the last line is read. Use values such as 2h or 5m.
close_renamed
close_removed
close_eof
close_timeout
  Must not equal ignore_older, otherwise updated files are never read again. If the output never emits an event, this timeout never starts; at least one event has to be sent before the harvester can be closed. 0 disables it.
clean_inactive
  Must be greater than ignore_older + scan_frequency to make sure no state is removed while a file is still being collected. It helps keep the registry file small, especially when many new files are generated every day, and also helps avoid the Filebeat inode-reuse problem on Linux.
clean_removed
  If close_removed is disabled, clean_removed must be disabled as well.
scan_frequency
tail_files
  Start reading new files at their end instead of resending everything from the beginning of the file.
symlinks
  When a symlink is harvested, Filebeat also opens and reads the original file.
backoff
  How long to wait before checking a file again.
max_backoff
backoff_factor
harvester_limit
tags
fields
  Custom fields are placed in a sub-dictionary by default, e.g.
  filebeat.inputs:
    fields:
      app_id: query_engine_12
fields_under_root
multiline.pattern
multiline.negate
  With pattern '^b', the default false merges the lines that match the pattern (lines starting with b); true merges the lines that do not start with b into the preceding matching line.
multiline.match
multiline.max_lines
multiline.timeout
max_procs
name
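As an illustration of the multiline options above, a minimal input sketch that merges Java stack-trace lines into the event of the log line preceding them (the path and timestamp format are assumptions, borrowed from the Tomcat example below):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/weblogs/tomcat_test/*.log      # assumed path
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'  # a line starting with a date begins a new event
  multiline.negate: true                   # lines NOT matching the pattern...
  multiline.match: after                   # ...are appended to the previous matching line
  multiline.max_lines: 500
  multiline.timeout: 5s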
Installing Logstash

tar zxf logstash-1.5.6.tar.gz -C /usr/local/
cd /usr/local/logstash-1.5.6/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
Collecting Tomcat logs with Logstash. A sample Logstash configuration for collecting Tomcat logs:
input {
  file {
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
    path => "/data/weblogs/tomcat_test/tomcat01-service-test.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    patterns_dir => "/usr/local/logstash-1.5.6/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.4.0/patterns/"
    match => { "message" => "%{TOMCATLOG}" }
    add_field => [ "server_ip", "10.168.xx.xxx" ]
  }
}
output {
  elasticsearch {
    host => '121.40.xx.xx'
    index => 'tomcat01-%{+YYYY.MM.dd}'
    bind_host => '121.40.yy.yy'
  }
}
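The configuration can be checked for syntax errors before starting; assuming it is saved as tomcat.conf (a hypothetical file name), Logstash 1.5.x accepts the --configtest flag (newer releases use -t / --config.test_and_exit instead):

/usr/local/logstash-1.5.6/bin/logstash -f tomcat.conf --configtest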