Technology Sharing

Building an ELK + Filebeat + Zookeeper + Kafka Experiment

2024-07-12


Table of contents

1. Deploy Filebeat

2. Logstash Configuration

3. Browser kibana access verification


Hostname    IP address       Main software
es01        192.168.9.114    Elasticsearch
es02        192.168.9.115    Elasticsearch
es03        192.168.9.116    Elasticsearch, Kibana
nginx01     192.168.9.113    nginx, Logstash
NA          192.168.9.111    nginx, Filebeat
NA          192.168.9.210    Zookeeper, Kafka
NA          192.168.9.120    Zookeeper, Kafka
NA          192.168.9.140    Zookeeper, Kafka

For building ELK, Filebeat, Zookeeper, and Kafka, refer to the previous blog posts.

1. Deploy Filebeat

  1. cd /usr/local/filebeat
  2. vim filebeat.yml
  3. Comment out lines 162–164 (the original Elasticsearch output)
  4. Add the following starting at line 163:
  5. output.kafka:
  6.   enabled: true
  7.   hosts: ["192.168.9.210:9092","192.168.9.120:9092","192.168.9.140:9092"] # specify the Kafka cluster
  8.   topic: "nginx" # specify the Kafka topic

Access nginx in a browser to generate new log data:

http://192.168.9.111/test.html, http://192.168.9.111/test1.html, http://192.168.9.111/
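The same access-log entries can be generated from the command line; a minimal sketch, assuming curl is available on a machine that can reach the nginx host:

```shell
# Hit each test URL once so nginx writes access-log lines for Filebeat to ship
for path in /test.html /test1.html /; do
  curl -s -o /dev/null "http://192.168.9.111${path}"
done
```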

  1. Start Filebeat
  2. ./filebeat -e -c filebeat.yml
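Before wiring up Logstash, you can confirm that log messages are actually reaching the Kafka topic. A sketch, assuming Kafka is installed under /usr/local/kafka on one of the brokers (adjust the path to your installation):

```shell
# Consume a few messages from the "nginx" topic to verify the Filebeat output
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.9.210:9092 \
  --topic nginx --from-beginning --max-messages 5
```

If JSON-formatted nginx access-log lines are printed, Filebeat → Kafka is working.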

2. Logstash Configuration

  1. cd /etc/logstash/conf.d/
  2. vim kafka.conf
  3. input {
  4.   kafka {
  5.     bootstrap_servers => "192.168.9.210:9092,192.168.9.120:9092,192.168.9.140:9092"
  6.     topics => ["nginx"]
  7.     type => "nginx_kafka"
  8.     auto_offset_reset => "latest"
  9.   }
  10. }
  11. #filter {}
  12. output {
  13.   elasticsearch {
  14.     hosts => ["192.168.9.114:9200", "192.168.9.115:9200", "192.168.9.116:9200"]
  15.     index => "nginx_kafka-%{+yyyy.MM.dd}"
  16.   }
  17. }
  18. logstash -t -f kafka.conf    # test the configuration
  19. logstash -f kafka.conf       # run Logstash
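Once Logstash is running, the dated index should appear in Elasticsearch. A quick check with curl against any of the cluster nodes:

```shell
# List indices and filter for the one created by this pipeline
# (the index name carries the current date, e.g. nginx_kafka-2024.07.12)
curl -s "http://192.168.9.114:9200/_cat/indices?v" | grep nginx_kafka
```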

3. Browser kibana access verification

Use a browser to access http://192.168.9.116:5601 and log in to Kibana. Click the [Manage] button, then [Create Index Pattern], search for [nginx_kafka-*], click [Next], select [@timestamp] as the time field, and click [Create Index Pattern]. You can then view the chart and log information.