Technology sharing

Setting up an ELK + Filebeat + Zookeeper + Kafka lab

2024-07-12


Table of contents

1. Deploy Filebeat

2. Logstash configuration

3. Kibana access verification


Hostname   IP address      Main software
es01       192.168.9.114   ElasticSearch
es02       192.168.9.115   ElasticSearch
es03       192.168.9.116   ElasticSearch, Kibana
nginx01    192.168.9.113   nginx, Logstash
NA         192.168.9.111   nginx, Filebeat
NA         192.168.9.210   Zookeeper, Kafka
NA         192.168.9.120   Zookeeper, Kafka
NA         192.168.9.140   Zookeeper, Kafka

ELK, Filebeat, Zookeeper, and Kafka can be combined into a pipeline for collecting and querying historical logs.

1. Deploy Filebeat

  cd /usr/local/filebeat
  vim filebeat.yml

Comment out lines 162-164 (the existing output configuration) and add the following starting at line 163:

  output.kafka:
    enabled: true
    hosts: ["192.168.9.210:9092","192.168.9.120:9092","192.168.9.140:9092"]    # Kafka cluster addresses
    topic: "nginx"    # Kafka topic to publish to

Request the site in a browser to generate fresh nginx log data:

http://192.168.9.111/test.html, http://192.168.9.111/test1.html, http://192.168.9.111/

Start Filebeat:

  ./filebeat -e -c filebeat.yml
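Before wiring up Logstash, it can help to confirm that events are actually reaching the Kafka topic by running a console consumer on one of the Kafka nodes. The installation path of the Kafka CLI scripts below is an assumption; adjust it to where Kafka is installed.

```shell
# Read a few records from the "nginx" topic to confirm Filebeat is publishing.
# The path to the Kafka scripts is assumed and varies by installation.
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.9.210:9092 \
  --topic nginx --from-beginning --max-messages 5
```

If Filebeat is publishing correctly, this prints a handful of JSON events and exits.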

2. Logstash configuration

  cd /etc/logstash/conf.d/
  vim kafka.conf

  input {
    kafka {
      bootstrap_servers => "192.168.9.210:9092,192.168.9.120:9092,192.168.9.140:9092"
      topics => ["nginx"]
      type => "nginx_kafka"
      auto_offset_reset => "latest"
    }
  }
  #filter {}
  output {
    elasticsearch {
      hosts => ["192.168.9.114:9200", "192.168.9.115:9200", "192.168.9.116:9200"]
      index => "nginx_kafka-%{+yyyy.MM.dd}"
    }
  }

  logstash -t -f kafka.conf    # validate the configuration
  logstash -f kafka.conf       # run the pipeline
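The pipeline above leaves the filter block empty, so each nginx log line lands in Elasticsearch as a single message string. If the goal is to query access logs by field (client IP, status code, and so on), a minimal grok filter could be added instead of the empty #filter {}. This sketch assumes nginx writes the default combined log format and that the raw line arrives in the message field:

```
filter {
  grok {
    # Parse the default combined access-log format; assumes the raw
    # log line arrives in the "message" field.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```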

3. Kibana access verification

Open http://192.168.9.116:5601 in a browser to access Kibana, click the [Create Index Pattern] button, enter [nginx_kafka-*], click [Next], select the [@timestamp] field, and click [Create index pattern].
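If the index pattern does not show up in Kibana, it can help to confirm on the Elasticsearch side that the daily index was actually created; any of the three ES nodes can answer this.

```shell
# List indices matching the Logstash output pattern (one index per day).
curl -s "http://192.168.9.116:9200/_cat/indices/nginx_kafka-*?v"
```

An index named like nginx_kafka-2024.07.12 with a non-zero docs.count indicates the whole pipeline is working end to end.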