# ELK+log4j+kafka

**Repository Path**: weixin1007/ELK-log4j-kafka

## Basic Information

- **Project Name**: ELK+log4j+kafka
- **Description**: Pushes Log4j logs from a Spring MVC application to Kafka, built on elasticsearch-5.6.2, kafka_2.11-0.10.0.1, kibana-5.6.2-linux-x86_64 and logstash-5.6.2
- **Primary Language**: Java
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 3
- **Created**: 2022-05-24
- **Last Updated**: 2022-05-24

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# ELK+log4j+kafka

Pushes Log4j logs from a Spring MVC application to Kafka, built on elasticsearch-5.6.2, kafka_2.11-0.10.0.1, kibana-5.6.2-linux-x86_64 and logstash-5.6.2. Sketches of the application-side Log4j configuration and a few pipeline verification commands follow after the installation steps below.

1. Install Kafka. Download and unpack it. The configuration here is for a single node; cluster setups are documented elsewhere online. Edit `config/server.properties`; the key settings are:

   ```
   broker.id=0
   listeners=PLAINTEXT://:9092
   host.name=<local IP>
   advertised.host.name=<local IP>
   zookeeper.connect=<local IP>:2181
   delete.topic.enable=true
   ```

   First start the ZooKeeper instance bundled with Kafka:

   ```
   ./zookeeper-server-start.sh ../config/zookeeper.properties
   ```

   Then start the Kafka broker:

   ```
   ./kafka-server-start.sh ../config/server.properties
   ```

2. Install Elasticsearch. Download it and start it:

   ```
   ./elasticsearch
   ```

   See notice.md for caveats.

3. Install Logstash and its Kafka/Elasticsearch plugins:

   ```
   bin/logstash-plugin install logstash-input-kafka
   bin/logstash-plugin install logstash-output-elasticsearch
   ```

   Create a new file `logstash-log4j.conf`:

   ```
   input {
     kafka {
       bootstrap_servers => "IP:9092"
       group_id => "logstash"
       topics => ["log4jdemo"]
       consumer_threads => 5
       decorate_events => true
       reconnect_backoff_ms => 5000
     }
   }
   filter {
   }
   output {
     elasticsearch {
       hosts => ["IP:9200"]
       user => "elastic"
       password => "changeme"
       index => "logstash-%{+YYYY.MM.dd}"
     }
   }
   ```

   Start Logstash from the bin directory:

   ```
   ./logstash -f ../config/logstash-log4j.conf
   ```

4. Install Kibana and start it:

   ```
   ./kibana
   ```
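The steps above cover the Kafka → Logstash → Elasticsearch side but not the application side of pushing Spring MVC's Log4j output to Kafka. The snippets below are sketches only, assuming Log4j 1.x together with the `kafka-log4j-appender` artifact that Apache Kafka publishes alongside the broker; the version shown is an assumption chosen to match the 0.10.0.1 broker used above.

```xml
<!-- Assumed Maven dependency: Kafka's Log4j 1.x appender, version matched to the 0.10.0.1 broker -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-log4j-appender</artifactId>
    <version>0.10.0.1</version>
</dependency>
```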
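A minimal `log4j.properties` sketch wiring the root logger to both the console and the `log4jdemo` topic consumed by Logstash above; the broker address is a placeholder and the property names follow Kafka's `KafkaLog4jAppender`:

```properties
# Sketch only -- replace IP with the Kafka broker address from server.properties
log4j.rootLogger=INFO, stdout, kafka

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# KafkaLog4jAppender ships in the kafka-log4j-appender artifact
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka.brokerList=IP:9092
log4j.appender.kafka.topic=log4jdemo
# Asynchronous send so logging does not block request threads
log4j.appender.kafka.syncSend=false
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# Keep the Kafka client's own logging away from the Kafka appender to avoid a feedback loop
log4j.logger.org.apache.kafka=WARN, stdout
log4j.additivity.org.apache.kafka=false
```

Routing the `org.apache.kafka` loggers only to the console keeps the producer inside the appender from logging back into itself.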
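On the Java side no Kafka-specific code is needed; a plain Log4j call from any Spring MVC controller is enough. A hypothetical controller (package, class and mapping names are illustrative only):

```java
package com.example.demo; // hypothetical package name

import org.apache.log4j.Logger;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class LogDemoController {

    private static final Logger LOGGER = Logger.getLogger(LogDemoController.class);

    @RequestMapping("/hello")
    @ResponseBody
    public String hello() {
        // This message goes to the "log4jdemo" topic via the Kafka appender,
        // then into Elasticsearch through the Logstash pipeline above
        LOGGER.info("hello from spring mvc");
        return "ok";
    }
}
```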
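To check each hop of the pipeline by hand, the commands below are a sketch against the single-node setup above. Run the Kafka commands from its bin directory; the IP placeholder, topic name and the elastic/changeme credentials match the configuration shown earlier.

```sh
# Create the topic used by the Log4j appender and the Logstash kafka input
./kafka-topics.sh --create --zookeeper IP:2181 --replication-factor 1 --partitions 1 --topic log4jdemo

# Watch messages as they arrive on the topic
./kafka-console-consumer.sh --zookeeper IP:2181 --topic log4jdemo --from-beginning

# Confirm Elasticsearch is reachable and that Logstash has created the daily index
curl -u elastic:changeme "http://IP:9200/_cat/indices?v"
```

Once the `logstash-YYYY.MM.dd` index appears, add it as an index pattern in Kibana to browse the logs.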