2024-07-12
Table of contents
1. Choose the right logging framework
2. Configure the logging framework
3. Select the appropriate log level (Logback is used as the example here)
4. Dynamically adjust the log level
5. Combine log context information
   1. Using SLF4J's MDC
   2. Using Log4j 2's ThreadContext
   3. Leveraging contextual information
6. Real-time monitoring and centralized storage
   1. ELK Stack (Elasticsearch, Logstash, Kibana)
   2. Configuring Logstash to collect logs
   3. Visualization and analysis using Kibana
   4. Splunk
   5. Centralized storage and scalability
7. Log rolling and archiving
   1. Configuring the rolling strategy of the logging framework
In modern software development, logging is a key part of ensuring system stability, troubleshooting, and performance monitoring. This article looks at practical experience with log collection and introduces the technologies, tools, and best practices commonly used in Java projects.
In a Java project, choosing a suitable logging framework is the first step in log collection. Common logging frameworks include Log4j, Logback, and SLF4J. Note that SLF4J is a logging facade rather than an implementation: a common setup is to code against the SLF4J API and bind it to Logback or Log4j 2 underneath. Beyond that, weigh performance, configuration flexibility, and how well the framework integrates with the libraries the project already uses.
After selecting a logging framework, you need to configure it to meet the needs of the project. The configuration is usually an XML or properties file that specifies the log level, output format, target locations, and so on.
Taking Logback as an example, a simple configuration file looks like this:
```xml
<!-- logback.xml -->
<configuration>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>logs/myapp.log</file>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="com.example" level="DEBUG"/>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>

</configuration>
```
The above configuration defines two Appenders, one for console output and the other for file output, and sets the log level and output format.
Using appropriate log levels is one of the keys to getting the most out of a logging system. The right level yields suitably detailed log output in each environment and stage, while avoiding logs that are too verbose or too sparse, which in turn helps the system's performance and maintainability.
In the Java logging frameworks, the common log levels, in increasing order of severity, are TRACE, DEBUG, INFO, WARN, and ERROR.
Use DEBUG during development: During the development phase, use the DEBUG level to get more detailed log information to help developers trace and debug code.
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleClass {
    private static final Logger logger = LoggerFactory.getLogger(ExampleClass.class);

    public void someMethod() {
        // ...
        logger.debug("Debug information for developers");
        // ...
    }
}
```
Use INFO in production: In a production environment, set the log level to INFO so that critical runtime information is logged while redundant debugging output is suppressed.
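If the project runs on Spring Boot, one common way to vary the level per environment is Logback's `<springProfile>` support in `logback-spring.xml`. A sketch, assuming profiles named `dev` and `prod`:

```xml
<!-- In logback-spring.xml: verbose logging in dev, leaner logging in prod -->
<springProfile name="dev">
    <root level="DEBUG"/>
</springProfile>
<springProfile name="prod">
    <root level="INFO"/>
</springProfile>
```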
Warning and Error Handling: For potential problems and error conditions, use the WARN and ERROR levels. Logs at these levels will help the team quickly identify and resolve issues in the system.
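A minimal sketch of the distinction, using SLF4J (the `InventoryService` class and its method are illustrative): WARN flags a condition the system can survive, while ERROR records a failure together with its stack trace.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class InventoryService {
    private static final Logger logger = LoggerFactory.getLogger(InventoryService.class);

    public void reserveStock(String sku, int requested, int available) {
        if (requested > available) {
            // Potential problem the system can tolerate: WARN
            logger.warn("Requested {} units of {} but only {} are available", requested, sku, available);
        }
        try {
            // ... update inventory ...
        } catch (RuntimeException e) {
            // A failure that needs attention: ERROR, passing the exception so the stack trace is logged
            logger.error("Failed to reserve stock for sku {}", sku, e);
        }
    }
}
```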
Some logging frameworks allow the log level to be adjusted dynamically at runtime, which is useful for adjusting the verbosity of logging without restarting the application.
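With Logback, for example, a logger's level can be changed at runtime through its `LoggerContext` (Spring Boot exposes the same capability over HTTP via the Actuator `/actuator/loggers` endpoint). A minimal sketch, assuming Logback is the SLF4J binding; the logger name `com.example` is illustrative:

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public class LogLevelManager {
    // Change a logger's level at runtime, without restarting the application
    public static void setLevel(String loggerName, Level level) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        context.getLogger(loggerName).setLevel(level);
    }
}

// Usage: temporarily raise verbosity for one package while diagnosing an issue
// LogLevelManager.setLevel("com.example", Level.DEBUG);
```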
By using appropriate log levels, development teams can better balance information detail and performance overhead, ensuring optimal logging effects in different environments and scenarios.
Incorporating log context information means attaching extra context to each log record so that the circumstances in which a log event occurred are easier to understand. This is very useful for tracking specific requests, user sessions, or other business processes. In Java projects, a common practice is to use SLF4J's MDC (Mapped Diagnostic Context) or Log4j 2's ThreadContext to add log context information.
SLF4J's MDC allows key-value information to be added to the log context within a request or business process so that it remains available throughout processing.
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestContextLogger {
    private static final Logger logger = LoggerFactory.getLogger(RequestContextLogger.class);

    public void processRequest(String requestId, String userId) {
        try {
            // Put the request ID and user ID into the log context
            MDC.put("requestId", requestId);
            MDC.put("userId", userId);

            // Handle the request
            logger.info("Processing request");

            // ...
        } catch (Exception e) {
            logger.error("Error processing request", e);
        } finally {
            // Clear the log context so that other requests are not affected
            MDC.clear();
        }
    }
}
```
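Note that MDC values only show up in the output if the Appender's pattern references them; in Logback this is done with the `%X` conversion word. A sketch extending the pattern from the earlier configuration:

```xml
<encoder>
    <!-- %X{requestId} and %X{userId} print the MDC values set in the code above -->
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level [%X{requestId}] [%X{userId}] %logger{36} - %msg%n</pattern>
</encoder>
```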
Log4j 2 provides ThreadContext, which is similar to SLF4J's MDC and likewise stores key-value context information in thread scope.
```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class RequestContextLogger {
    private static final Logger logger = LogManager.getLogger(RequestContextLogger.class);

    public void processRequest(String requestId, String userId) {
        try {
            // Put the request ID and user ID into the log context
            ThreadContext.put("requestId", requestId);
            ThreadContext.put("userId", userId);

            // Handle the request
            logger.info("Processing request");

            // ...
        } catch (Exception e) {
            logger.error("Error processing request", e);
        } finally {
            // Clear the log context so that other requests are not affected
            ThreadContext.clearAll();
        }
    }
}
```
The advantage of incorporating log context information is that a series of related log events can be associated, making it easier to track a specific request or a user's actions. For example, in a distributed system, adding a unique request ID to the logs lets you trace the handling of a request across multiple services.
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class DistributedService {
    private static final Logger logger = LoggerFactory.getLogger(DistributedService.class);

    public void processDistributedRequest(String requestId) {
        try {
            MDC.put("requestId", requestId);

            // Handle the distributed request
            logger.info("Processing distributed request");

            // ...
        } catch (Exception e) {
            logger.error("Error processing distributed request", e);
        } finally {
            MDC.clear();
        }
    }
}
```
By combining contextual information, log records are no longer isolated events, but are organically connected together, providing a more powerful tool for system troubleshooting and performance optimization.
Real-time monitoring and centralized storage are important aspects of log management in a project. Through these means, the team can track the operating status of the system in real time, detect potential problems, and conduct timely troubleshooting when necessary. In Java projects, commonly used tools include ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, etc.
ELK Stack is a set of open source tools for log collection, storage, and visualization.
- Elasticsearch: Used to store and retrieve large amounts of log data. It provides powerful search and analysis capabilities and is suitable for real-time data.
- Logstash: Used for log collection, filtering, and forwarding. Logstash can normalize log data from different sources and send it to Elasticsearch for storage.
- Kibana: Provides an intuitive user interface for querying, visualizing, and analyzing the log data stored in Elasticsearch. With Kibana, teams can create dashboards and charts and perform in-depth analysis of log data.
Configuring Logstash to collect the project's logs is a key step in setting up the ELK Stack. Logstash supports a variety of input sources and output targets, which are defined through a simple configuration file.
```conf
# logstash.conf

input {
  file {
    path => "/path/to/your/application.log"
    start_position => "beginning"
  }
}

filter {
  # Filtering rules can be added here
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "your_index_name"
  }
}
```
This example configures a Logstash file input plugin to watch the log file at the specified path and send the events to Elasticsearch. The filter section can hold additional rules that parse, filter, or enrich the logs.
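For instance, a `grok` filter can parse lines produced by the Logback pattern used earlier into structured fields. A sketch under that assumption (the field names are illustrative):

```conf
filter {
  grok {
    # Parse lines like "10:15:30.123 [main] INFO  com.example.App - some message"
    match => {
      "message" => "%{TIME:timestamp} \[%{DATA:thread}\] %{LOGLEVEL:level}\s+%{DATA:logger} - %{GREEDYDATA:log_message}"
    }
  }
}
```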
Kibana provides an intuitive user interface that can be accessed through a web browser. In Kibana, you can create dashboards, charts, and perform complex queries and analysis.
With Kibana, you can easily:
- Real-time monitoring: View live log data to understand the operating status of the system at any time.
- Troubleshooting: Search logs by specific criteria to find the root cause of potential issues.
- Performance analysis: Analyze system performance bottlenecks using charts and visualization tools.
Splunk is another widely used log management tool that provides an all-in-one solution for log collection, search, analysis, and visualization.
- Log collection: Splunk supports collecting log data from a variety of sources (files, databases, network traffic, etc.).
- Real-time search and analysis: Splunk supports complex queries in real time and displays the results through a visual interface.
- Dashboards and reports: Users can create customized dashboards and reports for monitoring and analyzing system performance.
Both ELK Stack and Splunk have powerful centralized storage mechanisms that can store large amounts of log data. This centralized storage not only facilitates log retrieval and analysis, but also provides scalability for the system to handle large-scale application logs.
Real-time monitoring and centralized storage are key to ensuring project stability and performance. By using tools such as ELK Stack and Splunk, the project team can track logs in real time in a complex system environment, conduct timely troubleshooting and performance optimization. The powerful functions of these tools not only improve the efficiency of the team, but also provide better maintainability and scalability for the project.
Log rolling and archiving are important practices in a project. They ensure the proper management of log files, prevent log files from becoming too large and causing storage problems, and help maintain the normal operation of the system. The following are some common practices for implementing log rolling and archiving in Java projects.
Most logging frameworks provide rolling strategies that can be set through configuration files. These strategies determine when to roll over to a new log file and when to delete old log files. Taking Logback as an example, a basic rolling strategy can be configured as follows:
- <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
- <file>logs/myapp.log</file>
- <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
- <fileNamePattern>logs/myapp.%d{yyyy-MM-dd}.log</fileNamePattern>
- <maxHistory>30</maxHistory>
- </rollingPolicy>
- <encoder>
- <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
- </encoder>
- </appender>
The above configuration uses `TimeBasedRollingPolicy`, which rotates the log files based on time. `maxHistory` specifies the number of historical log files to retain; files beyond that count are deleted.
Sometimes rolling by time alone is not enough, and you also need to roll by the size of the log file. This can be achieved by configuring a maximum file size:
```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/myapp.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>logs/myapp.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>5MB</maxFileSize>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
```
The above configuration uses `SizeAndTimeBasedRollingPolicy`, which rotates log files based on both file size and time. `maxFileSize` specifies the maximum size of each log file, and the `%i` index in `fileNamePattern` distinguishes the files rolled within the same day.
Sometimes, a project may need to roll logs based on custom conditions. In this case, you can consider implementing a custom rolling strategy. For example, rolling log files based on specific business rules:
```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.rolling.TimeBasedRollingPolicy;

public class CustomRollingPolicy extends TimeBasedRollingPolicy<ILoggingEvent> {
    // Implement the custom rolling logic here, e.g. by overriding isTriggeringEvent
}
```
Then use the custom rolling strategy in the configuration file:
```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/myapp.log</file>
    <rollingPolicy class="com.example.CustomRollingPolicy">
        <!-- Custom configuration -->
    </rollingPolicy>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
```
In addition to rolling log files, it is also a common practice to archive old log files. This can be achieved by periodically moving old log files to an archive directory to prevent them from taking up too much disk space.
Or use a programmatic approach and implement it in Java code:
```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LogArchiver {
    public static void archiveLogFile(String logFileName, String archiveDirectory) {
        String currentDate = LocalDate.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
        File logFile = new File(logFileName);
        File archiveDir = new File(archiveDirectory);

        // Create the archive directory if it does not exist yet
        if (!archiveDir.exists()) {
            archiveDir.mkdirs();
        }

        // Move the log file into the archive directory, appending the current date to its name
        Path sourcePath = logFile.toPath();
        Path targetPath = new File(archiveDir, logFile.getName() + "." + currentDate + ".log").toPath();

        try {
            Files.move(sourcePath, targetPath, StandardCopyOption.REPLACE_EXISTING);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
Calling the `archiveLogFile` method from a periodic task archives the log files.
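A minimal sketch of such a periodic task using a `ScheduledExecutorService` (the paths and the daily interval are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LogArchiveScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the archiving job once a day, starting one day after startup
        scheduler.scheduleAtFixedRate(
                () -> LogArchiver.archiveLogFile("logs/myapp.log", "logs/archive"),
                1, 1, TimeUnit.DAYS);
    }
}
```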
By implementing log rotation and archiving, projects can manage log files more effectively and ensure that the system can maintain good performance over a long period of time. This is not only helpful for troubleshooting, but also helps to comply with compliance requirements.
By choosing the right logging framework, configuring it appropriately, using suitable log levels, and enriching records with contextual information, a project can build a powerful logging system that provides strong support for troubleshooting, performance optimization, and system monitoring. At the same time, real-time monitoring and centralized storage give the team a convenient way to track system status. Meticulous logging is not just a technical practice of project development; it is also an important guarantee of the team's overall efficiency and the project's quality.