Technology Sharing

Critical value treatment-measure recommendation and range calibration based on Kafka + Flink + ES

2024-07-12


📢 Hello everyone, I am 【战神刘玉栋】, with 10+ years of R&D experience, dedicated to consolidating and sharing knowledge across the front-end and back-end stacks. 💗
🌻 I recently moved to CSDN. I will strictly control article quality and never pad things out. Feel free to reach out and exchange ideas. 👍


Preface

This article shares the blogger's company's solution for recommending critical value treatment measures and calibrating critical value ranges.
It is implemented mainly on Kafka + Flink + Elasticsearch. For security reasons, the content is limited to an introduction of the solution; if you would like to discuss it further, please leave a message.
Okay, let's get started.


Background

A critical value is a test or examination result indicating that the patient may be in a life-threatening state. Clinicians need to obtain the result promptly and quickly give the patient effective intervention or treatment to save the patient's life. With critical value information, clinicians can treat patients in life-threatening states in a timely and effective manner, avoiding accidents, serious consequences, and the loss of the best window for rescue.
Formulating and implementing a critical value reporting system can strengthen the initiative, sense of responsibility, and theoretical level of medical technicians, raise their awareness of actively participating in clinical diagnosis, and promote effective communication and cooperation between clinical and medical-technical departments. Critical value management is an important part of hospital management: the rapid identification, confirmation, and release of critical values, their timely receipt, and the monitoring and analysis of the whole process are the goals of information-based management.
At present, most critical value management systems have the following problems:
1. Critical value test items are judged against fixed reference ranges. The upper and lower limits of an item's range cannot be adjusted dynamically, and numerical comparison is simple and crude. Lacking a scientific method for determining and calibrating ranges, these systems frequently raise "false" critical values that do not match clinical reality, heavily disrupting the work of clinical staff;
2. The intervention measures that medical staff record for critical values are simply filled in and fed back to the medical technology department. They are not linked to disease-course or nursing records, and the handling process is neither memorized nor managed, so the same situation often requires repeated work and time and is prone to deviations;
3. Critical values involve many links, and without unified process management it is easy to miss a link and fail to form a complete closed loop. The process also lacks link monitoring, log tracing, and exception handling, as well as a hospital-wide statistical summary page, so there is no way to improve critical value handling at the hospital level;


Purpose of the Invention

The purpose of this patent is to build, on top of Kafka + Flink + Elasticsearch, a solution that scientifically calibrates the critical value judgment range and intelligently recommends treatment measures within full-process critical value management. It addresses the current problems of inaccurate judgment standards, unused treatment-measure data, and an incomplete hospital-wide optimization mechanism. On that basis it optimizes the critical value process, improves handling efficiency, forms complete closed-loop tracking, and ultimately builds a complete full-process critical value management system.
1. Build a handling scheme for "false" critical values to prevent overly frequent reminders and continuously calibrate the reasonable range of each critical value item;
2. Store and memorize the measures used to handle daily critical values, so that when a critical value arrives, medical staff receive prompts that act as a filling assistant;
3. Based on the event-driven mechanism of the message center, build a complete critical value handling process that covers clinical business scenarios as fully as possible. The data center stores complete critical value information and provides closed-loop display and data query interfaces. Closed-loop data from each department is collected and turned into directional indicator data, making it easy to track, analyze, and evaluate critical value indicators regularly, and to supervise each department in discovering and improving its own handling, thereby raising handling efficiency across the hospital.


Specific Plan

This solution implements critical value range calibration and measure recommendation based on Kafka + Flink + Elasticsearch. The specific technical details are as follows.

1. Pre-environment preparation

1. Deploy the Kafka environment. The program introduces the Kafka dependencies and performs the related configuration and integration, defines the "critical value sending" and "critical value feedback" events, and configures the event message input format and XSD validation schema. These two events serve as two Kafka topics; Kafka acts as the message middleware providing the producer/consumer collaboration model (see the topic-creation sketch after this list).
2. Deploy the Elasticsearch environment. The program introduces the Elasticsearch dependencies and performs the related configuration and integration. Several index structures are defined in Elasticsearch to store critical value raw data, associated data, calculation results, and so on, and Elasticsearch's aggregation capabilities are used for statistical analysis.
3. Deploy the Flink environment. The program introduces the Flink dependencies and performs the related configuration and integration. Flink is the connective layer: it consumes the topic messages delivered by Kafka, computes over the data with the relevant APIs, and writes the output to Elasticsearch.
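As a hedged illustration of step 1, the minimal Java sketch below creates the two event topics with Kafka's AdminClient. The broker address, topic names, and partition/replication counts are assumptions for illustration, not values from the actual deployment.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class TopicBootstrap {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed broker address; replace with the real cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // One topic per event type: "critical value sending" and "critical value feedback".
            NewTopic sending = new NewTopic("critical-value-sending", 3, (short) 2);
            NewTopic feedback = new NewTopic("critical-value-feedback", 3, (short) 2);
            admin.createTopics(List.of(sending, feedback)).all().get();
        }
    }
}
```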

2. Core Service Implementation

1. Provide an external message producer interface
Develop a message center producer interface and open it to external systems. The interface serves both the "critical value sending" and "critical value feedback" scenarios.
Its main logic is to validate, parse, and process the message input, then send the message through the Kafka API, using a producer singleton to complete the send.
The target topic is "critical value sending" or "critical value feedback", as in the sketch below.
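A minimal sketch of such a producer interface, assuming plain Java and string payloads; the class name, topic names, and validation rule are illustrative, not the company's actual implementation.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CriticalValueProducer {
    // Producer singleton, created once and reused for all sends.
    private static final KafkaProducer<String, String> PRODUCER = createProducer();

    private static KafkaProducer<String, String> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    /** Validates the payload, then publishes it to the topic matching the scenario. */
    public void send(String eventType, String payload) {
        if (payload == null || payload.isBlank()) {
            throw new IllegalArgumentException("empty critical value message");
        }
        String topic = "sending".equals(eventType)
                ? "critical-value-sending" : "critical-value-feedback";
        PRODUCER.send(new ProducerRecord<>(topic, payload));
    }
}
```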

2. Consume Kafka using Flink
Use Flink's Source API to add Kafka as the data source and subscribe to the two topics "critical value sending" and "critical value feedback" (a minimal sketch follows).
For the pulled messages, attach the message consumption processing logic described in the next step.
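A minimal sketch of the subscription, assuming Flink's KafkaSource connector (Flink 1.15+); the group ID and topic names carry over the assumed values from the earlier sketches.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CriticalValueJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Subscribe to both critical value topics with one source.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka-broker:9092")
                .setTopics("critical-value-sending", "critical-value-feedback")
                .setGroupId("critical-value-core")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> messages =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "critical-value-source");

        // Downstream: parse, enrich, and write to Elasticsearch (steps 3 and 4).
        messages.print();
        env.execute("critical-value-core");
    }
}
```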

3. Using Flink to process streaming data
3.1. Critical value sending flow
The data pulled from the Kafka "critical value sending" topic is processed as follows.
1) Use regular expressions to extract the core attributes of the critical value from the message input, including but not limited to the critical value ID, report ID, patient ID, and visit ID; identify the code and value of each key attribute and assemble them into a Map structure (see the parsing sketch after this list);
2) Using the above key information, extract the report, patient, medical, and historical information of the critical value business from Oracle, extract the critical value interval distribution, treatment measure distribution, and other information from Elasticsearch, and assemble these contents for auxiliary analysis;
3) Use the Flink Transform API to comprehensively process the Map data and obtain relevant results;
4) During the critical value sending process, the operations related to range calibration and measure recommendation are as follows:
a. Obtain the basic information of the item's critical value and determine whether the value falls within the item's current upper and lower critical limits;
b. Obtain the interval distribution of the item's critical values, determine which interval the current value belongs to, update it, and assemble the result;
c. Obtain the distribution of historical treatment measures for the item's critical values, compute and rank them by frequency of occurrence across different dimensions, and assemble the result;
d. Obtain the other items that were also abnormal when this item's critical values occurred historically, and compute the linkage relationship between those items and the current critical value item;
e. Obtain the item's historical values, perform trend analysis, and assemble the result;
f. Obtain other related and extended information about the item's critical value to assist the analysis, and assemble the result;
g. Store the original critical value data in Elasticsearch, assemble all the calculation results from the steps above, and pass them to the next stage, where they act as the filling assistant.
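As a hedged illustration of step 1) above, the parser below pulls key/value attributes out of a raw message with a regular expression and assembles them into a Map. The semicolon-separated key=value wire format and the field names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CriticalValueParser {
    // Assumed wire format: "criticalValueId=123;reportId=R9;patientId=P7;visitId=V1;..."
    private static final Pattern KEY_VALUE = Pattern.compile("(\\w+)=([^;]+)");

    /** Extracts each attribute's code and value into a Map for downstream processing. */
    public static Map<String, String> parse(String message) {
        Map<String, String> attrs = new HashMap<>();
        Matcher m = KEY_VALUE.matcher(message);
        while (m.find()) {
            attrs.put(m.group(1), m.group(2));
        }
        return attrs;
    }
}
```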

3.2. Critical value feedback flow
The data pulled from the Kafka "critical value feedback" topic is processed as follows.
1) Use regular expressions to extract the handling attributes from the message input, including but not limited to the critical value ID, handling method, treatment measures, and handler; identify the code and value of each key attribute and assemble them into a Map structure;
2) Using the above key information, extract information related to the sent critical value from Oracle and Elasticsearch for auxiliary analysis;
3) Use the Flink Transform API to comprehensively process the Map data and obtain relevant results;
4) During critical value handling, the operations related to range calibration and measure recommendation are as follows (a calibration sketch follows this list):
a. If the doctor records a normal intervention for the critical value, the credibility of the trigger range increases. First, the frequency record of the item's critical value is updated. Then the range interval data is updated with the new item value, making the item's critical value interval more precise: if the value falls inside an existing interval, that interval's occurrence count is incremented; if it falls outside all existing intervals, a new interval is added to extend the range and its count is recorded. Finally, the doctor's treatment measures and the critical value are associated and stored in the measure memory index. An index record includes, but is not limited to: the treatment measures used for different items, the interval they belong to, the historical trigger values involved, and the patient's current and historical reports, visits, and critical values.
b. If the doctor flags the critical value as abnormal, for example by clicking the feedback button, the credibility of the trigger range decreases. First, the key information of the critical value is inserted into the feedback-question index; then the related abnormal data is inserted into the range interval index; finally, the treatment measure index is also updated, since feedback questions are themselves a kind of handling. These questions are surfaced on the corresponding statistical analysis page, where a human makes the final decision on whether to change the range.
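The interval-calibration rule in step a can be sketched as below: a confirmed value inside a known interval increments that interval's count, while a value outside all intervals opens a new one. The in-memory types and the ±5% band for new intervals are illustrative assumptions; in the solution itself this data lives in the ES interval index.

```java
import java.util.ArrayList;
import java.util.List;

public class IntervalCalibrator {
    static class Interval {
        double low, high;
        long hits;
        Interval(double low, double high, long hits) {
            this.low = low; this.high = high; this.hits = hits;
        }
    }

    private final List<Interval> intervals = new ArrayList<>();

    /** Called when a doctor confirms a critical value with a normal intervention. */
    public void recordConfirmedValue(double value) {
        for (Interval iv : intervals) {
            if (value >= iv.low && value <= iv.high) {
                iv.hits++; // inside a known interval: raise its credibility
                return;
            }
        }
        // Outside every known interval: extend the range with a new interval
        // (the +/-5% band around the value is an arbitrary illustrative choice).
        intervals.add(new Interval(value * 0.95, value * 1.05, 1));
    }
}
```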

4. Use Flink to output to Elasticsearch
Use the Flink Elasticsearch API to add an ElasticsearchSink as the result output, and store the results computed in the previous step into different ES index structures according to their dimensions (a minimal sink sketch follows).
The indexes include, but are not limited to: the critical value raw data index, the critical value extended data index, the critical value interval frequency distribution index, and the critical value treatment measure distribution index.
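A minimal sink sketch, assuming the flink-connector-elasticsearch7 module (Flink 1.15+); the host, index name, and Map-typed elements are illustrative.

```java
import java.util.Map;

import org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkBuilder;
import org.apache.flink.connector.elasticsearch.sink.ElasticsearchSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

public class ResultSink {
    /** Attaches an ElasticsearchSink that indexes each result record. */
    public static void attach(DataStream<Map<String, Object>> results) {
        ElasticsearchSink<Map<String, Object>> sink =
                new Elasticsearch7SinkBuilder<Map<String, Object>>()
                        .setHosts(new HttpHost("es-node", 9200, "http"))
                        .setEmitter((element, context, indexer) -> {
                            // In practice the target index is chosen by dimension;
                            // a single raw-data index is shown here for brevity.
                            indexer.add(Requests.indexRequest()
                                    .index("critical-value-raw")
                                    .source(element));
                        })
                        .build();
        results.sinkTo(sink);
    }
}
```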

3. Interaction with the User Portal

3.1. Critical value sending flow
After the core service finishes processing the data, it can call the back-end interface of the user's unified portal and use WebSocket to push the message from back end to front end; alternatively, the core service can integrate WebSocket directly and interact with the portal front end. Either way, the critical value pop-up is finally displayed on the user portal front end (a push sketch follows the list below).
In the pop-up, besides the basic, report, and patient information of the critical value, doctors can fill in intervention measures and submit them, or click the feedback button.
The pop-up window will display the following filling assistant information:
a. The frequency of each treatment measure used for the item's critical values, which doctors can quickly click to reuse;
b. The frequency of the item's critical values across different trigger intervals, as a reference when doctors confirm the critical value;
c. A comparative chart of the item's historical trend, plus how often other items showed similar abnormalities when this item's critical values occurred;
d. Other historical reference information, such as medical history, report history, and critical value history.
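One possible shape for the back-end push, assuming Spring's STOMP WebSocket support; the destination name and payload type are assumptions, and a raw WebSocket session push would work just as well.

```java
import org.springframework.messaging.simp.SimpMessagingTemplate;

public class CriticalValueNotifier {
    private final SimpMessagingTemplate template;

    public CriticalValueNotifier(SimpMessagingTemplate template) {
        this.template = template;
    }

    /** Pushes a critical value alert to the responsible doctor's portal session. */
    public void push(String doctorId, Object popupPayload) {
        // The portal front end subscribes to /user/queue/critical-values
        // and renders the pop-up with the filling assistant information.
        template.convertAndSendToUser(doctorId, "/queue/critical-values", popupPayload);
    }
}
```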
3.2. Critical value handling flow
When the doctor handles the full-screen critical value pop-up, the unified portal back-end interface is called, which produces a message to Kafka's "critical value feedback" topic and enters the core service's handling flow.
Doctors have two handling modes: fill in intervention measures and submit them, or click the feedback button. Either ends the handling process.

4. Developing a critical value analysis module

4.1. Set up a scheduled service and use aggregation functions to post-process the ES data; the results are stored in a new index space (an aggregation sketch follows this section).
4.2. Develop a front-end BI interface to display critical value indicator data before and after processing and to provide analysis prompts.
1) Provide analysis and judgment results for the reference range, with the final confirmation of any range change made manually.
2) Provide recommendation analysis for treatment measures: for each measure, provide statistics, guidance, and analysis based on usage count, treatment path, corresponding value range, historical item trends, related concurrent items, historical diagnosis information, and so on.
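As a hedged sketch of the scheduled aggregation in 4.1, the job below counts how often each treatment measure appears, using a terms aggregation through the Elasticsearch high-level REST client; the index and field names are assumptions.

```java
import java.io.IOException;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class MeasureAggregationJob {
    private final RestHighLevelClient client;

    public MeasureAggregationJob(RestHighLevelClient client) {
        this.client = client;
    }

    /** Runs on a schedule; buckets treatment measures by usage count. */
    public SearchResponse aggregateMeasures() throws IOException {
        SearchSourceBuilder source = new SearchSourceBuilder()
                .size(0) // aggregation buckets only, no document hits
                .aggregation(AggregationBuilders.terms("by_measure").field("measureCode"));
        SearchRequest request = new SearchRequest("critical-value-measures").source(source);
        // The caller re-indexes the returned bucket counts into the new index space.
        return client.search(request, RequestOptions.DEFAULT);
    }
}
```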

5. Develop a critical value medical management module

1) Define critical value evaluation indicators: the medical department defines indicators such as handling rate (%), average handling time (h), timely handling rate / 24-hour handling rate (%), six-hour patient follow-up rate (%), and total number of critical values handled;
2) Critical value statistics: the medical department regularly supervises, inspects, tracks, and analyzes how each department implements the critical value management system, and regularly evaluates the timeliness of reporting and handling. The critical value component of the medical portal aggregates and displays each department's data, shows rankings and detailed indicator data by department and doctor, supports exporting reports for an at-a-glance view, regularly compares hospital-wide indicators across points in time, and drives a hospital-wide critical value improvement plan;
3) Critical value feedback: the medical department updates and adjusts critical value items and thresholds according to actual clinical conditions, incorporates departmental critical value management into departmental medical quality assessment, and builds a feedback component on the staff portal to uniformly collect, analyze, and process this feedback.


Critical value sending process

Process: LIS → Data Center → Producer → Kafka Source → Flink → Processing → Elasticsearch
1. When medical technicians discover a critical value, the examiner must first confirm that the instruments, equipment, and inspection process are normal, and check whether the specimen is wrong, the operation is incorrect, or the instrument transmission failed. After confirming that every link of the clinical and testing process is normal, a re-examination is performed promptly (the imaging department may decide based on actual conditions whether a re-examination is needed). If the two results agree, the test result can be issued.
2. After the inspection (testing) system issues a critical value, the hospital integration platform calls the data center's critical value sending interface. The critical value is stored first, and then the message center producer interface of this solution is called to deliver a message to the Kafka "critical value sending" topic;
3. The core service uses Flink to subscribe to the critical value sending topic, processes the received data with the Flink Transform API into the required form, and then adds an ElasticsearchSink through the Flink Elasticsearch API to output the results to the relevant Elasticsearch indexes;
4. After the core service finishes processing, it calls the back-end interface of the user's unified portal and pushes the message to the front end over WebSocket, or integrates WebSocket directly to interact with the portal front end. Finally, the critical value pop-up is displayed on the user portal front end;
5. At this point, the sending process ends.
(Figure: critical value sending process)


Critical value processing flow

Process: Portal → Data Center → Producer → Kafka Source → Flink → Processing → Elasticsearch
1. When a doctor using the portal receives a critical value notification, it is displayed in a pop-up window;
2. The doctor makes a judgment based on the patient's critical value, report, and consultation information. If the result is confirmed as a genuine critical value, the doctor fills in the corresponding intervention measures, which triggers the data center's handling logic;
3. The data center first updates the critical value information, then calls the inspection (testing) system's critical value interface back through the in-hospital integration platform, and then calls the message center producer interface of this solution to deliver a message to the Kafka "critical value feedback" topic;
4. The core service uses Flink to subscribe to that topic, processes the received data with the Flink Transform API into the required form, and then adds an ElasticsearchSink through the Flink Elasticsearch API to output the results to the relevant Elasticsearch indexes;
5. If in step 2 the doctor determines that the critical value is a false alarm, clicking the feedback button calls the error reporting interface of the medical management module for subsequent analysis;
(Figure: critical value handling process)


Solution Features

1. The Kafka + Flink combination leverages big-data stream processing to give the transmission and handling of critical values highly reliable, efficient, real-time, and highly scalable data processing, ultimately achieving range calibration and measure recommendation;
2. Elasticsearch stores the various index calculation results, and ES aggregations then perform secondary analysis and processing on them, greatly improving the scalability and reusability of the overall solution;
3. The message center's event-driven mechanism is applied to the critical value scenario: message events are established for the key nodes of the closed-loop process, and subscription services are bound to events through dynamic subscription, keeping the process clear and pluggable. With the message center as the hub, a complete handling process is established that covers the actual business scenario as fully as possible, improving business coverage and personnel participation.


Closing Statement

The above introduced the blogger's company's solution, "Critical value treatment-measure recommendation and range calibration based on Kafka + Flink + ES".
💗 Follow-up posts will gradually share hands-on experience from real enterprise development. Feel free to contact the blogger if you would like to discuss.