Technology Sharing

LVS KeepAlived high availability load balancing cluster

2024-07-12


The high availability architecture in an LVS cluster addresses only the high availability of the scheduler itself.

Implementing master and backup schedulers based on VRRP

High availability (HA) architecture

Main scheduler and backup scheduler (there can be multiple backup schedulers)

While the main scheduler is working normally, the backup is completely redundant (on standby) and does not participate in the operation of the cluster. Only when the main scheduler fails does the backup take over its work. Once the main scheduler recovers, it resumes its role as the entrance to the cluster and the backup returns to the redundant state (depending on the priorities).

Keepalived implements the LVS high availability solution based on the VRRP protocol.

1. Multicast address:

Communication takes place on the multicast address 224.0.0.18; the primary and backup nodes send advertisements to each other to determine whether the other node is alive.

2. The roles of primary and backup are determined by the nodes' priorities.

3. Failover: if the primary server fails, the backup takes over and continues the work; when the primary recovers, it takes the role back and the backup returns to standby.

4. Switching between primary and backup is accomplished by moving the VIP address.
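
You can see these advertisements on the wire by sniffing VRRP traffic on a scheduler (a quick check; ens33 is the interface used in the configuration later in this post):

# Watch VRRP advertisements sent to the multicast address 224.0.0.18
tcpdump -i ens33 -nn vrrp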

Keepalived was designed specifically for LVS, but it is not exclusive to LVS.

Core module: the core of Keepalived, responsible for starting and maintaining the main process and for loading the global configuration file

VRRP module: implements the VRRP protocol and is the main functional module

Check module: responsible for health checks, i.e., checking the status of the real backend servers.

Building on the DR mode experiment from the previous chapter, we add some configuration. This time we use two schedulers, one active and one standby.

First, install keepalived on both schedulers:

yum -y install keepalived
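
It can also help to confirm the install and have the service start at boot:

rpm -q keepalived             # confirm the package is installed
systemctl enable keepalived   # start keepalived automatically at boot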

After installation, we edit the keepalived.conf file:

[root@test1 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_01
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_iptables
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.124.100
    }
}

virtual_server 192.168.124.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.124.40 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    # the second real server, checked the same way as the first
    real_server 192.168.124.50 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Copy the configuration file from the first scheduler to the second scheduler:

scp root@192.168.233.10:/etc/keepalived/keepalived.conf /etc/keepalived/

Then adjust the configuration on the second scheduler: set the state to BACKUP, give it its own router_id, and lower its priority below the master's.
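
On the backup scheduler, the changed parts might look like this (LVS_02 and priority 100 are example values; any priority lower than the master's 120 will do):

router_id LVS_02              # in global_defs: a unique id for this node

vrrp_instance VI_1 {
    state BACKUP              # this node starts as the backup
    interface ens33
    virtual_router_id 51      # must match the master
    priority 100              # lower than the master's 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111        # must match the master
    }
    virtual_ipaddress {
        192.168.124.100
    }
}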

Also add the vrrp_iptables option (shown in global_defs above). With vrrp_strict enabled, keepalived would otherwise install iptables rules that drop traffic to the VIP; vrrp_iptables prevents those rules from blocking client access.
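
As a quick sanity check, you can verify that no blocking rule matching the VIP was installed:

# No DROP rule for the VIP should appear in the filter table
iptables -nL | grep 192.168.124.100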

[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.124.100:80 rr persistent 50
  -> 192.168.124.40:80            Route   1      0          0
  -> 192.168.124.50:80            Route   1      0          0

Check the rules: the virtual server and both real servers are registered, as shown above.

Then restart keepalived on both schedulers.
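
For example, on each scheduler:

systemctl restart keepalived
systemctl status keepalived   # confirm the service is running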

Take a look at the results from the client.
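
A simple test is to request the VIP with curl. Note that because persistence_timeout is set to 50, repeated requests from the same client will stick to one real server for 50 seconds rather than alternating:

# Request the VIP a few times from the client
for i in 1 2 3 4; do curl -s http://192.168.124.100; done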

Let's shut down the main scheduler first.

systemctl stop keepalived.service

The backup scheduler takes over the main job and continues serving.

The VIP address has drifted to the backup scheduler, and the client now reaches the service through it.

The site is still accessible.
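
To confirm the failover, check that the VIP now appears on the backup scheduler's interface (assuming it is also ens33 there):

# On the backup scheduler: the VIP should now be bound to ens33
ip addr show ens33 | grep 192.168.124.100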


Summary

Keepalived has three main modules: core (responsible for starting and maintaining the main process and for loading and parsing the global configuration file), check (the health check module), and vrrp (implements the VRRP protocol).

Keepalived works on top of the VRRP protocol, which groups multiple servers providing the same function into a server group consisting of one master and several backups. The master holds a VIP that provides services to the outside world (other machines on the server's LAN use this VIP as their default route). The master sends multicast advertisements; when the backups stop receiving VRRP packets, they consider the master down and elect a new master from among themselves according to VRRP priority.

When configuring LVS + Keepalived, you usually need to install the related software (such as ipvsadm and keepalived) on both the master and backup nodes and configure the keepalived.conf file. For example, in the master node's configuration file, you need to specify the state as MASTER, the network interface (interface), the virtual router ID (virtual_router_id), the priority (priority), the advertisement interval (advert_int), the authentication information (authentication), and the virtual IP address (virtual_ipaddress). The backup node's configuration is similar, but the state is BACKUP and the priority is usually lower than the master's.

After the configuration is complete, restart the keepalived service to achieve highly available load balancing. When the master node fails, the VIP automatically switches to the backup node to keep the service accessible; after the master recovers, it serves as the main load-balancing node again. The real servers (rs) also need corresponding configuration: for example, in DR mode the VIP must be configured on the lo interface of each rs, as sketched below.
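
As a reminder of what that rs-side setup looks like in DR mode, here is a minimal sketch (the sysctl values are the usual ARP suppression settings; adapt addresses and interface names to your environment):

# On each real server: bind the VIP to lo and suppress ARP replies for it
ip addr add 192.168.124.100/32 dev lo
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2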

In this way, the LVS + Keepalived combination achieves the following: the client accesses the service through the VIP, and requests are distributed according to the configured rules; when the master load-balancing node fails, traffic automatically switches to the backup node so the service stays up; and when an rs node fails, it is automatically removed from the pool and can rejoin the cluster after recovery.

In actual applications, a few points deserve attention: the addresses configured under virtual_ipaddress in the Keepalived configuration file should be in the same network segment as the servers' real IPs; the node with the higher priority value becomes the master; and the smaller the advert_int value, the more frequently the node sends VRRP advertisements. Network environment and server performance should also be considered to keep the whole system stable and efficient.