
[Install RabbitMQ in k8s] Installing RabbitMQ in Kubernetes as a mirrored cluster (PVC version)

2024-07-12


Introduction

The RabbitMQ cluster in this article is built with the rabbitmq_peer_discovery_k8s plug-in, which automatically reads node information from the Kubernetes API and forms the cluster. The deployment is a StatefulSet with three replicas. To persist the data, you can use hostPath plus node affinity, or a PVC; this article describes persisting the data with a PVC.

1. Conditions and environment description

The Kubernetes version is 1.29.4, and persistence is provided by shared storage, here a self-hosted Rook-Ceph cluster. On an Alibaba Cloud cluster, data persistence can instead be achieved by mounting cloud disks, NAS, or object storage. The drawback of shared storage is that read and write performance is generally poor.

4.2. Create configmap configuration

Save the following configuration as autotest-rabbitmq-config.yaml. It sets the default vhost, user, and password, and initializes the cluster node information (plan the node names before installation).

apiVersion: v1
kind: ConfigMap
metadata:
  name: autozx-rabbitmq-config
  namespace: zx-app
  labels:
    appname: pcauto-zx
    app: autozx-rabbitmq-config 
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    cluster_name = autozx-rabbitmq
    listeners.tcp.default = 5672
    
    default_vhost = /
    default_user = admin
    default_pass = pconline

    default_user_tags.administrator = true
    default_user_tags.management = true
    default_user_tags.custom_tag = true

    channel_max = 1024
    tcp_listen_options.backlog = 2048

    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.node_cleanup.interval = 30
    cluster_formation.node_cleanup.only_log_warning = true

    cluster_partition_handling = autoheal

    queue_master_locator = min-masters
    loopback_users.guest = false
    cluster_formation.k8s.hostname_suffix = .autozx-rabbitmq.zx-app.svc.cluster.local
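With the file saved, the ConfigMap can be applied and checked; a sketch, assuming kubectl access to the cluster and the file name from above:

```shell
# Apply the ConfigMap (file name from section 4.2)
kubectl apply -f autotest-rabbitmq-config.yaml

# Confirm it exists, then inspect the rendered rabbitmq.conf
kubectl -n zx-app get configmap autozx-rabbitmq-config
kubectl -n zx-app get configmap autozx-rabbitmq-config \
  -o jsonpath='{.data.rabbitmq\.conf}'
```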
4.3. Create statefulset and service headless configuration

The StatefulSet uses 3 replicas.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: autozx-rabbitmq 
  namespace: zx-app 
  labels:
    appname: pcauto-zx
    app: autozx-rabbitmq
spec:
  serviceName: "autozx-rabbitmq"
  replicas: 3
  selector:
    matchLabels:
      app: autozx-rabbitmq
  template:
    metadata:
      labels:
        app: autozx-rabbitmq
    spec:
      containers:
      - name: rabbitmq-server
        image: pcgroup-registry-vpc.cn-shenzhen.cr.aliyuncs.com/public/rabbitmq:3.12.14-management 
        imagePullPolicy: IfNotPresent
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          value: "YZSDHWMFSMKEMBDHSGGZ"
        - name: K8S_SERVICE_NAME
          value: autozx-rabbitmq 
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: RABBITMQ_USE_LONGNAME
          value: "true"
        - name: RABBITMQ_NODENAME
          value: rabbit@$(POD_NAME).$(K8S_SERVICE_NAME).$(POD_NAMESPACE).svc.cluster.local
        ports:
        - name: http
          containerPort: 15672 
        - name: amqp
          containerPort: 5672 
        readinessProbe:
          exec:
            command:
            - rabbitmq-diagnostics
            - status
          initialDelaySeconds: 20
          periodSeconds: 60
          timeoutSeconds: 10
        volumeMounts:
        - name: rbmq-data
          mountPath: /var/lib/rabbitmq
        - name: rabbitmq-config-volume
          mountPath: /etc/rabbitmq/ 
      restartPolicy: Always 
      serviceAccountName: rabbitmq-cluster
      terminationGracePeriodSeconds: 30
      volumes:
        - name: rabbitmq-config-volume
          configMap:
            name: autozx-rabbitmq-config
  volumeClaimTemplates:
  - metadata:
      name: rbmq-data
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: example-storageclass
      resources:
        requests:
          storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: autozx-rabbitmq
  namespace: zx-app
  labels:
    appname: pcauto-zx
    app: autozx-rabbitmq
spec:
  ports:
  - port: 5672
  clusterIP: None
  selector:
    app: autozx-rabbitmq
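After applying the StatefulSet and headless Service, cluster formation can be verified from any pod; a sketch, assuming the manifests above were saved as statefulset.yaml (a hypothetical file name):

```shell
kubectl apply -f statefulset.yaml

# Wait until all three replicas are ready
kubectl -n zx-app rollout status statefulset/autozx-rabbitmq

# Check that the three nodes discovered each other via the k8s API;
# the output should list three running nodes under "Running Nodes"
kubectl -n zx-app exec autozx-rabbitmq-0 -- rabbitmqctl cluster_status
```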

Note: To avoid scheduling multiple RabbitMQ pods on the same node, you can add the following podAntiAffinity to the pod spec:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - autozx-rabbitmq
              topologyKey: "kubernetes.io/hostname"
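If the cluster has fewer schedulable nodes than replicas, the required rule above leaves pods Pending. A softer variant that only prefers spreading, using the same selector, is sketched below:

```yaml
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "app"
                      operator: In
                      values:
                        - autozx-rabbitmq
                topologyKey: "kubernetes.io/hostname"
```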
4.4. Authorization Configuration

Configure a ServiceAccount, Role, and RoleBinding so that the StatefulSet pods are authorized to read endpoint information from the Kubernetes API.

Save the following as rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq-cluster
  namespace: zx-app
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-cluster
  namespace: zx-app
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-cluster
  namespace: zx-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq-cluster
subjects:
- kind: ServiceAccount
  name: rabbitmq-cluster
  namespace: zx-app
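Whether the binding works can be checked without starting a pod, using kubectl's built-in access review:

```shell
kubectl apply -f rbac.yaml

# Should print "yes": the service account may read endpoints in zx-app
kubectl auth can-i get endpoints \
  --as=system:serviceaccount:zx-app:rabbitmq-cluster -n zx-app
```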
4.5. Create service configuration

Configure a Service that proxies AMQP port 5672 and the management console port 15672.

apiVersion: v1
kind: Service
metadata:
  name: autozx-rabbitmq-manage
  namespace: zx-app
  labels:
    app: autozx-rabbitmq-manage
    appname: pcauto-zx
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 15672
    name: http
  selector:
    app: autozx-rabbitmq
  type: LoadBalancer 
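Once the Service is applied, the externally reachable address can be read from its status; a sketch, assuming the manifest above was saved as service.yaml (a hypothetical file name):

```shell
kubectl apply -f service.yaml

# The EXTERNAL-IP column shows the LoadBalancer address used in section 5;
# AMQP clients connect to <EXTERNAL-IP>:5672, the console is at
# http://<EXTERNAL-IP>:15672
kubectl -n zx-app get svc autozx-rabbitmq-manage
```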

5. Configuration after installation

Set the RabbitMQ cluster to a 3-node mirrored cluster. After completing the steps in section 4, you can log in to the management console through the LoadBalancer IP at http://<ip>:15672, using the default_user and default_pass set in the ConfigMap.

Mirror mode settings (screenshots omitted): in the console, the mirror policy is configured under Admin → Policies; after setting, the policy is shown as applied on the cluster.

Mirror mode can also be set from the command line.
To set a mirror policy for the demo vhost:
rabbitmqctl set_policy -p demo ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
To set the mirror policy for the default vhost /:
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
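The applied policies can be confirmed from any cluster node, for example via kubectl exec:

```shell
# List policies on the default vhost; ha-all should appear with ha-mode: all
kubectl -n zx-app exec autozx-rabbitmq-0 -- rabbitmqctl list_policies

# And for the demo vhost
kubectl -n zx-app exec autozx-rabbitmq-0 -- rabbitmqctl list_policies -p demo
```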

6. Installation Instructions

  • Before using these manifests, replace autozx with the name you need. The namespace used is zx-app; change it to your own namespace. The appname label is pcauto-zx; change it to your own application name, or delete the label entirely.
  • The image address in the configuration points to a private registry; it is the rabbitmq:3.12.14-management image pulled from Docker Hub into a private registry. If the cluster can reach the internet directly, rabbitmq:3.12.14-management can be used as-is; otherwise configure an image address the cluster can reach.
  • The default username is admin and the default password is pconline, as set in the ConfigMap.
  • Change the storage class in the PVC template, example-storageclass, to your own storage class.
  • Since the configuration steps in "5. Configuration after installation" are the same as in an earlier article, that article's screenshots were reused.