2024-07-12
The RabbitMQ cluster in this article is built with the rabbitmq_peer_discovery_k8s plugin, which automatically reads node information from the Kubernetes API and forms the cluster. It is deployed as a StatefulSet with three replicas. To keep the data persistent, you can use hostPath plus node affinity, or a PVC; this article uses a PVC for data persistence.
The Kubernetes version is 1.29.4, and storage is provided as shared storage from a self-built Rook-Ceph cluster. On an Alibaba Cloud cluster, data persistence can instead be achieved by mounting cloud disks, NAS, or object storage. The drawback of shared storage is that read/write performance is generally poor.
Save the following configuration to autozx-rabbitmq-config.yaml. It sets the default vhost and the default user/password, and defines the cluster node information (plan the node names before installation).
apiVersion: v1
kind: ConfigMap
metadata:
  name: autozx-rabbitmq-config
  namespace: zx-app
  labels:
    appname: pcauto-zx
    app: autozx-rabbitmq-config
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    cluster_name = autozx-rabbitmq
    listeners.tcp.default = 5672
    default_vhost = /
    default_user = admin
    default_pass = pconline
    default_user_tags.administrator = true
    default_user_tags.management = true
    default_user_tags.custom_tag = true
    channel_max = 1024
    tcp_listen_options.backlog = 2048
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.address_type = hostname
    cluster_formation.node_cleanup.interval = 30
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    queue_master_locator = min-masters
    loopback_users.guest = false
    cluster_formation.k8s.hostname_suffix = .autozx-rabbitmq.zx-app.svc.cluster.local
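With cluster_formation.k8s.address_type = hostname, the peer discovery plugin builds each node name by appending cluster_formation.k8s.hostname_suffix to the pod hostname, so the suffix must match the headless service DNS domain (.&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local). A minimal sketch of the node names that result for this deployment (the values mirror the ConfigMap above; the derivation itself is illustrative):

```python
# Sketch: node names peer discovery derives for a 3-replica StatefulSet.
SERVICE = "autozx-rabbitmq"
NAMESPACE = "zx-app"
# Must equal cluster_formation.k8s.hostname_suffix in rabbitmq.conf:
HOSTNAME_SUFFIX = f".{SERVICE}.{NAMESPACE}.svc.cluster.local"

def node_name(pod_name: str) -> str:
    # rabbit@<pod-hostname><hostname_suffix>
    return f"rabbit@{pod_name}{HOSTNAME_SUFFIX}"

# StatefulSet pods are named <statefulset-name>-0, -1, -2:
nodes = [node_name(f"{SERVICE}-{i}") for i in range(3)]
```

If the suffix and the headless service name disagree, the nodes cannot resolve each other and the cluster never forms.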
Deploy a StatefulSet with 3 replicas, together with its headless Service:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: autozx-rabbitmq
  namespace: zx-app
  labels:
    appname: pcauto-zx
    app: autozx-rabbitmq
spec:
  serviceName: "autozx-rabbitmq"
  replicas: 3
  selector:
    matchLabels:
      app: autozx-rabbitmq
  template:
    metadata:
      labels:
        app: autozx-rabbitmq
    spec:
      containers:
        - name: rabbitmq-server
          image: pcgroup-registry-vpc.cn-shenzhen.cr.aliyuncs.com/public/rabbitmq:3.12.14-management
          imagePullPolicy: IfNotPresent
          env:
            - name: RABBITMQ_ERLANG_COOKIE
              value: "YZSDHWMFSMKEMBDHSGGZ"
            - name: K8S_SERVICE_NAME
              value: autozx-rabbitmq
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: RABBITMQ_USE_LONGNAME
              value: "true"
            - name: RABBITMQ_NODENAME
              value: rabbit@$(POD_NAME).$(K8S_SERVICE_NAME).$(POD_NAMESPACE).svc.cluster.local
          ports:
            - name: http
              containerPort: 15672
            - name: amqp
              containerPort: 5672
          readinessProbe:
            exec:
              command:
                - rabbitmq-diagnostics
                - status
            initialDelaySeconds: 20
            periodSeconds: 60
            timeoutSeconds: 10
          volumeMounts:
            - name: rbmq-data
              mountPath: /var/lib/rabbitmq
            - name: rabbitmq-config-volume
              mountPath: /etc/rabbitmq/
      restartPolicy: Always
      serviceAccountName: rabbitmq-cluster
      terminationGracePeriodSeconds: 30
      volumes:
        - name: rabbitmq-config-volume
          configMap:
            name: autozx-rabbitmq-config
  volumeClaimTemplates:
    - metadata:
        name: rbmq-data
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: example-storageclass
        resources:
          requests:
            storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: autozx-rabbitmq
  namespace: zx-app
  labels:
    appname: pcauto-zx
    app: autozx-rabbitmq
spec:
  ports:
    - port: 5672
  clusterIP: None
  selector:
    app: autozx-rabbitmq
Note: To avoid scheduling multiple RabbitMQ pods onto the same node, you can add the following pod anti-affinity to the pod spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - autozx-rabbitmq
              topologyKey: "kubernetes.io/hostname"
Configure a ServiceAccount, Role, and RoleBinding so that the StatefulSet is authorized to read node information from the Kubernetes API.
Save the following configuration to rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq-cluster
  namespace: zx-app
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-cluster
  namespace: zx-app
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-cluster
  namespace: zx-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq-cluster
subjects:
  - kind: ServiceAccount
    name: rabbitmq-cluster
    namespace: zx-app
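The Role only needs to grant `get` on `endpoints`, which is what the peer discovery plugin reads. As an illustration of what those rules do and do not allow, here is a hypothetical helper (not the real Kubernetes authorizer) that evaluates a rule list:

```python
# Hypothetical RBAC rule evaluator, for illustration only; the authoritative
# check is done by the Kubernetes API server.
def allows(rules, api_group, resource, verb):
    return any(
        api_group in r["apiGroups"]
        and resource in r["resources"]
        and verb in r["verbs"]
        for r in rules
    )

# The rules from the Role manifest above:
role_rules = [{"apiGroups": [""], "resources": ["endpoints"], "verbs": ["get"]}]
```

In a live cluster the same question can be answered with `kubectl auth can-i get endpoints -n zx-app --as=system:serviceaccount:zx-app:rabbitmq-cluster`.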
Configure a Service that exposes AMQP port 5672 and management port 15672.
apiVersion: v1
kind: Service
metadata:
  name: autozx-rabbitmq-manage
  namespace: zx-app
  labels:
    app: autozx-rabbitmq-manage
    appname: pcauto-zx
spec:
  ports:
    - port: 5672
      name: amqp
    - port: 15672
      name: http
  selector:
    app: autozx-rabbitmq
  type: LoadBalancer
Set the RabbitMQ cluster to a 3-node mirrored cluster. After completing step 4.4, you can log in to the management console through the LoadBalancer IP at http://ip:15672, using the default_user and default_pass set in the ConfigMap.
Mirror mode can be configured either in the management console or from the command line.
To set a mirror policy for the demo vhost, run:
rabbitmqctl set_policy -p demo ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
Set the mirror mode for the default vhost /:
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
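The same policies can also be set through the management HTTP API (PUT /api/policies/{vhost}/{name}). The sketch below only builds the request rather than sending it; the host is a placeholder, and `"apply-to": "all"` mirrors the rabbitmqctl default. Note that the default vhost `/` must be percent-encoded as %2F in the URL:

```python
import json
from urllib.parse import quote

# Builds (but does not send) a management-API policy request; "<lb-ip>" is a placeholder.
def policy_request(host: str, vhost: str, name: str, pattern: str, definition: dict):
    # PUT /api/policies/{vhost}/{name}; vhost "/" must be percent-encoded.
    url = f"http://{host}:15672/api/policies/{quote(vhost, safe='')}/{name}"
    body = json.dumps({"pattern": pattern, "definition": definition, "apply-to": "all"})
    return url, body

url, body = policy_request("<lb-ip>", "/", "ha-all", "^",
                           {"ha-mode": "all", "ha-sync-mode": "automatic"})
```

Sending it with an HTTP client (authenticated as default_user/default_pass) is equivalent to the rabbitmqctl set_policy command above.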