Technology Sharing

Yiwen learns to use Helm to deploy a Rancher high-availability cluster

2024-07-12


Rancher cluster architecture diagram

Helm deploys Rancher high availability cluster

Introduction to Helm

Helm is a package management tool for Kubernetes that simplifies the deployment and management of Kubernetes applications; it can be compared to yum on CentOS. Helm has the following basic concepts:

Chart: an installation package managed by Helm, containing all the resources that need to be deployed. A chart can be compared to the rpm file used by CentOS yum.

Release: a deployed instance of a chart. A chart can have multiple releases on a Kubernetes cluster, which means the same chart can be installed multiple times.

Repository: a chart repository, used to publish and store charts.
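
To make the mapping from concepts to commands concrete, here is a minimal sketch (the bitnami repository and nginx chart are just examples):

# Repository: register a source of charts
helm repo add bitnami https://charts.bitnami.com/bitnami
# Chart: a packaged application you can search for in the repository
helm search repo bitnami/nginx
# Release: one named installation of that chart in the cluster
helm install my-nginx bitnami/nginx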

Download: https://github.com/helm/helm/releases

View pod status

kubectl get pods --namespace=kube-system
kubectl get pods --all-namespaces

If you want to delete a workload (here, the leftover tiller-deploy from Helm 2), find its deployment first and then delete it:

kangming@ubuntu26:~$ kubectl get deployment --namespace=kube-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers   1/1     1            1           4h23m
coredns                   2/2     2            2           4h22m
coredns-autoscaler        1/1     1            1           4h22m
metrics-server            1/1     1            1           4h18m
tiller-deploy             0/1     1            0           4h15m
kangming@ubuntu26:~$ kubectl delete deployment tiller-deploy --namespace=kube-system
deployment.apps "tiller-deploy" deleted

If you want to view a pod in detail, you can use kubectl describe:

kubectl describe pod rke-coredns-addon-deploy-job-qz8v6 --namespace=kube-system

helm3 installation

Latest stable version: v3.9.2

Download

https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz

Install

tar -zxvf helm-v3.9.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
sudo chmod +x /usr/local/bin/helm
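
To confirm the binary is in place, you can check the version (exact output differs by build):

which helm
helm version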

Check out the official documentation:

Helm | Docs

Helm | Quick Start Guide

Add a chart repository

helm repo add bitnami https://charts.bitnami.com/bitnami

View the list of installable charts

kangming@ubuntu26:~/rancher$ helm search repo bitnami
NAME                     CHART VERSION   APP VERSION   DESCRIPTION
bitnami/airflow          13.0.2          2.3.3         Apache Airflow is a tool to express and execute...
bitnami/apache           9.1.16          2.4.54        Apache HTTP Server is an open-source HTTP serve...
bitnami/argo-cd          4.0.6           2.4.8         Argo CD is a continuous delivery tool for Kuber...
bitnami/argo-workflows   2.3.8           3.3.8         Argo Workflows is meant to orchestrate Kubernet...

Installing the chart example

# Make sure we have the latest list of charts
helm repo update
# Install a mysql chart as an example
helm install bitnami/mysql --generate-name
NAME: mysql-1659686641
LAST DEPLOYED: Fri Aug  5 16:04:04 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.2.5
APP VERSION: 8.0.30
** Please be patient while the chart is being deployed **
Tip:
  Watch the deployment status using the command: kubectl get pods -w --namespace default
Services:
  echo Primary: mysql-1659686641.default.svc.cluster.local:3306
Execute the following to get the administrator credentials:
  echo Username: root
  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1659686641 -o jsonpath="{.data.mysql-root-password}" | base64 -d)
To connect to your database:
  1. Run a pod that you can use as a client:
     kubectl run mysql-1659686641-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.30-debian-11-r4 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
  2. To connect to primary service (read/write):
     mysql -h mysql-1659686641.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"

In the example above, the bitnami/mysql chart is installed as a release named mysql-1659686641. When we list all pods, we find an additional mysql pod.

You can get basic information about the chart by executing the command helm show chart bitnami/mysql. Or you can execute helm show all bitnami/mysql to get all information about the chart.
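
For reference, these are the show subcommands in question; helm show values (not mentioned above) prints the chart's default configuration:

# Basic metadata for the chart
helm show chart bitnami/mysql
# Default configurable values of the chart
helm show values bitnami/mysql
# Metadata, values and README all at once
helm show all bitnami/mysql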

Every time you run helm install, a new release is created, so a chart can be installed multiple times in the same cluster, and each one can be managed and updated independently.
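
As a small sketch of this, assuming the bitnami repository from earlier (the release names and the auth.rootPassword value are illustrative):

# Two independent releases of the same chart
helm install mysql-a bitnami/mysql
helm install mysql-b bitnami/mysql
# Upgrade only one release; the other is untouched
helm upgrade mysql-a bitnami/mysql --set auth.rootPassword=changeme
# Each release keeps its own revision history
helm history mysql-a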

For more information on how to use helm, see: https://helm.sh/zh/docs/intro/using_helm/

With Helm you can easily see which releases have been deployed:

kangming@ubuntu26:~/rancher$ helm list
NAME               NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
mysql-1659686641   default     1          2022-08-05 16:04:04.411386078 +0800 CST   deployed   mysql-9.2.5   8.0.30

Uninstall a release

kangming@ubuntu26:~/rancher$ helm uninstall mysql-1659686641
release "mysql-1659686641" uninstalled

This command uninstalls mysql-1659686641 from Kubernetes, deleting all resources associated with the release (services, deployments, pods, etc.) and even its release history.

If you pass the --keep-history option to helm uninstall, Helm keeps the release history instead, and you can still view the release information with:

helm status mysql-1659686641
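
A hedged sketch of that flow, continuing the mysql example above:

# Uninstall but keep the release history around
helm uninstall mysql-1659686641 --keep-history
# The release is now reported as uninstalled rather than unknown
helm status mysql-1659686641
helm history mysql-1659686641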

helm Help Documentation

helm get -h

helm search

  • helm search hub searches Artifact Hub, which indexes helm charts from a large number of different repositories.

  • helm search repo searches the repositories you have added to your local helm client (with helm repo add). This command works on local data and does not require an internet connection.

Use the helm install command to install a new helm package. The simplest invocation takes two arguments: the release name you choose and the name of the chart to install.

helm install happy-panda bitnami/wordpress
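
The install can also be customized, either with --set flags or with a values file (values.yaml here is a hypothetical local file; wordpressUsername is one of the chart's documented values):

# Override individual chart values on the command line
helm install happy-panda bitnami/wordpress --set wordpressUsername=admin
# Or supply a whole file of overrides
helm install happy-panda bitnami/wordpress -f values.yaml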

Install Rancher with helm3 (self-signed certificate method)

1. Add the chart repository

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

2. Generate a custom (self-signed) certificate

For reference: Generate a Self-Signed SSL Certificate

The one-click certificate generation script below comes from Rancher's documentation; save it as key.sh.

#!/bin/bash -e
help ()
{
    echo ' ================================================================ '
    echo ' --ssl-domain: the main domain name for the ssl certificate; defaults to www.rancher.local if not specified; can be ignored if the server is accessed by ip;'
    echo ' --ssl-trusted-ip: normally an ssl certificate only trusts requests for the domain name; if the server also needs to be reached by ip, add the extra IPs here, separated by commas;'
    echo ' --ssl-trusted-domain: to allow access via additional domain names, add them here (SSL_TRUSTED_DOMAIN), separated by commas;'
    echo ' --ssl-size: ssl key size in bits, default 2048;'
    echo ' --ssl-cn: country code (2-letter code), default CN;'
    echo ' usage example:'
    echo ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ '
    echo ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
    echo ' ================================================================'
}
case "$1" in
    -h|--help) help; exit;;
esac
if [[ $1 == '' ]];then
    help;
    exit;
fi
CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done
# CA settings
CA_DATE=${CA_DATE:-3650}
CA_KEY=${CA_KEY:-cakey.pem}
CA_CERT=${CA_CERT:-cacerts.pem}
CA_DOMAIN=cattle-ca
# ssl settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
SSL_DATE=${SSL_DATE:-3650}
SSL_SIZE=${SSL_SIZE:-2048}
## country code (2-letter code), default CN
CN=${CN:-CN}
SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt
echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m | Generating SSL Cert |       \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"
if [[ -e ./${CA_KEY} ]]; then
    echo -e "\033[32m ====> 1. Existing CA key found; backing up "${CA_KEY}" as "${CA_KEY}"-bak, then recreating it \033[0m"
    mv ${CA_KEY} "${CA_KEY}"-bak
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
else
    echo -e "\033[32m ====> 1. Generating a new CA key ${CA_KEY} \033[0m"
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
fi
if [[ -e ./${CA_CERT} ]]; then
    echo -e "\033[32m ====> 2. Existing CA certificate found; backing up "${CA_CERT}" as "${CA_CERT}"-bak, then recreating it \033[0m"
    mv ${CA_CERT} "${CA_CERT}"-bak
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
else
    echo -e "\033[32m ====> 2. Generating a new CA certificate ${CA_CERT} \033[0m"
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
fi
echo -e "\033[32m ====> 3. Generating the openssl config file ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM
if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} || -n ${SSL_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
        echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done
    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
            echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi
echo -e "\033[32m ====> 4. Generating the server SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}
echo -e "\033[32m ====> 5. Generating the server SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}
echo -e "\033[32m ====> 6. Generating the server SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}
echo -e "\033[32m ====> 7. Certificates created \033[0m"
echo
echo -e "\033[32m ====> 8. Printing the results in YAML format \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/  /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/  /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/  /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/  /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo
echo -e "\033[32m ====> 9. Appending the CA certificate to the cert file \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo
echo -e "\033[32m ====> 10. Renaming the server certificate \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt

Run it

bash ./key.sh --ssl-domain=rancher.k8s-test.com --ssl-size=2048 --ssl-date=3650

The generated files (cacerts.pem, tls.key, tls.crt, and so on) end up in the current directory.
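
As a quick sanity check (run in the directory where key.sh was executed), you can inspect the SANs and verify the server certificate against the generated CA:

# Show the subject alternative names embedded in the server certificate
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"
# Verify that tls.crt really was signed by cacerts.pem
openssl verify -CAfile cacerts.pem tls.crt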

3. Create the cattle-system namespace

kubectl create namespace cattle-system

4. Create the secret for the service certificate and private key

kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key

If you need to replace the certificate, you can use kubectl -n cattle-system delete secret tls-rancher-ingress to delete the tls-rancher-ingress secret, and then create a new one with the command above. If you are using a certificate issued by a private CA, the new certificate must be issued by the same CA as the current one.
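
Spelled out, the replacement flow described above looks like this:

# Remove the old secret...
kubectl -n cattle-system delete secret tls-rancher-ingress
# ...and recreate it from the new certificate and key
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key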

5. Create the CA certificate secret

kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem

6. Install rancher

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.k8s-test.com \
  --set bootstrapPassword=admin \
  --set ingress.tls.source=secret \
  --set privateCA=true
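
Once the install command returns, it can be useful to double-check how the release was configured (standard helm commands, shown here as a convenience):

# List the release in its namespace
helm list -n cattle-system
# Show the values the rancher release was installed with
helm get values rancher -n cattle-system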

7. Check the status and wait for the rollout to succeed and all replicas to be ready

kangming@ubuntu26:~$ kubectl -n cattle-system rollout status deploy/rancher
deployment "rancher" successfully rolled out
kangming@ubuntu26:~$ kubectl -n cattle-system get deploy rancher
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
rancher   3/3     3            3           40m

Check the pod status. Once the rancher pods are ready, you can open rancher.k8s-test.com in a browser; this domain name can be mapped to the load-balancing node. Because the installation above starts a pod on every worker node by default, the nginx configuration below balances across ports 80 and 443 of the three nodes.

kubectl get pods --all-namespaces
# or
kubectl get pods -n cattle-system
# Check the status of a single rancher pod
kubectl describe pod rancher-ff955865-29ljr --namespace=cattle-system
# Or describe all rancher pods at once
kubectl -n cattle-system describe pods -l app=rancher

The hostname above should resolve to the load-balancing node, which would normally be a VIP managed by keepalived. For convenience, only one LB node is used here: the domain name is resolved directly to the .24 node, and load balancing is then done by nginx on that node.
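
On a client machine this resolution can be done with a simple hosts entry; the LB node's IP is left as a placeholder here:

# /etc/hosts on the client; replace <LB_NODE_IP> with the IP of the .24 load-balancing node
<LB_NODE_IP>   rancher.k8s-test.com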

Troubleshooting

View the logs of the Rancher pod

kubectl get pods -n cattle-system
kubectl -n cattle-system logs -f rancher-5d9699f4cf-72wgp

Configuring Load Balancing

Configure this on the .24 load-balancing node, and copy the certificates generated by the script to that node.

sudo vi /etc/nginx/nginx.conf

stream {
    upstream rancher_servers_http {
        least_conn;
        server 192.168.43.26:80 max_fails=3 fail_timeout=5s;
        server 192.168.43.27:80 max_fails=3 fail_timeout=5s;
        server 192.168.43.28:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server 192.168.43.26:443 max_fails=3 fail_timeout=5s;
        server 192.168.43.27:443 max_fails=3 fail_timeout=5s;
        server 192.168.43.28:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
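
After editing the configuration, validate it and reload nginx (this assumes nginx was built with the stream module, the stream block sits at the top level of nginx.conf, and the host uses systemd):

# Check the configuration syntax, then apply it without downtime
sudo nginx -t
sudo systemctl reload nginx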

Access: https://rancher.k8s-test.com

The bootstrap password is admin and the login account is admin; on first login a new, randomly generated password is set (here: 1BgV0yLx19YkIhOv).

Click continue to enter the management page.

Install Rancher with helm3 (Rancher-generated certificate via cert-manager)

For the prerequisites, refer to the previous section: install the k8s cluster through rke and prepare the helm environment.

1. Add helm repository

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

2. Create a namespace

kubectl create namespace cattle-system

3. Select Rancher-generated TLS certificate for certificate management

4. Install cert-manager. This is only required if you choose the Rancher-generated TLS certificate method.

# If you have installed the CRDs manually, instead of with the `--set installCRDs=true` option added to your
# Helm install command, you should upgrade your CRD resources before upgrading the Helm chart:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.1

Verify that cert-manager is installed correctly

kubectl get pods --namespace cert-manager

5. Install rancher, using a made-up domain name (rancher.my.org) for the hostname; it will be mapped to the node IPs via the hosts file later.

helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set bootstrapPassword=admin

6. Wait for rollout to succeed

kangming@ubuntu26:~$ kubectl -n cattle-system rollout status deploy/rancher
deployment "rancher" successfully rolled out
kangming@ubuntu26:~$ kubectl -n cattle-system get deploy rancher
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
rancher   3/3     3            3           40m

After the rollout succeeds, map the domain name rancher.my.org to any one of the three nodes for testing (see the hosts entry sketch below). If the Rancher page loads, the installation itself is fine; all that remains is to configure a unified load-balancing entry point. Note that you must access Rancher by domain name; accessing it by IP will not show the page correctly.
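
A possible hosts entry for that quick test, pointing the domain at one of the three nodes from the nginx configuration below:

# /etc/hosts on the test client; any one of the three node IPs will do
192.168.43.26   rancher.my.org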

Configure the load balancing entry

After manually verifying that Rancher is reachable through each node, configure nginx load balancing. Plain layer-4 (TCP) forwarding is enough, so no certificate needs to be configured on the load balancer.

sudo vi /etc/nginx/nginx.conf

stream {
    upstream rancher_servers_http {
        least_conn;
        server 192.168.43.26:80 max_fails=3 fail_timeout=5s;
        server 192.168.43.27:80 max_fails=3 fail_timeout=5s;
        server 192.168.43.28:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        least_conn;
        server 192.168.43.26:443 max_fails=3 fail_timeout=5s;
        server 192.168.43.27:443 max_fails=3 fail_timeout=5s;
        server 192.168.43.28:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}

Configure the client's hosts file, then test that Rancher can be accessed normally through the nginx load-balancer entry point.
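
One way to verify the whole chain from the command line, without editing the hosts file at all (the LB IP is a placeholder):

# Force rancher.my.org to resolve to the load balancer and check that Rancher answers over HTTPS
curl -kv --resolve rancher.my.org:443:<LB_IP> https://rancher.my.org/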