2024-07-12
Official documentation: https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.html
This recipe deploys MinIO in a Multi-Node Multi-Drive (MNMD) or “distributed” configuration. MNMD deployments provide enterprise-grade performance, availability, and scalability, and are the recommended topology for all production workloads.
MNMD deployments support erasure coded configurations that can tolerate the loss of up to half of the nodes or drives in the deployment while continuing to serve read operations.
Cluster deployment requirements
Node and disk planning is as follows; each MinIO node has four data disks:
Node Name | Node IP | MinIO Disks | Mount Points | Operating System |
---|---|---|---|---|
minio1.example.com | 192.168.72.51 | /dev/sdb /dev/sdc /dev/sdd /dev/sde | /var/lib/minio/data1 /var/lib/minio/data2 /var/lib/minio/data3 /var/lib/minio/data4 | Ubuntu22.04 |
minio2.example.com | 192.168.72.52 | /dev/sdb /dev/sdc /dev/sdd /dev/sde | /var/lib/minio/data1 /var/lib/minio/data2 /var/lib/minio/data3 /var/lib/minio/data4 | Ubuntu22.04 |
minio3.example.com | 192.168.72.53 | /dev/sdb /dev/sdc /dev/sdd /dev/sde | /var/lib/minio/data1 /var/lib/minio/data2 /var/lib/minio/data3 /var/lib/minio/data4 | Ubuntu22.04 |
minio4.example.com | 192.168.72.54 | /dev/sdb /dev/sdc /dev/sdd /dev/sde | /var/lib/minio/data1 /var/lib/minio/data2 /var/lib/minio/data3 /var/lib/minio/data4 | Ubuntu22.04 |
lb1.example.com | 192.168.72.55 | - | - | Ubuntu22.04 |
lb2.example.com | 192.168.72.56 | - | - | Ubuntu22.04 |
VIP | 192.168.72.100 | - | - | - |
Note: the VIP address resolves to the domain name minio.example.com, which serves as the unified entry point:
http://minio.example.com
http://minio.example.com/minio/ui
The cluster architecture: clients access the VIP, which fronts the two nginx + keepalived load balancers; the load balancers distribute requests across the four MinIO nodes.
Set the hostname on each MinIO node (run the matching command on the corresponding node):
hostnamectl set-hostname minio1.example.com
hostnamectl set-hostname minio2.example.com
hostnamectl set-hostname minio3.example.com
hostnamectl set-hostname minio4.example.com
Configure hosts resolution on all MinIO nodes:
cat >/etc/hosts<<EOF
192.168.72.51 minio1.example.com
192.168.72.52 minio2.example.com
192.168.72.53 minio3.example.com
192.168.72.54 minio4.example.com
EOF
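Optionally, verify that every hostname resolves on each node; this is just a quick sanity check using getent:
for h in minio1.example.com minio2.example.com minio3.example.com minio4.example.com; do
  getent hosts $h
done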
Configure time synchronization on all MinIO nodes. Multi-node MinIO deployments must keep time and date synchronized to maintain stable inter-node operations.
apt install -y chrony
systemctl enable --now chrony
timedatectl set-timezone Asia/Shanghai
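As an optional check, confirm on each node that chrony is actually synchronized to a time source:
chronyc tracking
timedatectl status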
Each node has five disks: sda is the system disk, and sdb through sde are the four data disks:
root@minio1:~# lsblk -d -n -o NAME | grep '^sd'
sda
sdb
sdc
sdd
sde
On each node, create four directories to mount the four drives:
sudo mkdir -p /var/lib/minio/data1
sudo mkdir -p /var/lib/minio/data2
sudo mkdir -p /var/lib/minio/data3
sudo mkdir -p /var/lib/minio/data4
On each node, format the four data disks as XFS and assign a label to each:
sudo mkfs.xfs /dev/sdb -L DISK1
sudo mkfs.xfs /dev/sdc -L DISK2
sudo mkfs.xfs /dev/sdd -L DISK3
sudo mkfs.xfs /dev/sde -L DISK4
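If you want to confirm the filesystems and labels were created as expected before editing /etc/fstab, blkid will show them:
sudo blkid /dev/sdb /dev/sdc /dev/sdd /dev/sde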
Configure automatic mounting
cat >>/etc/fstab<<EOF
LABEL=DISK1 /var/lib/minio/data1 xfs defaults,noatime 0 2
LABEL=DISK2 /var/lib/minio/data2 xfs defaults,noatime 0 2
LABEL=DISK3 /var/lib/minio/data3 xfs defaults,noatime 0 2
LABEL=DISK4 /var/lib/minio/data4 xfs defaults,noatime 0 2
EOF
Mount all file systems defined in /etc/fstab that are not yet mounted:
sudo mount -av
Confirm that the file systems are mounted correctly:
root@minio1:~# df -hT
......
/dev/sdb xfs 100G 746M 100G 1% /var/lib/minio/data1
/dev/sdc xfs 100G 746M 100G 1% /var/lib/minio/data2
/dev/sdd xfs 100G 746M 100G 1% /var/lib/minio/data3
/dev/sde xfs 100G 746M 100G 1% /var/lib/minio/data4
Install MinIO using the deb package on each node
wget https://dl.min.io/server/minio/release/linux-amd64/archive/minio_20240704142545.0.0_amd64.deb -O minio.deb
sudo dpkg -i minio.deb
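To confirm the package installed correctly, print the server version; the version string should match the .deb you downloaded:
minio --version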
Use the groupadd and useradd commands to create the MinIO user and group, and give them ownership of the data directories:
groupadd -r minio-user
useradd -M -r -g minio-user minio-user
chown -R minio-user:minio-user /var/lib/minio
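Optionally verify that the user was created and that the data directories are owned by it:
id minio-user
ls -ld /var/lib/minio/data{1..4}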
Create the environment file /etc/default/minio. The minio.service unit sources this file for all environment variables used by the MinIO service.
cat >/etc/default/minio<<EOF
MINIO_VOLUMES="http://minio{1...4}.example.com:9000/var/lib/minio/data{1...4}/minio"
MINIO_OPTS="--console-address :9001"
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=Minio@123456
MINIO_BROWSER_REDIRECT_URL="http://minio.example.com/minio/ui"
EOF
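The deb package ships a minio.service unit. If you also want MinIO to start automatically after a reboot, enable the unit (optional but usually desirable):
sudo systemctl enable minio.service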
Issue the following command on each node in the deployment to start the MinIO service. Run it on all nodes at roughly the same time.
sudo systemctl restart minio.service
Confirm that the service is online and functioning properly using the following command:
sudo systemctl status minio.service
journalctl -f -u minio.service
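You can also probe each node's liveness endpoint directly; an HTTP 200 response means the server is up:
curl -I http://minio1.example.com:9000/minio/health/live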
Open a browser and go to port 9001 on any node to open the MinIO console login page:
http://minio1.example.com:9001
Log in with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values defined in /etc/default/minio.
You can use the MinIO console to perform general management tasks such as identity and access management, metrics and log monitoring, or server configuration. Each MinIO server includes its own embedded MinIO console.
Official documentation: https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
Use nginx and keepalived to provide load balancing. Prepare two servers as load-balancer nodes.
Configure the host name on each lb node
hostnamectl set-hostname lb1.example.com
hostnamectl set-hostname lb2.example.com
Configure hosts resolution on each lb node
cat >/etc/hosts<<EOF
192.168.72.55 lb1.example.com
192.168.72.56 lb2.example.com
192.168.72.51 minio1.example.com
192.168.72.52 minio2.example.com
192.168.72.53 minio3.example.com
192.168.72.54 minio4.example.com
EOF
Install nginx and keepalived on both LB nodes:
apt install -y nginx keepalived
Create the nginx configuration file. Adjust the upstream server addresses and the listen port for your environment, and pay particular attention to the server_name parameter:
cat > /etc/nginx/conf.d/minio-lb.conf <<'EOF'
upstream minio_s3 {
    least_conn;
    server minio1.example.com:9000;
    server minio2.example.com:9000;
    server minio3.example.com:9000;
    server minio4.example.com:9000;
}

upstream minio_console {
    least_conn;
    server minio1.example.com:9001;
    server minio2.example.com:9001;
    server minio3.example.com:9001;
    server minio4.example.com:9001;
}

server {
    listen 80;
    listen [::]:80;
    server_name minio.example.com;

    # Allow special characters in headers
    ignore_invalid_headers off;
    # Allow any size file to be uploaded.
    # Set to a value such as 1000m; to restrict file size to a specific value
    client_max_body_size 0;
    # Disable buffering
    proxy_buffering off;
    proxy_request_buffering off;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 300;
        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;

        proxy_pass http://minio_s3; # This uses the upstream directive definition to load balance
    }

    location /minio/ui/ {
        rewrite ^/minio/ui/(.*) /$1 break;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-NginX-Proxy true;

        # This is necessary to pass the correct IP to be hashed
        real_ip_header X-Real-IP;

        proxy_connect_timeout 300;

        # To support websockets in MinIO versions released after January 2023
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Some environments may encounter CORS errors (Kubernetes + Nginx Ingress)
        # Uncomment the following line to set the Origin request header to an empty string
        # proxy_set_header Origin '';

        chunked_transfer_encoding off;

        proxy_pass http://minio_console; # This uses the upstream directive definition to load balance
    }
}
EOF
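Before restarting nginx, validate the configuration syntax:
sudo nginx -t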
Then start the nginx service:
systemctl restart nginx
Create the keepalived configuration file. Adjust the interface and virtual_ipaddress parameters to match your environment; both nodes use the same configuration:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id minio
    vrrp_version 2
    vrrp_garp_master_delay 1
    script_user root
    enable_script_security
}

vrrp_script chk_nginx {
    script "/usr/bin/killall -0 nginx"
    timeout 3
    interval 3   # check every 3 seconds
    fall 2       # require 2 failures for KO
    rise 2       # require 2 successes for OK
}

vrrp_instance lb-minio {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.72.100
    }
    track_script {
        chk_nginx
    }
}
EOF
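keepalived 2.x (the version shipped with Ubuntu 22.04) can test the configuration before the service is restarted; if your build does not support this flag, skip this step:
keepalived --config-test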
Start the keepalived service:
systemctl restart keepalived
Check that the VIP address has been assigned:
root@lb1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:9a:92:75 brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 192.168.72.55/24 brd 192.168.72.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.72.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe9a:9275/64 scope link
valid_lft forever preferred_lft forever
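To confirm failover works, stop nginx on the node currently holding the VIP and watch the address move to the other load balancer (a rough sketch of the test; restore nginx afterwards):
# on lb1, which currently holds the VIP
systemctl stop nginx
# on lb2, the VIP should appear within a few seconds
ip addr show ens33 | grep 192.168.72.100
# restore lb1 when the test is done
systemctl start nginx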
On any machine that needs to access MinIO, configure hosts resolution:
echo "192.168.72.100 minio.example.com" >>/etc/hosts
Access the MinIO console in a browser:
http://minio.example.com/minio/ui/
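From the same machine you can also check the S3 endpoint through the load balancer; an HTTP 200 response means the whole path (VIP -> nginx -> MinIO) is working:
curl -I http://minio.example.com/minio/health/live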
Metrics information can also be viewed in the console.
Install the MinIO client (mc) on any machine:
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
mv mc /usr/local/bin/
Configure the MinIO client (set an alias for the cluster):
mc alias set myminio http://minio.example.com minioadmin Minio@123456
View the cluster status through the admin API:
root@ubuntu:~# mc admin info myminio
●  minio1.example.com:9000
   Uptime: 25 minutes
   Version: 2024-07-04T14:25:45Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minio2.example.com:9000
   Uptime: 25 minutes
   Version: 2024-07-04T14:25:45Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minio3.example.com:9000
   Uptime: 25 minutes
   Version: 2024-07-04T14:25:45Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  minio4.example.com:9000
   Uptime: 25 minutes
   Version: 2024-07-04T14:25:45Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

Pools:
   1st, Erasure sets: 1, Drives per erasure set: 16

16 drives online, 0 drives offline
root@ubuntu:~#
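As a final smoke test, create a bucket and upload an object through the load-balanced endpoint, and optionally inspect the erasure-code parity in use (the bucket name test-bucket is only an example):
mc mb myminio/test-bucket
mc cp /etc/hostname myminio/test-bucket/
mc ls myminio/test-bucket
# show the configured storage classes (erasure-code parity)
mc admin config get myminio storage_class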