Build a simple and easy-to-use high availability storage using Juicefs and Minio.

Introduction#

Recently, I discovered that JuiceFS, a distributed file system that uses S3-compatible object storage as its backend, can be combined with Minio to build a simple and practical high-availability storage solution. So, I decided to document the installation and deployment process, as well as the pitfalls you may run into along the way.

Target Audience#

  • Users who have a Linux server with multiple disks (or partitions). Direct access to physical disks is optimal, but virtual disks and partitions can also be used to build the storage.

  • Users who have a certain level of understanding of Docker.

  • Readers who want to build a home high-availability storage solution.

Comparison with Other Solutions#

As we all know, there are various high-availability storage solutions available, such as GlusterFS, Ceph, and ZFS, which can be used in single-node or cluster configurations.

Among these solutions, GlusterFS and HDFS require multiple replicas and nodes, which is too costly for home users. Ceph clusters are likewise expensive to maintain, even though they can switch between multi-replica and erasure-coding modes as requirements change. That is why, in recent years, I settled on Minio: an open-source, S3-compatible object storage that uses erasure coding with read/write quorums, and that is easy to deploy and to recover data from.

Installing Minio#

I have a 12-disk server that I use as the Minio server.

Installing the System#

You can install any Linux distribution, such as Ubuntu/Debian/CentOS/SUSE.

Installing Docker#

You can refer to the official Docker documentation for the installation method.
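
Alternatively, Docker's official convenience script works on most distributions (worth reviewing before piping it to a shell):

curl -fsSL https://get.docker.com | sh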

Preparing Your Disks (Partitions)#

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 279.4G  0 disk
|-sda1   8:1    0 278.4G  0 part /
|-sda2   8:2    0     1K  0 part
`-sda5   8:5    0   976M  0 part [SWAP]
sdb      8:16   0 279.4G  0 disk
sdc      8:32   0   2.7T  0 disk /mnt/disk1
sdd      8:48   0   2.7T  0 disk /mnt/disk2
sde      8:64   0   2.7T  0 disk /mnt/disk3
sdf      8:80   0   2.7T  0 disk /mnt/disk4
sdg      8:96   0   2.7T  0 disk /mnt/disk6
sdh      8:112  0   2.7T  0 disk /mnt/disk5

Here you can see that I have six data disks; multiple partitions would also be acceptable. Format each disk as XFS with the following command.

$ mkfs.xfs /dev/sdc -L DISK1
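
To format all six disks in one pass, a small loop like the following works. The device-to-label mapping is taken from the lsblk output above (note that sdh is DISK5 and sdg is DISK6); double-check the device names on your own system first, since mkfs.xfs destroys existing data.

# Format each data disk with its matching label
for pair in sdc:DISK1 sdd:DISK2 sde:DISK3 sdf:DISK4 sdh:DISK5 sdg:DISK6; do
    mkfs.xfs "/dev/${pair%%:*}" -L "${pair##*:}"
done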

Give each disk its corresponding label, in my case DISK1 through DISK6. Mounting by label means the mount points stay correct even if the kernel enumerates the disks in a different order after a reboot. The /etc/fstab file should look like this:

# Other system disk mounts omitted
LABEL=DISK1 /mnt/disk1 xfs defaults,noatime 0 2
LABEL=DISK2 /mnt/disk2 xfs defaults,noatime 0 2
LABEL=DISK3 /mnt/disk3 xfs defaults,noatime 0 2
LABEL=DISK4 /mnt/disk4 xfs defaults,noatime 0 2
LABEL=DISK5 /mnt/disk5 xfs defaults,noatime 0 2
LABEL=DISK6 /mnt/disk6 xfs defaults,noatime 0 2
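
The mount points themselves must exist before mounting, so create them first if they are missing:

mkdir -p /mnt/disk{1..6}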

After making the changes to /etc/fstab, run mount -a to apply the mount configuration, then use lsblk to verify that the disks are mounted correctly.

Running Minio#

Choose a directory, such as $HOME/minio, to store the docker-compose.yaml file required to run Minio. The contents of the file should be as follows:

version: "3.7"

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2022-11-11T03-44-20Z
  # The /disk{1...6} range must match the number of disks mounted into the container below
  command: server --console-address ":9001" /disk{1...6}
  ports:
    - 9000:9000
    - 9001:9001
  expose:
    - "9000"
    - "9001"
  restart: unless-stopped
  environment:
    # Enable metrics, this line can be removed if not needed
    MINIO_PROMETHEUS_AUTH_TYPE: public
    # Set default username and password
    # MINIO_ROOT_USER: minioadmin
    # MINIO_ROOT_PASSWORD: minioadmin
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3
services:
  minio1:
    <<: *minio-common
    volumes:
      # Mount configuration files and map disks
      - /home/areswang/minio/config:/root/.minio
      - /mnt/disk1:/disk1
      - /mnt/disk2:/disk2
      - /mnt/disk3:/disk3
      - /mnt/disk4:/disk4
      - /mnt/disk5:/disk5
      - /mnt/disk6:/disk6

After running docker-compose up -d, Minio will start running.
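
To confirm the server is healthy, you can query the same liveness endpoint used by the compose healthcheck; the web console should also be reachable on port 9001:

curl -f http://localhost:9000/minio/health/live && echo "Minio is up"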

How to Recover Data#

Let's say one or two of your disks have failed, and lsblk shows the following:

sdc      8:32   0   2.7T  0 disk /mnt/disk1
sdd      8:48   0   2.7T  0 disk /mnt/disk2
sde      8:64   0   2.7T  0 disk
sdf      8:80   0   2.7T  0 disk /mnt/disk4
sdg      8:96   0   2.7T  0 disk /mnt/disk6
sdh      8:112  0   2.7T  0 disk /mnt/disk5

The disk /dev/sde is no longer mounted. There is no need to panic, though, because Minio keeps operating as long as at least half of the disks are still functional.

  • If more than half of the disks are alive (4 to 6 of my 6 disks), Minio can still be both read and written.
  • If exactly half (3 of 6) are left, Minio can still be read but not written.

In this case, replace the failed disk, or reformat it after confirming whether the disk itself is at fault, and give the new disk the same label with mkfs.xfs. For example, since I lost /disk3, the command would be:

mkfs.xfs /dev/sde -L DISK3 -f

After formatting, run mount -a to remount the disk and restart Minio. It should come back up, and Minio will heal the data onto the fresh disk.
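
If you want to watch or manually trigger healing, the mc client can do it. The alias name local here is just an example; use the root credentials you configured for Minio:

# Register the server under an alias, then scan and heal recursively
mc alias set local http://localhost:9000 minioadmin minioadmin
mc admin heal -r local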

Other Things to Prepare#

  1. Create a bucket for JuiceFS to use; in this example it is named "juicefs".
  2. Create a "juicefs" account under Identity → Users and generate an access key and secret key for it.
  3. Prepare a database, such as Redis/PostgreSQL/MySQL/etcd, to store the metadata for JuiceFS. It can run on the same server as Minio; see the sketch below.
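
As a minimal sketch of step 3, Redis can run next to Minio with Docker. The container name and password are placeholders; append-only persistence is enabled so the JuiceFS metadata survives restarts:

docker run -d --name juicefs-redis \
    --restart unless-stopped \
    -p 6379:6379 \
    redis:7 redis-server --requirepass mypassword --appendonly yes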

Using JuiceFS#

Installing JuiceFS#

Installation is also quite simple. On non-Windows systems, you can use the one-line install script:

curl -sSL https://d.juicefs.com/install | sh -
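
You can then verify that the binary is on your PATH:

juicefs version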

Creating a File System#

We need a database to store the JuiceFS metadata and an object storage to store the actual data:

juicefs format \
    --storage minio \
    --bucket http://<minio-server>:9000/<bucket> \
    --access-key <your-key> \
    --secret-key <your-secret> \
    redis://:mypassword@<redis-server>:6379/1 \
    myjfs

Note that the addresses for the S3 storage and the Redis database should be externally reachable addresses, not 127.0.0.1/localhost. Otherwise, the S3 address recorded in the metadata engine will be localhost, and other devices will not be able to access the S3 storage.
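
The metadata engine does not have to be Redis. With PostgreSQL, for example, only the META-URL changes; the user, password, and database name below are placeholders:

juicefs format \
    --storage minio \
    --bucket http://<minio-server>:9000/<bucket> \
    --access-key <your-key> \
    --secret-key <your-secret> \
    postgres://juicefs:mypassword@<pg-server>:5432/juicefs \
    myjfs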

Mounting JuiceFS#

Unix Systems#

For details, please refer to Mounting JuiceFS at Boot Time.
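
For a quick manual test before wiring up boot-time mounting, a mount looks like this; the mount point /mnt/jfs is arbitrary:

# -d runs the client in the background
sudo juicefs mount -d \
    redis://:mypassword@<redis-server>:6379/1 \
    /mnt/jfs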

Windows Systems#

TODO
