GlusterFS replicated-distributed storage pool on CentOS/RHEL 7.x

GlusterFS is a storage technology that has gained momentum in recent years and is mature enough to be used in production. The setup I will explain here is similar to RAID10, but over the network, using two machines.
If we were using only one machine with disk redundancy, at some point we would hit hardware limitations while still having a single point of failure: the machine itself. This is where GlusterFS comes in; it was designed to run on commodity hardware and scale out.
It allows the creation of different storage configurations: replicated, distributed, striped, or combinations of them.

The scenario:
– distributed-replicated (similar to RAID10)
– number of bricks: 4 (2 per machine)
– accessing the cluster: native GlusterFS (FUSE) client
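
With replica 2 and four bricks, Gluster groups consecutive bricks into replica pairs and distributes files across those pairs. Assuming the hypothetical hostnames node1 and node2 used throughout this guide, the layout will be:

replica pair 1: node1:/gluster/brick1/test <-> node2:/gluster/brick1/test
replica pair 2: node1:/gluster/brick2/test <-> node2:/gluster/brick2/test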

Steps to follow:

Prepare two machines with the latest CentOS 7.x
Each machine must have 2 data disks. Remember that you can also use VMs if this is a test setup, but for production it is recommended to use bare metal.

Make sure all nodes involved have DNS records
Alternatively, you can use /etc/hosts.
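
For example, with the hypothetical hostnames and addresses used in this guide, append to /etc/hosts on every node (adjust the IPs to your network):

192.168.1.11 node1
192.168.1.12 node2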

Install GlusterFS repository
yum -y install epel-release centos-release-gluster38.noarch

Install glusterfs-server on both nodes
yum -y install glusterfs-server policycoreutils-python
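
You can confirm the installation and the packaged version with:

gluster --version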

Start and enable the service
systemctl start glusterd && systemctl enable glusterd
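
Confirm the daemon is running on both nodes:

systemctl status glusterd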

Make sure the nodes know about each other
You need to run the command on only one of the nodes, passing the peer's hostname (node2 here is the hypothetical name of the second machine; adjust to yours):
gluster peer probe node2

and you should receive a message:

peer probe: success
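
It is good practice to also probe back from the second node, so that both peers know each other by hostname rather than by IP (node1 being the hypothetical name of the first node):

gluster peer probe node1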

Check the peer status:

gluster peer status

a response similar to this should be printed (node2 being the peer's hostname):

Number of Peers: 1

Hostname: node2
Port: 24007
Uuid: 9fba5061-3a7a-4c5e-94fa-1aaf83f7429e
State: Peer in Cluster (Connected)

Prepare the bricks on each node
We assume two local disks per node, one for each brick. Note that each brick is created as a subdirectory (test) of the mount point, so that Gluster does not accidentally write to the root disk if the underlying disk is not mounted.
mkfs.xfs /dev/sdb && mkfs.xfs /dev/sdc && \
mkdir -p /gluster/brick1 /gluster/brick2 && \
mount /dev/sdb /gluster/brick1 && \
mount /dev/sdc /gluster/brick2 && \
mkdir /gluster/brick1/test && \
mkdir /gluster/brick2/test

and extend /etc/fstab with:

/dev/sdb /gluster/brick1 xfs defaults 0 0
/dev/sdc /gluster/brick2 xfs defaults 0 0
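
Verify that both bricks are mounted:

df -h /gluster/brick1 /gluster/brick2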

Make sure GlusterFS traffic can traverse the firewall
firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent && firewall-cmd --reload

for bricks:

firewall-cmd --zone=public --add-port=24009-24010/tcp --permanent && firewall-cmd --reload

and for the clients (GlusterFS/NFS/CIFS clients):

firewall-cmd --zone=public --add-service=nfs --add-service=samba --add-service=samba-client --permanent


firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp \
--add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp --permanent


firewall-cmd --reload
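
You can verify the resulting firewall configuration with:

firewall-cmd --zone=public --list-ports && firewall-cmd --zone=public --list-services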

Create the volume
Run this on one node only. The hostnames node1 and node2 are the hypothetical names used throughout this guide; with replica 2, each consecutive pair of bricks forms a replica set, so the bricks are ordered so that every pair spans both machines:

gluster volume create test-volume replica 2 transport tcp \
node1:/gluster/brick1/test node2:/gluster/brick1/test \
node1:/gluster/brick2/test node2:/gluster/brick2/test

Start and verify the new volume
gluster volume start test-volume

gluster volume info all
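
For the setup above, the info output should resemble the following (the Volume ID will differ):

Volume Name: test-volume
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/gluster/brick1/test
Brick2: node2:/gluster/brick1/test
Brick3: node1:/gluster/brick2/test
Brick4: node2:/gluster/brick2/test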

Access the volume from a third machine
For this you need to install client packages:

yum -y install glusterfs glusterfs-fuse attr

and then mount the volume (node1 being one of the hypothetical server hostnames; any server in the pool works):

mkdir -p /mnt/test-volume && mount -t glusterfs node1:/test-volume /mnt/test-volume
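
A quick way to confirm replication is to write a file through the mount and look for it on the bricks; it will land on one brick per server, on whichever replica pair it hashes to:

touch /mnt/test-volume/hello.txt
ls /gluster/brick1/test /gluster/brick2/test   # run on both servers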


To mount at boot with a backup server, add an /etc/fstab entry; the backup-volfile-servers mount option points the client at node2 if node1 is unreachable (both hostnames are the hypothetical ones used above):

mkdir /gluster && echo "node1:/test-volume /gluster glusterfs defaults,_netdev,backup-volfile-servers=node2,log-level=WARNING,log-file=/var/log/gluster.log 0 0" >> /etc/fstab && mount -a
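
Confirm the volume is mounted:

df -h /gluster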

