Use case: we want more IOPS and redundancy, but we do not yet have all the needed disks installed; we also do not want to power down the machine to grow the RAID.
As extra info:
– the RAID disks will not contain the OS in this case
– /dev/sdb will be the primary disk in the soft RAID1
– /dev/sdc will be the second disk, which we will add later
– /dev/md0 is the soft-RAID block device we are building
Make sure all block devices are clean before we begin
dd if=/dev/zero of=/dev/sdb bs=512 count=1 && \
dd if=/dev/zero of=/dev/sdc bs=512 count=1
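Note that zeroing the first sector only clears the partition table; mdadm 1.2 metadata lives at a 4 KiB offset, so if the disks were ever part of a previous array it is safer to also clear any old signatures (an extra, optional step — harmless on factory-fresh disks):

```shell
# Remove any leftover md superblocks (errors are ignored if none exist)
mdadm --zero-superblock /dev/sdb /dev/sdc 2>/dev/null
# Wipe remaining filesystem/RAID/LVM signatures from both disks
wipefs -a /dev/sdb /dev/sdc
```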
Build the raid
mdadm -Cv /dev/md0 --level=mirror --force --raid-devices=1 /dev/sdb
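A quick way to confirm the single-disk array came up before layering LVM on top:

```shell
# /proc/mdstat should list md0 as active raid1 with sdb as its only member
cat /proc/mdstat
# State should be "clean" (degraded is expected once we grow to 2 devices)
mdadm --detail /dev/md0 | grep -E 'State|Raid Devices'
```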
Use lvm on top of /dev/md0
pvcreate /dev/md0 && \
vgcreate raid_1 /dev/md0
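A short sanity check on the LVM layer, using the standard reporting commands:

```shell
# The physical volume should sit on /dev/md0...
pvs /dev/md0
# ...and the volume group raid_1 should own it, with free space available
vgs raid_1
```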
Optional: create thin provisioning pool on top of the volume group
lvcreate -l 95%FREE -T raid_1/thin_pool
Optional: create thin provisioned volume on top of the thin_pool
lvcreate -V 8G --thin -n test01 raid_1/thin_pool
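To actually use test01 it still needs a filesystem and a mount point; a minimal sketch, assuming ext4 and a hypothetical mount point /mnt/test01:

```shell
# Thin LVs appear under /dev/<vg>/<lv>
mkfs.ext4 /dev/raid_1/test01
mkdir -p /mnt/test01                  # hypothetical mount point
mount /dev/raid_1/test01 /mnt/test01
df -h /mnt/test01
```

Remember that the volume is thin provisioned: df will report 8G, but blocks are only taken from the pool as data is written, so keep an eye on pool usage with lvs.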
Grow the raid
mdadm --manage /dev/md0 --add /dev/sdc && \
mdadm --grow /dev/md0 --raid-devices=2
Monitor the rebuild
mdadm --detail /dev/md0
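Alternatively, /proc/mdstat gives a compact progress bar for the resync:

```shell
# Refreshes every 2 seconds; shows a progress line while recovery runs
watch cat /proc/mdstat
```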
It should look similar to this:
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Mar 15 14:46:01 2016
Raid Level : raid1
Array Size : 488255488 (465.64 GiB 499.97 GB)
Used Dev Size : 488255488 (465.64 GiB 499.97 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Mar 25 08:17:11 2016
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 13% complete
Name : pod17.infra.zeding.ro:0 (local to host pod17.infra.zeding.ro)
UUID : ab4da017:aadaa09d:faa8c874:4f4e47c4
Events : 1013
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 spare rebuilding /dev/sdc
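Once the rebuild finishes, it is worth persisting the array definition so it assembles reliably at boot; a hedged sketch, since the config file path and initramfs tool vary by distro (a Red Hat-style system is assumed here):

```shell
# Record the array layout in mdadm.conf so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
# Rebuild the initramfs so early boot knows about md0 (RHEL/CentOS: dracut)
dracut -f
```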
Optional: replace a failed disk (/dev/sdc)
mdadm --manage /dev/md0 --fail /dev/sdc
mdadm --manage /dev/md0 --remove /dev/sdc
Replace the failed disk and run:
mdadm --manage /dev/md0 --add /dev/sdc && \
watch "mdadm --detail /dev/md0"