I upgraded the 2 HDDs in my Synology NAS to bigger ones. The swap and rebuild of the RAID mirror was seamless, but I wanted to verify the health of the filesystems before growing the volumes. Here is how to do it.
Note: I am making the following assumptions: you know what you are doing, you have activated SSH on your box and know how to connect as root, you know and understand how your NAS HDDs have been configured, you have a working backup of your data, and you are not afraid of losing it.
First, find out the device name of your volume(s) and the filesystem type:
# mount
/dev/root on / type ext4 (rw,relatime,barrier=0,journal_checksum,data=ordered)
(...)
/dev/mapper/vol1-origin on /volume1 type ext4 (usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl)
/dev/mapper/vol2-origin on /volume2 type ext4 (usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl)
In my case, I created 2 volumes on top of my mirror. The devices on which these volumes are stored are /dev/mapper/vol1-origin and vol2-origin, and both are ext4 filesystems. You probably do not have such a setup: with a single volume on top of your RAID array, your device might simply be /dev/md[x].
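If you also want to double-check that the RAID rebuild is really complete before touching the filesystems, the mdraid status can be read from /proc/mdstat (harmless, read-only):

# cat /proc/mdstat

Each mirrored array should show [UU]; an underscore instead of a U means a member is missing or still resyncing.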
The fact that my devices were in /dev/mapper hinted that there might be an LVM layer somewhere, so I executed the following command (harmless):
# lvs
  LV                    VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  syno_vg_reserved_area vg1  -wi-a- 12.00M
  volume_1              vg1  -wi-ao  1.00T
  volume_2              vg1  -wi-ao  1.00T
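If you are curious about the rest of the LVM stack, the physical volumes and the volume group can be listed with two equally harmless commands; on a Synology box the volume group typically sits directly on the /dev/md[x] RAID device:

# pvs
# vgs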
So my 2 volumes are LVM logical volumes. With this information, I can verify the filesystems' health. First of all, shut down most services and unmount the filesystems:
# syno_poweroff_task
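Before running any check, it is worth confirming that the volumes are really unmounted; reading /proc/mounts is harmless:

# grep volume /proc/mounts

If /volume1 and /volume2 no longer appear in the output, they are unmounted and safe to check.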
Now, if you do not have LVM but rather a /dev/md[x] device, you can simply run the following (only for ext2/ext3/ext4 filesystems; replace the 'x' with the correct number):
# e2fsck -pvf /dev/mdx
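For reference, -p automatically fixes whatever can be fixed safely, -v prints statistics and -f forces the check even if the filesystem is marked clean. The exit code summarizes the result (0 means no errors, 1 means errors were corrected, anything from 4 up means errors were left uncorrected or the run failed); right after the check you can display it with:

# echo $?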
But if, like me, you have LVM, you will need a few extra steps. The 'syno_poweroff_task' command has probably deactivated the LVM logical volumes; to be sure, check the "LV Status" field reported by the next command (harmless, only part of the output is shown here):
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg1/volume_1
  VG Name                vg1
  LV UUID                <UUID>
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                1.00 TB
As you can see, the logical volume is not available. We need to activate it so that the device link to the logical volume becomes accessible:
# lvm lvchange -ay vg1/volume_1
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg1/volume_1
  VG Name                vg1
  LV UUID                <UUID>
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.00 TB
The status has now changed to "available", so we can proceed with the filesystem verification:
# e2fsck -pvf /dev/vg1/volume_1
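If, like me, you have a second volume, repeat the same two steps for it (the names come from the lvs output above; adapt them to your own setup):

# lvm lvchange -ay vg1/volume_2
# e2fsck -pvf /dev/vg1/volume_2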
To finish, you need to remount the volumes and restart all the stopped services. I do not know a specific Synology command for that, so I simply rebooted the machine:
# shutdown -r now
Thanks for the article, it was a big help for me. One note only: syno_poweroff_task has to be run in debug mode (syno_poweroff_task -d) in order for the logical volumes to be available.
Hi Igor,
Thanks for leaving a comment and letting me know.
I’ll have a look at the next maintenance window. Maybe this option is now necessary.
Cheers