Growing an mdadm RAID by replacing disks


As described in my related earlier post, Replacing a failed disk in a mdadm RAID, I have a 4-disk RAID 5 setup which I initially populated with 1TB WD Green disks (cheap, but not really suited for NAS operation). After a few years the file system started to fill up, so I wanted to grow my RAID by upgrading to 3TB WD Red disks, which are especially tailored to NAS workloads. Growing the mdadm RAID is done through the following steps:

  • Fail, remove and replace each 1TB disk with a 3TB disk. After each replacement I have to wait for the RAID to resync onto the new disk.
  • I then have to grow the RAID to use all the space on each of the 3TB disks.
  • Finally, I have to grow the filesystem to use the newly available space on the RAID device.
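The last two steps above can be sketched with the following commands (a sketch assuming the array is /dev/md0 and carries an ext4 filesystem; adjust to your own setup):

```shell
# After all four disks have been replaced and resynced, grow the
# array to use the full capacity of the new 3TB members
mdadm --grow /dev/md0 --size=max

# Watch the reshape/resync progress
cat /proc/mdstat

# Finally grow the filesystem to fill the enlarged device
# (resize2fs for ext3/ext4; XFS would use xfs_growfs instead)
resize2fs /dev/md0
```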

The following is similar to my previous article Replacing a failed disk in a mdadm RAID, but I have included it here for completeness.

Removing the old drive

The enclosure I have does not support hot-swap and has no separate light for each disk, so I need a way to find out which physical disk to replace. Finding the serial number of a disk is fairly easy:

# hdparm -i /dev/sde | grep SerialNo
 Model=WDC WD10EARS-003BB1, FwRev=80.00A80, SerialNo=WD-WCAV5K430328

and luckily the Western Digital disks I have came with a small sticker showing the serial number on the disk itself. Now that I know the serial number of the disk I want to replace, I mark it as failed in mdadm and remove it from the RAID before shutting down and replacing it:

mdadm --manage /dev/md0 --fail /dev/sde1
mdadm --manage /dev/md0 --remove /dev/sde1
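After physically swapping in the new disk, it has to be partitioned and added back into the array. A sketch of those steps, assuming the new disk comes up as /dev/sde again (disks larger than 2TB need a GPT label rather than MBR):

```shell
# Create a GPT label and a single partition spanning the new 3TB disk
parted /dev/sde mklabel gpt
parted -a optimal /dev/sde mkpart primary 0% 100%

# Add the new partition to the array and let the resync start
mdadm --manage /dev/md0 --add /dev/sde1

# Follow the resync; wait for it to finish before replacing the next disk
watch cat /proc/mdstat
```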


Rotating website backup using rsync over ssh


Recently the hosting company for my website started supporting SSH access. This meant I could ditch the insecure FTP transfers and do everything through SFTP and rsync over ssh. Besides making editing of files much easier, this also allowed me to implement a rolling/rotating backup of the website. While it can be argued that such a backup should never be needed, as the hosting company surely has a safe storage solution, I have personally experienced the loss of data from a server breakdown at a hosting company.

The python script

Below I have written a Python script to automate the backup and keep the last 12 backups in separate folders with hard links between them. For a website like mine (with a low rate of changes) this means the backup does not take up much more space than the size of the website plus the size of the changes (which are small). The script defaults to 12 backup copies and I run it through cron on my home Linux server. The script can also be run on the command line if needed with the syntax: user@host:/www/ /home/tjansson/backup/websites/host/

A cron line to run the script monthly, on the first day of the month at 4:05 in the morning:

5 4 1 * * /home/tjansson/bin/ user@host:/www/ /home/tjansson/backup/websites/host/

On a final note: for this script to work through cron, it is assumed that the SSH access is set up using keys, and perhaps ssh-agent, for passwordless access to the server.
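Setting up key-based access could look something like this (a sketch; the key type and the exact user@host are placeholders):

```shell
# Generate a key pair; leave the passphrase empty for cron use,
# or set one and load the key into ssh-agent instead
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# Install the public key on the web host
ssh-copy-id user@host

# Verify that passwordless login works before relying on cron
ssh user@host true
```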

#!/usr/bin/env python
import os
import argparse
import shutil

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='This script does a rotating backup using rsync')
    parser.add_argument('source',         type=str,             help='The source. Example: user@host:/www/')
    parser.add_argument('backup_path',    type=str,             help='The backup path. Example: /home/tjansson/backup/websites/host/')
    parser.add_argument('-c', '--copies', type=int, default=12, help='The maximum number of copies to save in the rotation. Default=12')
    parser.add_argument('-d', '--debug',  dest='debug', action='store_true', help='Turn on verbose debugging')
    args = parser.parse_args()

    # Folder template: backup0 is the newest, backup<copies> the oldest
    folder = '{}backup{}'.format(args.backup_path, '{}')

    # Delete the oldest folder
    folder_old = folder.format(args.copies)
    if os.path.isdir(folder_old):
        if args.debug:
            print('Removing the oldest folder: {}'.format(folder_old))
        shutil.rmtree(folder_old)

    # Rotate the backups: backupN becomes backupN+1
    if args.debug:
        print('Rotating backups')
    for i in range(args.copies-1, -1, -1):
        folder_0 = folder.format(i)
        folder_1 = folder.format(i+1)
        if os.path.isdir(folder_0):
            if args.debug:
                print('mv {} {}'.format(folder_0, folder_1))
            os.system('mv {} {}'.format(folder_0, folder_1))

    # Execute the rsync. If a previous backup exists, hard-link
    # unchanged files against it using --link-dest
    target = folder.format(0)
    link   = folder.format(1)
    if not os.path.isdir(link):
        cmd = 'rsync -ah --delete -e ssh {source} {target}'.format(source=args.source, target=target)
    else:
        cmd = 'rsync -ah --delete -e ssh --link-dest="{link}" {source} {target}'.format(link=link, source=args.source, target=target)
    if args.debug:
        print('Rsyncing the latest changes')
        print(cmd)
    os.system(cmd)

    # Update the timestamp on the newest backup folder
    os.system('touch {}'.format(target))
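To check that the rotation really shares unchanged files between backup folders via hard links, you can compare inode numbers. A small helper sketch (the paths in the comment are hypothetical examples):

```python
import os

def same_file(path_a, path_b):
    """Return True if the two paths are hard links to the same inode."""
    st_a = os.stat(path_a)
    st_b = os.stat(path_b)
    return (st_a.st_ino, st_a.st_dev) == (st_b.st_ino, st_b.st_dev)

# Example (hypothetical paths): an unchanged file should share its inode
# between two rotations, so this would return True:
# same_file('/home/tjansson/backup/websites/host/backup0/index.html',
#           '/home/tjansson/backup/websites/host/backup1/index.html')
```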

Further reading and inspiration for this post