YANO's digital garage

Copyright ©YANO All rights reserved. https://www.bravotouring.com/~yano/

Last-modified: 2018-11-10 (Sat)


[One Word, One Picture / IT]

WD30EZRX Returns to Duty / 2018-05-10 (Thu)

With the 9TB RAID5 volume on its 4×3TB array getting cramped, I decided to expand the RAID5 capacity by adding the WD30EZRX that was freed up when I deployed the WD40EZRZ last month.

WD30EZRX
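For reference, RAID5 keeps one member's worth of parity, so usable capacity is (n − 1) × member size. A quick back-of-the-envelope check (my arithmetic, not part of the original post), using the per-member "Used Dev Size" of 2930135040 KiB that mdadm -D reports for these 3TB drives:

```shell
# RAID5 usable capacity = (members - 1) * per-member size.
dev_kib=2930135040                         # Used Dev Size per member, in KiB
echo "4 drives: $(( 3 * dev_kib )) KiB"    # 8790405120 KiB (~9 TB)
echo "5 drives: $(( 4 * dev_kib )) KiB"    # 11720540160 KiB (~12 TB)
```

These match the Array Size values mdadm reports before and after the grow.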

On the night of the 7th I added the drive /dev/sdj with mdadm --add, then kicked off the reshape with mdadm --grow.

root@GT110b:~# mdadm /dev/md0 --add /dev/sdj
root@GT110b:~# mdadm --grow /dev/md0 -n 5
root@GT110b:~#
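The reshape then runs in the background, and its position and speed can be read from /proc/mdstat (or mdadm -D). A rough ETA sketch, with illustrative numbers of my own (the ~16 MB/s rate is an assumption, not from the post; on the live array read the actual values from /proc/mdstat):

```shell
# Estimate remaining reshape time from the current position and speed.
total_kib=2930135040    # per-member KiB to reshape ("Used Dev Size")
done_kib=1465067520     # position at the 50% mark
rate_kib_s=16000        # assumed reshape rate, KiB/s
remaining_s=$(( (total_kib - done_kib) / rate_kib_s ))
echo "about $(( remaining_s / 3600 )) hours to go"
```

With these numbers it prints about 25 hours, roughly matching the extra day the reshape actually needed from the halfway point.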
Checking the status yesterday morning, after more than a full day had passed…
root@GT110b:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jul 20 23:01:38 2017
     Raid Level : raid5
     Array Size : 8790405120 (8383.18 GiB 9001.37 GB)
  Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed May  9 08:38:47 2018
          State : clean, reshaping
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 50% complete
  Delta Devices : 1, (4->5)

           Name : GT110b:0  (local to host GT110b)
           UUID : 81328295:5f1fa0c2:b3535462:79680794
         Events : 11175

    Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       80        1      active sync   /dev/sdf
       2       8       96        2      active sync   /dev/sdg
       4       8      112        3      active sync   /dev/sdh
       5       8      144        4      active sync   /dev/sdj
root@GT110b:~#
It was still only at 50%, so I gave it another day. This morning the State was back to clean and the Array Size had grown, so I ran xfs_growfs to extend the XFS filesystem accordingly.
root@GT110b:~# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jul 20 23:01:38 2017
     Raid Level : raid5
     Array Size : 11720540160 (11177.58 GiB 12001.83 GB)
  Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu May 10 05:12:41 2018
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : GT110b:0  (local to host GT110b)
           UUID : 81328295:5f1fa0c2:b3535462:79680794
         Events : 18525

    Number   Major   Minor   RaidDevice State
       0       8       64        0      active sync   /dev/sde
       1       8       80        1      active sync   /dev/sdf
       2       8       96        2      active sync   /dev/sdg
       4       8      112        3      active sync   /dev/sdh
       5       8      144        4      active sync   /dev/sdj
root@GT110b:~# xfs_growfs /dev/md0
meta-data=/dev/md0               isize=256    agcount=32, agsize=68675072 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=2197601280, imaxpct=5
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2197601280 to 2930135040
root@GT110b:~# df
Filesystem           Size  Used Avail Use% Mounted on
udev                  12G   12K   12G   1% /dev
tmpfs                2.4G  4.4M  2.4G   1% /run
/dev/sda1            131G  8.2G  117G   7% /
none                 4.0K     0  4.0K   0% /sys/fs/cgroup
none                 5.0M     0  5.0M   0% /run/lock
none                  12G     0   12G   0% /run/shm
none                 100M     0  100M   0% /run/user
/dev/sdb1            4.6T  2.5T  1.8T  59% /home
/dev/sdc1            3.7T  2.3T  1.5T  62% /mnt/kids
/dev/sdd1            3.7T  3.4T  303G  92% /mnt/video
/dev/md0              11T  7.9T  3.1T  73% /mnt/raid
/dev/sdi1            1.8T  1.3T  421G  76% /mnt/backup
//landisk/disk1      1.4T  975G  420G  70% /mnt/landisk
//linkstation/share  1.9T  1.7T  160G  92% /mnt/linkstation
//readynas/Video     5.5T  4.7T  807G  86% /mnt/readynas
df shows the extra free space too, so all is well.
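A quick cross-check of the numbers (my arithmetic, not part of the session): xfs_growfs reports 4096-byte blocks, and the new data block count lines up with the capacity change the kernel logs to syslog.

```shell
# New XFS size = data blocks * block size (bsize=4096 in xfs_growfs output).
blocks=2930135040
bsize=4096
echo "$(( blocks * bsize )) bytes"   # 12001833123840
```

That is exactly the 12001833123840 bytes in the kernel's "detected capacity change" message.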
root@GT110b:~# fgrep md0 /var/log/syslog
2018-05-08T01:58:08.417595+09:00 GT110b kernel: [ 7409.705589] md: reshape of RAID array md0
2018-05-08T13:38:12.466161+09:00 GT110b mdadm[2111]: Rebuild20 event detected on md device /dev/md0
2018-05-09T02:08:16.589925+09:00 GT110b mdadm[2111]: Rebuild40 event detected on md device /dev/md0
2018-05-09T15:28:20.997790+09:00 GT110b mdadm[2111]: Rebuild60 event detected on md device /dev/md0
2018-05-10T02:51:44.918767+09:00 GT110b mdadm[2111]: Rebuild81 event detected on md device /dev/md0
2018-05-10T05:12:41.825581+09:00 GT110b kernel: [191873.103299] md: md0: reshape done.
2018-05-10T05:12:41.969649+09:00 GT110b kernel: [191873.248230] md0: detected capacity change from 9001374842880 to 12001833123840
2018-05-10T05:12:41.973586+09:00 GT110b kernel: [191873.249265] VFS: busy inodes on changed media or resized disk md0
2018-05-10T05:12:42.037372+09:00 GT110b mdadm[2111]: RebuildFinished event detected on md device /dev/md0
root@GT110b:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdj[5] sdh[4] sdg[2] sdf[1] sde[0]
      11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>
root@GT110b:~#
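Out of curiosity, the total wall-clock time of the reshape can be computed from the two kernel timestamps in the syslog excerpt (a side calculation of mine; GNU date assumed):

```shell
# From "md: reshape of RAID array md0" to "md: md0: reshape done."
start=$(date -d '2018-05-08T01:58:08+09:00' +%s)
end=$(date -d '2018-05-10T05:12:41+09:00' +%s)
echo "reshape took $(( (end - start) / 3600 )) hours"   # 51 hours
```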
This time, to be safe, I unmounted /dev/md0 and did the work offline (there's nowhere to back up 9TB of data, so the contents stayed in place as-is), but apparently it can even be done online. Impressive.
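For the record, a hedged sketch of that online variant, untested by me (the --backup-file path is my own choice): md reshapes the array while it stays assembled, and xfs_growfs operates on a mounted XFS, so the filesystem can stay in service throughout.

```shell
# Online variant: /mnt/raid stays mounted the whole time.
mdadm /dev/md0 --add /dev/sdj
mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak
# ...wait for the reshape to finish (watch /proc/mdstat), then:
xfs_growfs /mnt/raid
```

Newer mdadm versions can reshape a growing RAID5 without --backup-file, but a backup file costs little and protects the critical section against a crash mid-reshape.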

【References】
●Qiita https://qiita.com/
Initial Setup and Operation of Software RAID 2015/12/06
●つねおの怠惰な非日常 http://elsidion.blogspot.jp/
Testing Online Capacity Expansion of Linux Software RAID 2008/04/12
●computerの日記 http://intrajp-computer.hatenadiary.jp/
How to Grow an Existing Software RAID 5 Volume on Linux 2017/10/28
●untitled document http://neet.waterblue.net/
Growing and Shrinking a RAID5 Volume with mdadm 2011/11/03
●ぴろにっき http://piro791.blog.so-net.ne.jp/
Trying Out Software RAID Volume Expansion with mdadm (RAID 5 Edition) 2008/11/07
●Wikipedia https://ja.wikipedia.org/wiki/
RAID
Logical Volume Manager
XFS