
Recovering from a clean, degraded RAID-5

Good morning everyone,

I run some NAS software, and most of the time it works well, but when something happens to the Hyper-V host it often fails spectacularly. Today is one of those days.

I'm not good with the internals, but after googling around I've come to the conclusion that everything is a mess! There is only one RAID-5, with a total of 7 disks, which appears to correspond to mdadm --detail /dev/md2.

Can someone give me some guidance on how to bring the volume back to life?

mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Sun Apr  1 20:22:22 2018
Raid Level : raid5
Array Size : 3906971648 (3725.98 GiB 4000.74 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 5
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Fri Dec 21 07:52:22 2018
      State : clean, degraded
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : SUPERSYNO:3
UUID : e3f844f6:4bd70c27:c40439c2:8d1a29e9
Events : 12216

Number   Major   Minor   RaidDevice State
0       8       70        0      active sync   /dev/sde6
1       0        0        1      removed
4       8      150        2      active sync   /dev/sdj6
5       8      118        3      active sync   /dev/sdh6
2       8      134        4      active sync   /dev/sdi6

mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Sat Jan  1 00:04:42 2000
Raid Level : raid1
Array Size : 2490176 (2.37 GiB 2.55 GB)
Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
Raid Devices : 12
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Dec 21 08:08:09 2018
    State : clean, degraded
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

UUID : 26aa5fea:f6cb55b4:3017a5a8:c86610be
Events : 0.5720

Number   Major   Minor   RaidDevice State
0       8       33        0      active sync   /dev/sdc1
1       8       49        1      active sync   /dev/sdd1
2       8      145        2      active sync   /dev/sdj1
3       8      129        3      active sync   /dev/sdi1
4       8      113        4      active sync   /dev/sdh1
5       8       97        5      active sync   /dev/sdg1
6       8       81        6      active sync   /dev/sdf1
7       8       65        7      active sync   /dev/sde1
8       0        0        8      removed
9       0        0        9      removed
10       0        0       10      removed
11       0        0       11      removed

mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Thu Dec 20 22:46:39 2018
Raid Level : raid1
Array Size : 2097088 (2048.28 MiB 2147.42 MB)
Used Dev Size : 2097088 (2048.28 MiB 2147.42 MB)
Raid Devices : 12
Total Devices : 8
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Thu Dec 20 22:47:37 2018
      State : active, degraded
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

UUID : 3637e4b4:d5bb08ce:dca69c88:18a34d86 (local to Host SuperSyno)
Events : 0.19

Number   Major   Minor   RaidDevice State
0       8       34        0      active sync   /dev/sdc2
1       8       50        1      active sync   /dev/sdd2
2       8       66        2      active sync   /dev/sde2
3       8       82        3      active sync   /dev/sdf2
4       8       98        4      active sync   /dev/sdg2
5       8      114        5      active sync   /dev/sdh2
6       8      130        6      active sync   /dev/sdi2
7       8      146        7      active sync   /dev/sdj2
8       0        0        8      removed
9       0        0        9      removed
10       0        0       10      removed
11       0        0       11      removed

mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Sun Apr  1 20:22:22 2018
Raid Level : raid5
Array Size : 20478048192 (19529.39 GiB 20969.52 GB)
Used Dev Size : 2925435456 (2789.91 GiB 2995.65 GB)
Raid Devices : 8
Total Devices : 7
Persistence : Superblock is persistent

Update Time : Thu Dec 20 22:34:47 2018
      State : clean, degraded
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : SUPERSYNO:2
UUID : 7d8b04cf:aebb8d01:6034359b:c4bb62db
Events : 280533

Number   Major   Minor   RaidDevice State
9       8       53        0      active sync   /dev/sdd5
1       8       69        1      active sync   /dev/sde5
2       0        0        2      removed
7       8      149        3      active sync   /dev/sdj5
8       8      117        4      active sync   /dev/sdh5
5       8      133        5      active sync   /dev/sdi5
4       8      101        6      active sync   /dev/sdg5
3       8       85        7      active sync   /dev/sdf5

mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Sun Apr  1 20:22:22 2018
Raid Level : raid5
Array Size : 3906971648 (3725.98 GiB 4000.74 GB)
Used Dev Size : 976742912 (931.49 GiB 1000.18 GB)
Raid Devices : 5
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Fri Dec 21 08:01:56 2018
      State : clean, degraded
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : SUPERSYNO:3
UUID : e3f844f6:4bd70c27:c40439c2:8d1a29e9
Events : 12218

Number   Major   Minor   RaidDevice State
0       8       70        0      active sync   /dev/sde6
1       0        0        1      removed
4       8      150        2      active sync   /dev/sdj6
5       8      118        3      active sync   /dev/sdh6
2       8      134        4      active sync   /dev/sdi6


cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdj2[7] sdi2[6] sdh2[5] sdg2[4] sdf2[3] sde2[2] sdd2[1] sdc2[0]
  2097088 blocks [12/8] [UUUUUUUU____]

md2 : active raid5 sdd5[9] sdf5[3] sdg5[4] sdi5[5] sdh5[8] sdj5[7] sde5[1]
  20478048192 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]

md3 : active raid5 sde6[0] sdi6[2] sdh6[5] sdj6[4]
  3906971648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [U_UUU]

md0 : active raid1 sdj1[2] sdi1[3] sdh1[4] sdg1[5] sdf1[6] sde1[7] sdd1[1] sdc1[0]
  2490176 blocks [12/8] [UUUUUUUU____]

unused devices: <none>

cat /proc/partitions
major minor  #blocks  name

8       32   10485760 sdc
8       33    2490240 sdc1
8       34    2097152 sdc2
8       48 2930266584 sdd
8       49    2490240 sdd1
8       50    2097152 sdd2
8       53 2925436480 sdd5
8       64 3907018584 sde
8       65    2490240 sde1 
8       66    2097152 sde2
8       69 2925436480 sde5
8       70  976743952 sde6
8       80 2930266584 sdf
8       81    2490240 sdf1
8       82    2097152 sdf2
8       85 2925436480 sdf5
8       96 2930266584 sdg
8       97    2490240 sdg1
8       98    2097152 sdg2
8      101 2925436480 sdg5
8      112 3907018584 sdh
8      113    2490240 sdh1
8      114    2097152 sdh2
8      117 2925436480 sdh5
8      118  976743952 sdh6
8      128 3907018584 sdi
8      129    2490240 sdi1
8      130    2097152 sdi2
8      133 2925436480 sdi5
8      134  976743952 sdi6
8      144 3907018584 sdj
8      145    2490240 sdj1
8      146    2097152 sdj2
8      149 2925436480 sdj5
8      150  976743952 sdj6
9        0    2490176 md0
251        0    2430976 zram0
9        3 3906971648 md3
9        2 20478048192 md2
253        0 24385015808 dm-0
9        1    2097088 md1

The 8th drive came back

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sde5[9] sdf5[3] sdg5[4] sdj5[5] sdh5[8] sdk5[7] sdd5[1]
  20478048192 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]

md3 : active raid5 sdd6[0] sdj6[2] sdh6[5] sdk6[4]
  3906971648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [U_UUU]

md1 : active raid1 sdk2[8] sdj2[7] sdi2[6] sdh2[5] sdg2[4] sdf2[3] sde2[2] sdd2[1] sdc2[0]
  2097088 blocks [12/9] [UUUUUUUUU___]

md0 : active raid1 sdk1[2] sdj1[3] sdh1[4] sdg1[5] sdf1[7] sde1[8] sdd1[1] sdc1[0] sdi1[6]
  2490176 blocks [12/9] [UUUUUUUUU___]

unused devices: <none>

cat /proc/partitions
major minor  #blocks  name

8       32    5242880 sdc
8       33    2490240 sdc1
8       34    2097152 sdc2
8       48 3907018584 sdd
8       49    2490240 sdd1
8       50    2097152 sdd2
8       53 2925436480 sdd5
8       54  976743952 sdd6
8       64 2930266584 sde
8       65    2490240 sde1
8       66    2097152 sde2
8       69 2925436480 sde5
8       80 2930266584 sdf
8       81    2490240 sdf1
8       82    2097152 sdf2
8       85 2925436480 sdf5
8       96 2930266584 sdg
8       97    2490240 sdg1
8       98    2097152 sdg2
8      101 2925436480 sdg5
8      112 3907018584 sdh
8      113    2490240 sdh1
8      114    2097152 sdh2
8      117 2925436480 sdh5
8      118  976743952 sdh6
8      128 3907018584 sdi
8      129    2490240 sdi1
8      130    2097152 sdi2
8      133 2925436480 sdi5
8      134  976743952 sdi6
8      144 3907018584 sdj
8      145    2490240 sdj1
8      146    2097152 sdj2
8      149 2925436480 sdj5
8      150  976743952 sdj6
8      160 3907018584 sdk
8      161    2490240 sdk1
8      162    2097152 sdk2
8      165 2925436480 sdk5
8      166  976743952 sdk6
9        0    2490176 md0
9        1    2097088 md1
251        0    2430976 zram0
9        3 3906971648 md3
9        2 20478048192 md2
253        0 24385015808 dm-0

Edited at 7:19 AM, after the consistency check

cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdd5[9] sdf5[3] sdg5[4] sdj5[5] sdh5[8] sdk5[7] sdi5[10] sde5[1]
  20478048192 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]

md3 : active raid5 sde6[0] sdj6[2] sdh6[5] sdk6[4] sdi6[6]
  3906971648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid1 sdc2[0] sdd2[2] sde2[1] sdf2[3] sdg2[4] sdh2[5] sdi2[6] sdj2[7] sdk2[8]
  2097088 blocks [12/9] [UUUUUUUUU___]

md0 : active raid1 sdc1[0] sdd1[8] sde1[1] sdf1[7] sdg1[5] sdh1[4] sdi1[6] sdj1[3] sdk1[2]
  2490176 blocks [12/9] [UUUUUUUUU___]

unused devices: <none>
Graham Jordan

From the output of cat /proc/mdstat I can see that sdi has been re-added to the md0 and md1 arrays. Those are the two small RAID-1 arrays mirrored across every disk. With that many mirrors, they were never at risk of data loss.

Unfortunately, sdi was not re-added to md2 and md3, the RAID-5 arrays, which are the ones currently at risk of data loss.
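As an aside, the degraded arrays can be spotted mechanically: every "_" in the bracketed status string in /proc/mdstat marks a missing member. A minimal sketch of that check, using a captured two-array sample from the output above so it runs without a real array (on the NAS you would pipe in /proc/mdstat itself):

```shell
# Print the name of every md array whose status string contains "_"
# (i.e. a missing member). Sample lines taken from the mdstat output above.
sample='md2 : active raid5 sdd5[9] sde5[1]
      20478048192 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]

md3 : active raid5 sde6[0] sdj6[2]
      3906971648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [U_UUU]'

degraded=$(printf '%s\n' "$sample" | awk '
  /^md/        { name = $1 }                  # remember the array name
  /\[[U_]+\]$/ { if ($NF ~ /_/) print name }  # status string has a hole
')
echo "$degraded"
```

Run against the sample, this prints md2 and md3, the two arrays that did not get sdi back.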

I don't know why it was not re-added automatically. If you want to do it manually, the commands are:

mdadm --add /dev/md2 /dev/sdi5
mdadm --add /dev/md3 /dev/sdi6

I have experience with mdadm, but none with Synology, so I don't know whether there are Synology-specific risks in running these commands. If there is a Synology-specific tool that achieves the same effect, it may be preferable to the raw mdadm command-line tools.
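One sanity check before running them: compare the Events counter stored in the dropped partition's superblock with the array's (280533 for md2 in the --detail output above). A small gap normally means --add will resync cleanly; a huge gap, or a mismatched Array UUID, suggests the member is stale or belongs elsewhere. A sketch of the comparison, with the member's counter hard-coded to a hypothetical value so it runs without real devices:

```shell
# On the NAS the two numbers would come from:
#   mdadm --detail  /dev/md2  | awk '/Events/ {print $NF}'
#   mdadm --examine /dev/sdi5 | awk '/Events/ {print $NF}'
array_events=280533    # Events of /dev/md2, from the --detail output above
member_events=280100   # hypothetical Events read from /dev/sdi5's superblock

gap=$((array_events - member_events))
echo "event gap: $gap"
# small gap -> reasonable to proceed with:  mdadm --add /dev/md2 /dev/sdi5
```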

kasperd