Recovering a degraded Linux software RAID (mdadm)

Notes from recovering three mdadm arrays (md0 RAID1; md1 and md2 RAID5) spread across three 750 GB disks after /dev/sdb dropped out: md1 would not start and md2 was rebuilding. First, list the disks and their partition tables:


sudo fdisk -l

Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004c16a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        4255    34178256   fd  Linux raid autodetect
/dev/sda2            4256        4863     4883760   82  Linux swap / Solaris
/dev/sda3            4864       31629   214997895   fd  Linux raid autodetect
/dev/sda4           31630       91201   478512090   fd  Linux raid autodetect

Disk /dev/sdb: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004993a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1        4255    34178256   fd  Linux raid autodetect
/dev/sdb2            4256        4863     4883760   82  Linux swap / Solaris
/dev/sdb3            4864       31629   214997895   fd  Linux raid autodetect
/dev/sdb4           31630       91201   478512090   fd  Linux raid autodetect

Disk /dev/sdc: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000098f4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1        4255    34178256   fd  Linux raid autodetect
/dev/sdc2            4256        4863     4883760   82  Linux swap / Solaris
/dev/sdc3            4864       31629   214997895   fd  Linux raid autodetect
/dev/sdc4           31630       91201   478512090   fd  Linux raid autodetect

Disk /dev/md0: 35.0 GB, 34998452224 bytes
2 heads, 4 sectors/track, 8544544 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md2: 980.0 GB, 979992576000 bytes
2 heads, 4 sectors/track, 239256000 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table



Check the current array state:

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sda4[0] sdc4[2] sdb4[3]
      957024000 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [>....................]  recovery =  4.3% (20862592/478512000) finish=123.7min speed=61613K/sec
     
md1 : inactive sda3[0](S) sdb3[1](S) sdc3[2](S)
      644993472 blocks
      
md0 : active raid1 sdc1[1] sda1[0]
      34178176 blocks [2/2] [UU]
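In /proc/mdstat, the bracketed status such as [U_U] shows one underscore per missing or failed member. A small helper (a sketch, assuming a POSIX shell and awk; the function name is illustrative) to list degraded arrays:

```shell
# Sketch: list md arrays whose /proc/mdstat status brackets (e.g. [U_U])
# contain "_", i.e. a missing or failed member. Pass an alternate file
# for testing; defaults to the live /proc/mdstat.
degraded_arrays() {
    awk '/^md[0-9]+ *:/ { name = $1 }
         {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /^\[[U_]+\]$/ && $i ~ /_/)
                     print name
         }' "${1:-/proc/mdstat}"
}
```

Run against the listing above, only md2 would be reported: md0 is [UU] and md1, being inactive, has no status brackets at all.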


sudo mdadm --query --detail /dev/md1
mdadm: md device /dev/md1 does not appear to be active.



An automatic assemble (-A -s, i.e. --assemble --scan) is not enough:

sudo mdadm -A -s /dev/md1
mdadm: /dev/md1 assembled from 1 drive - not enough to start the array.


Advanced recovery instructions: https://raid.wiki.kernel.org/index.php/RAID_Recovery


Save each member's superblock state, then assemble manually:

sudo mdadm --examine /dev/sda3 > raid.sda3.status
sudo mdadm --examine /dev/sdb3 > raid.sdb3.status
sudo mdadm --examine /dev/sdc3 > raid.sdc3.status
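mdadm refuses to start an array whose members disagree on the event counter. Comparing the Events line across the saved dumps shows which member fell behind (a sketch; the .status file names are the ones created above):

```shell
# Sketch: print the "Events" counter from saved `mdadm --examine` dumps.
# The member with the lowest counter is the one that dropped out first
# and is the candidate for a forced assemble or re-add.
event_counts() {
    for f in "$@"; do
        printf '%s: %s\n' "$f" \
            "$(awk -F: '/Events/ { gsub(/ /, "", $2); print $2; exit }' "$f")"
    done
}
# event_counts raid.sda3.status raid.sdb3.status raid.sdc3.status
```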
sudo mdadm --assemble --verbose /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3
or
sudo mdadm -A /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3
or
sudo mdadm --verbose --assemble --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3

mdadm: looking for devices for /dev/md1
mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdc3 is identified as a member of /dev/md1, slot 2.
mdadm: forcing event count in /dev/sdc3(2) from 209885 upto 209892
mdadm: clearing FAULTY flag for device 2 in /dev/md1 for /dev/sdc3
mdadm: SET_ARRAY_INFO failed for /dev/md1: Device or resource busy
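The "Device or resource busy" error typically means the earlier assemble attempt left md1 half-assembled; stopping it and retrying usually clears this. The helper below only echoes the commands (a dry-run sketch; remove the echo to actually run them):

```shell
# Sketch: stop the half-assembled array, then force-assemble it again.
# Dry run only: each command is echoed, not executed.
reassemble_md1() {
    echo sudo mdadm --stop /dev/md1
    echo sudo mdadm --verbose --assemble --force /dev/md1 \
        /dev/sda3 /dev/sdb3 /dev/sdc3
}
```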

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sda4[0] sdc4[2] sdb4[3]
      957024000 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [==============>......]  recovery = 73.7% (352831488/478512000) finish=40.3min speed=51856K/sec
     
md1 : active raid5 sda3[0] sdc3[2]
      429995648 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
     
md0 : active raid1 sdc1[1] sda1[0]
      34178176 blocks [2/2] [UU]

md1 is now active, though still degraded. Make sure /etc/mdadm/mdadm.conf lists all three arrays:

sudo gedit /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sda3 /dev/sdb3 /dev/sdc3
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 spares=1 UUID=47c64f2f:edce8a27:63439065:c6944d76 devices=/dev/sda1,/dev/sdb1,/dev/sdc1
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=f55c32f0:09a0d139:afae264f:626723db devices=/dev/sda4,/dev/sdb4,/dev/sdc4
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=db2b9219:0457144d:afae264f:626723db devices=/dev/sda3,/dev/sdb3,/dev/sdc3
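Rather than typing ARRAY lines by hand, `mdadm --detail --scan` emits one for every running array. The sketch below merges such output into the config without duplicating lines already present (function name and file paths are illustrative):

```shell
# Sketch: append ARRAY lines from `mdadm --detail --scan` output to an
# mdadm.conf, skipping lines that are already present verbatim.
merge_array_lines() {    # usage: merge_array_lines scan_output.txt mdadm.conf
    while IFS= read -r line; do
        grep -qxF "$line" "$2" || printf '%s\n' "$line" >> "$2"
    done < "$1"
}
# sudo mdadm --detail --scan > /tmp/scan.txt
# merge_array_lines /tmp/scan.txt /etc/mdadm/mdadm.conf
```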


sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Jul 13 23:13:31 2009
     Raid Level : raid5
     Array Size : 429995648 (410.08 GiB 440.32 GB)
  Used Dev Size : 214997824 (205.04 GiB 220.16 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Apr  5 18:16:29 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : db2b9219:0457144d:afae264f:626723db (local to host master)
         Events : 0.209892

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
       2       8       35        2      active sync   /dev/sdc3
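The device table at the end of --detail is what identifies the missing slot (here slot 1, "removed"). A helper (sketch) to condense that table from a saved --detail dump:

```shell
# Sketch: print "device: state" pairs from the table at the end of
# `mdadm --detail` output (reads a saved dump file).
member_states() {
    awk '/RaidDevice State/ { hdr = 1; next }
         hdr && $1 ~ /^[0-9]+$/ {
             dev = ($NF ~ /^\/dev\//) ? $NF : "(missing)"
             state = ""
             for (i = 5; i <= NF; i++)
                 if ($i !~ /^\/dev\//)
                     state = state (state ? " " : "") $i
             print dev ": " state
         }' "$1"
}
```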

Re-add the missing member so md1 can rebuild:

sudo mdadm --add /dev/md1 /dev/sdb3
mdadm: re-added /dev/sdb3

sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Jul 13 23:13:31 2009
     Raid Level : raid5
     Array Size : 429995648 (410.08 GiB 440.32 GB)
  Used Dev Size : 214997824 (205.04 GiB 220.16 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Apr  5 20:37:46 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : db2b9219:0457144d:afae264f:626723db (local to host master)
         Events : 0.209896

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       3       8       19        1      spare rebuilding   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sda4[0] sdc4[2] sdb4[3]
      957024000 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [=================>...]  recovery = 87.0% (416346880/478512000) finish=23.1min speed=44703K/sec
     
md1 : active raid5 sdb3[3] sda3[0] sdc3[2]
      429995648 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
          resync=DELAYED
     
md0 : active raid1 sdc1[1] sda1[0]
      34178176 blocks [2/2] [UU]
     
unused devices: <none>
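md1 shows resync=DELAYED because md1 and md2 live on the same physical disks, and md only runs one rebuild per disk set at a time; md1 starts once md2 finishes. A sketch that pulls the progress (or DELAYED) line for a given array out of /proc/mdstat:

```shell
# Sketch: print the recovery/resync progress line for one md array.
rebuild_progress() {    # usage: rebuild_progress md1 [mdstat-file]
    awk -v md="$1" '
        $1 == md                  { grab = 1; next }
        /^md[0-9]+ *:/            { grab = 0 }
        grab && /recovery|resync/ { sub(/^ +/, ""); print; exit }
    ' "${2:-/proc/mdstat}"
}
```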

Do the same for md0, where the re-added disk becomes a spare:

sudo mdadm --add /dev/md0 /dev/sdb1
mdadm: re-added /dev/sdb1

sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Mon Jul 13 02:26:36 2009
     Raid Level : raid1
     Array Size : 34178176 (32.59 GiB 35.00 GB)
  Used Dev Size : 34178176 (32.59 GiB 35.00 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Apr  5 21:02:32 2011
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

           UUID : 47c64f2f:edce8a27:63439065:c6944d76
         Events : 0.1191740

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1

       2       8       17        -      spare   /dev/sdb1

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sda4[0] sdc4[2] sdb4[1]
      957024000 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
     
md1 : active raid5 sdb3[3] sda3[0] sdc3[2]
      429995648 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [>....................]  recovery =  3.7% (7976704/214997824) finish=48.9min speed=70515K/sec
     
md0 : active raid1 sdb1[2](S) sdc1[1] sda1[0]
      34178176 blocks [2/2] [UU]
     
unused devices: <none>


watch cat /proc/mdstat

Refreshes the output periodically so the RAID rebuild progress can be followed.
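To go beyond watching, a loop that blocks until no rebuild is running or pending (a sketch; polls every 60 s by default):

```shell
# Sketch: wait until /proc/mdstat shows no rebuild in progress or
# pending (no "recovery =", "resync =" or "resync=DELAYED" line left).
wait_for_rebuild() {
    while grep -Eq '(recovery|resync) ?=' "${1:-/proc/mdstat}"; do
        sleep "${2:-60}"
    done
}
```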



