//unix:lvm_recovery, revision 2011/10/23 16:24 by robm, last modified 2013/08/20 22:54//
====== Recovering my lost data: LVM and RAID ======

So I upgraded Ubuntu 9.10 to Ubuntu 11.10. When the system boots it says it cannot mount ''/…''.

===== About the file-system =====

There are many layers of indirection between the file-system and the physical storage when using LVM or RAID. When using both, the number of layers can seem excessive. Here's a diagram of the layers involved in my (lost) setup:
- | |||
<graphviz>
digraph G {
    node [shape=box]

    sdb1 -> sdb
    sdc1 -> sdc
    sdd1 -> sdd
    sde1 -> sde
    sdf1 -> sdf
    md0 -> {sdb1 sdc1 sdd1 sde1 sdf1}
    pv_store -> md0
    vg_store -> pv_store
    lv_store -> vg_store
    fs_store -> lv_store
}
</graphviz>
- | |||
Note that the RAID block device, ''md0'', is itself the LVM physical volume: there is no partition table on it.
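On a working system the whole stack can be walked from the top down with standard tools. A minimal sketch, assuming device and volume names as in the diagram above:

```shell
# Show the block-device tree: disks -> partitions -> md array -> LVM volumes
lsblk

# The RAID layer
cat /proc/mdstat

# The LVM layers
pvs    # should list /dev/md0 as a physical volume
vgs    # should list vg_store
lvs    # should list lv_store
```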
- | |||
===== Problem statement =====

Since upgrading, system boot is interrupted with an error screen to the effect of "…", and only the following layers of the storage stack come up:
- | |||
<graphviz>
digraph G {
    node [shape=box]

    sdb1 -> sdb
    sdc1 -> sdc
    sdd1 -> sdd
    sde1 -> sde
    sdf1 -> sdf
    md0 -> {sdb1 sdc1 sdd1 sde1 sdf1}
}
</graphviz>
- | |||
The RAID (multi-disk) status looks fine to me:

<code>
root@ikari:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sdb[1] sde[3] sdc[0] sdd[2] sdf[4](S)
      937713408 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
</code>
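The numbers here are internally consistent: with four active members (plus one spare), a RAID5 array provides three members' worth of capacity, and the 937713408 1-KiB blocks are exactly the 960218529792 bytes that ''fdisk'' reports for the array. A quick sanity check, using only the figures from the output above:

```python
# Sizes reported by /proc/mdstat and fdisk for /dev/md127.
mdstat_blocks = 937713408    # 1 KiB blocks, from /proc/mdstat
fdisk_bytes = 960218529792   # from fdisk -l /dev/md127

# The two tools agree on the array size.
assert mdstat_blocks * 1024 == fdisk_bytes

# RAID5 over 4 active members stores 3 members' worth of data,
# so each member contributes one third of the array capacity.
member_blocks = mdstat_blocks // 3
print(member_blocks)         # per-member capacity in KiB
```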
- | |||
but the resulting 960.2 GB block device is partitioned as a single "Linux RAID autodetect" partition, which is not what I expect to see:
- | |||
<code>
root@ikari:~# fdisk -l /dev/md127

Disk /dev/md127: 960.2 GB, 960218529792 bytes
255 heads, 63 sectors/track, …
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): …
I/O size (minimum/optimal): …
Disk identifier: 0xd71c877b

    Device Boot      Start  …
/dev/md127p1  …
Partition 1 does not start on physical sector boundary.
</code>
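Before trusting ''fdisk'''s interpretation, it is worth asking what signature actually sits at the start of the device. A read-only sketch (nothing here modifies the array):

```shell
# Low-level probe: what signatures does blkid find on the array?
blkid -p /dev/md127

# Raw look at the first sectors: an LVM physical volume carries a
# "LABELONE" label in one of the first 4 sectors; an MBR partition
# table ends sector 0 with the bytes 0x55 0xaa.
dd if=/dev/md127 bs=512 count=4 2>/dev/null | hexdump -C | head -n 40
```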
- | |||
====== Recovery strategy ======

  - Create a disk-image of ''/dev/md127'' on an external USB drive
  - Use LVM snapshots with this disk-image to make (and quickly roll back) experimental changes
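Step 1 might look like the following. The LV name ''scratch'' is an assumption (the actual name was lost from this page); the block size is just a sensible default:

```shell
# Image the RAID array into the scratch logical volume.
# conv=noerror,sync keeps going past read errors, padding with zeros.
dd if=/dev/md127 of=/dev/vg_scratch/scratch bs=4M conv=noerror,sync

# Compare checksums afterwards to confirm a faithful copy (only
# meaningful if there were no read errors). The LV is bigger than the
# array, so only compare the first 960218529792 bytes of the copy.
md5sum /dev/md127
head -c 960218529792 /dev/vg_scratch/scratch | md5sum
```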
- | |||
===== Formatting the external USB drive =====

So I created a single "Linux LVM" partition on the 2TB disk, created a single 1.8TB physical volume and a single 1.8TB volume group containing it. On this I created a 1TB logical volume (''lv_scratch'' in the diagrams below) to hold the disk-image.
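Roughly, the commands behind that setup (the LV name ''scratch'' is my assumption; the partition itself was made with ''fdisk'' as a single type-''8e'' "Linux LVM" partition):

```shell
# LVM stack on top of the freshly made /dev/sdj1 partition:
pvcreate /dev/sdj1                     # physical volume
vgcreate vg_scratch /dev/sdj1          # volume group containing it
lvcreate -L 1T -n scratch vg_scratch   # 1 TB LV to hold the disk image
```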
- | |||
LVM snapshots are interesting creatures. As the name suggests, the snapshot (''snap'' in the diagrams below) preserves the contents of the logical volume as they were at the moment the snapshot was taken; later changes are held in a separate copy-on-write area, so they can be discarded cheaply.
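Creating a snapshot is a single command. A sketch, with the names and the copy-on-write size as assumptions:

```shell
# A snapshot is itself a logical volume; -L sets the size of its
# copy-on-write (COW) table, i.e. how much change it can absorb
# before it fills up and is invalidated.
lvcreate -s -L 500G -n snap /dev/vg_scratch/scratch
```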
- | |||
Now here is where things get interesting. The snapshot, ''snap'', is itself a block device, and can be used just like any other:
- | |||
<graphviz>
digraph G {
    lv_scratch -> vg_scratch -> pv_scratch -> sdj1 -> sdj
    snap -> vg_scratch

    snap -> lv_scratch [style="dashed"]
}
</graphviz>
- | |||
<code>
root@ikari:~# fdisk -l /dev/sdj

Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, …
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): …
I/O size (minimum/optimal): …
Disk identifier: 0x000f0222

    Device Boot      Start  …
/dev/sdj1  …
</code>
- | |||
<code>
root@ikari:~# pvdisplay
  /dev/dm-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-1: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-2: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-3: read failed after 0 of 4096 at 0: Input/output error
  --- Physical volume ---
  PV Name               /dev/sdj1
  VG Name               vg_scratch
  PV Size               1.82 TiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476931
  Free PE               86787
  Allocated PE          390144
  PV UUID               …

root@ikari:~# vgdisplay
  /dev/dm-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-1: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-2: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-3: read failed after 0 of 4096 at 0: Input/output error
  --- Volume group ---
  VG Name               vg_scratch
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  13
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476931
  Alloc PE / Size       390144 / 1.49 TiB
  Free PE / Size        86787 / 339.01 GiB
  VG UUID               …

root@ikari:~# lvdisplay
  /dev/dm-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-1: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-2: read failed after 0 of 4096 at 0: Input/output error
  /dev/dm-3: read failed after 0 of 4096 at 0: Input/output error
  --- Logical volume ---
  LV Name                /dev/vg_scratch/…
  VG Name                vg_scratch
  LV UUID                aFBpgv-gqcd-jjLU-c7xO-Jyeb-2R0t-HpEF84
  LV Write Access        read/write
  LV snapshot status     source of
                         /dev/vg_scratch/… [active]
  LV Status              available
  # open                 0
  LV Size                1.00 TiB
  Current LE             262144
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           …

  --- Logical volume ---
  LV Name                /dev/vg_scratch/…
  VG Name                vg_scratch
  LV UUID                OvOsQ7-uACi-xJVZ-vseu-fKEc-F73h-CmSalH
  LV Write Access        read/write
  LV snapshot status     active destination for /dev/vg_scratch/…
  LV Status              available
  # open                 0
  LV Size                1.00 TiB
  Current LE             262144
  COW-table size         …
  COW-table LE           …
  Allocated to snapshot  …
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           …
</code>
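The pay-off of this arrangement is the experiment/rollback cycle: all writes land in the snapshot's copy-on-write table, so undoing a bad experiment is just a matter of deleting and re-creating the snapshot. A sketch, with names and COW size assumed as before:

```shell
# Try something potentially destructive against the snapshot...
fsck /dev/vg_scratch/snap

# ...and if it makes things worse, throw the snapshot away and start
# over. The scratch LV underneath, holding the pristine disk image,
# is untouched.
lvremove -f /dev/vg_scratch/snap
lvcreate -s -L 500G -n snap /dev/vg_scratch/scratch
```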
- | |||
So here's the goal I'm aiming for on my external storage:

<graphviz>
digraph G {
    lv_scratch -> vg_scratch -> pv_scratch -> sdj1 -> sdj
    snap -> vg_scratch
    snap -> lv_scratch [style="dashed"]

    node [shape=box]

    pv_store -> snap
    vg_store -> pv_store
    lv_store -> vg_store
    fs_store -> lv_store
}
</graphviz>
- | |||
====== Recognising the nested LVM volumes ======
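By default LVM may not scan other device-mapper devices for nested physical volumes, so the ''store'' volume group inside the snapshot will not necessarily appear on its own. A sketch of how it could be brought up, assuming the VG name ''vg_store'' from the diagrams:

```shell
# Re-scan block devices for LVM physical-volume labels.
pvscan

# If the nested PV is not found, the filter in /etc/lvm/lvm.conf may be
# excluding device-mapper devices; a permissive setting looks like:
#     filter = [ "a/.*/" ]
vgscan

# Activate the inner volume group so its LVs appear under /dev/vg_store/
vgchange -ay vg_store
```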