
How do you repair the superblock of an LVM volume? Alternatively, how do you recover the data on an LVM logical volume that will not mount?

I recently added a second hard disk, extended my volume group, and mirrored the existing LV vg00/FAST onto physical space on the new disk with lvconvert -m1 /dev/vg00/FAST. I also created a second LV, vg00/SLOW, in the remaining space on the new disk. Both filesystems are ext4. As I understand it, the current LVM implementation uses the kernel's md raid1 driver for mirroring by default. I am running lvm2 2.02.168-1 on Arch. I also have several LVM metadata backups in /etc/lvm/archive and /etc/lvm/backup .
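
For reference, this is roughly the sequence of steps I took to get here (a reconstruction, not a transcript; the exact invocations and flags may have differed):

    # Rough reconstruction of the setup -- exact options not recorded.
    pvcreate /dev/sdb1                    # initialise the new disk's partition as a PV
    vgextend vg00 /dev/sdb1               # grow the volume group onto it
    lvconvert -m1 /dev/vg00/FAST          # add a mirror leg for FAST on the new PV
    lvcreate -l 100%FREE -n SLOW vg00     # second LV in the remaining space
    mkfs.ext4 /dev/vg00/SLOW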

Physically, /dev/sda4 is a 1.79T GPT LVM partition holding one leg of vg00/FAST , and /dev/sdb1 is a 2.73T GPT LVM partition holding the other leg of vg00/FAST plus vg00/SLOW .

During Linux boot I get the following superblock error:

15.111767 device-mapper: raid: Failed to read superblock of device at position 1
Failed to start lvm2 PV scan on device 8:4
See systemctl status 'lvm2-pvscan@8:4.service'

vg00/FAST (from the /dev/sda4 mirror leg) and its filesystem/data are intact, but vg00/SLOW will not mount. LVM apparently will not read any LVs from /dev/sdb1 .

With a normal ext4 partition I would run fsck and move on, but vg00/SLOW does not appear under /dev , so I cannot fsck /dev/vg00/SLOW . If I fsck /dev/sdb1 , does fsck understand the underlying LVM structure, or will it see sdb1 as a corrupted ext4 partition and generate a stream of errors?
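
My working assumption is that fsck knows nothing about LVM, so the LV would have to be activated first; the '--activationmode partial' hint in the logs below suggests something like this untested sketch:

    # Untested sketch: activate the VG despite the missing PV, then check
    # the device-mapper node read-only rather than touching /dev/sdb1 raw.
    vgchange -ay --activationmode partial vg00
    fsck.ext4 -n /dev/vg00/SLOW        # -n = report only, change nothing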

Logs:

    $  journalctl -xb
    .....
    Dec 08 23:21:52 hostname lvm[304]:   WARNING: Device for PV g1WAG2-Dc9i-Gods-w3n7-SZD7-5Kvc-qzepno not found or rejected
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/FAST_rimage_1. Use '--activationmode partial' t
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/SLOW. Use '--activationmode partial' to overrid
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/FAST_rimage_1. Use '--activationmode partial' t
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/SLOW. Use '--activationmode partial' to overrid
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/FAST_rimage_1. Use '--activationmode partial' t
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/SLOW. Use '--activationmode partial' to overrid
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/FAST_rimage_1. Use '--activationmode partial' t
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/SLOW. Use '--activationmode partial' to overrid
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/FAST_rimage_1. Use '--activationmode partial' t
    Dec 08 23:21:52 hostname lvm[304]:   Refusing refresh of partial LV vg00/SLOW. Use '--activationmode partial' to overrid
    .....
    Dec 08 23:21:57 hostname kernel: device-mapper: raid: Failed to read superblock of device at position 1
    Dec 08 23:21:58 hostname kernel: md: raid1 personality registered for level 1
    Dec 08 23:21:58 hostname kernel: md/raid1:mdX: active with 1 out of 2 mirrors
    Dec 08 23:21:58 hostname kernel: created bitmap (917 pages) for device mdX
    Dec 08 23:21:58 hostname kernel: mdX: bitmap initialized from disk: read 58 pages, set 921 of 1876944 bits
    Dec 08 23:21:58 hostname kernel: EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: data=ordered
    Dec 08 23:21:52 hostname systemd[1]: Starting Flush Journal to Persistent Storage...
    -- Subject: Unit systemd-journal-flush.service has begun start-up
    -- Defined-By: systemd
    --
    -- Unit systemd-journal-flush.service has begun starting up.
    Dec 08 23:21:55 hostname dmeventd[403]: dmeventd ready for processing.
    Dec 08 23:21:58 hostname lvm[304]:   vg00: refresh before autoactivation failed.
    Dec 08 23:21:58 hostname lvm[304]:   Refusing activation of partial LV vg00/SLOW.  Use '--activationmode partial' to ove
    Dec 08 23:21:58 hostname lvm[304]:   1 logical volume(s) in volume group "vg00" now active
    Dec 08 23:21:58 hostname lvm[304]:   vg00: autoactivation failed.
    Dec 08 23:21:55 hostname systemd[1]: Started Device-mapper event daemon.
    -- Subject: Unit dm-event.service has finished start-up
    -- Defined-By: systemd
    --
    -- Unit dm-event.service has finished starting up.
    --
    -- The start-up result is done.
    Dec 08 23:21:55 hostname lvm[403]: Monitoring RAID device vg00-FAST for events.
    Dec 08 23:21:55 hostname systemd[1]: Found device /dev/vg00/FAST.
    -- Subject: Unit dev-vg00-FAST.device has finished start-up
    -- Defined-By: systemd
    --
    -- Unit dev-vg00-FAST.device has finished starting up.
    --
    -- The start-up result is done.
    Dec 08 23:21:55 hostname systemd[1]: lvm2-pvscan@8:4.service: Main process exited, code=exited, status=5/NOTINSTALLED
    Dec 08 23:21:55 hostname systemd[1]: Failed to start LVM2 PV scan on device 8:4.
    -- Subject: Unit lvm2-pvscan@8:4.service has failed
    -- Defined-By: systemd
    --
    -- Unit lvm2-pvscan@8:4.service has failed.
    --
    -- The result is failed.
    Dec 08 23:21:55 hostname systemd[1]: lvm2-pvscan@8:4.service: Unit entered failed state.
    Dec 08 23:21:58 hostname systemd-fsck[409]: /dev/mapper/vg00-FAST: clean, 5273/120127488 files, 389615339/480497664 bloc
    Dec 08 23:21:55 hostname systemd[1]: lvm2-pvscan@8:4.service: Failed with result 'exit-code'.
    Dec 08 23:21:55 hostname systemd[1]: Starting File System Check on /dev/vg00/FAST...
    -- Subject: Unit systemd-fsck@dev-vg00-FAST.service has begun start-up
    -- Defined-By: systemd
    --
    -- Unit systemd-fsck@dev-vg00-FAST.service has begun starting up.
    Dec 08 23:21:56 hostname systemd[1]: Started File System Check on /dev/vg00/FAST.
    -- Subject: Unit systemd-fsck@dev-vg00-FAST.service has finished start-up
    -- Defined-By: systemd
    --
    -- Unit systemd-fsck@dev-vg00-FAST.service has finished starting up.

lvdisplay:

$ lvdisplay -v

  WARNING: Device for PV g1WAG2-Dc9i-Gods-w3n7-SZD7-5Kvc-qzepno not found or rejected by a filter.
    There are 1 physical volumes missing.
    There are 1 physical volumes missing.
  --- Logical volume ---
  LV Path                /dev/vg00/FAST
  LV Name                FAST
  VG Name                vg00
  LV UUID                g7lYit-lsR3-WvoE-Idfj-C8kE-kcz4-VXpS1D
  LV Write Access        read/write
  LV Creation host, time host, 2015-12-02 21:06:11 -0500
  LV Status              available
  # open                 1
  LV Size                1.79 TiB
  Current LE             469236
  Mirrored volumes       2
  Segments               1
  Allocation             contiguous
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:6

  --- Logical volume ---
  LV Path                /dev/vg00/SLOW
  LV Name                SLOW
  VG Name                vg00
  LV UUID                u2ExhF-DFCH-U1Yc-M1WM-aIUl-bODu-FILusX
  LV Write Access        read/write
  LV Creation host, time host, 2016-12-07 09:16:22 -0500
  LV Status              NOT available
  LV Size                961.56 GiB
  Current LE             246159
  Segments               1
  Allocation             contiguous
  Read ahead sectors     auto

lvscan:

$ lvscan -v

  WARNING: Device for PV g1WAG2-Dc9i-Gods-w3n7-SZD7-5Kvc-qzepno not found or rejected by a filter.
    There are 1 physical volumes missing.
    There are 1 physical volumes missing.
  ACTIVE            '/dev/vg00/FAST' [1.79 TiB] contiguous
  inactive          '/dev/vg00/SLOW' [961.56 GiB] contiguous

lvmdiskscan:

$ lvmdiskscan
  /dev/sda2                          [      20.00 GiB]
  /dev/sda3                          [       8.00 GiB]
  /dev/sda4                          [       1.79 TiB] LVM physical volume
  /dev/vg00/FAST                     [       1.79 TiB]
  /dev/sdb1                          [       2.73 TiB]
  2 disks
  5 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

The gdisk partition structure looks fine:

$ gdisk -l /dev/sdb

GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 9B452B16-4F7C-4F3E-B54F-0B46C7B978E1
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      5860533134   2.7 TiB     8E00  Linux LVM

TestDisk also looks okay:

TestDisk 7.0, Data Recovery Utility, April 2015
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org

Disk /dev/sdb - 3000 GB / 2794 GiB - CHS 364801 255 63
Current partition structure:
     Partition                  Start        End    Size in sectors

No LVM or LVM2 structure
 1 P Linux LVM                   2048 5860533134 5860531087 [Linux LVM]
 1 P Linux LVM                   2048 5860533134 5860531087 [Linux LVM]

Trying to mount the partition directly gives:

$ mount /dev/sdb1 /mnt/TMP/
mount: mount /dev/sdb1 on /mnt/TMP failed: Structure needs cleaning
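
To see what actually sits at the start of /dev/sdb1 (pvck finds the label on the healthy PV at sector 1, which for LVM2 should be the string "LABELONE"), a read-only inspection along these lines seems safe:

    # Read-only diagnostics: what signature does blkid see on sdb1, and is
    # there an LVM2 label ("LABELONE") at sector 1 (offset 0x200)?
    blkid /dev/sdb1
    dd if=/dev/sdb1 bs=512 count=4 2>/dev/null | hexdump -C | head -40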

Edited to add:

From LVM's point of view the problem seems to be that it "Could not find LVM label on /dev/sdb1". Can I recreate the LVM label, whatever it consists of, on /dev/sdb1?

pvscan:

$ pvscan -v
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
  WARNING: Device for PV g1WAG2-Dc9i-Gods-w3n7-SZD7-5Kvc-qzepno not found or rejected by a filter.
    There are 1 physical volumes missing.
    There are 1 physical volumes missing.
  PV /dev/sda4   VG vg00            lvm2 [1.79 TiB / 4.00 MiB free]
  PV [unknown]   VG vg00            lvm2 [2.73 TiB / 0    free]
  Total: 2 [4.52 TiB] / in use: 2 [4.52 TiB] / in no VG: 0 [0   ]

pvck on the new, non-working partition:

$ pvck -v /dev/sdb1
    Scanning /dev/sdb1
  Could not find LVM label on /dev/sdb1
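
If recreating the label is the right approach, my understanding (unverified) is that it would use the metadata backups mentioned above, with the missing PV's UUID taken from the warnings, along these lines:

    # UNVERIFIED sketch -- check the backup file and UUID against
    # /etc/lvm/backup and /etc/lvm/archive before running anything.
    pvcreate --uuid g1WAG2-Dc9i-Gods-w3n7-SZD7-5Kvc-qzepno \
             --restorefile /etc/lvm/backup/vg00 /dev/sdb1
    vgcfgrestore -f /etc/lvm/backup/vg00 vg00
    vgchange -ay vg00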

pvck on the original (working) LVM partition:

$ pvck -v /dev/sda4
    Scanning /dev/sda4
  Found label on /dev/sda4, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=1044480
    Found LVM2 metadata record at offset=52224, size=3072, offset2=0 size2=0
    Found LVM2 metadata record at offset=49152, size=3072, offset2=0 size2=0
    Found LVM2 metadata record at offset=46080, size=3072, offset2=0 size2=0
    Found LVM2 metadata record at offset=43520, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=40960, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=38400, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=35840, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=33280, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=30720, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=28160, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=25600, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=23040, size=2560, offset2=0 size2=0
    Found LVM2 metadata record at offset=21504, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=19968, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=18432, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=16896, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=15360, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=13824, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=12288, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=10752, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=9216, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=7680, size=1536, offset2=0 size2=0
    Found LVM2 metadata record at offset=6144, size=1536, offset2=0 size2=0

lsblk:

$ lsblk -o name,mountpoint,label,size,uuid
NAME                           MOUNTPOINT LABEL                    SIZE UUID
sdb                                                                2.7T
└─sdb1                                    BACKUP                   2.7T a1b354f4-dea8-4b39-9520-aebcf6c9c72f
vg00-FAST_rimage_1-missing_0_0                                     1.8T
└─vg00-FAST_rimage_1                                               1.8T
sr0                                       Parted Magic 2016_01_06  494M 2016-01-13-12-09-50-00
vg00-FAST_rmeta_1-missing_0_0                                        4M
└─vg00-FAST_rmeta_1                                                  4M
loop0                                                               30G d5ce3be0-00fc-4f73-8ee4-989440310d23
└─docker-8:2-410226-pool                                            30G
sda                                                                1.8T
├─sda4                                                             1.8T UDq1dm-JH3Z-aCIX-3oQy-1ZDG-kNoi-BdNtkK
│ ├─vg00-FAST_rimage_0                                             1.8T
│ │ └─vg00-FAST                /mnt/FAST                           1.8T b78849e7-3399-444f-b98f-cba61d073961
│ └─vg00-FAST_rmeta_0                                                4M
│   └─vg00-FAST                /mnt/FAST                           1.8T b78849e7-3399-444f-b98f-cba61d073961
├─sda2                         /          rootfs                    20G ae45e705-67f4-4269-88e3-06d8e77a9e36
├─sda3                         [SWAP]                                8G 22a40da7-d0f9-4371-a585-8d49dc585708
└─sda1                                                            1007K
