Recovering SME Server with LVM drives

The purpose of this howto is to show you how to access your data when the SME Server is broken and cannot start normally. Several methods are given below; all of them assume the default RAID over LVM layout, so you may need to adapt them to your configuration.

If your problem is with GRUB, you should look at the dedicated wiki page instead.


Method with the official SME Server CDROM

This method assumes that your SME Server uses RAID over LVM; otherwise you will have to adapt the steps.

  • Boot the system from your official SME Server CDROM
  • At the boot prompt, type: linux rescue
  • Choose your language and keyboard layout
  • Answer no when asked whether to start the network interfaces
  • Choose continue when asked about mounting the installed system under /mnt/sysimage
  • Select ok
  • At the shell prompt, type:
chroot /mnt/sysimage
su -

Your LVM is now mounted and you can read your data in a chroot environment; from here you can save it to a USB disk.
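
For example, here is a minimal sketch of saving /home to a USB disk from the rescue shell, outside the chroot; the device name /dev/sdc1 is an assumption, so check dmesg for the real one:

mkdir /mnt/usb
mount /dev/sdc1 /mnt/usb            # assumed USB partition, verify with dmesg
cp -a /mnt/sysimage/home /mnt/usb/  # copy user data, preserving ownership and permissions
umount /mnt/usb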

Method with SystemRescueCd

Note:
This method uses SystemRescueCd, a Linux system rescue disk available as a bootable CD-ROM or USB stick for administering or repairing your system and data after a crash (see its download page). The goal is to get your logical volumes mounted under /mnt so that you can save them to a USB disk.


Boot the system from your SystemRescueCd CD-ROM or USB stick and choose your keyboard settings.

Then start the X server:

startx

Open a terminal and check whether your RAID arrays have been assembled:

cat /proc/mdstat

If all is well, the output will look like this:

# cat /proc/mdstat 
Personalities : [raid1] 
md99 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
      
md100 : active raid1 sdb2[1] sda2[0]
      262036096 blocks [2/2] [UU]
      
unused devices: <none>
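
If the arrays are not listed, you can try assembling them from their superblocks before continuing (arrays can also be assembled by hand, as shown in the next method):

mdadm --assemble --scan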

Once the arrays are running, activate the LVM volume group:

vgchange -ay

If LVM activates without error messages, you can mount the root logical volume:

mkdir /mnt/recover
mount /dev/main/root /mnt/recover


Tip:
If your logical volume is not named /dev/main/root, use the following command to list all of your logical volumes and adapt the mount command to your configuration.


lvdisplay
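
On a default SME Server install the output should list a root and a swap logical volume in the volume group main, roughly like this (sizes are examples only):

 --- Logical volume ---
 LV Name                /dev/main/root
 VG Name                main
 LV Status              available
[..]
 --- Logical volume ---
 LV Name                /dev/main/swap
 VG Name                main
[..]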

Your logical volume is now mounted and you can read your data under /mnt/recover; you can save it to a USB disk, with the file browser for example.

Method with another Linux system

Let’s examine a RAID member and see what we get:

mdadm -E /dev/sdb2

The “mdadm -E /dev/sdb2” command shows whether the partition is part of a RAID array, the RAID level, how many members it has, and so on:

user@user-desktop:/mnt$ mdadm -E /dev/sdb2
/dev/sdb2:
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 550e0406:c9ce50d2:825b32e4:4a9d3549
 Creation Time : Sat Sep  8 12:15:29 2007
    Raid Level : raid1
 Used Dev Size : 1991936 (1945.58 MiB 2039.74 MB)
    Array Size : 1991936 (1945.58 MiB 2039.74 MB)
  Raid Devices : 2
 Total Devices : 1
Preferred Minor : 2
   Update Time : Sat Sep  8 12:22:05 2007
         State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 1
 Spare Devices : 0
      Checksum : 22e3837f - correct
        Events : 0.991
     Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2
   0     0       8        2        0      active sync   /dev/sda2
   1     1       0        0        1      faulty removed
user@user-desktop:/mnt$

Since it is a RAID 1 array, we only need one member to start it.

You can use any md device to assemble the array, as long as it isn't already in use. To check which md devices are free, type:

cat /proc/mdstat

Having found a free md device, we will use “md8” for our example.

Now assemble and run the array:

mdadm -AR /dev/md8 /dev/sdb2

If you are running something other than RAID 1, you may need to include members from the other drives:

mdadm -AR /dev/md8 /dev/sdb2 /dev/sdd2 /dev/sde3

Now check whether the array is assembled:

cat /proc/mdstat

Check whether LVM detects the physical volume:

user@user-desktop:~$ pvs
 PV         VG   Fmt  Attr PSize PFree
 /dev/md8   main lvm2 a-   1.88G 32.00M
user@user-desktop:~$

Now activate the volume group (the transcript below shows it being deactivated with -a n and then activated with -a y):

user@user-desktop:~$ vgchange main -a n
 0 logical volume(s) in volume group "main" now active
user@user-desktop:~$ vgchange main -a y
 2 logical volume(s) in volume group "main" now active
user@user-desktop:~$

Now we should be able to mount the drive (create the mount point with mkdir /mnt/oldsmeserver first if it does not exist):

user@user-desktop:~$ mount /dev/main/root /mnt/oldsmeserver/
user@user-desktop:~$

Looking good, so let’s see where our files are:

user@user-desktop:~$ cd /mnt/oldsmeserver/
user@user-desktop:/mnt/oldsmeserver$ dir
aquota.group  boot     etc     lib         mnt      proc  selinux  sys  var
aquota.user   command  home    lost+found  opt      root  service  tmp
bin           dev      initrd  media       package  sbin  srv      usr
user@user-desktop:/mnt/oldsmeserver$

You have now successfully assembled your array and are able to recover your data.

Notes:

  • If the rescue system itself already uses LVM and has a volume group called "main", there may be a name clash; see the sketch after this list for one way around it.
  • If you installed SME Server <7.0, your volume group name will be different; to find it, type:
user@user-desktop:~$ vgdisplay
 --- Volume group ---
 VG Name               main   <-- the volume group name
 System ID
 Format                lvm2
[..]
user@user-desktop:~$
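
If the name does clash with a volume group already active on the rescue system, one possible way around it (a sketch; the UUID and the new name "oldsme" below are placeholders, take the real UUID from the vgs output) is to rename the recovered group by its UUID before activating it:

vgs -o vg_name,vg_uuid                                   # note the UUID of the recovered "main" group
vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 oldsme   # placeholder UUID
vgchange -a y oldsme
mount /dev/oldsme/root /mnt/oldsmeserver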

Method with an Ubuntu CDROM

Based on http://www.linuxjournal.com/article/8874?page=0,0

On Ubuntu (a non-LVM install), install mdadm and lvm2, then attach the server's drive.
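
On Ubuntu both tools come from the mdadm and lvm2 packages:

$ sudo apt-get install mdadm lvm2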

Find the arrays' UUIDs:

$ sudo mdadm --examine --scan  /dev/sdb1 /dev/sdb2
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324


Add the ARRAY lines to /etc/mdadm/mdadm.conf:

$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324
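
If the arrays do not then appear in /proc/mdstat by themselves, assemble them by scan so that the new ARRAY lines are picked up:

$ sudo mdadm --assemble --scan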


Check that LVM sees the physical volume:

$ sudo pvscan
 PV /dev/md2   VG main   lvm2 [148.94 GB / 64.00 MB free]
 Total: 1 [148.94 GB] / in use: 1 [148.94 GB] / in no VG: 0 [0   ]

Check the logical volumes:

$ sudo lvscan
 ACTIVE            '/dev/main/root' [146.94 GB] inherit
 ACTIVE            '/dev/main/swap' [1.94 GB] inherit

Mount, check, and copy to a safe location:

$ sudo mkdir /mnt/ga
$ sudo mount /dev/main/root /mnt/ga
$ sudo ls -la /mnt/ga/var/log/messages
lrwxrwxrwx 1 root root 32 2010-03-24 18:00 /mnt/ga/var/log/messages -> /var/log/messages.20*
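
Note that /var/log/messages above is a symlink to an absolute path; inside the mounted tree such links resolve against the running Ubuntu system rather than the old server, so list the rotated log files under the mount point directly:

$ sudo ls /mnt/ga/var/log/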