The source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki]
The purpose of this HOWTO is to add a new drive to an existing RAID5 array with LVM, which is the standard installation of SME Server. Please back up your data before starting this HOWTO, '''or you may lose it all'''.
 
==Growing an existing Array==
{{Note box|Due to a bug in kernel 2.6.18, the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID6 array.}}
When new disks are added, existing RAID partitions can be grown to use them. After the new disk has been partitioned, a RAID level 1/4/5 array may be grown. Assume that before growing, the machine contains four drives: a RAID5 array of 3 drives (3*10G) and 1 spare drive (10G). See this [[Raid#Hard_Drives_.E2.80.93_Raid|HowTo]] to understand the automatic RAID construction of SME Server.
This is how your array should look before the change.
    
  [root@smeraid5 ~]# cat /proc/mdstat
  sfdisk -f /dev/sde < sfdisk_sda.output
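The file redirected into sfdisk above has to be created first by dumping the partition table of an existing array member; a minimal sketch, assuming /dev/sda is an existing member and /dev/sde is the newly added disk:

```shell
# Dump the partition table of an existing RAID member to a file
sfdisk -d /dev/sda > sfdisk_sda.output
# Replicate that layout onto the new drive (destructive for /dev/sde!)
sfdisk -f /dev/sde < sfdisk_sda.output
```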
If you get errors from the sfdisk command, you can clean the drive with the dd command.
{{Warning box|Be aware that dd is nicknamed the data-destroyer; be certain of the partition you want zeroed.}}
 
  #dd if=/dev/zero of=/dev/sdX bs=512 count=1
  [root@smeraid5 ~]# mdadm --grow --raid-devices='''5''' /dev/md1
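For the grow to have anything to work with, the new disk's partitions must already have been added to the arrays as spares with mdadm --add; a hedged sketch of the usual sequence, where the partition names sde1/sde2 are assumptions based on the default SME Server layout:

```shell
# Add the new disk's partitions to the arrays as spares
# (partition names sde1/sde2 are assumptions from the default layout)
mdadm --add /dev/md1 /dev/sde1
mdadm --add /dev/md2 /dev/sde2
# Grow the raid1 array /dev/md1 across all five drives
mdadm --grow --raid-devices=5 /dev/md1
# Follow the rebuild/reshape progress
watch cat /proc/mdstat
```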
 
   
 
   
Here we use the option --raid-devices='''5''' because the raid1 array uses all of the drives. You can see how the array looks with:
    
  [root@smeraid5 ~]# mdadm --detail /dev/md1
===LVM: Growing the PV===
Once the construction is complete, we have to resize the LVM physical volume so that it uses the whole space:
    
 [root@smeraid5 ~]# pvresize /dev/md2
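Resizing the PV only hands the new space to LVM; to actually use it, the logical volume and the filesystem on it must be extended as well. A sketch, assuming SME Server's default volume group and logical volume names (main/root, an assumption here) and an ext3 filesystem:

```shell
# Extend the logical volume over all free extents in the volume group
lvresize -l +100%FREE /dev/main/root
# Grow the ext3 filesystem online to fill the enlarged LV
resize2fs /dev/main/root
```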
