Set up NFS on top of LVM for VMware ESXi

This tutorial shows how to set up NFS storage for use with a VMware ESXi server.

The biggest advantage of this solution is that you can take a snapshot and back up the data flawlessly without shutting the VM down. The snapshot gives you a consistent image at a point in time.

First we deploy a standard CentOS installation and add one or two additional disks, depending on whether we want to use hardware or software RAID.

In case we do software RAID, we have to create the partition(s) and set their type to “fd”, which marks them as “Linux raid autodetect”; a rough fdisk walkthrough follows below. Write the changes and then run the mdadm command that builds the meta device. Please change the device paths accordingly or you will destroy your data! Be really careful about this step.
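The partitioning step with fdisk looks roughly like this; the device paths are the ones used in this tutorial, so double-check them against your own machine:

fdisk /dev/sdb
# n -> create a new primary partition spanning the whole disk
# t -> change the partition type, enter "fd" (Linux raid autodetect)
# w -> write the partition table and exit
# then repeat the same steps for /dev/sdc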

root@store-pod-05:~# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.

root@store-pod-05:~# cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdc1[1] sdb1[0]
      805306176 blocks [2/2] [UU]
      [>....................]  resync =  0.0% (362752/805306176) finish=147.9min speed=90688K/sec

unused devices: <none>
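The resync keeps running in the background, so you do not have to wait for it to finish. Depending on your distribution you may also want to record the array in /etc/mdadm.conf so it is assembled the same way on every boot; a minimal sketch:

mdadm --detail --scan >> /etc/mdadm.conf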

So now we have a RAID array ready to be used, whether it is the software or the hardware variety, and we can finally get to the LVM creation. First we put the device we are going to use into the variable DEVICE and then use it to create the LVM storage.

root@store:~# DEVICE=/dev/md0
root@store:~# DEVICE_NAME=md0
root@store:~# MOUNT_POINT="/storage/raid1"
root@store:~# pvcreate ${DEVICE}
 Physical volume "/dev/md0" successfully created
root@store:~# vgcreate vg_${DEVICE_NAME} ${DEVICE}
 Volume group "vg_md0" successfully created 
root@store:~# lvcreate -l95%FREE -n lv_${DEVICE_NAME} vg_${DEVICE_NAME}
 Logical volume "lv_md0" created

Volume Group details – as you can see, we left about 27 GB of free space in the volume group; that headroom is what the snapshots use during backups while the rest is allocated for data.

root@store:~# vgdisplay vg_md0
  --- Volume group ---
  VG Name               vg_md0
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               768.00 GB
  PE Size               4.00 MB
  Total PE              196607
  Alloc PE / Size       186776 / 729.59 GB
  Free  PE / Size       6912 / 27.00 GB
  VG UUID               I3cF8r-GviS-E2X5-WzRF-UDoi-3qQA-cIHrd6
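That free space is what makes the consistent backups from the beginning of this article possible. Once the volume is formatted and serving data, a snapshot-based backup could look roughly like this; the snapshot size and the /mnt/snap and /backup paths are just examples:

lvcreate -s -L 20G -n lv_md0_snap /dev/vg_md0/lv_md0   # snapshot draws on the free extents
mkdir -p /mnt/snap
mount -o ro /dev/vg_md0/lv_md0_snap /mnt/snap          # a frozen, consistent view of the data
tar czf /backup/storage-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg_md0/lv_md0_snap                    # drop the snapshot once the backup is done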

Format the created Logical Volume

/sbin/mkfs.ext4 -j /dev/vg_${DEVICE_NAME}/lv_${DEVICE_NAME}

Add the following to /etc/fstab (with the variables expanded to their actual values):

/dev/vg_${DEVICE_NAME}/lv_${DEVICE_NAME} ${MOUNT_POINT} ext4 defaults 1 2
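The mount point has to exist before the volume can be mounted, so create it first:

mkdir -p ${MOUNT_POINT}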

Here is a complete script:

#!/bin/bash
DEVICE="/dev/md0"
DEVICE_NAME="md0"
MOUNT_POINT="/storage/raid1"

echo "creating physical volume"
pvcreate ${DEVICE}
echo "creating volume group"
vgcreate vg_${DEVICE_NAME} ${DEVICE}
echo "creating logical volume"
lvcreate -l 95%FREE -n lv_${DEVICE_NAME} vg_${DEVICE_NAME}

echo "formatting the logical volume"
/sbin/mkfs.ext4 -j /dev/vg_${DEVICE_NAME}/lv_${DEVICE_NAME}

echo "creating the mount point"
mkdir -p ${MOUNT_POINT}

echo "adding the volume to /etc/fstab"
echo "/dev/vg_${DEVICE_NAME}/lv_${DEVICE_NAME} ${MOUNT_POINT} ext4 defaults 1 2" >> /etc/fstab

echo "all done, now mount using \"mount -a\""

Mount the Volume

mount -a
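A quick check that the volume is mounted where we expect it:

df -h /storage/raid1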

Add the shared directory to /etc/exports.

You can add multiple IPs or ranges in order to give more machines access to the exported NFS share. The no_root_squash option is mandatory for the export to be usable by VMware ESXi.

/storage/raid1 192.168.16.96/28(rw,no_root_squash) 192.168.16.80/28(rw,no_root_squash)
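The exportfs tool from nfs-utils (installed in the next step) lets you re-read /etc/exports later without restarting NFS and check what is being exported:

exportfs -ra   # re-export everything in /etc/exports
exportfs -v    # list the active exports and their options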

Let’s install the tools we need and enable and start the services.

yum install nfs-utils rpcbind
chkconfig nfs on
chkconfig rpcbind on
service rpcbind restart

Restart the NFS service and we are ready to go.

service nfs restart
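From another Linux box that falls within one of the exported ranges, you can verify that the share is visible; the server IP below is made up for illustration:

showmount -e 192.168.16.10   # should list /storage/raid1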

Now we can add the storage to the VMware ESXi host and use it as we like 🙂
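On newer ESXi versions this can also be done from the ESXi command line instead of the vSphere client; the host IP and datastore name below are just examples:

esxcli storage nfs add --host=192.168.16.10 --share=/storage/raid1 --volume-name=nfs-raid1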

 
