Tag Archives: nfs

Auto mount an encrypted IMG file stored on NFS share

Yes, here we are again.
Now that I have a NAS at home, it’s about time to get rid of all these single USB disks connected to my Raspberry Pis.

I have a share called nfsshare available from my NAS (IP: 192.168.1.10). The full share path is 192.168.1.10:/volume1/nfsshare. My NAS handles NFS version 4.

So, here is what I’ve done to set up my Banana Pro with Armbian based on Debian 10 (buster).

Configure NFS client

First of all, we need to create the mount point where we’re going to access the NFS share (let’s use /nfs) and install the packages for NFS.

mkdir /nfs
apt-get install nfs-common

Once done, a minimal tuning of idmapd.conf is needed, if you have defined a domain/workgroup within your network. In this example I’m using mydomain.loc.

sed -i 's/#Domain = local.domain.edu/Domain = mydomain.loc/' /etc/idmapd.conf

Update our /etc/fstab file, to make sure it mounts at boot, and test if all works as expected:

192.168.1.10:/volume1/nfsshare /nfs nfs4 auto,_netdev,nofail,noatime,nolock 0 0

I have used _netdev to make sure the system understands that the network needs to be up before trying to mount, and nofail so that, if something goes wrong, the boot continues anyway. This is very handy on systems without a proper monitor, where you rely on ssh connections.

Now, with a simple mount /nfs command, you should be able to get the share mounted. df -Th or mount are the commands I would use to verify.

Cool, we now have the share mounted. Issue a quick shutdown -r now to see if everything works as expected. Once your device is back online, ssh into it and check again with df -Th or mount. Hopefully, you will see your NFS share mounted on /nfs.
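For example, a quick check after the reboot could look like this (findmnt comes with util-linux and is already on a standard Armbian/Debian install):

df -Th /nfs
findmnt /nfs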

Create and configure your Encrypted “space”

I have already discussed encrypted devices in another post. This will be a revised version of that post, without custom scripts, simply using what Debian offers out of the box.

Create an empty IMG file to host our encrypted space

I have decided to create 500GB of encrypted space to store my data. To do so, I did the following:

  • install the required software for encryption
  • create a sparse file (on my /nfs share)
  • encrypt it
  • format it (ext4)
  • set up the auto mount

apt-get install cryptsetup

dd of=/nfs/file_container.img bs=1 count=0 seek=500G
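# the dd above creates a 500GB sparse file: count=0 writes no data, seek=500G just sets the file size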

cryptsetup -y luksFormat /nfs/file_container.img
cryptsetup luksOpen /nfs/file_container.img cryptcontainer

mkfs.ext4 -L cryptarchive /dev/mapper/cryptcontainer

During the above steps, you will be asked to set a passphrase and then to use it to open the IMG file. Pretty straightforward, isn’t it?

Cool. We now have a 500GB sparse file called file_container.img stored on our share /nfs, ready to be mounted somewhere and utilised.

To make sure we can mount it at boot, we need a secret key that we are going to use to decrypt the IMG file without typing any passphrase.

Let’s create this key, stored under /root (in this example). You can store it wherever you want, as long as it’s accessible before the decryption starts. Another good place is /boot.

dd if=/dev/urandom of=/root/keyfile bs=1024 count=4
chmod 0400 /root/keyfile

Now we need to add this key to the IMG file:

cryptsetup luksAddKey /nfs/file_container.img /root/keyfile

The next step is to instruct /etc/crypttab with the details about our encrypted file and its key.
Just add the following line at the end of the /etc/crypttab file:

cryptcontainer /nfs/file_container.img /root/keyfile luks
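Before relying on the boot process, you can quickly check that the keyfile actually opens the container. A small sketch, assuming the mapping created earlier is still open under the name cryptcontainer and nothing is mounted yet:

cryptsetup luksClose cryptcontainer
cryptsetup luksOpen --key-file /root/keyfile /nfs/file_container.img cryptcontainer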

Now, there is a problem. Your OS needs to know that the IMG file isn’t stored locally and depends on the NFS share. If the OS tries to decrypt the IMG file before mounting the NFS share, it will fail, and you might get stuck in a never-ending boot, which sometimes forces you to dig out your mini monitor for troubleshooting, a spare keyboard, and to assume anti-human positions to reach your small Pi etc etc… basically, a nightmare!

So, here is a trick that seems to work.
In Debian, there is a file called /etc/default/cryptdisks.
Within this file, we are going to make sure that CRYPTDISKS_ENABLE is set to yes and CRYPTDISKS_MOUNT is set to our NFS mount (/nfs). This way, the service that handles the encryption/decryption will wait for /nfs to be mounted before starting.
IMPORTANT: this must be a mount point defined in /etc/fstab

Here is the content of my /etc/default/cryptdisks file:

# Run cryptdisks initscripts at startup? Default is Yes.
CRYPTDISKS_ENABLE=Yes

# Mountpoints to mount, before cryptsetup is invoked at initscripts. Takes
# mountpoins which are configured in /etc/fstab as arguments. Separate
# mountpoints by space.
# This is useful for keyfiles on removable media. Default is unset.
CRYPTDISKS_MOUNT="/nfs"

# Default check script. Takes effect, if the 'check' option is set in crypttab
# without a value.
CRYPTDISKS_CHECK=blkid

Amazing! Now, just the last bit: update /etc/fstab with a reference to our device. We have now set up everything needed to open the encrypted IMG file and associate it with a mountable device, but we haven’t mounted it yet!

Create the mount point:

mkdir /cryptoarchive

Update /etc/fstab, appending this line:

/dev/mapper/cryptcontainer /cryptoarchive ext4 defaults,nofail 0 2

Again, nofail is there, as for the NFS share, to avoid the boot process getting stuck in case of errors and to allow you to ssh into the device and troubleshoot.

Now we’re ready to try mount /cryptoarchive, check with df -Th and mount, and also run shutdown -r now, to verify that the NFS share gets mounted at boot and that the encrypted IMG disk ends up mounted and available too.

Happy playing! 😉

NFS – quick win

This is a very basic step-by-step guide to create a CentOS 7 NFS server that shares a folder /nfsshare over the 192.168.4.0/24 network. This share will be owned by apache and mountable on a CentOS web server.

Here are the instructions on how to create the server and how to set up the client.

NFS Server

Add this line in IPTABLES:

-A INPUT -s 192.168.4.0/24 -m comment --comment "NFS Network" -j ACCEPT

 

Run the following to create a share folder and setup NFS:

mkdir /nfsshare
yum install nfs-utils -y
systemctl enable nfs-server
echo "/nfsshare 192.168.4.0/24(rw,sync,no_root_squash)" >> /etc/exports
sed -i 's/#Domain = local.domain.edu/Domain = nfsdomain.loc/' /etc/idmapd.conf
systemctl start rpcbind
systemctl start nfs-server

# Create apache user/group
# (NFS clients will read/write using this user so we want to have 
# the same set also on the server for an easier ownership management)
groupadd -g 48 apache
useradd -g 48 -u 48 apache
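To confirm the export is actually live, something like this should do (both commands come with nfs-utils):

exportfs -v
showmount -e localhost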

 

NFS Client

e.g. assuming that the NFS server’s IP is 192.168.4.1

Add this line in IPTABLES:

-A INPUT -s 192.168.4.0/24 -m comment --comment "NFS Network" -j ACCEPT

Then, run this:

yum install nfs-utils rpcbind

sed -i 's/#Domain = local.domain.edu/Domain = nfsdomain.loc/' /etc/idmapd.conf
echo "192.168.4.1 NFS01" >> /etc/hosts
mkdir -p /nfsshare
mount -t nfs4 -o noatime,proto=tcp,actimeo=3,hard,intr,acl,_netdev NFS01:/nfsshare /nfsshare
tail -1 /proc/mounts >> /etc/fstab

NOTE: we are hard-mapping the NFS server’s IP in /etc/hosts to make it easier to recognise the mount (in case of multiple mounts).

If you are facing the issue where you mount /nfsshare and the owner of the files and folders shows up as nobody:nobody, it could be related to rpc.idmapd and DNS. To fix this, try updating /etc/hosts on the client with <hostname>.nfsdomain.loc
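For example (the hostname below is illustrative), the client’s /etc/hosts entry for the server could become:

192.168.4.1   NFS01.nfsdomain.loc   NFS01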

# ============= #
# Ubuntu Notes  #
# ============= #

!! Same users on Server and Client - for the exported partition !!

SERVER
apt-get install nfs-kernel-server

vim /etc/exports
/var/www/vhosts		192.168.3.*(rw,sync,no_root_squash,no_subtree_check)

service nfs-kernel-server restart
exportfs -a


CLIENT
apt-get install nfs-common
mount -t nfs4 192.168.3.1:/var/www/vhosts /var/www/vhosts/

!! CHECK the output of cat /proc/mounts to get the correct rsize/wsize. If the firewall/network can handle it, keep these values as big as possible:

e.g.

noatime,proto=tcp,actimeo=3,hard,intr,acl,_netdev,rsize=1048576,wsize=1048576
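For instance, to see the values the kernel actually negotiated for this mount:

grep /var/www/vhosts /proc/mounts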

vim /etc/fstab
192.168.3.1:/var/www/vhosts   /var/www/vhosts nfs4    noatime,actimeo=3,hard,intr,acl,_netdev,rsize=32768,wsize=32768 0 0

 

ESXi host on D945GCLF2 Intel Atom mainboard, with NFS storage attached running on RAID1

I’ve used this procedure to create an ESXi host on a D945GCLF2 Intel Atom mainboard, with RAID1 storage built in, attached to itself 😉

On that, I have at the moment 3 VMs running (a minimal Debian with NFS, a FreePBX machine, and a Debian server with a little LAMP stack, SAMBA and a web-based torrent client)… and more resources still available.

How? 🙂

“Simply”, I needed:

HARDWARE

  • D945GCLF2 Intel Atom mainboard
  • 2GB of RAM DDR2 (667 or 533) in a single module
  • IDEtoSD adapter
  • 4GB SD card
  • 2 SATA Hard Drives – same capacity (I’ve used 2×2.5″ 160GB – It’s all installed in a little case)
  • spare SATA CD-ROM drive and an empty CD to burn the ESXi ISO (I had issues using a USB stick and utilities like unetbootin or similar… so I ended up with the old-fashioned but working method)

SOFTWARE

  • ESXi 4.1 ISO – I couldn’t find a way to patch more recent ISOs. The patch is required to add support for the integrated NIC. Also, 4.1 has all the functions required for this project.
  • Here are the drivers and the script to patch the ISO.
  • Debian net-install ISO for the NFS VM.
  • vSphere client installed on your machine, to be able to connect to the host, copy over the Debian ISO and manage the host.

Procedure

  1. Patch the ISO and burn it on your blank CD.
  2. Connect the IDEtoSD adapter to the single IDE channel, with the SD card inserted. This will be our “main IDE hard drive”.
  3. Make sure Hyper-Threading Technology is enabled in the BIOS.
  4. Temporarily connect the SATA CD-ROM drive to one of the two SATA channels, with the ESXi CD in, and complete the installation on the “4GB IDE hard drive” present on the system.
  5. Turn off the host, remove the SATA CD-ROM and connect the two hard drives to the SATA connectors.
  6. Boot up, and create a local datastore with the remaining space on the SD (if this hasn’t been created automatically already) and call it “SD_local”. Here we will store our NFS machine, which will provide NFS storage to the host.
  7. Create the RDM devices for our minimal Debian NFS machine following the instructions below (make sure to do a minimal/basic installation, plus ssh, initramfs-tools, mdadm, nfs-kernel-server, nfs-common, portmap. No graphical interface, no extra packages!).
  8. Create the Debian NFS VM, share the storage using NFS, attach it to the host, and you are ready to go! 😉 The host will be ready to run VMs, with their virtual hard drives stored on redundant storage.

The aim of this is to allow the Debian NFS VM, which will be stored on the local datastore called “SD_local”, to directly access the physical SATA hard drives, build a software RAID1 with them and, over NFS, share that space back to the ESXi host to store VMs/ISOs etc.
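For reference, a minimal sketch of what the VM’s /etc/exports could look like once the array is built and mounted (the /storage path and the ESXi host IP are assumptions, adjust them to your setup):

# /etc/exports inside the Debian NFS VM - export the RAID1 space to the ESXi host
/storage   192.168.0.10(rw,sync,no_root_squash,no_subtree_check)

After an exportfs -ra, the share can be added from the vSphere client as an NFS datastore.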

Of course, this Debian NFS VM, and in particular the SD card, is the single point of failure of this project. But, theoretically, a dd of the SD once everything is configured can be a good “backup” in case of problems (and a spare 4GB SD at home as well 🙂 )
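For example, with the SD card plugged into another Linux box (the device name /dev/sdX is a placeholder, double-check it with lsblk before running dd):

# clone the whole SD card to an image file
dd if=/dev/sdX of=esxi_sd_backup.img bs=4M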

ESXi – How to create a Physical RDM and attach it to a VM

1. Determine the VML ID for the SATA disks

# ls /dev/disks/ -l
-rw------- 1 root root 4041211904 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_
-rw------- 1 root root 939524096 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:1
-rw------- 1 root root 3097493504 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:2
-rw------- 1 root root 4177920 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:4
-rw------- 1 root root 262127616 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:5
-rw------- 1 root root 262127616 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:6
-rw------- 1 root root 115326976 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:7
-rw------- 1 root root 299876352 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:8
-rw------- 1 root root 160041885696 May 19 20:18 t10.ATA_____ST9160821AS_____________________________5MA57R13____________
-rw------- 1 root root 160041885696 May 19 20:18 t10.ATA_____ST9160821AS_________________________________________5MA8PT4Q
lrwxrwxrwx 1 root root 72 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:1 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:1
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:2 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:2
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:4 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:4
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:5 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:5
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:6 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:6
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:7 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:7
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:8 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:8
lrwxrwxrwx 1 root root 72 May 19 20:18 vml.0100000000202020202020202020202020354d413850543451535439313630 -> t10.ATA_____ST9160821AS_________________________________________5MA5SS2A
lrwxrwxrwx 1 root root 72 May 19 20:18 vml.0100000000354d413537523133202020202020202020202020535439313630 -> t10.ATA_____ST9160821AS_____________________________5MA43W02____________

2. Find the two hard drives

They are the two vml.* entries at the bottom of the listing that point to the t10.ATA_____ST9160821AS devices; the trailing serial number helps to identify which one is which.

3. Check the volumes available

# ls -l /vmfs/volumes
drwxr-xr-x 1 root root    8 Jan  1  1970 ed0aa47f-f157c36d-0295-b6663f811221
drwxr-xr-x 1 root root    8 Jan  1  1970 e2f7c177-db75edcf-defa-90346375bdf2
drwxr-xr-x 1 root root    8 Jan  1  1970 2da668ef-40e5d96b-90bf-855ddb9c5547
drwxr-xr-t 1 root root 1.4k May 19 21:29 4fb7f163-a1959434-4766-001cc07e74e5
lrwxr-xr-x 1 root root   35 May 19 23:16 SD_local -> 4fb7f163-a1959434-4766-001cc07e74e5
lrwxr-xr-x 1 root root   35 May 19 23:16 Hypervisor3 -> 2da668ef-40e5d96b-90bf-855ddb9c5547
lrwxr-xr-x 1 root root   35 May 19 23:16 Hypervisor2 -> ed0aa47f-f157c36d-0295-b6663f811221
lrwxr-xr-x 1 root root   35 May 19 23:16 Hypervisor1 -> e2f7c177-db75edcf-defa-90346375bdf2

4. Use one of the available volumes to create a subfolder that will contain the VMDK information for the RDM disks (using SD_local)

# cd /vmfs/volumes/SD_local/
/vmfs/volumes/4fb7f163-a1959434-4766-001cc07e74e5 # mkdir RDMs
/vmfs/volumes/4fb7f163-a1959434-4766-001cc07e74e5 # cd RDMs/

5. Create the devices

vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202020202020202020354d413850543451535439313630 rmd_sata1.vmdk -a lsilogic
vmkfstools -z /vmfs/devices/disks/vml.0100000000354d413537523133202020202020202020202020535439313630 rmd_sata2.vmdk -a lsilogic

6. New RDM devices created and ready to be added to the VM

  • Edit the properties of an existing VM and click Add…
  • Select Use an existing virtual disk and click Next >
  • Click Browse. You now need to navigate to your local datastore ([SD_local]/RDMs) and select the VMDKs that we created
  • Select Permanent / Persistent > Next >
  • You should now see your new Hard Disks in your VM and vSphere will correctly identify them as Mapped Raw LUN.

7. Run your Linux VM and, on each of the two SATA disks, create a partition of type Linux raid autodetect (type FD), as sketched below
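A minimal sketch with fdisk (the device names inside the VM are assumptions, check with fdisk -l first):

# on each disk: n (new partition), t (set type) -> fd, w (write changes)
fdisk /dev/sda
fdisk /dev/sdb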

8. Create the mdX device

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

9. Create the filesystem and add it to /etc/fstab
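A minimal sketch (the label and the /storage mount point are just examples):

mkfs.ext4 -L nfsraid /dev/md0
mkdir /storage
echo "/dev/md0  /storage  ext4  defaults,noatime  0  2" >> /etc/fstab
mount /storage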

 

Sources
http://www.vm-help.com/esx40i/SATA_RDMs.php
http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/