Category Archives: Virtualisation

Windows 10 – VMWare Disk cleanup and shrink

Simple batch script to run as Administrator in order to clean up the disk, defrag and shrink it.

Please note that the shrink works only if VMware Tools are installed on the guest VM.

Save this content in a .bat file and… enjoy it!
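Something along these lines should do the job (a minimal sketch: the cleanup profile number and the VMware Tools path are assumptions, adjust them to your setup):

    @echo off
    REM Run this as Administrator

    REM 1. Disk cleanup (run "cleanmgr /sageset:1" once beforehand to configure profile 1)
    cleanmgr /sagerun:1

    REM 2. Defragment the system drive
    defrag C: /U /V

    REM 3. Shrink the virtual disk (works only with VMware Tools installed)
    "C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" disk shrink "C:\"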

Kernel space – User space – Containers – Virtualisation

How many times have I heard "well, a container is like a super light-weight virtual machine". And yes, I admit it: I was saying the same thing.

But I wasn't happy with that answer, so I did some research, and I think I now have a better understanding. I also feel the pain of the friends to whom I was simplistically (and wrongly) saying that – public apologies 😛 🙂


So… let’s start…


Concept 1: Virtual memory.

Virtual memory is the collective memory used by processes (RAM, disk swap, etc).

Of this virtual memory, there is generally a separation between two types:

  • kernel space: reserved for the kernel and, generally, drivers
  • user space: for the applications, including libraries

This separation serves to provide memory protection and hardware protection from malicious or errant software behavior.

NOTE1: user space is not the same thing as a namespace.


NOTE2: FUSE is not really related to this topic, but it could confuse someone, so just to clarify: FUSE (Filesystem in Userspace) is a software interface for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running the file system code in user space, while the FUSE module provides only a "bridge" to the actual kernel interfaces.

Modern kernels have cgroups and namespace capabilities.

  • cgroups restrict what you can USE -> CPU, memory, storage, network, devices, etc. They also allow you to 'freeze' processes.
  • namespaces restrict what you can SEE -> PID, mnt, UID/GID, etc…

Container runtimes (like LXC, Docker, etc…) use cgroups and namespaces to create separate, isolated user-space entities called 'containers'.
Containers have basically no overhead because they issue the same system calls to the host kernel => no need for emulation or a virtual machine.
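A quick way to see both concepts in action (the package names and limits below are just examples):

    # namespaces: a shell in its own PID namespace only SEES its own processes
    sudo unshare --fork --pid --mount-proc bash
    ps aux

    # cgroups: limit what a container can USE (memory/CPU), e.g. with Docker
    docker run -it --memory=256m --cpus=0.5 ubuntu bash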

They use the same kernel as the host (this is a key difference from virtualisation). So, currently, you cannot run Windows containers on a Linux host. But you can still run different Linux distributions, as they all share the same kernel.

Virtualisation: fully isolated OS, running its own kernel.

  • Fully virtualised: (e.g. VMware, VirtualBox, ESXi…). The OS in the VM is not aware that it is a VM. The hypervisor emulates the hardware platform for the guest OS and then translates the hardware access requests to the physical hardware. The hypervisor provides the drivers to the guest OS.
    => higher overhead because of hardware virtualisation BUT best isolation and security
  • Para-virtualised: (Xen, KVM) the OS in the VM knows that it is virtualised. Drivers send instructions directly to the host's hardware, via the hypervisor. The hardware is not virtualised BUT the OS runs in isolation.
    => better performance and the ability to use recent hardware drivers directly BUT the guest OS needs to be modified to use paravirtualised devices

NOTE: Emulation is not platform virtualisation (e.g. QEMU).
With emulation you can emulate different architectures (e.g. ARM/RISC…) on a host that has a different instruction set (e.g. i386). Performance is clearly not ideal.

Main sources:

Chef – notes

Learning site:

As with any other configuration management tool, the main goal is to automate and keep consistency across the infrastructure:

  • create files if missing
  • ignore file/task if already up to date
  • replace with original version if modified

Typically, Chef is comprised of three parts:

  1. your workstation – where you create your recipes/cookbooks
  2. a Chef server – the host of the active version of recipes/cookbooks (the central repository), which also manages the nodes
  3. nodes – the machines managed by the Chef server. FYI, every node has the Chef client installed.


picture source

Generally, you develop your cookbooks on your workstation and push them onto the Chef Server. The node(s) communicate with the Chef Server via chef-client, pulling and executing the cookbooks.

There is no communication between the workstation and the node EXCEPT for the first initial bootstrap task. This is the only time when the workstation connects directly to the node and provides the details required to communicate with the Chef Server (the Chef Server's URL and the validation key). It also installs chef on the node and runs chef-client for the first time. During this run the node gets registered on the Chef Server and receives a unique client.pem key, which chef-client will use to authenticate afterwards.
The information gets stored in a PostgreSQL DB, and some indexing also happens in Apache Solr (Elasticsearch in a Chef Server cluster environment).
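The bootstrap itself is a single knife command run from the workstation, something like this (IP, SSH user and node name are examples):

    knife bootstrap 10.0.0.12 -x myuser -P mypassword --sudo -N node01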

Further explanation here:

Some terms:

  • resource: a part of the system in a desirable state (e.g. package installed, file created…);
  • recipe: it contains declarations of resources, basically the things to do;
  • cookbook: a collection of recipes, templates, attributes, etc… basically the final collection of everything.

Important to remember:

  • there are default actions. If not specified, the default action applies (e.g. :create for a file),
  • in the recipe you define WHAT but not HOW. The "how" is managed by Chef itself,
  • the order is important! For example, make sure to define the installation of a package BEFORE setting its service state to enabled. ONLY attributes can be listed in any order.


Test images: and
=> you can get these boxes using Vagrant

Example, how to get CentOS7 for Virtualbox and start it/connect/remove:
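With Vagrant installed, something like this works (the centos/7 box name is the one published on Vagrant Cloud; adjust it if you use another box):

    vagrant init centos/7
    vagrant up --provider virtualbox
    vagrant ssh
    vagrant halt
    vagrant destroy
    vagrant box remove centos/7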


Software links and info:

Chef DK: it provides tools (chef, knife, berks…) to manage your servers remotely from your workstation.
Download link here.

To communicate with the Chef Server, your workstation needs to have .chef/knife.rb file configured as well:
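A minimal knife.rb looks something like this (user, organisation and server URL are placeholders):

    current_dir = File.dirname(__FILE__)
    log_level        :info
    log_location     STDOUT
    node_name        "admin"
    client_key       "#{current_dir}/admin.pem"
    chef_server_url  "https://chef-server.example.com/organizations/myorg"
    cookbook_path    ["#{current_dir}/../cookbooks"]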

Make sure to also have admin.pem (the RSA key) in the same .chef directory.

To fetch and verify the SSL certificate from the Chef server:
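For example:

    knife ssl fetch
    knife ssl check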


Chef DK also provides tools to allow you to configure a machine directly, but it is just for testing purposes. Syntax example:
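For example, chef-apply can run a single recipe locally, without any Chef Server (file name and content are just an illustration):

    # hello.rb
    file '/tmp/motd' do
      content 'hello world'
    end

    # apply it directly on the machine
    chef-apply hello.rb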



Chef Server: Download here.
To remember: the Chef Server needs RSA keys (command-line switch --filename) to communicate. We have the user's key and the organisation key (the chef-validator key).
There are different types of installation. Here you can find more information. And here more detail about the new HA version.

Chef Server can have a web interface, if you also install the Chef Management Console:
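On the Chef Server itself, the installation should be along these lines:

    chef-server-ctl install chef-manage
    chef-server-ctl reconfigure
    chef-manage-ctl reconfigure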


Alternatively you can use Hosted Chef service.

Chef Client:
(From official docs) The chef-client accesses the Chef server from the node on which it’s installed to get configuration data, performs searches of historical chef-client run data, and then pulls down the necessary configuration data. After the chef-client run is finished, the chef-client uploads updated run data to the Chef server.


Handy commands:
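A few of the knife/chef-client commands I use most often (cookbook and node names are examples):

    knife node list                      # nodes registered on the Chef Server
    knife client list                    # API clients
    knife cookbook list                  # cookbooks uploaded on the server
    knife cookbook upload mycookbook     # push a cookbook from the workstation
    knife node show node01               # details of a node
    knife search node 'platform:ubuntu'  # search the indexed data
    sudo chef-client                     # on the node: pull and apply the run list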


Practical examples:

Create file/directory
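A minimal example (paths and content are illustrative):

    directory '/var/www/mysite' do
      owner 'root'
      group 'root'
      mode '0755'
      recursive true
      action :create        # :create is also the default action
    end

    file '/var/www/mysite/info.txt' do
      content 'Managed by Chef'
      mode '0644'
    end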

Package management
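For example (package/service names depend on your platform):

    package 'httpd' do
      action :install       # default action, could be omitted
    end

    service 'httpd' do
      action [:enable, :start]
    end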

Use of template

Use variables in the template
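Both cases (plain template and template with variables) look something like this, assuming a motd.erb file in the cookbook's templates directory; names and values are just examples:

    # recipe
    template '/etc/motd' do
      source 'motd.erb'
      variables(
        company:     'Example Ltd',
        admin_email: 'admin@example.com'
      )
    end

    # templates/default/motd.erb
    Welcome to <%= @company %>
    Problems? Contact <%= @admin_email %>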


General notes

Chef Supermarket

link here – Community cookbook repository.
The best way to get a cookbook from the Chef Supermarket is the Berkshelf command (berks), as it resolves all the dependencies; knife supermarket does NOT resolve dependencies.

Add the cookbooks in Berksfile
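For example (cookbook names are illustrative):

    source 'https://supermarket.chef.io'

    cookbook 'chef-client'
    cookbook 'apache2'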

And run
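From the directory containing the Berksfile:

    berks install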

This will download the cookbooks and dependencies in ~/.berkshelf/cookbooks

Then to upload ALL to Chef Server, best way:
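Again from the directory containing the Berksfile (add --no-ssl-verify if your Chef Server uses a self-signed certificate):

    berks upload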



Roles define the function of a node and are stored as objects on the Chef server.
You can create them with knife role create OR (better) with knife role from file <role/myrole.json>. Using JSON is recommended as it can be version controlled.

Examples of web.json role:
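Something like this (the attributes and run list are just an example):

    {
      "name": "web",
      "description": "Web server role",
      "json_class": "Chef::Role",
      "chef_type": "role",
      "default_attributes": {
        "apache": { "listen_ports": ["80"] }
      },
      "run_list": [
        "recipe[apache2]"
      ]
    }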


To apply the changes you need to run chef-client on the node.

You can also verify:
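For example:

    knife role show web
    knife node show node01 -r     # -r shows only the run list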



All the following is extracted from the official documentation.

Test Kitchen helps speed up the development process by applying your infrastructure code on test environments from your workstation, before you apply your work in production.

Test Kitchen runs your infrastructure code in an isolated environment that resembles your production environment. With Test Kitchen, you continue to write your Chef code from your workstation, but instead of uploading your code to the Chef server and applying it to a node, Test Kitchen applies your code to a temporary environment, such as a virtual machine on your workstation or a cloud or container instance.

When you use the chef generate cookbook command to create a cookbook, Chef creates a file named .kitchen.yml in the root directory of your cookbook. .kitchen.yml defines what’s needed to run Test Kitchen, including which virtualisation provider to use, how to run Chef, and what platforms to run your code on.
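A typical generated .kitchen.yml looks roughly like this (cookbook name and platforms are placeholders):

    driver:
      name: vagrant

    provisioner:
      name: chef_zero

    platforms:
      - name: ubuntu-16.04
      - name: centos-7

    suites:
      - name: default
        run_list:
          - recipe[my_cookbook::default]
        attributes: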

Kitchen steps:
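The usual cycle is:

    kitchen create     # spin up the test instance(s)
    kitchen converge   # apply the cookbook with chef-client
    kitchen verify     # run the tests
    kitchen destroy    # tear everything down

    kitchen test       # all of the above in one shot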


Handy commands:
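A couple of extra ones I find useful (the instance name is an example):

    kitchen list                                 # status of all instances
    kitchen login default-centos-7               # SSH into an instance
    kitchen exec default-centos-7 -c 'uname -a'  # run a single command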


Create and mount SWAP file
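Something like this (2GB and the /swapfile path are just examples):

    sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # make it persistent across reboots
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab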





Set swappiness

A swappiness setting of zero means that the disk will be avoided unless absolutely necessary (you run out of memory), while a swappiness setting of 100 means that programs will be swapped to disk almost instantly.


Set via command line
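For example, to set it to 10 (the value is up to you):

    sudo sysctl vm.swappiness=10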

Or simply modify /etc/sysctl.conf adding this line:
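Using the same example value as above:

    vm.swappiness = 10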


Virtualbox mount host’s shares to specific guest’s paths

Settings > Shared Folders > Add New Shared Folder
Folder path: <insert_here_hosts_path>
Folder name: <name_of_the_share_on_guest>

Select “Make Permanent”.
Leave unselected “Read-only” and “Auto-mount”.

Make sure the VirtualBox Guest Additions are properly installed in the guest machine.

After that, edit /etc/fstab and add the following:
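The line should look something like this (share name, mount point and uid/gid as per the example below):

    Downloads  /home/user/Downloads  vboxsf  uid=1000,gid=1000  0  0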

This is an example for a share called "Downloads".
This share will be mounted under /home/user/Downloads, forcing uid/gid to 1000, which will be the one related to myuser.

Create a bootable Sierra ISO for VMware

Open the Terminal app and run the following:
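The procedure is the usual createinstallmedia/hdiutil one, roughly along these lines (sizes, paths and volume names are indicative, double-check them before running):

    hdiutil create -o /tmp/Sierra -size 7316m -layout SPUD -fs HFS+J
    hdiutil attach /tmp/Sierra.dmg -noverify -mountpoint /Volumes/install_build
    sudo /Applications/Install\ macOS\ Sierra.app/Contents/Resources/createinstallmedia \
         --volume /Volumes/install_build \
         --applicationpath /Applications/Install\ macOS\ Sierra.app
    hdiutil detach /Volumes/Install\ macOS\ Sierra
    hdiutil convert /tmp/Sierra.dmg -format UDTO -o ~/Desktop/Sierra.iso
    mv ~/Desktop/Sierra.iso.cdr ~/Desktop/Sierra.iso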

NOTE: To have VMware Workstation able to run Mac OS X guests, you need to patch your version using this tool. If the file is no longer available, you can get a copy here.

If you want to force specific hardware parameters (like serial number etc), you need to add the following in your vmx file:
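The exact values depend on what you want to spoof, but the entries are typically along these lines (model, board id and serial number below are placeholders, not valid values):

    smbios.reflectHost = "FALSE"
    hw.model = "MacBookPro13,3"
    board-id = "Mac-XXXXXXXXXXXXXXXX"
    serialNumber = "XXXXXXXXXXXX"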

To make sure some software like Google Music will recognise your VM, you also need to apply this change:

A) Remove these lines in the VMX file:
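These are the auto-generated MAC address entries, typically:

    ethernet0.addressType = "generated"
    ethernet0.generatedAddress = "00:0c:29:xx:xx:xx"
    ethernet0.generatedAddressOffset = "0"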

B) Add the following instead:
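For example:

    ethernet0.addressType = "static"
    ethernet0.address = "xx:xx:xx:xx:xx:xx"
    ethernet0.checkMACAddress = "false"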

Replace "xx:xx:xx:xx:xx:xx" with a real Apple MAC address, choosing one from those listed here.



Puppet – Let’s start

Puppet is quite a powerful configuration management tool which allows you to automatically configure hosts and keep their configurations consistent.

I did some tests using 3 VMs:

  • puppetmaster (server)
  • puppetagent01 (client)
  • puppetagent02 (client)

Of course, most of the work is done on the puppetmaster server. On the other two machines you will simply see the outcome of the configurations that you're going to set on puppetmaster.

Important: all the machines have to be able to communicate with each other. Please make sure DNS is working, or set local names/IPs in the /etc/hosts file, and do some ping tests before proceeding.

Client setup

On each puppetagent machine, just install the package puppet
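On Debian/Ubuntu it is simply:

    sudo apt-get install puppet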

By default, the client will look for a host called “puppet” on the network.
If your DNS/hosts file doesn’t have this entry, and it can’t be resolved, you can manually set the name of the puppetmaster in /etc/puppet/puppet.conf file, adding this line under [main] section:
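With the hostname used in this example:

    server = puppetmaster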

Now, no more configuration is required from the client side. Just edit /etc/default/puppet to start at boot time and start the service.
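On Debian-based systems that means something like:

    # /etc/default/puppet
    START=yes

    # then start the agent
    sudo service puppet start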


Starting the service will automatically make a request to the server to be added under its control.

If you want to do some tests, you can use the following command to run Puppet only once. This will also force the polling update, which by default runs every 30 minutes.
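For example:

    sudo puppet agent --test
    # or, more verbosely:
    sudo puppet agent --onetime --no-daemonize --verbose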

You can repeat all these steps on the second client machine.

Server setup

Check if the service is running, otherwise, start it up.
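For example:

    sudo service puppetmaster status
    sudo service puppetmaster start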

Sign clients’ certificates on the server side

Puppet uses this client/server certificate signing system to add/remove hosts from being managed by the server.

To see who has requested to be “controlled” use this command:
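With the puppet cert subcommand (older versions use puppetca instead):

    sudo puppet cert list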

This will show all the hosts waiting to be added under puppetmaster server.
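To sign one of them (use the exact name shown by the list command):

    sudo puppet cert sign puppetagent01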

This command will add the host.

Puppetmaster configuration files

The main configuration file is /etc/puppet/manifests/site.pp

Inside the manifests folder, I've created a subfolder called classes with extra definitions (the content of these files is shown later in this post).
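A minimal sketch of what site.pp can look like (the class names here are just illustrative, matching the setup described below):

    import "classes/*.pp"

    node 'puppetagent01' {
      include packages
      include vim
      include ntp
      include apache
    }

    node 'puppetagent02' {
      include packages
      include vim
      include ntp
    }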


Here the content of the single files .pp in classes folder:
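For example, a packages.pp and an apache.pp along these lines (again, just a sketch of the approach, with the file sources pointing at the Puppet file server configured below):

    # classes/packages.pp
    class packages {
      package { ['screen', 'dselect']:
        ensure => installed,
      }
    }

    # classes/apache.pp
    class apache {
      package { 'apache2':
        ensure => installed,
      }
      service { 'apache2':
        ensure  => running,
        enable  => true,
        require => Package['apache2'],
      }
      file { '/etc/apache2/sites-enabled/mysite_apache.conf':
        source  => 'puppet:///files/mysite/mysite_apache.conf',
        require => Package['apache2'],
        notify  => Service['apache2'],
      }
      file { '/var/www/mysite/index.html':
        source  => 'puppet:///files/mysite/index.html',
        require => Package['apache2'],
      }
    }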





It's important to remember NOT to duplicate entries.
For example, in this case we have a specific file where we have set up the ntp service, including the required package. This means that we must NOT add this package to the list in packages.pp, otherwise you will get an error and the configs won't get pushed.

As I'm sure you've noted, there are references to some "files".
Yes, we need some extra configuration to tell Puppet to act as a file server as well, and where the files are located.

In our example we are storing our files in here:
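    /etc/puppet/files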

Now we need to add the following in /etc/puppet/fileserver.conf
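A 'files' mount point pointing at that directory (the mount name must match the puppet:/// URLs used in the classes):

    [files]
      path /etc/puppet/files
      allow *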

The last bit is creating the subfolders and placing the files required for our configuration:
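For example:

    mkdir -p /etc/puppet/files/etc
    mkdir -p /etc/puppet/files/mysite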

Inside mysite create mysite_apache.conf and index.html files.

Example mysite_apache.conf
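Something as simple as this will do (ServerName and DocumentRoot are placeholders):

    <VirtualHost *:80>
      ServerName mysite.local
      DocumentRoot /var/www/mysite
    </VirtualHost>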

For index.html, you can simply have some text, just for testing purposes.

In this example, we have also set up ntp to be installed and to have a custom ntp.conf file pushed.
For this reason, we need to make sure this file is present in /etc/puppet/files/etc, as declared in our .pp file.

After doing all these changes, you should restart your puppetmaster service on the server.

If all went well, you should have the following:

  • puppetagent02 host with screen, dselect, vim (installed and with syntax on), ntp (installed, running with custom ntp.conf file)
  • puppetagent01: with the same as puppetagent02 PLUS apache with a running website

Of course this is just a raw example and you can use templates and other super features.
But I think it's a good start 😉



ESXi host on D945GCLF2 Intel Atom mainboard, with NFS storage attached running on RAID1

I've used this procedure to create an ESXi host on a D945GCLF2 Intel Atom mainboard, with RAID1 storage built in, attached to itself 😉

On that, I currently have 3 VMs running (a minimal Debian with NFS, a FreePBX machine, and a Debian server with a little LAMP stack, SAMBA and a web-based torrent client)… and more resources available.

How? 🙂

“Simply”, I needed:


  • D945GCLF2 Intel Atom mainboard
  • 2GB of DDR2 RAM (667 or 533) in a single module
  • IDE-to-SD adapter
  • 4GB SD card
  • 2 SATA hard drives – same capacity (I've used 2 x 2.5″ 160GB – it's all installed in a little case)
  • a spare SATA CD-ROM drive and an empty CD to burn the ESXi ISO (I had issues using a USB stick and utilities like unetbootin or similar… so I ended up with the old-fashioned but working method)


  • ESXi 4.1 ISO – I couldn't find a way to patch more recent ISOs. The patch is required to add support for the integrated NIC. Also, 4.1 has all the required functions for this project.
  • Here the drivers and script to patch the ISO.
  • Debian net-install ISO for the NFS VM.
  • vSphere client installed on your machine, to be able to connect to the host, copy the Debian ISO and manage the host.


  1. Patch the ISO and burn it onto your blank CD.
  2. Connect the IDE-to-SD adapter, with the SD card, to the single IDE channel. This will be our "main IDE hard drive".
  3. Make sure Hyper-Threading Technology is enabled in the BIOS.
  4. Connect (temporarily) the SATA CD-ROM drive to one of the two SATA channels, with the ESXi CD in, and complete the installation on the "4GB IDE hard drive" present on the system.
  5. Turn off the host, remove the SATA CD-ROM drive and connect the two hard drives to the SATA connectors.
  6. Boot up, create a local datastore with the remaining space of the SD (if this hasn't already been created automatically) and call it "SD_local". Here we will store our NFS machine which will provide NFS storage to the host.
  7. Create the RDM devices for our minimal Debian NFS machine following the instructions below (make sure to do a minimal/basic installation, plus ssh, initramfs-tools, mdadm, nfs-kernel-server, nfs-common, portmap. No graphical interface, no extra packages!).
  8. Create the Debian NFS VM, share the storage using NFS, attach it to the host, and you are ready to go! 😉 The host will be ready to have VMs up and running, with their virtual hard drives stored on redundant storage.

The purpose of this is to allow the Debian NFS VM, which is stored on the local datastore called "SD_local", to directly access the physical SATA hard drives, create a software RAID1 with them and, using the NFS protocol, share the space back to the ESXi host to store VMs/ISOs etc.

Of course, this Debian NFS VM, and in particular the SD card, are the single point of failure of this project. But theoretically, a dd of the SD card once everything is configured can be a good "backup" in case of problems (and a spare 4GB SD card at home as well 🙂 )

ESXi – How to create a Physical RDM and attach it to a VM

1. Determine the VML ID for the SATA disks
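From the ESXi console (SSH/Tech Support Mode):

    ls -l /vmfs/devices/disks/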

2. Find the two hard drives

Highlighted in red and orange (in blue I’ve highlighted the serial number which helps to identify them as well).

3. Check the volumes available
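For example:

    ls -lh /vmfs/volumes/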

4. Use one of the available to create a subfolder that will contain the VMDK information for the RDM disks (using SD_local)
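The folder name is up to you; here I'll call it RDMs:

    mkdir /vmfs/volumes/SD_local/RDMs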

5. Create the devices
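One physical RDM per disk, using the vml IDs found in step 1 (the vml values here are placeholders):

    vmkfstools -z /vmfs/devices/disks/vml.01000000001111 /vmfs/volumes/SD_local/RDMs/sata_disk1.vmdk
    vmkfstools -z /vmfs/devices/disks/vml.01000000002222 /vmfs/volumes/SD_local/RDMs/sata_disk2.vmdk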

6. New RDM devices created and ready to be added to the VM

  • Edit the properties of an existing VM and click Add…
  • Select "Use an existing virtual disk" and click Next >
  • Click Browse. You now need to navigate your local datastore ([SD_local]/RDMs) and select the VMDKs that we created
  • Select Permanent / Persistent > Next…
  • You should now see your new hard disks in your VM, and vSphere will correctly identify them as Mapped Raw LUN.

7. Run your linux VM and create Linux Raid auto (FD type)
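Inside the guest, partition each disk and set the partition type to fd (Linux raid autodetect), for example with fdisk:

    fdisk /dev/sdb     # n (new partition), p, 1, accept defaults, then t -> fd, w
    fdisk /dev/sdc     # same steps on the second disk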

8. Create the mdX device
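For example:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat      # check the array status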

9. Create the filesystem and add it to /etc/fstab
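For example (filesystem type and mount point are up to you):

    mkfs.ext4 /dev/md0
    mkdir -p /srv/nfs
    echo '/dev/md0  /srv/nfs  ext4  defaults  0  2' >> /etc/fstab
    mount -a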