Category Archives: Virtualisation

Ping on WSL

Do we really need to be root simply to run a ping from WSL on Windows?
Apparently yes.
You have probably faced the following error:

ping: socktype: SOCK_RAW
ping: socket: Operation not permitted
ping: => missing cap_net_raw+p capability or setuid?

Wanna fix it?

sudo setcap cap_net_raw+ep /bin/ping

Run this command once, and the command ping will be “usable” again šŸ˜‰
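
If you want to double-check that the capability has been applied (assuming your ping binary lives at /bin/ping, as above, and that the libcap tools are installed – libcap2-bin on Debian/Ubuntu), getcap will show it:

# Verify the capability (it should report cap_net_raw+ep)
getcap /bin/ping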

Happy ping’ing! šŸ˜‰

Mount Windows Network drives in WSL

In Windows WSL, you can access the local disk by navigating to /mnt/c/ for the C: drive, for example.

Sometimes, network drives mounted on boot aren’t automatically mounted within your WSL Linux shell. You can do it manually using the following commands:

# For a drive already mapped in Windows (e.g. Z: drive)
$ sudo mkdir /mnt/z
$ sudo mount -t drvfs Z: /mnt/z

# For a network drive accessible via \\myserver\dir1 in Explorer
$ sudo mkdir /mnt/dir1
$ sudo mount -t drvfs '\\myserver\dir1' /mnt/dir1
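
If you want the mapping to survive across WSL sessions, you could also add an fstab entry instead of mounting by hand. This is a sketch, assuming a recent WSL build where /etc/fstab is processed on startup:

# /etc/fstab entry for the Z: drive (processed when the WSL instance starts)
Z: /mnt/z drvfs defaults 0 0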

Windows 10 – VMWare Disk cleanup and shrink

A simple batch script to run as Administrator in order to clean up the disk, defragment it and shrink it.

Please note that the shrink works only if VMware Tools is installed on the guest VM.

@ECHO OFF
REM Make sure to run the script as Administrator
whoami /groups | find "S-1-16-12288" > nul

if %errorlevel% == 0 (
 echo Welcome, Admin
) else (
 echo You must run this script as Administrator. Aborting...
 goto EOF
)

REM Enable components to cleanup
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Active Setup Temp Folders" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\BranchCache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Downloaded Program Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameNewsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameStatisticsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameUpdateFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Internet Cache Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Memory Dump Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Offline Pages Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Old ChkDsk Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Previous Installations" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Recycle Bin" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Service Pack Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Setup Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error memory dump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error minidump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Setup Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Sync Files" /V StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Thumbnail Cache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Update Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Upgrade Discarded Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\User file versions" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Defender" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows ESD installation files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Upgrade Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
 
REM Run cleanup
IF EXIST %SystemRoot%\SYSTEM32\cleanmgr.exe START /WAIT cleanmgr /sagerun:100

echo *** DEFRAGGING DRIVES ***
echo DEFRAG C:
defrag c: -f

if not exist "C:\Program Files\VMware\VMware Tools" goto NOVMWARE
echo.
echo *** SHRINKING DRIVES ***
cd "C:\Program Files\VMware\VMware Tools\"
VMwareToolboxCmd.exe disk shrink c:\

if not exist D: goto NODDRIVESHRINK
VMwareToolboxCmd.exe disk shrink D:\
:NODDRIVESHRINK
:NOVMWARE


echo *** SHUTTING DOWN ***
shutdown -s -t 30

:EOF
echo Terminating script
pause

Save this content in a .bat file and… enjoy it!

Kernel space – User space – Containers – Virtualisation

How many times have I heard "well, a container is like a super light-weight virtual machine"? And yes, I admit it: I used to say the same.

But I wasn't happy with this answer, so I did some research, and I think I now have a better understanding. I also feel the pain of the friends to whom I was simplistically (and wrongly) saying that – public apologies 😛 🙂

 

So… let’s start…

 

Concept 1: Virtual memory.

Virtual memory is the collective memory used by processes (RAM, disk swap, etc).

Within this virtual memory, there is generally a separation between two areas:

  • kernel space: reserved for the kernel and, generally, drivers
  • user space: for the applications, including libraries

This separation serves to provide memory protection and hardware protection from malicious or errant software behavior.

NOTE1: User space is not the same thing as a namespace.

 

NOTE2: FUSE is not really related to this topic, but it could confuse someone, so just to clarify: FUSE (Filesystem in Userspace) is a software interface for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running the file system code in user space, while the FUSE module provides only a "bridge" to the actual kernel interfaces.

Modern kernels have cgroups and namespace capabilities.

  • Cgroups can restrict what you can USE -> CPU, memory, storage, network, devices, etc. They also allow you to 'freeze' a group of processes.
  • Namespaces can restrict what you can SEE -> PIDs, mounts, UIDs/GIDs, etc…

Container runtimes (like LXC, Docker, etc…) use cgroups and namespaces to create separate, isolated user-space entities called 'containers'.
Containers have basically no overhead because they use the same system calls towards the host kernel => no need for emulation or a virtual machine.

They use the same kernel as the host (this is a key difference from virtualisation). So, currently, you cannot run Windows containers on a Linux host. But you can still run different Linux distributions, as they all share the same kernel.
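
A quick way to see namespaces in action from a shell is util-linux's unshare (a minimal sketch, assuming root and a reasonably recent kernel):

# Start a bash in new PID and mount namespaces, with its own /proc
sudo unshare --fork --pid --mount-proc bash
# Inside the new namespace, ps only sees the processes of this little "container"
ps aux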

Virtualisation: fully isolated OS, running its own kernel.

  • Fully virtualised (e.g. VMware, VirtualBox, ESXi…): the OS in the VM is not aware that it is a VM. The hypervisor emulates the hardware platform for the guest OS and then translates the hardware access requests to the physical hardware. The hypervisor provides the drivers to the guest OS.
    => higher overhead because of the hardware virtualisation, BUT the best isolation and security
  • Para-virtualised (XEN, KVM): the OS in the VM knows that it is virtualised. Drivers send instructions directly to the host's hardware, via the hypervisor. The hardware is not virtualised, BUT the OS runs in isolation.
    => better performance and the ability to use recent hardware drivers directly, BUT the guest OS needs to be modified to use the paravirtualised devices

NOTE: Emulation is not platform virtualisation (e.g. QEMU).
With emulation you can emulate different architectures (e.g. ARM/RISC…) on a host that has a different instruction set (e.g. i386). Performance is clearly not ideal.



Chef – notes

Website: https://www.chef.io
Learning site: https://learn.chef.io

As with any other configuration management tool, the main goal is to automate and to keep consistency in the infrastructure:

  • create files if missing
  • ignore file/task if already up to date
  • replace with original version if modified

Typically, Chef is comprised of three parts:

  1. your workstation – where you create your recipes/cookbooks
  2. a Chef server – the central repository that hosts the active version of the recipes/cookbooks and manages the nodes
  3. nodes – the machines managed by the Chef server. FYI, every node has the Chef client installed.

[diagram: workstation / Chef server / nodes – picture source https://learn.chef.io]

Generally, you develop your cookbooks on your workstation and push them onto the Chef Server. The node(s) communicate with the Chef Server via chef-client, pulling and executing the cookbooks.

There is no communication between the workstation and the node EXCEPT for the initial bootstrap task. This is the only time when the workstation connects directly to the node and provides the details required to communicate with the Chef Server (the Chef Server's URL and the validation key). It also installs chef on the node and runs chef-client for the first time. During this run, the node gets registered on the Chef Server and receives a unique client.pem key, which chef-client will use to authenticate afterwards.
The information gets stored in a PostgreSQL DB, and there is some indexing happening as well in Apache Solr (Elasticsearch in a Chef Server cluster environment).

Further explanation here: https://docs.chef.io/chef_overview.html

Some terms:

  • resource: a part of the system in a desirable state (e.g. package installed, file created…);
  • recipe: contains declarations of resources – basically, the things to do;
  • cookbook: a collection of recipes, templates, attributes, etc. – basically the final collection of all of the above.

Important to remember:

  • there are default actions. If not specified, the default action applies (e.g. :create for a file) – see the sketch after this list,
  • in the recipe you define WHAT but not HOW. The "how" is managed by Chef itself,
  • the order is important! For example, make sure to define the installation of a package BEFORE enabling/starting its service. ONLY attributes can be listed in any order.
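
For instance, these two declarations are equivalent, because :create is the default action of the file resource (a minimal illustration, using a hypothetical /tmp/motd file):

file '/tmp/motd' do
  content 'hello'
end

# identical, with the default action spelled out
file '/tmp/motd' do
  content 'hello'
  action :create
end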


Labs

Test images: http://chef.github.io/bento/ and https://atlas.hashicorp.com/bento
=> you can get these boxes using Vagrant

Example, how to get CentOS7 for Virtualbox and start it/connect/remove:

# Download the box for the virtualbox provider
vagrant box add bento/centos-7.2 --provider=virtualbox

# Create the Vagrantfile in the current directory
vagrant init bento/centos-7.2

# Start the VM, connect via ssh, and destroy it when done
vagrant up
vagrant ssh
vagrant destroy

Exercises:

Software links and info:

Chef DK: it provides tools (chef, knife, berks…) to manage your servers remotely from your workstation.
Download link here.

To communicate with the Chef Server, your workstation needs to have a .chef/knife.rb file configured as well:

# See http://docs.chef.io/config_rb_knife.html for more information on knife configuration options

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "admin"
client_key               "#{current_dir}/admin.pem"
chef_server_url          "https://chef-server.test/organizations/myorg123"
cookbook_path            ["#{current_dir}/../cookbooks"]

Make sure to also have admin.pem (the RSA key) in the same .chef directory.

To fetch and verify the SSL certificate from the Chef server:

knife ssl fetch

knife ssl check

 

Chef DK also provides tools to allow you to configure a machine directly, but this is just for testing purposes. Syntax example:

chef-client --local-mode myrecipe.rb

 

 

Chef Server: Download here.
To remember: the Chef Server needs RSA keys (command line switch --filename) to communicate. We have the user's key and the organisation key (chef-validator key).
There are different types of installation. Here you can find more information, and here more detail about the new HA version.

Chef Server can have a web interface, if you also install the Chef Management Console:

# chef-server-ctl install chef-manage

 

Alternatively, you can use the Hosted Chef service.

Chef Client:
(From official docs) The chef-client accesses the Chef server from the node on which it's installed to get configuration data, performs searches of historical chef-client run data, and then pulls down the necessary configuration data. After the chef-client run is finished, the chef-client uploads updated run data to the Chef server.

 


Handy commands:

# Create a cookbook (structure) called chef_test01, into cookbooks dir
chef generate cookbook cookbooks/chef_test01

# Create a template for the file "index.html"
# this will generate "index.html.erb" under the cookbook's "templates" folder
chef generate template cookbooks/chef_test01 index.html

# Run a specific recipe web.rb of a cookbook, locally
# --runlist + --local-mode
chef-client --local-mode --runlist 'recipe[chef_test01::web]'

# Upload cookbook to Chef server
knife cookbook upload chef_test01

# Verify uploaded cookbooks (and versions)
knife cookbook list

# Bootstrap a node (to do ONCE)
# knife bootstrap ADDRESS --ssh-user USER --sudo --identity-file IDENTITY_FILE --node-name NODE_NAME
# Opt: --run-list 'recipe[RECIPE_NAME]'
knife bootstrap 10.0.3.1 --ssh-port 22 --ssh-user user1 --sudo --identity-file /home/me/keys/user1_private_key --node-name node1
# Verify that the node has been added
knife node list
knife node show node1

# Run cookbook on one node
# (--attribute ipaddress is used if the node has no resolvable FQDN)
knife ssh 'name:node1' 'sudo chef-client' --ssh-user user1 --identity-file /home/me/keys/user1_private_key --attribute ipaddress

# Delete the data about your node from the Chef server
knife node delete node1
knife client delete node1

# Delete Cookbook on Chef Server (select which version)
# use --all --yes if you want to remove everything
knife cookbook delete chef_test01

# Delete a role
knife role delete web

 


Practical examples:

Create file/directory

directory '/my/path'

file '/my/path/myfile' do
  content 'Content to insert in myfile'
  owner 'user1'
  group 'user1'
  mode '0644'
end

Package management

package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

Use of template

template '/var/www/html/index.html' do
  source 'index.html.erb'
end

Use variables in the template

<html>
  <body>
    <h1>hello from <%= node['fqdn'] %></h1>
  </body>
</html>
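
You can also pass your own values to the template instead of relying only on node attributes: the template resource accepts a variables property, which becomes instance variables inside the ERB (a small sketch with a hypothetical greeting variable):

template '/var/www/html/index.html' do
  source 'index.html.erb'
  # exposed in the ERB as @greeting
  variables(greeting: 'hello world')
end

Then, in index.html.erb, you would reference it as <%= @greeting %>.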

 


General notes

Chef Supermarket

link here – the community cookbook repository.
The best way to get a cookbook from Chef Supermarket is using the Berkshelf command (berks), as it resolves all the dependencies. knife supermarket does NOT resolve dependencies.

Add the cookbooks in Berksfile

source 'https://supermarket.chef.io'
cookbook 'chef-client'

And run

berks install

This will download the cookbooks and dependencies in ~/.berkshelf/cookbooks

Then, to upload ALL of them to the Chef Server, the best way is:

# Production
berks upload 

# Just to test (ignore SSL check)
berks upload --no-ssl-verify

 

Roles

Roles define the function of a node.
They are stored as objects on the Chef server.
Create them with knife role create OR (better) knife role from file <roles/myrole.json>. Using JSON is recommended as it can be version controlled.

Example of a web.json role:

{
   "name": "web",
   "description": "Role for Web Server",
   "json_class": "Chef::Role",
   "override_attributes": {
   },
   "chef_type": "role",
   "run_list": ["recipe[chef_test01::default]",
                "recipe[chef_test01::web]"
   ],
   "env_run_lists": {
   }
}

Commands:

# Push a role
knife role from file roles/web.json
knife role from file roles/db.json

# Check what's available
knife role list

# View the role pushed
knife role show web

# Assign a role to a specific node
knife node run_list set node1 "role[web]"
knife node run_list set node2 "role[db]"

# Verify
knife node show node1
knife node show node2

To apply the changes you need to run chef-client on the node.

You can also verify:

knife status 'role:web' --run-list

 


Kitchen

All the following is extracted from the official https://learn.chef.io

Test Kitchen helps speed up the development process by applying your infrastructure code on test environments from your workstation, before you apply your work in production.

Test Kitchen runs your infrastructure code in an isolated environment that resembles your production environment. With Test Kitchen, you continue to write your Chef code from your workstation, but instead of uploading your code to the Chef server and applying it to a node, Test Kitchen applies your code to a temporary environment, such as a virtual machine on your workstation or a cloud or container instance.

When you use the chef generate cookbook command to create a cookbook, Chef creates a file named .kitchen.yml in the root directory of your cookbook. .kitchen.yml defines what’s needed to run Test Kitchen, including which virtualisation provider to use, how to run Chef, and what platforms to run your code on.
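
For reference, a minimal .kitchen.yml could look like the following (a sketch assuming the Vagrant driver, the chef_zero provisioner and the chef_test01 cookbook used above):

driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: centos-7.2

suites:
  - name: default
    run_list:
      - recipe[chef_test01::default]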

Kitchen steps:

[Kitchen workflow diagram]

Handy commands:

$ kitchen list       # show the configured instances and their status
$ kitchen create     # create the test instance (e.g. the Vagrant VM)
$ kitchen converge   # apply the cookbook to the instance

 

Create and mount SWAP file

In the Cloud era, virtual servers come with no swap. And that's perfectly fine, because swapping isn't good in terms of performance, and Cloud technology is designed for horizontal scaling: if you need more memory, add another server.

However, it can sometimes be handy to have some more room for testing (and save some money).

So here below is a quick script to automatically create a 4GB swap file, activate it, and also tune a system parameter to limit the use of the swap to when it is really necessary:

# Run as root (or prefix each command with sudo)
# Create a 4GB file and restrict its permissions
fallocate -l 4G /swapfile
chmod 600 /swapfile
# Format it as swap and enable it
mkswap /swapfile
swapon /swapfile
# Make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
# Use the swap only when really necessary (see note below)
sysctl vm.swappiness=0
echo 'vm.swappiness=0' >> /etc/sysctl.conf
tail /etc/sysctl.conf

NOTES:
Swappiness: setting it to zero means that the swap won't be used unless absolutely necessary (i.e. you run out of memory), while a setting of 100 means that programs will be swapped to disk almost instantly.
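
To verify that the swap file is active, something like this should do (swapon --show needs a reasonably recent util-linux; swapon -s works on older systems):

# Check the active swap devices and overall memory usage
swapon --show
free -h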

Virtualbox mount host’s shares to specific guest’s paths

Settings > Shared Folders > Add New Shared Folder
Folder path: <insert_here_hosts_path>
Folder name: <name_of_the_share_on_guest>

Select “Make Permanent”.
Leave unselected “Read-only” and “Auto-mount”.

Make sure the VirtualBox Guest Additions are properly installed in the guest machine.

After that, edit /etc/fstab and add the following:

Downloads /home/myuser/Downloads	vboxsf	rw,exec,uid=1000,gid=1000,dmode=0755,fmode=0644 0 0

This is an example for a share called "Downloads".
The share will be mounted under /home/myuser/Downloads, forcing uid/gid to 1000, which is the one related to myuser.
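
If you prefer to test the share before touching /etc/fstab, the same mount can be done by hand (assuming the Guest Additions' vboxsf module is loaded and the mount point already exists):

sudo mount -t vboxsf -o rw,uid=1000,gid=1000,dmode=0755,fmode=0644 Downloads /home/myuser/Downloads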

Create a bootable Sierra ISO for VMware

Open the Terminal app and run the following:

hdiutil attach /Applications/Install\ macOS\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app
hdiutil create -o /tmp/Sierra.cdr -size 7316m -layout SPUD -fs HFS+J
hdiutil attach /tmp/Sierra.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg
hdiutil detach /Volumes/install_app
hdiutil detach /Volumes/OS\ X\ Base\ System/
hdiutil convert /tmp/Sierra.cdr.dmg -format UDTO -o /tmp/Sierra.iso
mv /tmp/Sierra.iso.cdr ~/Desktop/Sierra.iso

NOTE: To have VMware Workstation able to run Mac OS X, you need to patch your version using this. If the file is no longer available, you can get a copy here.

If you want to force specific hardware parameters (like the serial number etc.), you need to add the following to your vmx file:

board-id.reflectHost = "FALSE"
board-id = <board-id>
hw.model.reflectHost = "FALSE"
hw.model = <product-name>
serialNumber.reflectHost = "FALSE"
serialNumber = <serial-number>
smbios.reflectHost = "FALSE"

To make sure some software like Google Music will recognise your VM, you also need to apply this change:

A) Remove these lines in the VMX file:

ethernet0.addressType = "generated"
ethernet0.generatedAddress = "xx:xx:xx:xx:xx:xx"
ethernet0.generatedAddressOffset = "0"

B) Add the following instead:

ethernet0.Address = "xx:xx:xx:xx:xx:xx"
ethernet0.addressType = "static"
ethernet0.checkMACAddress = "false"

Replace "xx:xx:xx:xx:xx:xx" with a real Apple MAC address, choosing one of those listed here.



 

Puppet – Let’s start

Puppet is a quite powerful configuration management tool which allows you to automatically configure hosts and keep configurations consistent.

I did some tests using 3 VMs:

  • puppetmaster (server)
  • puppetagent01 (client)
  • puppetagent02 (client)

Of course, most of the work is done on the puppetmaster server. On the last two machines you will simply see the outcome of the configurations that you're going to set on puppetmaster.

Important: all the machines have to be able to communicate with each other. Please make sure DNS is working, or set local names/IPs in the /etc/hosts file, and do some ping tests before proceeding.

Client setup

On each puppetagent machine, just install the puppet package:

apt-get install puppet

By default, the client will look for a host called “puppet” on the network.
If your DNS/hosts file doesn't have this entry and it can't be resolved, you can manually set the name of the puppetmaster in the /etc/puppet/puppet.conf file, adding this line under the [main] section:

server=puppetmaster.yournet.loc

Now, no more configuration is required from the client side. Just edit /etc/default/puppet to start at boot time and start the service.

# Defaults for puppet - sourced by /etc/init.d/puppet

# Start puppet on boot?
START=yes

# Startup options
DAEMON_OPTS=""

 

service puppet start

Starting the service will automatically make a request to the server to be added under its control.

If you want to do some tests, you can use the following command to run puppet only once. This also forces a poll of the server, which by default happens every 30 minutes.

puppet agent --no-daemonize --onetime --verbose

You can repeat all these steps on the second client machine.

Server setup

apt-get install puppetmaster

Check if the service is running; otherwise, start it up.

Sign clients’ certificates on the server side

Puppet uses this client/server certificate signing system to add/remove hosts from being managed by the server.

To see which hosts have requested to be "controlled", use this command:

puppet cert --list

This will show all the hosts waiting to be added under puppetmaster server.

puppet cert --sign <hostname>

This command will add the host.
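
To double-check, you can list all certificates; the ones already signed are prefixed with a '+':

puppet cert --list --all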

Puppetmaster configuration files

The main configuration file is /etc/puppet/manifests/site.pp

Inside the manifests folder, I've created a subfolder called classes with extra definitions (the content of these files is shown later in this post).

/etc/puppet/manifests# tree
.
|___ classes
|   |___ apache.pp
|   |___ mysite.pp
|   |___ ntpd.pp
|   |___ packages.pp
|___ site.pp

/etc/puppet/manifests/site.pp

import 'classes/*.pp'
# This imports all the custom .pp files from the classes folder
class puppettools {
# Creates a file, setting permissions and content
        file { '/usr/local/sbin/puppet_once.sh':
                owner => root, group => root, mode => 755,
                content => "#!/bin/sh\npuppet agent --no-daemonize --onetime --verbose $1\n",
        }
# Install (if not present) some puppet modules required for 'vimconf' class
        exec { "install_puppet_module":
        command => "puppet module install puppetlabs-stdlib",
        path => [ "/bin", "/sbin", "/usr/bin", "/usr/sbin",
              "/usr/local/bin", "/usr/local/sbin" ],
        onlyif  => "test `puppet module list | grep puppetlabs-stdlib | wc -l` -eq 0"
        }
}

class vimconf {
# Modify vimrc conf file, enabling syntax on
        file_line { 'vim_syntax_on':
        path  => '/etc/vim/vimrc',
        match => '^.*syntax on.*$',
        line  => 'syntax on',
        }
}

node  default {
# this will be applied to all nodes without specific node definitions
        include packages
        include vimconf
        include ntp
        include puppettools
}

node  'puppetagent01' inherits default {
# this specific node, gets all the default classes PLUS some extras
        include mysite
}

Here the content of the single files .pp in classes folder:

class apache {
	package { 'apache2-mpm-prefork':
		ensure => installed
	}

	service { 'apache2':
		ensure => running,
		hasstatus => true,
		hasrestart => true,
	}
}

 

class mysite {

	include apache

	file { '/etc/apache2/sites-available/mysite':
		owner => root, group => root, mode => 0644,
		source => "puppet:///files/mysite/mysite_apache.conf",
	}

	file {'/var/www/mysite.localdomain':
		ensure => directory,
	}

	file {'/var/www/mysite.localdomain/index.html':
                owner => root, group => www-data, mode => 0755,
                source => "puppet:///files/mysite/index.html",
	}

	 exec {'/usr/sbin/a2dissite * ; /usr/sbin/a2ensite mysite':
            	onlyif => '/usr/bin/test -e /etc/apache2/sites-available/mysite',
		notify => Service['apache2'],
	}
}

 

class ntp {
		package { ntp: ensure => present }
		file { "/etc/ntp.conf":
			owner	 => root,
			group	 => root,
			mode	=> 444,
			backup => false,
			source	=> "puppet:///files/etc/ntp.conf",
			require => Package["ntp"],
                        notify  => Service["ntp"],
		}
		service { "ntp":
			enable => true ,
			ensure => running,
			subscribe => [Package[ntp], File["/etc/ntp.conf"],],
		}
	}

 

class packages  {
        Package { ensure => "installed" }

        package { "screen": }
        package { "dselect": }
        package { "vim": }
        package { "curl": }
}

 

It's important to remember NOT to duplicate entries.
For example, in this case we have a specific file where we set up the ntp service, including the required package. This means that we must NOT add this package to the list in packages.pp, otherwise you will get an error and the configs won't get pushed.

As I'm sure you've noticed, there are references to some "files".
Yes, we need some extra configuration to tell Puppet to act as a file server as well, and where the files are located.

In our example we are storing our files in here:

mkdir -p /etc/puppet/files

Now we need to add the following in /etc/puppet/fileserver.conf

[files]
  path /etc/puppet/files
  allow *

Last bit, is creating the subfolders and place the files required for our configuration:

mkdir -p /etc/puppet/files
cd /etc/puppet/files
mkdir mysite etc

Inside mysite create mysite_apache.conf and index.html files.

Example mysite_apache.conf

<VirtualHost *:80> 
  ServerName mysite.localdomain 
  DocumentRoot /var/www/mysite.localdomain 
</VirtualHost>

For index.html, you can simply have some text, just for testing purposes.

In this example, we have also set up ntp to be installed and to have a custom ntp.conf file pushed.
For this reason, we need to make sure this file is present in /etc/puppet/files/etc, as declared in our .pp file.

After doing all these changes, you should restart your puppetmaster service on the server.

If all went well, you should have the following:

  • puppetagent02 host with screen, dselect, vim (installed and with syntax on), ntp (installed and running with the custom ntp.conf file)
  • puppetagent01: the same as puppetagent02 PLUS apache with a running website

Of course this is just a raw example, and you can use templates and other super features.
But I think it’s a good start šŸ˜‰

 

Sources:


https://forge.puppetlabs.com/puppetlabs/stdlib
http://finninday.net/wiki/index.php/Zero_to_puppet_in_one_day
http://www.puppetcookbook.com/
http://foaa.de/old-blog/2010/07/playing-with-puppets-on-debian/trackback/index.html
http://www.harker.com/puppet/BayLISA100715.html
http://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html

ESXi host on D945GCLF2 Intel Atom mainboard, with NFS storage attached running on RAID1

I've used this procedure to create an ESXi host on a D945GCLF2 Intel Atom mainboard, with RAID1 storage built in and attached to itself 😉

On that, I have at the moment 3 VMs running (a minimal Debian with NFS, a FreePBX machine, and a Debian server with a little LAMP stack, SAMBA and a web based torrent client)… and more resources available.

How? šŸ™‚

“Simply”, I needed:

HARDWARE

  • D945GCLF2 Intel Atom mainboard
  • 2GB of DDR2 RAM (667 or 533) in a single module
  • IDEtoSD adapter
  • 4GB SD card
  • 2 SATA Hard Drives – same capacity (I’ve used 2×2.5″ 160GB – It’s all installed in a little case)
  • a spare SATA CD-ROM drive and an empty CD to burn the ESXi ISO (I had issues using a USB stick and utilities like unetbootin or similar… so I ended up with the old-fashioned but working method)

SOFTWARE

  • ESXi 4.1 ISO – I couldn't find a way to patch more recent ISOs. The patch is required to add support for the integrated NIC. Also, 4.1 has all the required functions for this project.
  • Here the drivers and script to patch the ISO.
  • Debian net-install ISO for the NFS VM.
  • vSphere client installed on your machine, to be able to connect to the host and copy the Debian ISO and manage the HOST.

Procedure

  1. Patch the ISO and burn it on your blank CD.
  2. Connect the IDEtoSD adapter, with the SD card in, to the single IDE channel. This will be our "main IDE hard drive".
  3. Make sure Hyper-Threading Technology is enabled in the BIOS.
  4. Connect (temporarily) the SATA CD-ROM drive to one of the two SATA channels, with the ESXi CD in, and complete the installation on the "4GB IDE hard drive" present on the system.
  5. Turn off the host, remove the SATA CD-ROM drive and connect the two hard drives to the SATA connectors.
  6. Boot up, create a local datastore with the remaining space of the SD (if this hasn't already been created automatically) and call it "SD_local". Here we will store our NFS machine, which will provide NFS storage to the host.
  7. Create the RDM devices for our minimal Debian NFS machine following the instructions below (make sure it is a minimal/basic installation, plus ssh, initramfs-tools, mdadm, nfs-kernel-server, nfs-common, portmap. No graphic interface, no extra packages!).
  8. Create the Debian NFS VM, share the storage using NFS, attach it to the host, and you are ready to go! 😉 The host will be ready to have VMs up and running, with their virtual hard drives stored on redundant storage.

The scope of this is to allow the Debian NFS VM, which is stored on the local storage called "SD_local", to directly access the physical SATA hard drives, create a software RAID1 with them and, using the NFS protocol, share the space back to the ESXi host to store VMs/ISOs etc.
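
For reference, the export on the Debian NFS VM could look like this in /etc/exports (a sketch with a hypothetical mount point /srv/nfs and subnet – adjust to your setup):

# Export the RAID1-backed directory to the ESXi host's network
/srv/nfs  192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)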

Of course, this Debian NFS VM, and in particular the SD card, are the single point of failure of this project. But theoretically, a dd of the SD card once everything is configured can be a good "backup" in case of problems (and a spare 4GB SD card at home as well 🙂 )

ESXi – How to create a Physical RDM and attach it to a VM

1. Determine the VML ID for the SATA disks

# ls /dev/disks/ -l
-rw------- 1 root root 4041211904 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_
-rw------- 1 root root 939524096 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:1
-rw------- 1 root root 3097493504 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:2
-rw------- 1 root root 4177920 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:4
-rw------- 1 root root 262127616 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:5
-rw------- 1 root root 262127616 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:6
-rw------- 1 root root 115326976 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:7
-rw------- 1 root root 299876352 May 19 20:18 t10.ATA_____Memory_Card_Adapter_______________________________________0_:8
-rw------- 1 root root 160041885696 May 19 20:18 t10.ATA_____ST9160821AS_____________________________5MA57R13____________
-rw------- 1 root root 160041885696 May 19 20:18 t10.ATA_____ST9160821AS_________________________________________5MA8PT4Q
lrwxrwxrwx 1 root root 72 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:1 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:1
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:2 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:2
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:4 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:4
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:5 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:5
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:6 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:6
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:7 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:7
lrwxrwxrwx 1 root root 74 May 19 20:18 vml.010000000020202020202020202020202020202020202030204d656d6f7279:8 -> t10.ATA_____Memory_Card_Adapter_______________________________________0_:8
lrwxrwxrwx 1 root root 72 May 19 20:18 vml.0100000000202020202020202020202020354d413850543451535439313630 -> t10.ATA_____ST9160821AS_________________________________________5MA5SS2A
lrwxrwxrwx 1 root root 72 May 19 20:18 vml.0100000000354d413537523133202020202020202020202020535439313630 -> t10.ATA_____ST9160821AS_____________________________5MA43W02____________

2. Find the two hard drives

They are the two ST9160821AS entries (the last two vml.* symlinks in the listing above); the serial number at the end of each device ID helps to identify them as well.

3. Check the volumes available

# ls -l /vmfs/volumes
drwxr-xr-x 1 root root    8 Jan  1  1970 ed0aa47f-f157c36d-0295-b6663f811221
drwxr-xr-x 1 root root    8 Jan  1  1970 e2f7c177-db75edcf-defa-90346375bdf2
drwxr-xr-x 1 root root    8 Jan  1  1970 2da668ef-40e5d96b-90bf-855ddb9c5547
drwxr-xr-t 1 root root 1.4k May 19 21:29 4fb7f163-a1959434-4766-001cc07e74e5
lrwxr-xr-x 1 root root   35 May 19 23:16 SD_local -> 4fb7f163-a1959434-4766-001cc07e74e5
lrwxr-xr-x 1 root root   35 May 19 23:16 Hypervisor3 -> 2da668ef-40e5d96b-90bf-855ddb9c5547
lrwxr-xr-x 1 root root   35 May 19 23:16 Hypervisor2 -> ed0aa47f-f157c36d-0295-b6663f811221
lrwxr-xr-x 1 root root   35 May 19 23:16 Hypervisor1 -> e2f7c177-db75edcf-defa-90346375bdf2

4. Use one of the available volumes to create a subfolder that will contain the VMDK information for the RDM disks (here, SD_local)

# cd /vmfs/volumes/SD_local/
/vmfs/volumes/4fb7f163-a1959434-4766-001cc07e74e5 # mkdir RDMs
/vmfs/volumes/4fb7f163-a1959434-4766-001cc07e74e5 # cd RDMs/

5. Create the devices

# -z creates a physical-compatibility RDM pointer (vmdk) to the raw disk
vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202020202020202020354d413850543451535439313630 rmd_sata1.vmdk -a lsilogic
vmkfstools -z /vmfs/devices/disks/vml.0100000000354d413537523133202020202020202020202020535439313630 rmd_sata2.vmdk -a lsilogic

6. New RDM devices created and ready to be added to the VM

  • Edit the properties of an existing VM and click Add…
  • Select Use an existing virtual disk and click Next >
  • Click Browse. You now need to navigate your local datastore ([SD_local]/RDMs) and select the VMDKs that we created
  • Select Permanent / Persistent > Next…
  • You should now see your new Hard Disks in your VM, and vSphere will correctly identify them as Mapped Raw LUN.

7. Run your Linux VM and, on both disks, create a partition of type 'Linux raid autodetect' (type FD)

8. Create the mdX device

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

9. Create the filesystem and add it to /etc/fstab
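
For example (a minimal sketch, assuming ext4 and the same hypothetical /srv/nfs mount point exported via NFS above – adjust to taste):

# Create the filesystem on the RAID1 device and mount it at boot
mkfs.ext4 /dev/md0
mkdir -p /srv/nfs
echo '/dev/md0  /srv/nfs  ext4  defaults  0 2' >> /etc/fstab
mount /srv/nfs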

 

Sources
http://www.vm-help.com/esx40i/SATA_RDMs.php
http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/