Tag Archives: linux

Reverse SSH Tunnel

To allow a LOCAL_SERVER behind a firewall/NAT/home router to be accessible via SSH from a REMOTE_SERVER, you can use a reverse SSH tunnel.

Basically, from your LOCAL_SERVER you forward port 22 (ssh) to another port on REMOTE_SERVER, for example 8000, and then you can ssh into your LOCAL_SERVER from the public IP of the REMOTE_SERVER via port 8000.

To do so, you need to run the following from LOCAL_SERVER:

 local-server: ~ ssh -fNR 8000:localhost:22 <user>@<REMOTE_SERVER>

On REMOTE_SERVER you can use netstat -nlpt to check if there is a service listening on port 8000.

Example:

remote-server ~# netstat -nplt | grep 8000
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      1396/sshd: root
tcp6       0      0 :::8000                 :::*                    LISTEN      1396/sshd: root

In this case, the REMOTE_SERVER accepts connections to port 8000 on ALL its interfaces (0.0.0.0).
This means that, if the REMOTE_SERVER has IP 217.160.150.123, you can connect to LOCAL_SERVER from a THIRD_SERVER using the following:

third-server: ~ ssh -p 8000 <user_local_server>@217.160.150.123

NOTE. If you see that the LISTEN socket on REMOTE_SERVER is bound to 127.0.0.1 instead of 0.0.0.0, it is probably because GatewayPorts is set to no in /etc/ssh/sshd_config on REMOTE_SERVER.
The best setting is clientspecified (rather than yes), as per this post.

Set this value and restart the sshd service.

With clientspecified, the client chooses the bind address, so you can allow connections to LOCAL_SERVER only from the REMOTE_SERVER itself, to increase security.
To do so, you need to use the following ssh command from LOCAL_SERVER:

 local-server: ~ ssh -fNR 127.0.0.1:8000:localhost:22 <user>@<REMOTE_SERVER>

With netstat, you’ll now see this:

remote-server:~# netstat -nplt | grep 8000
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      1461/sshd: root

With this forward, you will be able to access LOCAL_SERVER ONLY from the REMOTE_SERVER itself:

remote-server: ~ ssh -p 8000 <user_local_server>@localhost
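
A side note: reverse tunnels tend to die silently when the connection drops. If you want the tunnel re-established automatically, a tool like autossh can help. A minimal sketch, assuming autossh is installed on LOCAL_SERVER (the keepalive values are just reasonable defaults):

 local-server: ~ autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -fNR 127.0.0.1:8000:localhost:22 <user>@<REMOTE_SERVER>

Here -M 0 disables autossh’s own monitoring port and relies on the ssh keepalive options to detect a dead connection.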

I hope this helps 🙂

Happy tunnelling!

Virtualhost and Letsencrypt

Quick guideline on how to host multiple sites on a single server using virtualhosting, and have the SSL certificates installed and automatically renewed using Letsencrypt.

There are plenty of how-tos online, but I wanted to have a quick reference page for myself 🙂

This has been tested on Debian 12, but it should work on previous Debian versions and on Ubuntu too.

Apache setup and virtualhosts

First, install Apache and the other packages that you will most likely need, especially if you run WordPress or any PHP-based framework:

apt-get install apache2 php php-mysql libapache2-mod-php php-gd php-curl net-tools telnet dos2unix

Now, you should create the folder structure to host your sites. I used /var/www/virtualhosts/<site>/public_html
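
For example, something like this creates the structure for a few sites (the site names are placeholders):

for site in site1 site2 site3; do
    mkdir -p /var/www/virtualhosts/$site/public_html
done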

I made sure permissions were set correctly too:

chown -R www-data:www-data /var/www/
find /var/www -type d -exec chmod 775 {} \;

Now, create a virtualhost file for each site. In the following example we are going to create the conf file for site1.

Create /etc/apache2/sites-available/site1.conf

<VirtualHost *:80>
    ServerName site1.com
    ServerAlias www.site1.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/virtualhosts/site1/public_html

    <Directory /var/www/virtualhosts/site1/public_html>
        Options -Indexes +FollowSymLinks
        AllowOverride All
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/site1-error.log
    CustomLog ${APACHE_LOG_DIR}/site1-access.log combined
</VirtualHost>

Do the same for all the sites you have.

Once done, upload the content of your sites into the public_html folders.

Disable all the default Apache sites and enable the ones you have created. You can use the commands a2dissite and a2ensite, or manually create symbolic links in /etc/apache2/sites-enabled/
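
For example, assuming the stock default site and the site1.conf created above:

a2dissite 000-default
a2ensite site1
systemctl reload apache2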

Check that all the virtualhosts are properly loaded:

source /etc/apache2/envvars
apache2 -S

You should see all your sites under the *:80 section.
Right now we have enabled Apache only on port 80 to serve the hosted sites. No 443 yet.

Now, you can use curl to do some tests to see if the virtual hosts are responding correctly.

~ curl -IH'Host: site1.com' http://<server_IP>  # to get the header of site1.com
~ curl -H'Host: site1.com' http://<server_IP>  # to get the full page of site1.com

Hopefully it all works (if not, troubleshoot it heheh). Now let’s point our DNS to our server, and test directly using the domain names.

All good? Cool!

Make sure your firewall now allows ports 80 and 443. Even if you’re considering serving your site ONLY over SSL (port 443), the certbot tool that does the auto-renewal of the certificate needs port 80 open.

Installation and configuration of certbot – Letsencrypt

As root, issue the commands below:

apt-get install snapd
snap install core
snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot

You now have the certbot tool installed.

Following the above example of site1.com, we are now going to get the SSL certificate for that site (including www.site1.com), and let the tool install and configure everything automatically.

certbot --apache -d site1.com -d www.site1.com

Hopefully all goes well 🙂 Repeat for each of your sites accordingly.

Once done with all the sites, just to make sure the auto-renewal works, you can issue a dry-run check:

certbot renew --dry-run

Letsencrypt certificates last 90 days, but certbot installed this way also takes care of the renewal automatically.
If you’re curious where this is configured (you might look for a cron job and find nothing – like it happened to me), run this command and you should find certbot listed:

systemctl list-timers

More information is available on the official website at this address.

You can now test using curl again, but hitting https instead of http:

~ curl -IH'Host: site1.com' https://<server_IP>  # to get the header of site1.com
~ curl -H'Host: site1.com' https://<server_IP>  # to get the full page of site1.com

Oh, one note.
By default, at least at the time I’m writing this article, once you install the certificate, the *:80 virtualhost of your site will be modified with the following lines, which force a permanent (301) redirect from http to https.

RewriteEngine on
RewriteCond %{SERVER_NAME} =www.site1.com [OR]
RewriteCond %{SERVER_NAME} =site1.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]

If it’s what you want – cool.
If you still want to serve your site on http AND https, comment out (or delete) those new lines.

Happy virtualhosting and ssl’ing! 🙂

Manage PDF files

Merge multiple files into single PDF

I’m sure we’ve all had the need to send back a single PDF file, maybe a signed contract. Yes, those 20 or more pages that you need to return, probably with just two of them filled in and signed.

Some PDFs give you the ability to digitally sign them. But in my experience, most of them aren’t so modern.

So, what do I do?

I print ONLY the pages that I need to sign, scan them, and there I am, needing to “rebuild” the PDF, replacing the signed pages.

Example.
You have the file contract.pdf, with 20 pages and you need to sign page 10 and page 20.
The scan has a different resolution (or, even worse, it’s a different format, like jpg).

Here the command to make the magic happen:

convert contract.pdf[0-8] mypage10.jpg contract.pdf[10-18] mypage20.jpg -resize 1240x1753 -extent 1240x1753 -gravity center -units PixelsPerInch -density 150x150 contract_signed.pdf

The bit before -resize rebuilds the page order: ImageMagick page indices are zero-based, so contract.pdf[0-8] selects pages 1–9, mypage10.jpg takes the place of page 10, contract.pdf[10-18] selects pages 11–19, and mypage20.jpg replaces page 20. The bit after is a way to have all pages fit an A4 format, with a good printable resolution.

Of course, to make this happen, you need Linux (or WSL on Windows 10) and imagemagick installed.

Another way is using ghostscript.

A simple Ghostscript command to merge two PDFs in a single file is shown below:

gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=combine.pdf -dBATCH 1.pdf 2.pdf
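
Ghostscript can also extract a page range, which may be handy for the contract example above. A sketch using the -dFirstPage/-dLastPage switches (file names are just examples):

gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dFirstPage=1 -dLastPage=9 -sOutputFile=pages1-9.pdf contract.pdf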

What about a quick one-liner to reduce your PDF and convert it to grayscale?

ghostscript -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

PDF size reduce

Sometimes, instead, you need to reduce the size of an existing PDF. Here’s a handy one-liner, using ghostscript:

ghostscript -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

Other options for PDFSETTINGS:

  • /screen selects low-resolution output similar to the Acrobat Distiller “Screen Optimized” setting.
  • /ebook selects medium-resolution output similar to the Acrobat Distiller “eBook” setting.
  • /printer selects output similar to the Acrobat Distiller “Print Optimized” setting.
  • /prepress selects output similar to Acrobat Distiller “Prepress Optimized” setting.
  • /default selects output intended to be useful across a wide variety of uses, possibly at the expense of a larger output file.

Happy PDF’ing 🙂


Sources:
https://stackoverflow.com/questions/23214617/imagemagick-convert-image-to-pdf-with-a4-page-size-and-image-fit-to-page
https://www.shellhacks.com/merge-pdf-files-linux-command-line/

https://gist.github.com/firstdoit/6390547

Migrate Linux Subsystem from one PC to another

Are you enjoying your favorite Linux distro running within the Windows 10 Linux Subsystem?

Have you configured all nicely?

What happens if you get a new PC and you’d like to migrate your VM across?

This is what happened to me. Looking around, I found this post that suggested a kinda-dirty way, which did work!

After that, I decided to review the steps, and I’ve added these directories to the exclude list, to make the export/import process cleaner:

/dev
/proc
/sys
/run
/tmp
/media
/mnt
/var/cache
/var/run

Of course, if you have important data in these folders and you want to move it across too, just update the one-liner below accordingly. 😉

On your OLD PC

  • Open your Linux VM
  • Get inside your Downloads directory (replace <user> with your username): cd /mnt/c/Users/<user>/Downloads
  • Make sure to be root (sudo su -)
  • Run:
    tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run --exclude=/tmp --exclude=/media --exclude=/mnt --exclude=/var/cache --exclude=/var/run --one-file-system /
    NOTE: you could achieve the same using the option --exclude-from=file.txt, and having the list of exclusions in that file (see the sketch after this list). I used a one-liner as it’s quicker to copy and paste.
  • Once done, close your Linux VM
  • Verify that you have a new file called backup.tar.gz in Downloads
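
For reference, the --exclude-from variant mentioned above could look like this (the file name /tmp/exclude.txt is arbitrary):

cat > /tmp/exclude.txt <<'EOF'
/backup.tar.gz
/dev
/proc
/sys
/run
/tmp
/media
/mnt
/var/cache
/var/run
EOF
tar -cvpzf backup.tar.gz --exclude-from=/tmp/exclude.txt --one-file-system /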

On your NEW PC

  • Install from Microsoft Store the same Linux VM (or reinstall in the same way you have done originally on your old pc)
  • Copy across your backup.tar.gz to your new Downloads folder
  • Open the VM that you’ve just installed (minimal setup – this will be completely overwritten, so don’t be bothered too much)
  • Once you’re inside and your backup.tar.gz is in Downloads, run the following (replace <user> with your username):
    sudo tar -xpzf /mnt/c/Users/<user>/Downloads/backup.tar.gz -C / --numeric-owner
  • Ignore the errors
  • Close and re-open the VM: DONE! 🙂

Happy migration! 😉

Bridge / Bond interfaces CentOS/RedHat

Just a few notes about how to bridge or bond network interfaces on CentOS/RedHat systems.

# Install the required packages

yum install bridge-utils


BRIDGE
------

/etc/sysconfig/network-scripts/

#ifcfg-br0
DEVICE=br0
TYPE=Bridge
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
DELAY=0

#ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
HWADDR=AA:BB:CC:DD:EE:FF
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0


#### USE SCREEN!!
service network restart 

================================
BOND >>> 2 or more eth interfaces!
----

#ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
USERCTL=no
SLAVE=yes
MASTER=bond0
BOOTPROTO=none
HWADDR=AA:BB:CC:DD:EE:FF
NM_CONTROLLED=no

#ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
USERCTL=no
SLAVE=yes
MASTER=bond0
BOOTPROTO=none
HWADDR=AA:BB:CC:DD:EE:FF
NM_CONTROLLED=no

#ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=br0
NM_CONTROLLED=no

#ifcfg-br0
DEVICE=br0
ONBOOT=yes
TYPE=Bridge
IPADDR=192.168.1.1
NETMASK=255.255.255.0
NM_CONTROLLED=no


# ifup bond0
#### USE SCREEN!!
# service network restart 
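
To verify the result, these standard commands show the bridge members and the bonding status:

brctl show
cat /proc/net/bonding/bond0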


==========================

For DHCP instead of a static IP:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes

 


Banana Pi Pro – WLAN configuration

Add ‘ap6210’ to /etc/modules to enable WiFi, and use modprobe ap6210 to force-load the module.

Check dmesg to see if all has been loaded correctly. If not, reboot and check again.

dmesg|grep WLAN

Install the required packages:

apt-get install wireless-tools iw wpasupplicant

Modify /etc/network/interfaces

# Dynamic IP:
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ap-scan 1
wpa-scan-ssid 1
wpa-ssid "WIFI_NETWORK_NAME"
wpa-psk "WLAN-KEY"

# Static IP:
auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.xx.yy
netmask 255.255.255.0
gateway 192.168.0.1
wpa-ap-scan 1
wpa-scan-ssid 1
wpa-ssid "WIFI_NETWORK_NAME"
wpa-psk "WLAN-KEY"

Bring the interface up:

ifconfig wlan0 up

Source: http://oyox.de/882-wlan-auf-bananian-banana-pi-einrichten/

Nagios3 and Lighttpd

This guide will explain how to install Nagios3 on a machine with Debian and Lighttpd webserver.

If you haven’t installed Lighttpd yet, please follow this tutorial.

Install Nagios server

Now, let’s install Nagios.

apt-get install nagios3 nagios-plugins nagios-nrpe-plugin

This will automatically install all the required dependencies.

Enable check_external_commands in /etc/nagios3/nagios.cfg

check_external_commands=1

Add www-data to the nagios group:

usermod -a -G nagios www-data

And fix some permissions to avoid errors like “error: Could not stat() command file”:

chmod g+x /var/lib/nagios3/rw

Let’s configure Lighttpd a bit.
Make sure the cgi and php modules are enabled.

Then, create a new conf file and enable it:

vim /etc/lighttpd/conf-available/10-nagios3.conf
# Nagios3
 
alias.url =     (
                "/cgi-bin/nagios3" => "/usr/lib/cgi-bin/nagios3",
                "/nagios3/cgi-bin" => "/usr/lib/cgi-bin/nagios3",
                "/nagios3/stylesheets" => "/etc/nagios3/stylesheets",
                "/nagios3" => "/usr/share/nagios3/htdocs"
                )
 
$HTTP["url"] =~ "^/nagios3/cgi-bin" {
        cgi.assign = ( "" => "" )
}
 
$HTTP["url"] =~ "nagios" {
        auth.backend = "htpasswd"
        auth.backend.htpasswd.userfile = "/etc/nagios3/htpasswd.users"
        auth.require = ( "" => (
                "method" => "basic",
                "realm" => "nagios",
                "require" => "user=nagiosadmin"
                )
        )
        setenv.add-environment = ( "REMOTE_USER" => "user" )
}
lighttpd-enable-mod nagios3

Let’s apply the changes:

/etc/init.d/lighttpd force-reload

We need to set up the “nagiosadmin” password:

htpasswd -c /etc/nagios3/htpasswd.users nagiosadmin

Now, open your browser and go to http://yourserver/nagios3
Insert username nagiosadmin and the password you’ve just chosen… and voilà… 🙂

And now we have our Nagios server installed. As you can see, it’s currently monitoring itself.

But what about the other hosts in the network?

Adding hosts

Host configuration

To let our Nagios server monitor other hosts, we need to follow these steps on every client we want to add:

apt-get install -y nagios-plugins nagios-nrpe-server

Once completed, we need to add the IP of our monitoring host in /etc/nagios/nrpe.cfg under allowed_hosts=xxx.xxx.xxx.xxx.

Also, add this line in /etc/nagios/nrpe_local.cfg:

command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w '20%' -c '10%' -e -A

This will be used by our monitoring server to query NRPE and report info about ALL the disks.
You can also use the -I flag to exclude a specific path. For example, on my Time Capsule Pi, I’ve used the following line to exclude the mount point “TimeMachine” from the checks:

command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w '20%' -c '10%' -e -A -I '/TimeMachine/*'
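
Before defining any services, you can quickly check from the monitoring server that NRPE answers; if all is fine, it should print the NRPE version (the plugin path below is the Debian default):

/usr/lib/nagios/plugins/check_nrpe -H xxx.xxx.xxx.xxx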

Monitoring configuration for new host

Now, back to our Nagios monitoring machine.
In /etc/nagios3/conf.d create a file called, for example, host1_nagios2.cfg and add the following basic services (add/remove/modify based on your local configuration):

define host{
        use             generic-host
        host_name       host1
        alias           host1
        address         xxx.xxx.xxx.xxx
}

define service{
        use                     generic-service
        host_name               host1
        service_description     Current Load
        check_command           check_nrpe_1arg!check_load
}

define service{
        use                     generic-service
        host_name               host1
        service_description     Current Users
        check_command           check_nrpe_1arg!check_users
}
define service{
        use                     generic-service
        host_name               host1
        service_description     Disk Space
        check_command           check_nrpe_1arg!check_all_disks
}
define service{
        use                     generic-service
        host_name               host1
        service_description     Total Processes
        check_command           check_nrpe_1arg!check_total_procs
}

Also, you can add the new host host1 to be part of any related groups, modifying /etc/nagios3/conf.d/hostgroups_nagios2.cfg

For example, we can add it to debian-servers and ssh-servers groups. This will automatically get some checks like SSH.

# Some generic hostgroup definitions

# A simple wildcard hostgroup
define hostgroup {
        hostgroup_name  all
        alias           All Servers
        members         *
        }

# A list of your Debian GNU/Linux servers
define hostgroup {
        hostgroup_name  debian-servers
		alias           Debian GNU/Linux Servers
		members         localhost,host1
        }

# A list of your web servers
define hostgroup {
        hostgroup_name  http-servers
		alias           HTTP servers
		members         localhost
        }

# A list of your ssh-accessible servers
define hostgroup {
        hostgroup_name  ssh-servers
		alias           SSH servers
		members         localhost,host1
        }

Sources:
http://zeldor.biz/2010/11/nagios3-with-lighttpd/comment-page-1/
https://www.digitalocean.com/community/articles/how-to-install-nagios-on-ubuntu-12-10
http://cloud101.eu/blog/2012/03/01/setting-up-nagios-on-debian-or-ubuntu/
http://technosophos.com/2010/01/13/nagios-fixing-error-could-not-stat-command-file-debian.html

Lighttpd and virtualhosts

Here’s a quick how-to on configuring Lighttpd to run with virtualhosts.
This has been installed and tested on a Raspberry Pi.

apt-get install lighttpd php5 php5-cgi

Enable modules:

lighttpd-enable-mod auth cgi fastcgi fastcgi-php nagios3 simple-vhost ssl status

Content of /etc/lighttpd/lighttpd.conf

server.modules = (
        "mod_access",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
#       "mod_rewrite",
)

server.document-root        = "/var/www"
server.upload-dirs          = ( "/var/cache/lighttpd/uploads" )
server.errorlog             = "/var/log/lighttpd/error.log"
server.pid-file             = "/var/run/lighttpd.pid"
server.username             = "www-data"
server.groupname            = "www-data"
server.port                 = 80


index-file.names            = ( "index.php", "index.html", "index.lighttpd.html" )
url.access-deny             = ( "~", ".inc" )
static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

compress.cache-dir          = "/var/cache/lighttpd/compress/"
compress.filetype           = ( "application/javascript", "text/css", "text/html", "text/plain" )

# default listening port for IPv6 falls back to the IPv4 port
include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
include_shell "/usr/share/lighttpd/create-mime.assign.pl"
include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

To easily manage virtual hosts, edit /etc/lighttpd/conf-available/10-simple-vhost.conf

server.modules += ( "mod_simple_vhost" )
simple-vhost.server-root = "/var/www/vhost"
simple-vhost.default-host = "error.default.loc"
simple-vhost.document-root = "/"

This configuration will allow you to manage your virtualhosts simply by storing them in a folder under /var/www/vhost.
No extra configuration is needed on the server side.
Simply go into /var/www/vhost and create a folder named after the virtualhost you would like to manage.
In this particular case, please make sure to have a folder called error.default.loc with a page inside, which will be displayed in case of ANY error.
For example, if you want to manage mysite.example.com, simply do the following:

cd /var/www/vhost
mkdir mysite.example.com
chown www-data:www-data mysite.example.com

…and put the html/php files inside that new folder! 🙂

To test if our webserver works, you can always use the curl command as explained here.
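
For instance, a quick check against the new virtualhost, run on the server itself:

curl -H'Host: mysite.example.com' http://localhost/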

Puppet – Let’s start

Puppet is a quite powerful configuration management tool which allows you to automatically configure hosts and keep configurations consistent.

I did some tests using 3 VMs:

  • puppetmaster (server)
  • puppetagent01 (client)
  • puppetagent02 (client)

Of course, most of the work is done on the puppetmaster server. On the other two machines you will simply see the outcome of the configurations that you’re going to set on puppetmaster.

Important: all the machines have to be able to communicate with each other. Please make sure DNS is working, or set local names/IPs in the /etc/hosts file, and do some ping tests before proceeding.

Client setup

On each puppetagent machine, just install the puppet package:

apt-get install puppet

By default, the client will look for a host called “puppet” on the network.
If your DNS/hosts file doesn’t have this entry and it can’t be resolved, you can manually set the name of the puppetmaster in the /etc/puppet/puppet.conf file, adding this line under the [main] section:

server=puppetmaster.yournet.loc

Now, no more configuration is required on the client side. Just edit /etc/default/puppet to start at boot time, and start the service.

# Defaults for puppet - sourced by /etc/init.d/puppet

# Start puppet on boot?
START=yes

# Startup options
DAEMON_OPTS=""

 

service puppet start

Starting the service will automatically send a request to the server to be added under its control.

If you want to do some tests, you can use the following command to run puppet only once. This also forces the polling for updates, which by default happens every 30 minutes.

puppet agent --no-daemonize --onetime --verbose

You can repeat all these steps on the second client machine.

Server setup

apt-get install puppetmaster

Check if the service is running; otherwise, start it up.

Sign clients’ certificates on the server side

Puppet uses this client/server certificate sign system to add/remove hosts from being managed by the server.

To see which hosts have requested to be “controlled”, use this command:

puppet cert --list

This will show all the hosts waiting to be added under puppetmaster server.

puppet cert --sign <hostname>

This command will add the host.

Puppetmaster configuration files

The main configuration file is /etc/puppet/manifests/site.pp

Inside the manifests folder, I’ve created a subfolder called classes with extra definitions (the content of these files is shown later in this post).

/etc/puppet/manifests# tree
.
|___ classes
|   |___ apache.pp
|   |___ mysite.pp
|   |___ ntpd.pp
|   |___ packages.pp
|___ site.pp

/etc/puppet/manifests/site.pp

import 'classes/*.pp'
# This add all the custom .pp files into classes folder
class puppettools {
# Creates a file, setting permissions and content
        file { '/usr/local/sbin/puppet_once.sh':
                owner => root, group => root, mode => 755,
                content => "#!/bin/sh\npuppet agent --no-daemonize --onetime --verbose $1\n",
        }
# Install (if not present) some puppet modules required for 'vimconf' class
        exec { "install_puppet_module":
        command => "puppet module install puppetlabs-stdlib",
        path => [ "/bin", "/sbin", "/usr/bin", "/usr/sbin",
              "/usr/local/bin", "/usr/local/sbin" ],
        onlyif  => "test `puppet module list | grep puppetlabs-stdlib | wc -l` -eq 0"
        }
}

class vimconf {
# Modify vimrc conf file, enabling syntax on
        file_line { 'vim_syntax_on':
        path  => '/etc/vim/vimrc',
        match => '^.*syntax on.*$',
        line  => 'syntax on',
        }
}

node  default {
# this will be applied to all nodes without specific node definitions
        include packages
        include vimconf
        include ntp
        include puppettools
}

node  'puppetagent01' inherits default {
# this specific node, gets all the default classes PLUS some extras
        include mysite
}

Here’s the content of the individual .pp files in the classes folder:

class apache {
	package { 'apache2-mpm-prefork':
		ensure => installed
	}

	service { 'apache2':
		ensure => running,
		hasstatus => true,
		hasrestart => true,
	}
}

 

class mysite {

	include apache

	file { '/etc/apache2/sites-available/mysite':
		owner => root, group => root, mode => 0644,
		source => "puppet:///files/mysite/mysite_apache.conf",
	}

	file {'/var/www/mysite.localdomain':
		ensure => directory,
	}

	file {'/var/www/mysite.localdomain/index.html':
                owner => root, group => www-data, mode => 0755,
                source => "puppet:///files/mysite/index.html",
	}

	 exec {'/usr/sbin/a2dissite * ; /usr/sbin/a2ensite mysite':
            	onlyif => '/usr/bin/test -e /etc/apache2/sites-available/mysite',
		notify => Service['apache2'],
	}
}

 

class ntp {
		package { ntp: ensure => present }
		file { "/etc/ntp.conf":
			owner	 => root,
			group	 => root,
			mode	=> 444,
			backup => false,
			source	=> "puppet:///files/etc/ntp.conf",
			require => Package["ntp"],
                        notify => Service["ntp"],
		}
		service { "ntp":
			enable => true ,
			ensure => running,
			subscribe => [Package[ntp], File["/etc/ntp.conf"],],
		}
	}

 

class packages  {
        Package { ensure => "installed" }

        package { "screen": }
        package { "dselect": }
        package { "vim": }
        package { "curl": }
}

 

It’s important to remember NOT to duplicate entries.
For example, in this case we have a specific file where we set up the ntp service, including the required package. This means that we must NOT add this package to the list in packages.pp, otherwise you will get an error and the configs won’t get pushed.

As I’m sure you’ve noticed, there are references to some “files”.
Yes, we need some extra configuration, to tell puppet to act as a file server as well, and where the files are located.

In our example we are storing our files in here:

mkdir -p /etc/puppet/files

Now we need to add the following in /etc/puppet/fileserver.conf

[files]
  path /etc/puppet/files
  allow *

The last bit is creating the subfolders and placing the files required for our configuration:

mkdir -p /etc/puppet/files
cd /etc/puppet/files
mkdir mysite etc

Inside mysite, create the mysite_apache.conf and index.html files.

Example mysite_apache.conf

<VirtualHost *:80> 
  ServerName mysite.localdomain 
  DocumentRoot /var/www/mysite.localdomain 
</VirtualHost>

For index.html, you can simply have some text, just for testing purposes.

In this example, we have also set up ntp to be installed and to have a custom ntp.conf file pushed.
For this reason, we need to make sure this file is present in /etc/puppet/files/etc, as declared in our .pp file.

After doing all these changes, you should restart your puppetmaster service on the server.

If all went well, you should have the following:

  • puppetagent02: host with screen, dselect and vim installed (with syntax on), plus ntp installed and running with the custom ntp.conf file
  • puppetagent01: the same as puppetagent02 PLUS apache with a running website

Of course this is just a raw example, and you can use templates and other super features.
But I think it’s a good start 😉

 

Sources:


https://forge.puppetlabs.com/puppetlabs/stdlib
http://finninday.net/wiki/index.php/Zero_to_puppet_in_one_day
http://www.puppetcookbook.com/
http://foaa.de/old-blog/2010/07/playing-with-puppets-on-debian/trackback/index.html
http://www.harker.com/puppet/BayLISA100715.html
http://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html

DNS updated via DHCP: BIND9 and ISC-DHCP on Linux

Linux: Debian stable (currently version 7)

Packages:

apt-get install bind9 isc-dhcp-server

Create a key required for DHCP server to update the DNS zones:

/usr/sbin/rndc-confgen -a

This will create /etc/bind/rndc.key, whose contents will look something like this:

key "rndc-key" {
algorithm hmac-md5;
secret "+zZSeeetHWFdNwECit1Ktw==";
};
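
It may be worth making sure that the key file is readable by BIND but not world-readable. On Debian, the usual convention is something like this (adjust group and paths to your system):

chown root:bind /etc/bind/rndc.key
chmod 640 /etc/bind/rndc.key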

BIND configuration

Configuration files:

 

/etc/hosts

127.0.0.1 localhost
10.0.60.60 dns.lab.loc dns

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

 

/etc/bind/named.conf.local

// Do any local configuration here
// Consider adding the 1918 zones here, if they are not used in your organization
include "/etc/bind/zones.rfc1918";

include "/etc/bind/rndc.key";

zone "lab.loc" {
type master;
file "/etc/bind/db.lab.loc";
allow-update { key rndc-key; };
};

zone "60.0.10.in-addr.arpa" {
type master;
file "/etc/bind/db.10.0.60";
allow-update { key rndc-key; };
};

 

/etc/bind/named.conf.options

(just to set up the external forwarders)

options {
directory "/var/cache/bind";

// If there is a firewall between you and nameservers you want
// to talk to, you may need to fix the firewall to allow multiple
// ports to talk. See http://www.kb.cert.org/vuls/id/800113

// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.

forwarders {
208.67.222.222;208.67.220.220;8.8.8.8;8.8.4.4;
};

//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;

auth-nxdomain no; # conform to RFC1035

allow-query {
10.0.60/24;
127.0.0.1;
};
allow-transfer {
10.0.60/24;
127.0.0.1;
};

listen-on-v6 { any; };
};

 

/etc/bind/db.lab.loc

$ORIGIN lab.loc.
$TTL 24h ;$TTL (DNS time-to-live setting) used for all RRs without explicit TTL value

;SOA - Start of Authority. This is the record that states that this server is authoritative for the specified domain
;The SOA record lists the name server for the domain, and next the e-mail address of the administer of the domain
;(note that the @ has been replaced by a period).
@ IN SOA dns.lab.loc. root.lab.loc. (
2014032109 ; serial YYYYMMDDNN
10800 ; refresh (3 hours)
1800 ; retry (30 minutes)
604800 ; expire (1 week)
38400 ; minimum (10 hrs 40 min)
)
IN NS dns.lab.loc. ;Specifies the name server to use to look up a domain
; IN NS dns2.lab.loc. ;Specifies the name server to use to look up a domain
IN A 10.0.60.60 ; IP Address(es) of the DNS server(s)
; IN A 10.0.60.61 ; IP Address(es) of the DNS server(s)
IN MX 10 dns.lab.loc. ;Specifies mail server(s) for the domain

; HOSTS
dns IN A 10.0.60.60
;dns2 A 10.0.60.61

esxi01 IN A 10.0.60.71
esxi02 IN A 10.0.60.72
esxi03 IN A 10.0.60.73

freenas IN A 10.0.60.80

mail IN CNAME dns
dnsmaster IN CNAME dns
storage IN CNAME freenas

 

/etc/bind/db.10.0.60

; BIND reverse file for lab.loc
$ORIGIN 60.0.10.in-addr.arpa.
$TTL 24h
@ IN SOA dns.lab.loc. root.lab.loc. (
2014032104 ; serial number YYYYMMDDNN
10800 ; Refresh (3 hours)
3600 ; Retry (1 hour)
604800 ; Expire (1 week)
38400 ; Min TTL (10 hours 40 minutes)
)
IN NS dns.lab.loc.
; IN NS dns2.lab.loc.

;LIST OF HOSTS (reverse)

60 IN PTR dns.lab.loc.

71 IN PTR esxi01.lab.loc.
72 IN PTR esxi02.lab.loc.
73 IN PTR esxi03.lab.loc.

80 IN PTR freenas.lab.loc.
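
Before moving on to DHCP, it’s worth validating the configuration and both zone files with the standard BIND tools:

named-checkconf
named-checkzone lab.loc /etc/bind/db.lab.loc
named-checkzone 60.0.10.in-addr.arpa /etc/bind/db.10.0.60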

 

DHCP configuration

Here, just one file has to be modified: dhcpd.conf

/etc/dhcp/dhcpd.conf

Here we need to enter the key in plain text.

# DHCPD
ddns-updates on;
ddns-update-style interim;
update-static-leases on;
authoritative;
key rndc-key { algorithm hmac-md5; secret +zZSeeetHWFdNwECit1Ktw==; }
allow unknown-clients;
use-host-decl-names on;
default-lease-time 1814400; #21 days
max-lease-time 1814400; #21 days
log-facility local7;

# lab.loc DNS zones
zone lab.loc. {
primary localhost; # This server is the primary DNS server for the zone
key rndc-key; # Use the key we defined earlier for dynamic updates
}
zone 60.0.10.in-addr.arpa. {
primary localhost; # This server is the primary DNS server for the zone
key rndc-key; # Use the key we defined earlier for dynamic updates
}

# lab.loc LAN scope
subnet 10.0.60.0 netmask 255.255.255.0 {
range 10.0.60.100 10.0.60.200;
option subnet-mask 255.255.255.0;
option routers 10.0.60.2;
option domain-name-servers 10.0.60.60;
option domain-name "lab.loc";
ddns-domainname "lab.loc.";
ddns-rev-domainname "in-addr.arpa.";
}

# lab.loc STATIC assigned group
group {
host freenas.lab.loc {
hardware ethernet 00:0c:29:18:af:b4;
fixed-address 10.0.60.80;
ddns-hostname "freenas";
}
host esxi01.lab.loc {
hardware ethernet 00:0c:29:d4:14:ce;
fixed-address 10.0.60.71;
ddns-hostname "esxi01";
}
host esxi02.lab.loc {
hardware ethernet 00:0c:29:2c:30:fd;
fixed-address 10.0.60.72;
ddns-hostname "esxi02";
}
host esxi03.lab.loc {
hardware ethernet 00:0c:29:46:90:fd;
fixed-address 10.0.60.73;
ddns-hostname "esxi03";
}
}

 

Once everything is configured, just restart bind and dhcp:

/etc/init.d/bind9 restart && /etc/init.d/isc-dhcp-server restart
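
To verify that everything works end to end, you can query the DNS server directly, for example for one of the statically assigned hosts; both commands should return the matching A and PTR records:

dig @10.0.60.60 freenas.lab.loc +short
dig @10.0.60.60 -x 10.0.60.80 +short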

 

Sources:

https://www.centos.org/docs/4/html/rhel-rg-en-4/s1-bind-zone.html