Once completed, we need to add the IP of our monitoring host to /etc/nagios/nrpe.cfg, under allowed_hosts=xxx.xxx.xxx.xxx.
Also, add this line to /etc/nagios/nrpe_local.cfg:
command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w '20%' -c '10%' -e -A
This command will be used by our monitoring server to query NRPE and report on ALL the disks.
You can also use the -I flag to exclude a specific path. For example, on my Time Capsule Pi I’ve used the following line to exclude the mount point “TimeMachine” from the checks:
command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w '20%' -c '10%' -e -A -I '/TimeMachine/*'
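After changing these files, restart NRPE (on Debian: service nagios-nrpe-server restart). You can then test the new command from the monitoring host with the check_nrpe plugin, for example (where xxx.xxx.xxx.xxx is the IP of the machine running NRPE):
/usr/lib/nagios/plugins/check_nrpe -H xxx.xxx.xxx.xxx -c check_all_disks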
Monitoring configuration for new host
Now, back to our Nagios monitoring machine.
In /etc/nagios3/conf.d, create a file called, for example, host1_nagios2.cfg and add the following basic services (add/remove/modify based on your local configuration):
define host{
        use                     generic-host
        host_name               host1
        alias                   host1
        address                 xxx.xxx.xxx.xxx
}

define service{
        use                     generic-service
        host_name               host1
        service_description     Current Load
        check_command           check_nrpe_1arg!check_load
}

define service{
        use                     generic-service
        host_name               host1
        service_description     Current Users
        check_command           check_nrpe_1arg!check_users
}

define service{
        use                     generic-service
        host_name               host1
        service_description     Disk Space
        check_command           check_nrpe_1arg!check_all_disks
}

define service{
        use                     generic-service
        host_name               host1
        service_description     Total Processes
        check_command           check_nrpe_1arg!check_total_procs
}
Also, you can add the new host host1 to any related hostgroups by modifying /etc/nagios3/conf.d/hostgroups_nagios2.cfg.
For example, we can add it to the debian-servers and ssh-servers groups. This way it will automatically inherit the group checks, such as SSH.
# Some generic hostgroup definitions

# A simple wildcard hostgroup
define hostgroup {
        hostgroup_name  all
        alias           All Servers
        members         *
}

# A list of your Debian GNU/Linux servers
define hostgroup {
        hostgroup_name  debian-servers
        alias           Debian GNU/Linux Servers
        members         localhost,host1
}

# A list of your web servers
define hostgroup {
        hostgroup_name  http-servers
        alias           HTTP servers
        members         localhost
}

# A list of your ssh-accessible servers
define hostgroup {
        hostgroup_name  ssh-servers
        alias           SSH servers
        members         localhost,host1
}
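After saving these files, verify the configuration and reload Nagios; on Debian, something like:
nagios3 -v /etc/nagios3/nagios.cfg
service nagios3 reload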
The configuration above will allow you to manage your virtualhosts by simply storing them in a folder under /var/www/vhost.
No extra configuration is needed from the server side.
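For reference, the server-side setup this relies on is typically a mod_vhost_alias catch-all; a minimal sketch (assumed here, since the original Apache config isn't shown in this section):
# Requires mod_vhost_alias: a2enmod vhost_alias
<VirtualHost *:80>
    ServerAlias *
    UseCanonicalName Off
    # %0 expands to the full Host: header, mapping each request to /var/www/vhost/<hostname>
    VirtualDocumentRoot /var/www/vhost/%0
</VirtualHost>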
Simply go into /var/www/vhost and create a folder named after the virtualhost you would like to manage.
In this particular case, make sure you also have a folder called error.default.loc containing a page which will be displayed in case of ANY error.
For example, if you want to manage mysite.example.com, simply do the following:
cd /var/www/vhost
mkdir mysite.example.com
chown www-data:www-data mysite.example.com
…and put the html/php files inside that new folder!
To test whether our webserver works, you can always use the curl command as explained here.
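For example, from the server itself you can fake the Host header, without touching DNS:
curl -s -H "Host: mysite.example.com" http://localhost/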
Puppet is a quite powerful configuration management tool which allows you to configure hosts automatically and keep their configurations consistent.
I did some tests using 3 VMs:
puppetmaster (server)
puppetagent01 (client)
puppetagent02 (client)
Of course, most of the work is done on the puppetmaster server. On the last two machines you will simply see the outcome of the configurations that you’re going to set on puppetmaster.
Important: all the machines have to be able to communicate with each other. Please make sure DNS is working or set local names/IPs in the /etc/hosts file, and do some ping tests before proceeding.
Client setup
On each puppetagent machine, just install the package puppet
apt-get install puppet
By default, the client will look for a host called “puppet” on the network.
If your DNS/hosts file doesn’t have this entry and it can’t be resolved, you can manually set the name of the puppetmaster in the /etc/puppet/puppet.conf file, adding this line under the [main] section:
server=puppetmaster.yournet.loc
Now, no more configuration is required on the client side. Just edit /etc/default/puppet to start the daemon at boot time, then start the service.
# Defaults for puppet - sourced by /etc/init.d/puppet
# Start puppet on boot?
START=yes
# Startup options
DAEMON_OPTS=""
service puppet start
Starting the service will automatically send a request to the server, asking to be added under its control.
If you want to do some tests, you can use the following command to run puppet only once. This also forces an immediate poll for updates, which by default happens every 30 minutes.
puppet agent --no-daemonize --onetime --verbose
You can repeat all these steps on the second client machine.
Server setup
apt-get install puppetmaster
Check if the service is running; if not, start it up.
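For example, on Debian:
service puppetmaster status
service puppetmaster start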
Sign clients’ certificates on the server side
Puppet uses a client/server certificate-signing system to add and remove hosts from the server’s control.
To see who has requested to be “controlled” use this command:
puppet cert --list
This will show all the hosts waiting to be added under puppetmaster server.
puppet cert --sign <hostname>
This command will add the host.
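For example, with the two test clients used here (use the exact certificate names shown by puppet cert --list; they are usually the FQDNs):
puppet cert --sign puppetagent01
puppet cert --sign puppetagent02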
Puppetmaster configuration files
The main configuration file is /etc/puppet/manifests/site.pp
Inside the manifests folder, I’ve created a subfolder called classes with extra definitions (the content of these files is shown later in this post).
# This imports all the custom .pp files from the classes folder
import 'classes/*.pp'
class puppettools {
    # Creates a helper script, setting permissions and content
    # (note the escaped \$1, so puppet doesn't try to interpolate it)
    file { '/usr/local/sbin/puppet_once.sh':
        owner   => root,
        group   => root,
        mode    => '0755',
        content => "#!/bin/sh\npuppet agent --no-daemonize --onetime --verbose \$1\n",
    }
    # Install (if not present) the puppet module required by the 'vimconf' class
    exec { 'install_puppet_module':
        command  => 'puppet module install puppetlabs-stdlib',
        path     => [ '/bin', '/sbin', '/usr/bin', '/usr/sbin', '/usr/local/bin', '/usr/local/sbin' ],
        provider => shell,    # the onlyif test uses backticks and a pipe, so it needs a shell
        onlyif   => 'test `puppet module list | grep puppetlabs-stdlib | wc -l` -eq 0',
    }
}
class vimconf {
    # Modify the vimrc conf file, enabling 'syntax on'
    file_line { 'vim_syntax_on':
        path  => '/etc/vim/vimrc',
        match => '^.*syntax on.*$',
        line  => 'syntax on',
    }
}
node default {
    # This will be applied to all nodes without specific node definitions
    include packages
    include vimconf
    include ntp
    include puppettools
}

node 'puppetagent01' inherits default {
    # This specific node gets all the default classes PLUS some extras
    include mysite
}
Here is the content of the individual .pp files in the classes folder.
It’s important to remember NOT to duplicate entries.
For example, in this case we have a specific file that sets up the ntp service, including the required package. This means that we must NOT add this package to the list in packages.pp as well, otherwise you will get an error and the configs won’t be pushed.
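The class files themselves aren't reproduced above, but as an example, a minimal ntp.pp along the lines described could look like this (a sketch, assuming the puppet:///files mount configured below; adapt names to your setup):
class ntp {
    # Install the ntp package
    package { 'ntp':
        ensure => installed,
    }
    # Push the custom ntp.conf from the puppetmaster file server
    file { '/etc/ntp.conf':
        owner   => root,
        group   => root,
        mode    => '0644',
        source  => 'puppet:///files/etc/ntp.conf',
        require => Package['ntp'],
        notify  => Service['ntp'],
    }
    # Keep the service running, restarting it when the config changes
    service { 'ntp':
        ensure => running,
        enable => true,
    }
}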
As I’m sure you’ve noticed, there are references to some “files”.
Yes, we need some extra configuration to tell puppet to act as a file server too, and where the files are located.
In our example, we are storing our files here:
mkdir -p /etc/puppet/files
Now we need to add the following in /etc/puppet/fileserver.conf:
[files]
path /etc/puppet/files
allow *
The last bit is creating the subfolders and placing the files required for our configuration:
mkdir -p /etc/puppet/files
cd /etc/puppet/files
mkdir mysite etc
Inside mysite create mysite_apache.conf and index.html files.
For index.html, you can simply have some text, just for testing purposes.
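For reference, here is a sketch of what a matching mysite class could look like, consuming these two files (the actual class isn't shown in this post, so paths and names are assumptions):
class mysite {
    # Install apache, push the vhost config plus a test page
    package { 'apache2':
        ensure => installed,
    }
    file { '/etc/apache2/sites-enabled/mysite.conf':
        source  => 'puppet:///files/mysite/mysite_apache.conf',
        require => Package['apache2'],
        notify  => Service['apache2'],
    }
    file { '/var/www/index.html':
        source  => 'puppet:///files/mysite/index.html',
        require => Package['apache2'],
    }
    service { 'apache2':
        ensure => running,
        enable => true,
    }
}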
In this example, we have also set up ntp to be installed and to have a custom ntp.conf file pushed.
For this reason, we need to make sure this file is present in /etc/puppet/files/etc, as declared in our .pp file.
After making all these changes, you should restart the puppetmaster service on the server.
If all went well, you should have the following:
puppetagent02: screen, dselect and vim installed (vim with syntax on), plus ntp installed and running with the custom ntp.conf file
puppetagent01: the same as puppetagent02, PLUS apache with a running website
Of course, this is just a rough example, and you can use templates and other more advanced features.
But I think it’s a good start!
/etc/hosts
127.0.0.1 localhost
10.0.60.60 dns.lab.loc dns
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/etc/bind/named.conf.local
// Do any local configuration here
// Consider adding the 1918 zones here, if they are not used in your organization
include "/etc/bind/zones.rfc1918";
include "/etc/bind/rndc.key";
zone "lab.loc" {
type master;
file "/etc/bind/db.lab.loc";
allow-update { key rndc-key; };
};
zone "60.0.10.in-addr.arpa" {
type master;
file "/etc/bind/db.10.0.60";
allow-update { key rndc-key; };
};
/etc/bind/named.conf.options
(just to set up the external forwarders)
options {
        directory "/var/cache/bind";

        // If there is a firewall between you and nameservers you want
        // to talk to, you may need to fix the firewall to allow multiple
        // ports to talk. See http://www.kb.cert.org/vuls/id/800113

        // If your ISP provided one or more IP addresses for stable
        // nameservers, you probably want to use them as forwarders.
        forwarders {
                208.67.222.222; 208.67.220.220; 8.8.8.8; 8.8.4.4;
        };

        //========================================================================
        // If BIND logs error messages about the root key being expired,
        // you will need to update your keys. See https://www.isc.org/bind-keys
        //========================================================================
        dnssec-validation auto;

        auth-nxdomain no;    # conform to RFC1035

        allow-query {
                10.0.60/24;
                127.0.0.1;
        };
        allow-transfer {
                10.0.60/24;
                127.0.0.1;
        };

        listen-on-v6 { any; };
};
/etc/bind/db.lab.loc
$ORIGIN lab.loc.
$TTL 24h ;$TTL (DNS time-to-live setting) used for all RRs without explicit TTL value
;SOA - Start of Authority. This is the record that states that this server is authoritative for the specified domain
;The SOA record lists the name server for the domain, followed by the e-mail address of the administrator of the domain
;(note that the @ has been replaced by a period).
@ IN SOA dns.lab.loc. root.lab.loc. (
2014032109 ; serial YYYYMMDDNN
10800 ; refresh (3 hours)
1800 ; retry (30 minutes)
604800 ; expire (1 week)
38400 ; minimum (10 hrs 40 min)
)
IN NS dns.lab.loc. ;Specifies the name server to use to look up a domain
; IN NS dns2.lab.loc. ;Specifies the name server to use to look up a domain
IN A 10.0.60.60 ; IP Address(es) of the DNS server(s)
; IN A 10.0.60.61 ; IP Address(es) of the DNS server(s)
IN MX 10 dns.lab.loc. ;Specifies mail server(s) for the domain
; HOSTS
dns IN A 10.0.60.60
;dns2 A 10.0.60.61
esxi01 IN A 10.0.60.71
esxi02 IN A 10.0.60.72
esxi03 IN A 10.0.60.73
freenas IN A 10.0.60.80
mail IN CNAME dns
dnsmaster IN CNAME dns
storage IN CNAME freenas
/etc/bind/db.10.0.60
; BIND reverse file for lab.loc
$ORIGIN 60.0.10.in-addr.arpa.
$TTL 24h
@ IN SOA dns.lab.loc. root.lab.loc. (
2014032104 ; serial number YYYYMMDDNN
10800 ; Refresh (3 hours)
3600 ; Retry (1 hour)
604800 ; Expire (1 week)
38400 ; Min TTL (10 hours 40 minutes)
)
IN NS dns.lab.loc.
; IN NS dns2.lab.loc.
;LIST OF HOSTS (reverse)
60 IN PTR dns.lab.loc.
71 IN PTR esxi01.lab.loc.
72 IN PTR esxi02.lab.loc.
73 IN PTR esxi03.lab.loc.
80 IN PTR freenas.lab.loc.
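Before restarting BIND, it's worth validating the configuration and both zone files with the standard BIND tools:
named-checkconf
named-checkzone lab.loc /etc/bind/db.lab.loc
named-checkzone 60.0.10.in-addr.arpa /etc/bind/db.10.0.60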
DHCP configuration
Here there is just one file that has to be modified: dhcpd.conf
/etc/dhcp/dhcpd.conf
Here we need to enter the rndc key in plain text.
# DHCPD
ddns-updates on;
ddns-update-style interim;
update-static-leases on;
authoritative;
key rndc-key {
        algorithm hmac-md5;
        secret +zZSeeetHWFdNwECit1Ktw==;
}
allow unknown-clients;
use-host-decl-names on;
default-lease-time 1814400; #21 days
max-lease-time 1814400; #21 days
log-facility local7;
# lab.loc DNS zones
zone lab.loc. {
        primary localhost;      # This server is the primary DNS server for the zone
        key rndc-key;           # Use the key we defined earlier for dynamic updates
}
zone 60.0.10.in-addr.arpa. {
        primary localhost;      # This server is the primary DNS server for the zone
        key rndc-key;           # Use the key we defined earlier for dynamic updates
}

# lab.loc LAN scope
subnet 10.0.60.0 netmask 255.255.255.0 {
        range 10.0.60.100 10.0.60.200;
        option subnet-mask 255.255.255.0;
        option routers 10.0.60.2;
        option domain-name-servers 10.0.60.60;
        option domain-name "lab.loc";
        ddns-domainname "lab.loc.";
        ddns-rev-domainname "in-addr.arpa.";
}

# lab.loc STATIC assigned group
group {
        host freenas.lab.loc {
                hardware ethernet 00:0c:29:18:af:b4;
                fixed-address 10.0.60.80;
                ddns-hostname "freenas";
        }
        host esxi01.lab.loc {
                hardware ethernet 00:0c:29:d4:14:ce;
                fixed-address 10.0.60.71;
                ddns-hostname "esxi01";
        }
        host esxi02.lab.loc {
                hardware ethernet 00:0c:29:2c:30:fd;
                fixed-address 10.0.60.72;
                ddns-hostname "esxi02";
        }
        host esxi03.lab.loc {
                hardware ethernet 00:0c:29:46:90:fd;
                fixed-address 10.0.60.73;
                ddns-hostname "esxi03";
        }
}
Once everything is configured, just restart bind and dhcp:
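On Debian, for example (assuming the standard init scripts):
dhcpd -t -cf /etc/dhcp/dhcpd.conf   # syntax-check dhcpd.conf first
service bind9 restart
service isc-dhcp-server restart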
I personally needed a light editor, preferably not tied to KDE or GNOME (which generally means plenty of extra packages installed).
I found Leafpad, which does the job. So, I generally install it as well:
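apt-get install leafpad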
With these packages, you should have a very light X environment, with xterm, a basic text editor and a basic bar with workspaces and current time.
Super basic. Super light. Super functional.
I also liked to customise the menu, because the default one was too messy for me.
To do this, create a .blackboxrc file in your home directory, if not already present.
Then, just add the following:
session.menuFile: ~/.blackbox/blackbox-menu
Of course, make sure you also have a folder called .blackbox and the file blackbox-menu customised like this one:
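A minimal example, assuming xterm and leafpad are installed (adapt the entries to taste):
[begin] (Blackbox)
    [exec] (Terminal) {xterm}
    [exec] (Editor) {leafpad}
    [submenu] (System)
        [reconfig] (Reconfigure)
        [restart] (Restart)
        [exit] (Exit)
    [end]
[end]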