Chef – notes

Websites: https://www.chef.io
Learning site: https://learn.chef.io

As with any other configuration management tool, the main goal is to automate tasks and keep the infrastructure consistent:

  • create files if missing
  • ignore file/task if already up to date
  • replace with original version if modified

Typically, a Chef setup comprises three parts:

  1. your workstation – where you create your recipes/cookbooks
  2. a Chef server – the central repository that hosts the active version of recipes/cookbooks and manages the nodes
  3. nodes – machines managed by the Chef server. Every node has the Chef client installed.
[Diagram: workstation, Chef server and nodes – picture source: https://learn.chef.io]

Generally, you develop your cookbooks on your workstation and push them to the Chef server. The node(s) communicate with the Chef server via chef-client, pulling and executing the cookbooks.

There is no communication between the workstation and the node EXCEPT for the initial bootstrap task. This is the only time the workstation connects directly to the node and provides the details required to communicate with the Chef server (the Chef server's URL and the validation key). It also installs Chef on the node and runs chef-client for the first time. During this run, the node gets registered on the Chef server and receives a unique client.pem key, which chef-client will use to authenticate from then on.
The information gets stored in a PostgreSQL database, with some indexing handled by Apache Solr (Elasticsearch in a Chef server cluster environment).

Further explanation here: https://docs.chef.io/chef_overview.html

Some terms:

  • resource: a part of the system in a desirable state (e.g. package installed, file created…);
  • recipe: a declaration of resources, basically the things to do;
  • cookbook: a collection of recipes, templates, attributes, etc… basically the final collection of all of the above.

Important to remember:

  • there are default actions. If no action is specified, the default one applies (e.g. :create for a file),
  • in a recipe you define WHAT but not HOW. The “how” is managed by Chef itself,
  • order matters! For example, make sure to define the installation of a package BEFORE enabling/starting its service. ONLY attributes can be listed in any order.


Labs

Test images: http://chef.github.io/bento/ and https://atlas.hashicorp.com/bento
=> you can get these boxes using Vagrant

Example, how to get CentOS7 for Virtualbox and start it/connect/remove:

# Download the CentOS 7.2 box for the VirtualBox provider
vagrant box add bento/centos-7.2 --provider=virtualbox

# Create a Vagrantfile in the current directory
vagrant init bento/centos-7.2

# Start the VM / connect via SSH / destroy it
vagrant up
vagrant ssh
vagrant destroy

Software links and info:

Chef DK: it provides tools (chef, knife, berks…) to manage your servers remotely from your workstation.
Download link here.

To communicate with the Chef server, your workstation also needs a .chef/knife.rb file configured:

# See http://docs.chef.io/config_rb_knife.html for more information on knife configuration options

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "admin"
client_key               "#{current_dir}/admin.pem"
chef_server_url          "https://chef-server.test/organizations/myorg123"
cookbook_path            ["#{current_dir}/../cookbooks"]

Make sure to also have admin.pem (the RSA key) in the same .chef directory.

To fetch and verify the SSL certificate from the Chef server:

knife ssl fetch

knife ssl check

 

Chef DK also provides tools to allow you to configure a machine directly, but it is just for testing purposes. Syntax example:

chef-client --local-mode myrecipe.rb
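If you just want to preview the changes without applying anything, chef-client also supports why-run mode:

chef-client --local-mode --why-run myrecipe.rb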

 

 

Chef Server: Download here.
To remember: the Chef server needs RSA keys (command line switch --filename) to communicate. There are the user’s key and the organisation key (chef-validator key).
There are different types of installation. Here you can find more information. And here more details about the new HA version.

Chef Server can have a web interface, if you also install the Chef Management Console:

# chef-server-ctl install chef-manage

 

Alternatively you can use Hosted Chef service.

Chef Client:
(From official docs) The chef-client accesses the Chef server from the node on which it’s installed to get configuration data, performs searches of historical chef-client run data, and then pulls down the necessary configuration data. After the chef-client run is finished, the chef-client uploads updated run data to the Chef server.
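On the node itself, that run is triggered with something as simple as the following (a minimal example; how you schedule it – cron, daemon, etc. – is up to you):

# one-off run
sudo chef-client

# or keep it running in the background, checking in every 30 minutes
sudo chef-client --daemonize --interval 1800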

 


Handy commands:

# Create a cookbook (structure) called chef_test01, into cookbooks dir
chef generate cookbook cookbooks/chef_test01

# Create a template for file "index.html" 
# this will generate the file "index.html.erb" under the cookbook's "templates" folder
chef generate template cookbooks/chef_test01 index.html

# Run a specific recipe web.rb of a cookbook, locally
# --runlist + --local-mode
chef-client --local-mode --runlist 'recipe[chef_test01::web]'

# Upload cookbook to Chef server
knife cookbook upload chef_test01

# Verify uploaded cookbooks (and versions)
knife cookbook list

# Bootstrap a node (to do ONCE)
# knife bootstrap ADDRESS --ssh-user USER --sudo --identity-file IDENTITY_FILE --node-name NODE_NAME
# Opt: --run-list 'recipe[RECIPE_NAME]'
knife bootstrap 10.0.3.1 --ssh-port 22 --ssh-user user1 --sudo --identity-file /home/me/keys/user1_private_key --node-name node1
# Verify that the node has been added
knife node list
knife node show node1

# Run cookbook on one node
# (--attribute ipaddress is used if the node has no resolvable FQDN)
knife ssh 'name:node1' 'sudo chef-client' --ssh-user user1 --identity-file /home/me/keys/user1_private_key --attribute ipaddress

# Delete the data about your node from the Chef server
knife node delete node1
knife client delete node1

# Delete Cookbook on Chef Server (select which version)
# use --all --yes if you want to remove everything
knife cookbook delete chef_test01

# Delete a role
knife role delete web

 


Practical examples:

Create file/directory

# note: add a 'recursive true' property if the parent folders may not exist
directory '/my/path'

file '/my/path/myfile' do
  content 'Content to insert in myfile'
  owner 'user1'
  group 'user1'
  mode '0644'
end

Package management

package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

Use of template

template '/var/www/html/index.html' do
  source 'index.html.erb'
end

Use variables in the template

<html>
  <body>
    <h1>hello from <%= node['fqdn'] %></h1>
  </body>
</html>

 


General notes

Chef Supermarket

link here – Community cookbook repository.
The best way to get a cookbook from Chef Supermarket is the Berkshelf command (berks), as it resolves all the dependencies. knife supermarket does NOT resolve dependencies.

Add the cookbooks to your Berksfile

source 'https://supermarket.chef.io'
cookbook 'chef-client'

And run

berks install

This will download the cookbooks and their dependencies into ~/.berkshelf/cookbooks

Then, to upload ALL of them to the Chef server, the best way is:

# Production
berks upload 

# Just to test (ignore SSL check)
berks upload --no-ssl-verify

 

Roles

Roles define a function of a node.
They are stored as objects on the Chef server.
Create them with knife role create OR (better) knife role from file <roles/myrole.json>. Using JSON is recommended as it can be version controlled.

Example of a web.json role:

{
   "name": "web",
   "description": "Role for Web Server",
   "json_class": "Chef::Role",
   "override_attributes": {
   },
   "chef_type": "role",
   "run_list": ["recipe[chef_test01::default]",
                "recipe[chef_test01::web]"
   ],
   "env_run_lists": {
   }
}

Commands:

# Push a role
knife role from file roles/web.json
knife role from file roles/db.json

# Check what's available
knife role list

# View the role pushed
knife role show web

# Assign a role to a specific node
knife node run_list set node1 "role[web]"
knife node run_list set node2 "role[db]"

# Verify
knife node show node1
knife node show node2

To apply the changes you need to run chef-client on the node.
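For example, to trigger it on all the nodes with the web role in one go (assuming the same SSH user/key used in the bootstrap example above):

knife ssh 'role:web' 'sudo chef-client' --ssh-user user1 --identity-file /home/me/keys/user1_private_key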

You can also verify:

knife status 'role:web' --run-list

 


Kitchen

All the following is extracted from the official https://learn.chef.io

Test Kitchen helps speed up the development process by applying your infrastructure code on test environments from your workstation, before you apply your work in production.

Test Kitchen runs your infrastructure code in an isolated environment that resembles your production environment. With Test Kitchen, you continue to write your Chef code from your workstation, but instead of uploading your code to the Chef server and applying it to a node, Test Kitchen applies your code to a temporary environment, such as a virtual machine on your workstation or a cloud or container instance.

When you use the chef generate cookbook command to create a cookbook, Chef creates a file named .kitchen.yml in the root directory of your cookbook. .kitchen.yml defines what’s needed to run Test Kitchen, including which virtualisation provider to use, how to run Chef, and what platforms to run your code on.

Kitchen steps:

[Diagram: the Test Kitchen workflow]

Handy commands:

$ kitchen list
$ kitchen create
$ kitchen converge
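These cover the setup and convergence steps; to complete the cycle, Test Kitchen also provides:

$ kitchen verify
$ kitchen destroy

kitchen verify runs your automated tests against the instance, and kitchen destroy tears the temporary environment down.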

 

Vim – remove the yellow highlight

Be honest. It has happened to you too: you open a file that you edited a while ago with Vim, and it still shows that terribly annoying yellow highlight. And I’m sure you probably gave up, thinking that time would eventually remove it.

Wrong! 😛

Here’s how to get rid of it:

  1. open the file
  2. press ESC
  3. type :nohl
  4. press Enter

Alternatively, the way I keep using (which seems easier to remember) is actually searching for some crazy string.
For example:

  1. open the file
  2. press ESC
  3. type /fkjsaddflkjasd;flka
    (randomly type stuff)
  4. press Enter

Done 😉
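Bonus tip: if you want a permanent shortcut, you can map it in your .vimrc, e.g. nnoremap <silent> <C-l> :nohl<CR><C-l> (so Ctrl-L clears the highlight and then redraws the screen, as it normally does).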

Vim – Comment multi lines

First way:

v -> select whatever needs to be commented out
:
(this appears) :'<,'>
add s/^/#
(it will now read :'<,'>s/^/#)
Press Enter

Second way:

For commenting out a block of text, do the following:

  1. hit CTRL + v (visual block mode)
  2. use the down arrow keys to select the lines you want (it won’t highlight everything)
  3. Shift + i (capital I)
  4. insert the character/text you want to add at the beginning of the line (e.g. #)
  5. Press ESC.

 

To uncomment a block:

  1. Go to the first line of code where you want to start uncommenting from.
  2. Press 0 (To bring the cursor to the beginning of the line.)
  3. CTRL + v and select lines to be uncommented.
  4. Press x: that will delete all the # characters vertically.

 

Source: https://www.quora.com/How-can-I-un-comment-a-block-of-text-in-Vim

Linux SSH auth passwordless using key

Pretty basic, but handy for whoever starts playing with Linux.

Here are the simple steps to follow in order to allow box1 to connect securely to box2 over SSH without being asked for a password.
This is very handy if you run scripts 😉

On BOX1

You can run this as any user.

ssh-keygen -b 4096 -t rsa -f ~/.ssh/id_rsa -P ""

This will generate  ~/.ssh/id_rsa (private key) and  ~/.ssh/id_rsa.pub (public key).
The .pub is the key that needs to be appended in ~/.ssh/authorized_keys on BOX2.

If the following command is available, it’s the best/safest way to set up BOX2.

ssh-copy-id user@box2

The password for user on box2 will be requested.
Once completed, you can try ssh user@box2 and you should be able to connect without being asked for the password again!

If ssh-copy-id is not available (e.g. on Mac or other distros), you can scp the .pub file and append it as per below:

scp ~/.ssh/id_rsa.pub user@box2:/tmp

Then connect to box2 with user and run this:

mkdir -p ~/.ssh
cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys
rm -f /tmp/id_rsa.pub

After those commands, the key should be added to the authorised ones, so ssh user@box2 should work.

NOTE: if you are experiencing issues, please make sure that the permissions of the id_rsa file are 600 on BOX1, and that sshd_config on BOX2 allows key-based authentication.
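For reference, these are the permission fixes that usually sort out such issues (same paths as above):

# on BOX1
chmod 600 ~/.ssh/id_rsa

# on BOX2
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys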

vim without .vimrc

If you want to run vim without executing a customer’s .vimrc (as they’ve got crazy colours and random stuff all over the show), just use NONE as a special value to skip any .vimrc parsing:

vim -u NONE

You might need to run :set nocp inside vim if, like me, you’re used to the non-vi-compatible features.

Ubuntu Mac Keyboard

Select the right model of your keyboard

Keyboard Model -> (vendor) Apple / (model) Apple

Switch the Command key with Control key

Go into System -> Preferences -> Keyboard
Click on the “Layouts” tab and then click the “Layout Options” button.
Click on “Alt/Win key behavior”.
Select “Control is mapped to Win keys (and the usual Ctrl key).”

Choose right layout

Keyboard Preferences -> English US (Macintosh) layout

Create a bootable Sierra ISO for VMware

Open the Terminal app and run the following:

hdiutil attach /Applications/Install\ macOS\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app
hdiutil create -o /tmp/Sierra.cdr -size 7316m -layout SPUD -fs HFS+J
hdiutil attach /tmp/Sierra.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg
hdiutil detach /Volumes/install_app
hdiutil detach /Volumes/OS\ X\ Base\ System/
hdiutil convert /tmp/Sierra.cdr.dmg -format UDTO -o /tmp/Sierra.iso
mv /tmp/Sierra.iso.cdr ~/Desktop/Sierra.iso

NOTE: To have VMware Workstation able to run macOS, you need to patch your version using this. If the file is no longer available, you can get a copy here.

If you want to force specific hardware parameters (like serial number etc), you need to add the following in your vmx file:

board-id.reflectHost = "FALSE"
board-id = <board-id>
hw.model.reflectHost = "FALSE"
hw.model = <product-name>
serialNumber.reflectHost = "FALSE"
serialNumber = <serial-number>
smbios.reflectHost = "FALSE"

To make sure some software like Google Music will recognise your VM, you also need to apply this change:

A) Remove these lines in the VMX file:

ethernet0.addressType = "generated"
ethernet0.generatedAddress = "xx:xx:xx:xx:xx:xx"
ethernet0.generatedAddressOffset = "0"

B) Add the following instead:

ethernet0.Address = "xx:xx:xx:xx:xx:xx"
ethernet0.addressType = "static"
ethernet0.checkMACAddress = "false"

Replace “xx:xx:xx:xx:xx:xx” with a real Apple MAC address, choosing one of those listed here.



SSH tunnel from A to B via jumpbox

Here is a basic script that you can use if you want to connect from your local box, via a middle Linux machine, to a third host.
It will also allow you to use FoxyProxy in your browser and browse the internal network of the destination box.

BOX_A <==== MIDDLE_BOX ====> BOX_B

The goal is to have access from BOX_A to BOX_B via MIDDLE_BOX.

MIDDLE_BOX is the only one that can talk with both BOX_A and BOX_B.

 

#!/bin/bash
#
# ==================================================== #
# Tunnel from CURRENT_HOST to DEST_HOST via MIDDLE_BOX #
# ==================================================== #
#
# The script connects the local port 8888 
# to the SSH port on DEST_BOX via MIDDLE_BOX.
#

MIDDLE_BOX_HOST="bastion_server.localdomain.loc"
MIDDLE_BOX_USER="username"
MIDDLE_BOX_SSH_PORT="22"

DEST_BOX_HOST="destination_host.domain.com"
DEST_BOX_USER="username"
DEST_BOX_SSH_PORT="22"

LOC_PORT=8888
SOCK_PORT=9050

############################################################

CHECK_TUNS=$(ps aux | grep "[s]sh -N -f -p $MIDDLE_BOX_SSH_PORT -L$LOC_PORT:$DEST_BOX_HOST:$DEST_BOX_SSH_PORT $MIDDLE_BOX_USER@$MIDDLE_BOX_HOST" | awk '{print $2}')

# count non-empty lines (echo | wc -l would report 1 even when no tunnel is found)
N_TUNS=$(echo "$CHECK_TUNS" | grep -c .)

create_tunnel(){
  # Create a connection between localhost:$LOC_PORT to MIDDLE_BOX:SSH_PORT
  # It will ask for MIDDLE_BOX's password
  # -N -f keep the connection open in background executing No commands
  ssh -N -f -p $MIDDLE_BOX_SSH_PORT -L$LOC_PORT:$DEST_BOX_HOST:$DEST_BOX_SSH_PORT $MIDDLE_BOX_USER@$MIDDLE_BOX_HOST
  echo "Created new tunnel"
}

check_tunnel(){
nc -w 1 -z localhost $LOC_PORT > /dev/null 2>&1
}

reset_tunnel() {
for PID in $CHECK_TUNS; do
   kill -9 $PID > /dev/null 2>&1
done
echo "Found existing tunnels. Killed them all."
}

# Hidden function. Add 'cleanup' as argument to close all the tunnels
[ "$1" == "cleanup" ] && reset_tunnel && exit 0

if [ $N_TUNS -eq 0 ] ; then
   create_tunnel
elif [ $N_TUNS -eq 1 ] ; then
   check_tunnel
   if [ $? -eq 0 ] ; then
      echo "Tunnel already up and running"
   else
      reset_tunnel
      create_tunnel
   fi
else
   reset_tunnel
   create_tunnel
fi


# grep -q only sets the exit code; no assignment needed
ps aux | grep -q "[s]sh -D$SOCK_PORT -p$LOC_PORT $DEST_BOX_USER@localhost"
if [ $? -eq 0 ] ; then
   echo "Sock already created on port $SOCK_PORT - just opening SSH shell on $DEST_BOX_HOST"
   ssh -p$LOC_PORT $DEST_BOX_USER@localhost
 else
   # This will open an SSH shell from DEST_BOX *AND* create a sock proxy on port $SOCK_PORT locally
   # You can use FoxyProxy in your browser to browse the DEST_BOX's network
   # Just set "localhost", dest port "$SOCK_PORT" and select "Socks Proxy"
   echo "Created sock on port $SOCK_PORT and ssh'ing on $DEST_BOX_HOST"
   ssh -D$SOCK_PORT -p$LOC_PORT $DEST_BOX_USER@localhost
fi
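Usage is then as simple as this (assuming you saved the script as tunnel.sh and made it executable):

# create/reuse the tunnel, then open the SSH shell + SOCKS proxy
./tunnel.sh

# kill any existing tunnels
./tunnel.sh cleanup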

 

Backup – rsnapshot and rdiff (multiple backups)

This is a very basic/simple guide about how to setup incremental and versioned backups of your Linux computers and Mac. 🙂

Initial problem:

    • Time Machine is unreliable after a while, and when you put your Mac to sleep, most of the time it complains because the USB drive wasn’t disconnected properly :@
    • I’d like to have an incremental/versioning backup system locally BUT also have some critical files uploaded to the cloud [using some cron and some cloud provider’s utility]
    • Time Machine on external drives uses the ‘sparsebundle’ storage format, which is complicated to open and extract files from on the Linux command line [I’ve previously created a Time Machine on the pi, and I was thinking of creating a sort of system to open the sparsebundle file and upload the files during the night – but this doesn’t seem easy nor really reliable]
    • Backing up VMs with Time Machine takes ages: if a little bit changes, the whole content gets copied over (space and time consuming)

So… I needed something that could:

  • Do incremental backups storing only the differences (for VMs), to avoid transferring GBs of data every time for little changes
  • Do versioning of small files (documents, videos, music, etc…) based on a custom schedule
  • Be accessible on the filesystem without tricky stuff (like opening a ‘sparsebundle’ file)
  • Be able to run on a Raspberry Pi and, most likely, access Linux and Mac systems, providing a centralised backup system.

Answer: a combination of rsnapshot and rdiff-backup… plus some sort of cloud provider’s utility to sync part of this content to the cloud (still work in progress).
I found this nice article that explains the differences between the two tools; it should clarify why I’ve chosen to use a combination of both of them and not just one.
The main bit is this one:

rdiff-backup stores previous versions as compressed deltas to the current version similar to a version control system. rsnapshot uses actual files and hardlinks to save space. For small files, storage size is similar. For large files that change often, such as logfiles, databases, etc., rdiff-backup requires significantly less space for a given number of versions.

So, I’ve installed rsnapshot and rdiff-backup on my pi. Packages are available via apt-get.
After that, I created one rsnapshot configuration file for each of my Linux machines (actually pi’s) and one for my Mac. rdiff-backup will be called within rsnapshot, in a post-exec script (option available, and very handy).

It’s clearly necessary to have SSH enabled on your Linux and Mac machines. Also, in this particular case, I added the following with visudo on the Mac, to allow the user to run pmset passwordless:

user ALL=(ALL) NOPASSWD: /usr/bin/pmset
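A quick way to verify the sudoers entry from the pi (sudo -n makes sudo fail instead of prompting, if a password would still be required):

ssh user@mac 'sudo -n /usr/bin/pmset -g'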

Configuration files

I’m posting 2 configuration examples: one for my pi (local backup), and the other one for my Mac (remote backup – via ssh/rsync).
I’ve literally kept the original /etc/rsnapshot.conf just as a reference – I’m not actively using it at all.

Here my custom configuration files:

/etc/default/rsnapshot

This is a file that I’ve created and use for “default/general” parameters, which I include in all of the other custom files. Why? Just to avoid copying and pasting the same lines into every custom file 🙂

####################################
# Default configuration parameters #
####################################
# just use include_conf <tab> file:
#include_conf /etc/default/rsnapshot
config_version 1.2
no_create_root 1
cmd_cp /bin/cp
cmd_rm /bin/rm
cmd_rsync /usr/bin/rsync
cmd_ssh /usr/bin/ssh
cmd_logger /usr/bin/logger
cmd_du /usr/bin/du
du_args -csh
link_dest 1
use_lazy_deletes 1
rsync_numtries 3
#stop_on_stale_lockfile 0

PI configuration file (local backup)

pi1_rsnap.conf

# pi1 conf file
include_conf /etc/default/rsnapshot
snapshot_root /USB/backups/pi1/
#retain hourly 6
retain daily 7
retain weekly 4
retain monthly 12
logfile /var/log/rsnapshot/p1.log
lockfile /USB/backups/rsnapshot_run/pi1.pid
#sync_first 1
verbose 2
loglevel 5
use_lazy_deletes 1
backup /home/ files/
backup /etc/ files/
backup /var/spool/cron/ files/
backup_script /usr/bin/dpkg --get-selections > packages.txt installed-packages/

This config copies home, etc and cron into /USB/backups/pi1/daily.0/files/.
The last line also executes the dpkg command and stores its output within /USB/backups/pi1/daily.0/installed-packages/


The MAC configuration (remote backup).

This requires some extras.
What I’ve done is combine a pre and a post script around the rsnapshot backup, in order to obtain the following:

  1. wake up the Mac via the wake-on-lan package (this is possible because my Mac is also connected via ethernet)
  2. connect via ssh
  3. send a command to keep the disks on and prevent them from going idle
  4. visually notify that the backup is about to run (in case someone is currently using the Mac)
  5. run the rsnapshot backup
  6. once finished, run rdiff-backup for the big files (VMs)
  7. once done, kill the process that was keeping the disks on
  8. send a visual notification to inform that the backup has completed
  9. disconnect. If no one is connected, the Mac will go back to standby (if enabled).
  10. clean up old rdiff-backups

mac_rsnap.conf

# mac conf file
include_conf /etc/default/rsnapshot
snapshot_root /USB/backups/mac/
#retain hourly 6
#retain daily 7
retain weekly 4
retain monthly 12
logfile /var/log/rsnapshot/mac.log
lockfile /USB/backups/rsnapshot_run/mac.pid

#rsync_short_args -a
rsync_long_args --delete --numeric-ids --relative --delete-excluded --filter=". /etc/rsnapshot_configs/mac/rsync_selections"

#sync_first 1
verbose 1
loglevel 5
use_lazy_deletes 1

# Specify the path to a script (and any optional arguments) to run right
# before rsnapshot syncs files
cmd_preexec /etc/rsnapshot_configs/mac/pre-exec.sh

# Specify the path to a script (and any optional arguments) to run right
# after rsnapshot syncs files
cmd_postexec /etc/rsnapshot_configs/mac/rdiff_vms.sh

#Remote backup
backup user@mac:/ files/

The following bash scripts have some parameters that need to be set manually (MAC address, user, host, email…).

pre-exec.sh

#!/bin/bash

# --------------------------------------------- #
# This script wake up the mac box via ethernet
# using wake-on-lan, wait for ssh connection,
# connects and issue a command to keep the
# disks on for the following backup tasks.
#
# There is a timeout for number of tries. If
# reached, an email notification will be sent.
# --------------------------------------------- #

# Email parameters
EMAIL="user@example.com"
SENDMAIL=/usr/sbin/sendmail

# MAC details
MACADDR="xx:xx:xx:xx:xx:xx"
USER=user
HOST=mac

# Estimated amount of time to get ssh available
waitBeforeTry=40

# Retries parameters
sleepSecInterval=5
maxConnectionAttempts=10

# --------------------------------------------- #
emailnotification () {
echo -e "Subject:$1\n" | $SENDMAIL $EMAIL
logger "${BASH_SOURCE[0]} PID $$ - $1"
}

# Turn on your mac via Ethernet LAN
sudo /usr/sbin/etherwake $MACADDR

sleep $waitBeforeTry

index=1
while (( $index <= $maxConnectionAttempts ))
do
echo quit | telnet $HOST 22 2>/dev/null | grep -q Connected
if [ $? -ne 0 ] ; then
sleep $sleepSecInterval
((index+=1)) #; echo "DEBUG: $index"
else
break
fi
done

# Notify if reach max attempts
MSG="Unable to connect to $USER@$HOST after $maxConnectionAttempts attempts."
# the loop leaves index at max+1 when it never managed to connect
[ $index -gt $maxConnectionAttempts ] && emailnotification "$MSG"

# Connect via ssh and disable sleep and disksleep
ssh $USER@$HOST 'sudo pmset sleep 0'
ssh $USER@$HOST 'sudo pmset disksleep 0'
#ssh $USER@$HOST 'nohup pmset noidle > /dev/null 2>&1 &'
ssh $USER@$HOST ' osascript -e '"'"'display notification "Starting Backup in few seconds" with title "Backup starts" sound name "default" '"'"' '

sleep 5

rdiff_vms.sh

#!/bin/bash

# Script executed after rsnapshot
USER=user
HOST=mac

# ===================================================
rdiff-backup --exclude-symbolic-links $USER@$HOST::Users/user/Documents/VMs/ /USB/backups/mac/VMs/

# All files should be now backed up

# Re-setting previous values for sleep and disksleep... and notify
ssh $USER@$HOST 'sudo pmset sleep 10'
ssh $USER@$HOST 'sudo pmset disksleep 10'
#ssh $USER@$HOST 'pkill pmset noidle'
ssh $USER@$HOST ' osascript -e '"'"'display notification "Backup has now completed." with title "Backup Finished" sound name "default" '"'"' '

# Putting on sleep the box - NOT REQUIRED
# sleep will happen automatically and no risk to force sleep if I'm using it
#ssh $USER@$HOST 'sudo pmset sleepnow'

# Cleaning up old backups: remove backups older than 6 months
rdiff-backup --remove-older-than 6M --force /USB/backups/mac/VMs/

The following file is the one used as a ‘filter‘ for rsync (it uses rsync’s filter-rule syntax).
To clarify, this backs up ONLY the Documents, Pictures, Movies and Music folders of the user called ‘user‘, excluding the ‘VMs‘ subfolder in Documents, all the folders that start with ‘Season‘ in Movies, any other folder in ‘user’s home dir, and any file/folder starting with .Spotlight, .Trash or .DS_Store in ANY subfolder.

rsync_selections

+ Users/
+ Users/user/
+ Users/user/Documents/
+ Users/user/Pictures/
+ Users/user/Movies/
+ Users/user/Music/
- .Spotlight*
- .Trash*
- .DS_Store
- Users/user/Documents/VMs/
- Users/user/Movies/Season*/
- Users/user/*
- Users/*
- /*
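To sanity-check the filter before trusting a real run, a dry run with the same long args as in the config above is handy (a hedged sketch: -n makes rsync list what it would transfer without copying anything):

rsync -avn --delete --numeric-ids --relative --delete-excluded --filter=". /etc/rsnapshot_configs/mac/rsync_selections" user@mac:/ /tmp/rsync_preview/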

/etc/cron.d/rsnapshot
This is the CRON file that executes the backup jobs.
The ‘less frequent’ job needs to run before the ‘more frequent’ one. I’ve explained this later in this post; in short, the actual sync happens ONLY in the most frequent job, while the others are just rotations made with a ‘mv’ command. So, it’s important to do the rotation BEFORE the sync.

###############
# >>> MAC <<< #
###############
# set to run only weekly at 10:30 am on Monday
30 10 * * 1 user /usr/bin/rsnapshot -c /etc/rsnapshot_configs/mac/mac_rsnap.conf weekly
# Monthly rotation at 10:00 am (1st every month)
0 10 1 * * user /usr/bin/rsnapshot -c /etc/rsnapshot_configs/mac/mac_rsnap.conf monthly
###############
# >>> PI <<< #
###############
# Daily 9:30am
30 9 * * * root /usr/bin/rsnapshot -c /etc/rsnapshot_configs/pi_rsnap.conf daily
# Weekly 9:05am (Sunday)
5 9 * * 7 root /usr/bin/rsnapshot -c /etc/rsnapshot_configs/pi_rsnap.conf weekly
# Monthly 9:00am (1st every month)
0 9 1 * * root /usr/bin/rsnapshot -c /etc/rsnapshot_configs/pi_rsnap.conf monthly

Folders created:

/USB/                               [mount point of my external USB drive]
/USB/backups/                       [subfolder to keep all the backups]
/USB/backups/pi/                    [folder for 'pi' box]
/USB/backups/mac/                   [folder for 'mac']
/etc/rsnapshot_configs/             [where I keep all the conf files]
/var/log/rsnapshot/                 [log files - chmod 1777*]
/USB/backups/rsnapshot_run/         [dir for jobs' pids - chmod 1777*]

*Use chmod 1777 on the logs and run folders if you want users other than root to run the backups and write log files.


Let’s clarify some bits and pieces

sync_first 1

To be sure to properly complete the first full backup, enable sync_first by setting it to 1. Once completed, remove/comment it out.
To execute the first sync, run the following:

rsnapshot -c my_rsnapshot.conf sync

Basically, run the sync as many times as you want… and once you’ve finished, you will start invoking (with CRON) the daily, weekly, monthly… backups. REMEMBER to disable it once finished, otherwise you won’t actually run any sync!

TABs no spaces!

IMPORTANT: do NOT use spaces in the rsnapshot configuration files, only TABS!!!
Copy and paste might change tabs into spaces, so be sure to review all your configs. Use the -t flag to test every time that the syntax is correct.

Test your configuration (-t)

rsnapshot -t -c my_rsnapshot.conf <sync|daily|weekly... >

The -t flag will also display exactly the commands that are going to be executed – very handy! 🙂

Remote backups

Another thing to keep in mind is that ‘REMOTE’ backups (whatever uses user@host…) actually launch the rsync command on the remote host, so rsync needs to be installed on the remote machine too (and rdiff-backup as well, if used). Versions should also match; if they don’t, at least rsync should be version >= 3.
To allow this to work on my Mac, for instance, I had to install rdiff-backup and a newer version of rsync, as the default version is 2.6.x. I’ve used the Rudix packages. Easy easy 🙂

Retain daily/weekly/monthly… sync… wtf?!

Something very important to understand about rsnapshot, which drove me kinda mad for a few hours: the job that DOES the backup is the one at the top of the list (the most frequent).
So, if you have daily, weekly, monthly… set as ‘retain’ parameters in the rsnapshot conf file, the one that actually copies the files is ‘daily‘ (top of the list – most frequent). The other ones are JUST a sort of rotation of the folder tree. Literally a ‘mv’ command… that’s it. You can verify this using the -t flag to see the commands.
So, don’t get confused 🙂

So, to summarise:

  • sync: the initial backup – handy especially to create the first copy. This creates a .sync folder in snapshot_root.
  • daily: this is the one that does the copy (or whichever is the ‘most frequent’ backup set – for the Mac, for example, I set only ‘weekly’ and ‘monthly’, so in that case weekly is the most frequent backup set and it’s the one that does the sync).
  • weekly/monthly… (less frequent backups): these are simply ‘mv’ commands.

To explain the flow in more detail, using my Mac as the example…
You run the first sync (as many times as you want), with ‘sync_first‘ enabled.

rsnapshot -c my_rsnapshot.conf sync

This creates the backup in /USB/backups/mac/.sync/
Then you run the crons. Weekly will be the first to run:

rsnapshot -c my_rsnapshot.conf weekly

This will actually run this move, creating the first weekly folder:

mv /USB/backups/mac/.sync/ /USB/backups/mac/weekly.0/

Then, DISABLE ‘sync_first’, and the next time the weekly cron is executed, something like this will run, moving weekly.0 to weekly.1, hard-linking the identical files and sync’ing the ones that have changed since:

mv /USB/backups/mac/weekly.0/ /USB/backups/mac/weekly.1/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --link-dest=/USB/backups/mac/weekly.1/files/ /home/ \
    /USB/backups/mac/weekly.0/files/
[...]

Then, next time, weekly.2 and weekly.3 will be created: same method.
Until the LAST backup set is created (#3 in this case -> 4 retained, from 0 to 3), the monthly job won’t have any effect.
Once we have /USB/backups/mac/weekly.3/, and this gets executed…

rsnapshot -c my_rsnapshot.conf monthly

… this will be executed:
mv /USB/backups/mac/weekly.3/ /USB/backups/mac/monthly.0/

And so on and so forth…

A little note, keeping the above example: you might start this backup in the middle of the month, so at the end of the month you won’t have reached the 4th weekly backup set, but just the 2nd (#0 and #1). So… what happens with the ‘monthly’ one that will run on the 1st of the month?
Answer: nothing.
Basically, this time the monthly backup will be skipped, as the maximum retention limit hasn’t been reached yet. Weekly backups will keep rotating among themselves.
The first week of the second month, the weekly backup will reach #2 (third backup): #1 => #2, #0 => #1, and the new backup is stored in #0.
The second week, it reaches #3 (4th and last): #2 => #3, #1 => #2, #0 => #1, and the new backup is stored in #0. The #3 (oldest) should be the one that rotates… but the monthly cron won’t be executed until the next month. There’s nothing to worry about, though: on the next weekly run, in the third week, #3 will be marked for deletion and a new #0 will be created. Same for the fourth week: oldest backup deleted, max limit reached.
And here we get into the new month, where the monthly backup is called BEFORE the weekly one: it rotates weekly.3 into monthly.0 (#3 => monthly#0, #2 => #3, #1 => #2), freeing up ‘one space’ (#0). This will be filled on the next ‘weekly’ run, and everything will be ‘in sync’ for the following months. 🙂

I hope this example clarifies. 🙂

NOTE:
If you decide, one day, to move your backup from one disk to another, MAKE SURE to rsync preserving the hard links, otherwise your backup will rise like a cake in the oven! 🙂

Here a sample command:

rsync -az -H --delete --numeric-ids /path/to/source server2:/path/to/dest/
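A quick sanity check after such a move (just a hedged sketch, with the same paths as the example above): total disk usage on the destination should be roughly the same as on the source; if it has ballooned, the hard links were not preserved.

# compare total disk usage on both sides; they should be in the same ballpark
du -sh /path/to/source
ssh server2 du -sh /path/to/dest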