Category Archives: Scripting

Convert FLV to MP4

Quick and dirty lines to convert FLV files into MP4.

To do so, you need the package ffmpeg.

# Temporarily move all the .flv files into a TMP directory
$ find . -type f -name "*.flv" -exec mv {} ../TMP/ \;

# Remove all spaces from the filename
$ cd ../TMP/
$ for file in *' '* ; do mv -- "$file" "${file// /_}" ; done

# Now, convert all the files from FLV to MP4 using ffmpeg
$ for file in *.flv ; do ffmpeg -i "$file" -c copy "${file%.flv}.mp4" ; done

During the process, you might get some errors, and you will see some .mp4 files with a size of 0 bytes.

You can then remove the broken files and retry, re-encoding the video stream this time:

# Find and delete files size 0
$ find . -type f -size 0 -print0 | xargs -0 rm

# Try re-encoding
$ for file in *.flv ; do ffmpeg -i "$file" -vcodec libx264 -acodec copy "${file%.flv}.mp4" ; done
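Since the loops above quote "$file", filenames with spaces are handled anyway. If you do this often, the whole flow can be wrapped into one script. Here is a minimal sketch of mine (not from the original notes), assuming bash and GNU coreutils (for mv -t):

#!/bin/bash
# Sketch: move all .flv files into ../TMP and convert each one to .mp4,
# re-encoding only when the fast stream copy fails.
set -euo pipefail
shopt -s nullglob

mkdir -p ../TMP
find . -type f -name '*.flv' -exec mv -t ../TMP/ {} +
cd ../TMP

for file in *.flv; do
  out="${file%.flv}.mp4"
  if ! ffmpeg -y -i "$file" -c copy "$out" || [ ! -s "$out" ]; then
    # stream copy failed or produced an empty file: re-encode the video
    ffmpeg -y -i "$file" -vcodec libx264 -acodec copy "$out"
  fi
done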

GIT tricks when you mess up a bit

Sometimes, especially at the beginning, a developer makes loads of mistakes, and here are some workarounds that can help. 😉

Rewrite master’s history (TO AVOID)

git log --pretty=oneline

git rebase -i --root

In the interactive editor, choose what to keep ('pick') and what to squash into the previous commit ('s').

-> for each 'pick' you need to write a summary in the commit message comments

Rewrite history of the current branch (squash)

Firstly, check how far ahead/behind your branch is compared to the reference one (e.g. master/main).

STEP 1

git rebase -i HEAD~<number of commits> (e.g. git rebase -i HEAD~3) => use the number of "ahead" commits that you found.

Make the required changes (pick/s) and save.

Then:
git push origin +mybranch

STEP 2

git rebase (against the reference branch, e.g. git rebase master)
Any conflicts?

YES -> fix them, then git add . and git rebase --continue
NO -> git push origin +mybranch (forced push using +)
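For reference, this is roughly what the interactive rebase buffer looks like for HEAD~3 (the commit IDs and messages here are hypothetical):

pick a1b2c3d Add feature X
s    d4e5f6a Fix typo in feature X
s    f7a8b9c More fixes for feature X

After saving, git opens the commit message editor: keep one clean summary for the squashed commit.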

Fix a wrong merge on master

git revert -m <parent number> <merge commit ID>

NOTE: -m does NOT count commits: it selects which parent of the merge is the mainline, i.e. the side whose state you want to keep.

A -\
    K
    |
    L
    |
B -/
|
...

Here is an example.
You want to go back to commit B (master).
A, B, K, L are commit IDs – you can find them using the git log command.

From B to A (A is the last merge that we want to delete), there are 2 commits, K and L.

If you want to go back to B:

git revert -m2 A
which means: revert merge A, keeping the state of its second parent (the B side). Check the "Merge:" line of commit A in git log to confirm which parent is which.
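Putting it together, a hedged end-to-end sequence (replace <A> with the real merge commit ID from git log):

# find the merge commit (A) and inspect its parents
git log --oneline --merges -1
git show <A>          # the "Merge:" header lists parent 1 and parent 2
# revert the merge, keeping the state of the B-side parent (parent 2 here)
git revert -m 2 <A>
git push origin master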

Happy fixing! 🙂

Reverse SSH Tunnel

To allow a LOCAL_SERVER behind a firewall/NAT/home router to be accessible via SSH from a REMOTE_SERVER, you can use a reverse SSH tunnel.

Basically, from your LOCAL_SERVER you forward port 22 (SSH) to another port on REMOTE_SERVER, for example 8000. You can then SSH into your LOCAL_SERVER from the public IP of the REMOTE_SERVER via port 8000.

To do so, you need to run the following from LOCAL_SERVER:

 local-server: ~ ssh -fNR 8000:localhost:22 <user>@<REMOTE_SERVER>

On REMOTE_SERVER you can use netstat -nlpt to check if there is a service listening on port 8000.

Example:

remote-server ~# netstat -nplt | grep 8000
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      1396/sshd: root
tcp6       0      0 :::8000                 :::*                    LISTEN      1396/sshd: root

In this case, the REMOTE_SERVER accepts connections on ALL interfaces (0.0.0.0) to port 8000.
This means that, if the REMOTE_SERVER has IP 217.160.150.123, you can connect to LOCAL_SERVER from a THIRD_SERVER using the following:

third-server: ~ ssh -p 8000 <user_local_server>@217.160.150.123

NOTE: if you see that the LISTEN socket on REMOTE_SERVER is bound to 127.0.0.1 and not to 0.0.0.0, it is probably because the setting GatewayPorts is set to no in /etc/ssh/sshd_config on REMOTE_SERVER.
The best setting is clientspecified (rather than yes) as per this post.

Set GatewayPorts to clientspecified and restart the sshd service.
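A minimal sketch of that change on REMOTE_SERVER (assuming a systemd-based distro for the restart command):

# on REMOTE_SERVER: let the ssh client choose the bind address for remote forwards
echo "GatewayPorts clientspecified" >> /etc/ssh/sshd_config
systemctl restart sshd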

With that setting, you can also choose to allow connections to the forwarded port only from the REMOTE_SERVER itself, to increase security.
To do so, you need to use the following ssh command from LOCAL_SERVER:

 local-server: ~ ssh -fNR 127.0.0.1:8000:localhost:22 <user>@<REMOTE_SERVER>

With netstat, you’ll now see this:

remote-server:~# netstat -nplt | grep 8000
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      1461/sshd: root

With this forward, you will be able to access LOCAL_SERVER ONLY from the REMOTE_SERVER itself:

remote-server: ~ ssh -p 8000 <user_local_server>@localhost

I hope this helps 🙂

Happy tunnelling!

Windows 10 – VMWare Disk cleanup and shrink

Simple batch script to run as Administrator in order to clean up the disk, defragment it and shrink it.

Please note that the shrink works only if the VMware Tools are installed on the guest VM.

@ECHO OFF
REM Make sure to run the script as Administrator
whoami /groups | find "S-1-16-12288" > nul

if %errorlevel% == 0 (
 echo Welcome, Admin
) else (
 echo You must run this script as Administrator. Aborting...
 goto EOF
)

REM Enable components to cleanup
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Active Setup Temp Folders" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\BranchCache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Downloaded Program Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameNewsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameStatisticsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameUpdateFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Internet Cache Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Memory Dump Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Offline Pages Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Old ChkDsk Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Previous Installations" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Recycle Bin" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Service Pack Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Setup Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error memory dump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error minidump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Setup Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Sync Files" /V StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Thumbnail Cache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Update Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Upgrade Discarded Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\User file versions" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Defender" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows ESD installation files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Upgrade Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
 
REM Run cleanup
IF EXIST %SystemRoot%\SYSTEM32\cleanmgr.exe START /WAIT cleanmgr /sagerun:100

echo *** DEFRAGGING DRIVES ***
echo DEFRAG C:
REM The old -f switch is not accepted on Windows 10; /U shows progress, /V is verbose
defrag c: /U /V

if not exist "C:\Program Files\VMware\VMware Tools" goto NOVMWARE
echo.
echo *** SHRINKING DRIVES ***
cd "C:\Program Files\VMware\VMware Tools\"
VMwareToolboxCmd.exe disk shrink c:\

if not exist D: goto NODDRIVESHRINK
VMwareToolboxCmd.exe disk shrink D:\
:NODDRIVESHRINK
:NOVMWARE


echo *** SHUTTING DOWN ***
shutdown -s -t 30

:EOF
echo Terminating script
pause

Save this content in a .bat file and… enjoy it!

Email notification for successful SSH connection

If you manage a remote server and you are a bit paranoid about the bad guys outside, it could be nice to have some sort of notification every time an SSH connection is successful.

I found this post and it seems to work pretty well for me too.
I’ve installed it on my CentOS 7 server and it is working well! Of course, this is in addition to an aggressive Fail2Ban setup.

  1. Make sure you have your MTA (Postfix/Sendmail…) configured to deliver emails to the user root
  2. Make sure you get the emails for the user root (otherwise it doesn’t make any sense 😛 )
  3. Create this script as /usr/local/bin/send-mail-on-ssh-login.sh (this is a slightly modified version compared with the one in the original post):
    #!/bin/sh
    if [ "$PAM_TYPE" != "open_session" ]
    then
      exit 0
    else
      {
        echo "User: $PAM_USER"
        echo "Remote Host: $PAM_RHOST"
        echo "Service: $PAM_SERVICE"
        echo "TTY: $PAM_TTY"
        echo "Date: `date`"
        echo "Server: `uname -a`"
      } | mail -s "$PAM_SERVICE login on `hostname -s` from user $PAM_USER@$PAM_RHOST" root
    fi
    exit 0
    
  4. Set the permissions:
    chmod +x /usr/local/bin/send-mail-on-ssh-login.sh
  5. Append this line to /etc/pam.d/sshd:
    session optional pam_exec.so /usr/local/bin/send-mail-on-ssh-login.sh
  6. …and that’s it! 😉
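To verify the hook, a quick hedged test (assuming the mail command from mailx, as on a stock CentOS 7):

# open a new SSH session to trigger the PAM hook
ssh someuser@localhost exit
# then, as root, check the mailbox and the mail log
mail
tail /var/log/maillog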

 

If you’d like to have a specific domain/IP whitelisted, for example if you don’t want to get notified when you connect from your office or your home (a fixed IP, or a dynamic IP with a DNS name, is required), you can use this version of the script:

#!/bin/bash
if [ "$PAM_TYPE" != "open_session" ]; then
  exit 0
else
  MSG="$PAM_SERVICE login on `hostname -s` from user $PAM_USER@$PAM_RHOST"
  # check if the PAM_RHOST is shown as IP
  echo "$PAM_RHOST" | grep -q -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
  if [ $? -eq 0 ]; then
    SRCIP=$PAM_RHOST
  else
    SRCIP=$(dig +short $PAM_RHOST)
  fi
  SAFEIP=$(dig +short myofficedomain.com)
  if [ "$SRCIP" == "$SAFEIP" ]; then
    echo "Authorised $MSG" | logger
  else
  {
    echo "User: $PAM_USER"
    echo "Remote Host: $PAM_RHOST"
    echo "Service: $PAM_SERVICE"
    echo "TTY: $PAM_TTY"
    echo "Date: `date`"
    echo "Server: `uname -a`"
  } | mail -s "Unexpected $MSG" root
  fi
fi
exit 0

The script will send an email ONLY if the source IP is not the one from myofficedomain.com; however, it will still log the authentication in /var/log/messages using the logger command.

Chef – notes

Website: https://www.chef.io
Learning site: https://learn.chef.io

Like any other configuration management tool, the main goal is to automate and keep consistency across the infrastructure:

  • create files if missing
  • ignore file/task if already up to date
  • replace with original version if modified

Typically, Chef comprises three parts:

  1. your workstation – where you create your recipes/cookbooks
  2. a Chef server – the central repository that hosts the active version of recipes/cookbooks and manages the nodes
  3. nodes – the machines managed by the Chef server. FYI, every node has the Chef client installed.

[diagram: workstation, Chef server and nodes – picture source https://learn.chef.io]

Generally, you develop your cookbooks on your workstation and push them onto the Chef server. The node(s) communicate with the Chef server via chef-client, which pulls and executes the cookbooks.

There is no communication between the workstation and the node EXCEPT for the initial bootstrap task. This is the only time the workstation connects directly to the node: it provides the details required to communicate with the Chef server (the Chef server’s URL and the validation key), installs Chef on the node and runs chef-client for the first time. During this run, the node gets registered on the Chef server and receives a unique client.pem key, which chef-client uses to authenticate afterwards.
The information gets stored in a PostgreSQL DB, and some indexing happens as well in Apache Solr (Elasticsearch in a Chef server cluster environment).

Further explanation here: https://docs.chef.io/chef_overview.html

Some terms:

  • resource: a part of the system in a desirable state (e.g. package installed, file created…);
  • recipe: contains declarations of resources, basically the things to do;
  • cookbook: a collection of recipes, templates, attributes, etc… basically the final collection of all of the above.

Important to remember:

  • there are default actions: if not specified, the default action applies (e.g. :create for a file),
  • in the recipe you define WHAT but not HOW. The “how” is managed by Chef itself,
  • the order is important! For example, make sure to define the installation of a package BEFORE enabling and starting its service. ONLY attributes can be listed in any order.


Labs

Test images: http://chef.github.io/bento/ and https://atlas.hashicorp.com/bento
=> you can get these boxes using Vagrant

Example: how to get CentOS 7 for VirtualBox, start it, connect to it and remove it:

vagrant box add bento/centos-7.2 --provider=virtualbox

vagrant init bento/centos-7.2

vagrant up

vagrant ssh

vagrant destroy

Exercises:

Software links and info:

Chef DK: it provides the tools (chef, knife, berks…) to manage your servers remotely from your workstation.
Download link here.

To communicate with the Chef server, your workstation also needs a .chef/knife.rb file configured:

# See http://docs.chef.io/config_rb_knife.html for more information on knife configuration options

current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "admin"
client_key               "#{current_dir}/admin.pem"
chef_server_url          "https://chef-server.test/organizations/myorg123"
cookbook_path            ["#{current_dir}/../cookbooks"]

Make sure to also have admin.pem (the RSA key) in the same .chef directory.

To fetch and verify the SSL certificate from the Chef server:

knife ssl fetch

knife ssl check

 

Chef DK also provides tools that allow you to configure a machine directly, but this is just for testing purposes. Syntax example:

chef-client --local-mode myrecipe.rb

 

 

Chef Server: download here.
To remember: the Chef server needs RSA keys (command-line switch --filename) to communicate. There is the user’s key and an organisation key (the chef-validator key).
There are different types of installation. Here you can find more information, and here more detail about the new HA version.

Chef Server can have a web interface, if you also install the Chef Management Console:

# chef-server-ctl install chef-manage

 

Alternatively, you can use the Hosted Chef service.

Chef Client:
(From the official docs) The chef-client accesses the Chef server from the node on which it’s installed to get configuration data, performs searches of historical chef-client run data, and then pulls down the necessary configuration data. After the chef-client run is finished, the chef-client uploads updated run data to the Chef server.

 


Handy commands:

# Create a cookbook (structure) called chef_test01, into cookbooks dir
chef generate cookbook cookbooks/chef_test01

# Create a template for the file "index.html"
# this will generate "index.html.erb" under the cookbook's "templates" folder
chef generate template cookbooks/chef_test01 index.html

# Run a specific recipe web.rb of a cookbook, locally
# --runlist + --local-mode
chef-client --local-mode --runlist 'recipe[chef_test01::web]'

# Upload cookbook to Chef server
knife cookbook upload chef_test01

# Verify uploaded cookbooks (and versions)
knife cookbook list

# Bootstrap a node (to do ONCE)
# knife bootstrap ADDRESS --ssh-user USER --sudo --identity-file IDENTITY_FILE --node-name NODE_NAME
# Opt: --run-list 'recipe[RECIPE_NAME]'
knife bootstrap 10.0.3.1 --ssh-port 22 --ssh-user user1 --sudo --identity-file /home/me/keys/user1_private_key --node-name node1
# Verify that the node has been added
knife node list
knife node show node1

# Run cookbook on one node
# (--attribute ipaddress is used if the node has no resolvable FQDN)
knife ssh 'name:node1' 'sudo chef-client' --ssh-user user1 --identity-file /home/me/keys/user1_private_key --attribute ipaddress

# Delete the data about your node from the Chef server
knife node delete node1
knife client delete node1

# Delete Cookbook on Chef Server (select which version)
# use --all --yes if you want to remove everything
knife cookbook delete chef_test01

# Delete a role
knife role delete web

 


Practical examples:

Create file/directory

directory '/my/path'

file '/my/path/myfile' do
  content 'Content to insert in myfile'
  owner 'user1'
  group 'user1'
  mode '0644'
end

Package management

package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

Use of template

template '/var/www/html/index.html' do
  source 'index.html.erb'
end

Use variables in the template

<html>
  <body>
    <h1>hello from <%= node['fqdn'] %></h1>
  </body>
</html>

 


General notes

Chef Supermarket

link here – community cookbook repository.
The best way to get a cookbook from Chef Supermarket is using the Berkshelf command (berks), as it resolves all the dependencies. knife supermarket does NOT resolve dependencies.

Add the cookbooks to your Berksfile:

source 'https://supermarket.chef.io'
cookbook 'chef-client'

And run

berks install

This will download the cookbooks and their dependencies into ~/.berkshelf/cookbooks

Then, to upload ALL of them to the Chef server, the best way is:

# Production
berks upload 

# Just to test (ignore SSL check)
berks upload --no-ssl-verify

 

Roles

Roles define a function of a node.
They are stored as objects on the Chef server.
Create them with knife role create OR (better) knife role from file <roles/myrole.json>. Using JSON is recommended as it can be version controlled.

Example of a web.json role:

{
   "name": "web",
   "description": "Role for Web Server",
   "json_class": "Chef::Role",
   "override_attributes": {
   },
   "chef_type": "role",
   "run_list": ["recipe[chef_test01::default]",
                "recipe[chef_test01::web]"
   ],
   "env_run_lists": {
   }
}

Commands:

# Push a role
knife role from file roles/web.json
knife role from file roles/db.json

# Check what's available
knife role list

# View the role pushed
knife role show web

# Assign a role to a specific node
knife node run_list set node1 "role[web]"
knife node run_list set node2 "role[db]"

# Verify
knife node show node1
knife node show node2

To apply the changes, you need to run chef-client on the node.

You can also verify:

knife status 'role:web' --run-list

 


Kitchen

All the following is extracted from the official https://learn.chef.io.

Test Kitchen helps speed up the development process by applying your infrastructure code on test environments from your workstation, before you apply your work in production.

Test Kitchen runs your infrastructure code in an isolated environment that resembles your production environment. With Test Kitchen, you continue to write your Chef code from your workstation, but instead of uploading your code to the Chef server and applying it to a node, Test Kitchen applies your code to a temporary environment, such as a virtual machine on your workstation or a cloud or container instance.

When you use the chef generate cookbook command to create a cookbook, Chef creates a file named .kitchen.yml in the root directory of your cookbook. .kitchen.yml defines what’s needed to run Test Kitchen, including which virtualisation provider to use, how to run Chef, and what platforms to run your code on.
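For reference, a minimal .kitchen.yml along those lines might look like this (the driver, provisioner and platform values are illustrative assumptions, not taken from the original post):

---
driver:
  name: vagrant            # which virtualisation provider to use

provisioner:
  name: chef_zero          # how to run Chef

platforms:
  - name: centos-7.2       # what platform to run your code on

suites:
  - name: default
    run_list:
      - recipe[chef_test01::default]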

Kitchen steps:

[diagram: Kitchen workflow]

Handy commands:

$ kitchen list        # show the test instances and their status
$ kitchen create      # boot the test environment
$ kitchen converge    # apply the cookbook to it

 

Dynamic MOTD on Centos7

Just a few steps!

Install the figlet package:

yum install figlet

 

Create an /etc/motd.sh script with this content:

#!/bin/sh
#
clear
figlet -f slant $(hostnamectl --pretty)
printf "\n"
printf "\t- %s\n\t- Kernel %s\n" "$(cat /etc/redhat-release)" "$(uname -r)"
printf "\n"



date=`date`
load=`cat /proc/loadavg | awk '{print $1}'`
root_usage=`df -h / | awk '/\// {print $(NF-1)}'`
# CentOS 7's free(1) has no "buffers/cache" line, so compute usage from the Mem: line
memory_usage=`free -m | awk '/Mem:/ { printf("%3.1f%%", $3/$2*100) }'`
swap_usage=`free -m | awk '/Swap/ { if ($2 > 0) printf("%3.1f%%", $3/$2*100); else print "0.0%" }'`
users=`users | wc -w`
time=`uptime | grep -ohe 'up .*' | sed 's/,/\ hours/g' | awk '{ printf $2" "$3 }'`
processes=`ps aux | wc -l`
ethup=$(ip -4 ad | grep 'state UP' | awk -F ":" '!/^[0-9]*: ?lo/ {print $2}')
ip=$(ip ad show dev $ethup |grep -v inet6 | grep inet|awk '{print $2}')

echo "System information as of: $date"
echo
printf "System load:\t%s\tIP Address:\t%s\n" $load $ip
printf "Memory usage:\t%s\tSystem uptime:\t%s\n" $memory_usage "$time"
printf "Usage on /:\t%s\tSwap usage:\t%s\n" $root_usage $swap_usage
printf "Local Users:\t%s\tProcesses:\t%s\n" $users $processes
echo

[ -f /etc/motd.tail ] && cat /etc/motd.tail || true

Make the script executable:

chmod +x /etc/motd.sh

Append a call to this script to /etc/profile so that it is executed as the last command once a user logs in:

echo "/etc/motd.sh" >> /etc/profile
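A slightly cleaner alternative (my suggestion, not part of the original guide): CentOS 7's /etc/profile already sources every *.sh file under /etc/profile.d/ in alphabetical order, so you can drop a small wrapper there instead of editing /etc/profile itself:

# create a wrapper that runs last (hence the zzz- prefix)
cat > /etc/profile.d/zzz-motd.sh <<'EOF'
[ -x /etc/motd.sh ] && /etc/motd.sh
EOF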

Try it and have fun! 🙂

 

If you are using Debian, here is the equivalent guide.

Develop your site using WordPress and publish a static version on Cloud Files

WordPress is a CMS widely used nowadays to develop sites. Even if it wasn’t created with this intent (blogging was actually the original reason), due to its simplicity and popularity, loads of people are using this software to build their websites.

When the end goal is no longer a blog but a pure website, and you’re not using any ‘search’ or ‘comment’ features which still require PHP functions, it could be worth considering a completely “static’ed” site which is then published in a Cloud Files Public Container. You could take advantage of the built-in CDN capability and forget about scaling on demand and all the limitations that WordPress has in the Cloud.
Having a static site as opposed to a PHP site means you can bypass security patch installations and avoid the possible risk of compromised servers.
And last but not least: cost reduction!
When you serve a static site from Cloud Files you pay only for the space utilized and the bandwidth – no more charges for Cloud Server(s), and better performance! So… why not give it a try?! 🙂

Side note: this concept can be applied to any CMS and any Cloud Platform. In this post I will use WordPress and Rackspace Public Cloud as examples.

Here is a list of use cases:

  • Existing website built on WordPress, without search fields, comments or forms (dynamic content), which is still being updated on a regular basis with extra pages.
  • Brand new project based on WordPress where the end goal is to use WordPress as a Content Management System (CMS) to build a site, without any comment or search fields.
  • Old website built on an old version of WordPress that cannot be updated due to some plugins that won’t work on newer versions of this CMS, with no dynamic functionalities required.
  • Legacy site built on WordPress that is no longer being developed/updated but is still required to stay online, with no dynamic functionalities required.

To be able to make a site built using a CMS completely static, any dynamic functionality needs to be disabled (e.g. comments, forms, search fields…).
It’s possible to manually re-integrate some of them relying on trusted external sources (e.g. Facebook comments) with the introduction of some JavaScript, but this would require editing the generated static pages, which is outside the scope of this article. However, I wanted to mention it, as it could be a limitation in the decision to make the site static.

What do you need?

  • a Cloud Load Balancer: this is just to keep a static IP
  • a small Cloud Server LAMP stack (1/2GB max) where you will host your site
  • a Cloud Files Public container to host the static version of the site
  • access to your DNS
  • some basic Linux knowledge to install some packages and run a few commands via command line

How to set this up?

Imagine you have www.mywpsite.com built on WordPress.
It’s a site that you keep updating, generally once or twice a week.
Comments are all disabled and you don’t have any search fields.
You found this post and you’re interested in trying to save some money, improve performance and forget about patching and updating your CMS.

So, how can you achieve this?

You will have to build a development infrastructure made up of a Cloud Load Balancer and a single Linux LAMP Cloud Server plus a Cloud Files Public Container for your production static site.

Your main domain www.mywpsite.com will need to point to the CNAME of the Cloud Files Container and no longer to your original site/server. You also need to create a new subdomain (e.g. admin.mywpsite.com) and point it to the development infrastructure: specifically, to the Cloud Load Balancer’s IP. You will then use admin.mywpsite.com/wp-admin/ instead of www.mywpsite.com/wp-admin/ to manage your site.

The only reason to use a Load Balancer in a single-server solution is to keep the public IP, meaning you will no longer need to touch the DNS.
The goal is to image the server after you’ve completed making the changes and then delete it. This way you won’t get charged for a resource that doesn’t serve any traffic and is there just for development. In this example, I’ve mentioned that changes are made a couple of times a week, so it’s cheaper and easier to always keep a Load Balancer with admin.mywpsite.com pointing to it, instead of a Cloud Server where you would be changing the DNS every time.
If you’re planning to keep the dev server up continuously, you could avoid using a Load Balancer entirely. However, Cloud Servers might fail (we all know that Cloud is not designed to be resilient), so you may need to spin up a new server and have the DNS re-pointed again. In this case I do recommend keeping the DNS TTL as low as possible (e.g. 300 seconds).

Please note that the Site URL in your WordPress setup needs to be changed to admin.mywpsite.com. If you’re starting a new project, remember that the Site URL has to match the development subdomain that you have chosen – in our example, admin.mywpsite.com.

Once you are happy with the site and everything works under admin.mywpsite.com, you will just need to run a command to make a static version of it, and another command to push the content onto the Cloud Files Container.

Once complete and verified, you can change the DNS for www.mywpsite.com to point to the Container, then image your dev server and delete it once the image has completed. You will create a new server from this image the next time you want to make a new change.

Here is a diagram that should help you to visualise the setup:

[diagram: setup overview]

All clear?

So let’s do it!

How do you put this in practice?

Step by step guide (no downtime migration)
  1. Create a Cloud Server LAMP stack.
  2. Create a Cloud Load Balancer. Take note of its new IP.
  3. Add the server under the Load Balancer.
  4. Create a Cloud Files Public Container. Take note of its HTTP Public Link.
  5. Create the subdomain admin.mywpsite.com in your DNS and point it to the IP of your Cloud Load Balancer. Keep your www.mywpsite.com domain pointing to your live site to avoid downtime.
  6. Migrate the WordPress files and database onto the new server, and make sure NO changes are made on the live site during the migration, to avoid inconsistency of data.
  7. Change the Site URL to admin.mywpsite.com. Please note that this can be a bit tricky: make sure all the references are properly updated. If it’s a new project, just install WordPress on it and set the Site URL directly to admin.mywpsite.com.
  8. Verify that you can access your site using admin.mywpsite.com and that the wp-admin panel also works correctly. Troubleshoot until this step is fully verified.

    NOTE: At this stage we have a new development site setup and the original live site still serving traffic. Next steps will complete the migration, having live traffic directly served from the Cloud Files Container.

  9. Install httrack and turbolift on your server.
    On a CentOS server you should be able to run the following (httrack and pip are in the EPEL repository):

    yum -y install epel-release
    yum -y install python-pip httrack
    pip install turbolift
  10. [OPTIONAL, but recommended to avoid useless bandwidth charges]
    Manually set /etc/hosts to resolve admin.mywpsite.com locally, to force httrack to pull the content without going via the public net:

    echo "127.0.0.1 admin.mywpsite.com" >> /etc/hosts
  11. Combine these two commands to locally generate a static version of your admin.mywpsite.com and push it to the Cloud Files Container.
    To achieve that, you can use a simple BASH script like the one below:
    Script project: https://bitbucket.org/thtieig/wp2static/

    #!/bin/bash
    
    #########################################################
    # Replace the quoted values with the right information: #
    # ------------------------------------------------------#
    
    USERNAME="yourusername"
    APIKEY="your_api_key"
    REGION="datacenter_region"
    CONTAINERNAME="mycontainer"
    DEVSITE="admin.mywpsite.com"
    
    #########################################################
    
    DEST=/tmp/$DEVSITE/ 
    USER=$(echo "$USERNAME" | tr '[:upper:]' '[:lower:]') 
    DC=$(echo "$REGION" | tr '[:upper:]' '[:lower:]')
    
    # Clean up / re-create the destination directory
    rm -rf "$DEST" && mkdir -p "$DEST"
    
    # Create static copy of the site
    httrack $DEVSITE -w -O "$DEST" -q -%i0 -I0 -K0 -X -*/wp-admin/* -*/wp-login*
    
    # Upload the static copy into the container via local network SNET (-I)
    turbolift -u $USER -a $APIKEY --os-rax-auth $DC -I upload -s $DEST/$DEVSITE -c $CONTAINERNAME
    
    
  12. Now you can test if the upload went well.
    Just open the HTTP link of your Cloud Files Container in your browser and make sure the site is displayed correctly (see also the quick check after this list).
  13. If all works as expected, you can now make your www.mywpsite.com domain a CNAME of that HTTP link. Once done, your live traffic will be served by your Cloud Files Container and no longer by your old live site!
  14. And yes, you can now image your Cloud Server and delete it once the imaging has completed. Next time you need to edit/add some content, just spin up another server from its latest image and add it under the Cloud Load Balancer.
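For step 12, a quick hedged check from the command line (the container URL below is a placeholder for your real HTTP link):

# expect an HTTP 200 and a text/html Content-Type from the container
curl -sI http://<container-http-url>/index.html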

Happy “static’ing”! 🙂