Space utilisation one-liners

# Current folder space
du -sh <path>

# 10 biggest folders
du -m <path> | sort -nr | head -n 10

# Find directories bigger than 1 GB (scanning up to 5 levels deep)
du -hcx --max-depth=5 | grep '[0-9]G' | sort -hr

# Exclude a path from the final calculation
cd /path
du -sh --exclude=./relative/path/to/uploads

# Check APPARENT size
du -h --apparent-size /path/file



# Check how much space is "wasted":
lsof | grep deleted | sed 's/^.* \(REG.*deleted.*$\)/\1/' | awk '{print $5, $3}' | sort | uniq | awk '{sum += $2 } END { print sum }'

# >> *if* the number comes out like "1.5e+10" (awk switched to scientific notation), you may need this to see it converted to MB or GB
lsof | grep deleted | sed 's/^.* \(REG.*deleted.*$\)/\1/' | awk '{print $5, $3}' | sort | uniq | awk '{sum += $2 } END { print sum " bytes - " sum/1024**2 " MB - " sum/1024**3 " G" }'
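Alternatively (a sketch, assuming GNU coreutils is available), you can feed the raw byte count to numfmt to get a human-readable unit:

```shell
# Convert a raw byte count into a human-readable IEC unit (KiB/MiB/GiB)
echo 1073741824 | numfmt --to=iec   # prints 1.0G
```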

# Check the biggest files:
lsof | grep deleted | sed 's/^.* \(REG.*deleted.*$\)/\1/' | awk '{print $5, $3}' | sort | uniq | awk '{print $2, $1}' | sort -nr

>> then you can grep the file name from the output of "lsof | grep deleted", find the PID that holds that file open (second column),
>> and issue the following command:
kill -HUP <PID>
>> Then check again: this should release the space used by the file.
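If lsof output is too noisy, a rough equivalent (a sketch, Linux-only) is to scan /proc directly for file descriptors that point at deleted files:

```shell
# Walk every process's open file descriptors and report the ones whose
# target has been unlinked (readlink appends "(deleted)" to the path)
for fd in /proc/[0-9]*/fd/*; do
  target=$(readlink "$fd" 2>/dev/null)
  case "$target" in
    *'(deleted)') echo "${fd%/fd/*} holds: $target" ;;
  esac
done
```

Run it as root to see descriptors belonging to other users' processes.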

 

Apparent size is the number of bytes your applications think are in the file. It’s the amount of data that would be transferred over the network (not counting protocol headers) if you decided to send the file over FTP or HTTP. It’s also the result of cat theFile | wc -c, and the amount of address space that the file would take up if you loaded the whole thing using mmap.

Disk usage is the amount of space that can’t be used for something else because your file is occupying that space.

In most cases, the apparent size is smaller than the disk usage because the disk usage counts the full size of the last (partial) block of the file, and apparent size only counts the data that’s in that last block. However, apparent size is larger when you have a sparse file (sparse files are created when you seek somewhere past the end of the file, and then write something there — the OS doesn’t bother to create lots of blocks filled with zeros — it only creates a block for the part of the file you decided to write to).

Source (clarification): http://stackoverflow.com/questions/5694741/why-is-the-output-of-du-often-so-different-from-du-b 
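You can see the difference for yourself with a sparse file (a quick sketch; truncate just sets the file size without allocating any data blocks):

```shell
# Create a 100 MB sparse file: only the size is set, no blocks are written
truncate -s 100M sparse.img

du -h sparse.img                   # disk usage: (almost) zero
du -h --apparent-size sparse.img   # apparent size: 100M

rm sparse.img
```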

Nice / Renice… the dilemma

Here are some handy notes!
-n -20 => HIGHEST priority
-n 19  => LOWEST priority

$ ps axl    # the NI column shows the nice value of each process

nice -10 <command name> and nice -n 10 <command name> do the same thing: both run the command with a niceness of 10.

A major misconception about the nice command is that nice -10 <command name> will run that process with a niceness of -10 (a higher priority). It will not.

In order to assign a -10 priority to a command (negative values require root), you should run it as shown below.

nice --10 <command name>     (equivalent to: nice -n -10 <command name>)

http://www.cyberciti.biz/faq/change-the-nice-value-of-a-process/
http://www.thegeekstuff.com/2013/08/nice-renice-command-examples

Renice (change priority of a running process).

Example – set a running PID to the LOWEST priority
$ renice 19 PID

http://www.cyberciti.biz/faq/howto-change-unix-linux-process-priority/
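A quick self-contained demo (a sketch using a dummy sleep process):

```shell
# Start a dummy background job, then push it to the lowest priority
sleep 60 &
PID=$!
renice 19 -p "$PID"
ps -o pid,ni,comm -p "$PID"   # the NI column should now read 19
kill "$PID"
```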

Create and mount SWAP file

In the Cloud era, virtual servers come with no swap, and that's perfectly fine: swapping is bad for performance, and Cloud technology is designed for horizontal scaling, so if you need more memory, add another server.

However, it can sometimes be handy to have some extra room for testing (and to save some money).

So, here below is a quick script that creates a 4GB swap file, activates it, and tunes some system parameters to limit the use of the swap to when it is really necessary:

fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
sysctl vm.swappiness=0
echo 'vm.swappiness=0' >> /etc/sysctl.conf
tail /etc/sysctl.conf

NOTES:
Swappiness: setting it to zero means the swap won't be used unless absolutely necessary (you run out of memory), while a swappiness setting of 100 means that programs will be swapped to disk almost instantly. Note that on kernels 3.5 and newer, a value of 0 disables swapping entirely until the system is out of memory; use 1 if you want "swap only in an emergency" behaviour.
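After running the script, a quick sanity check (a sketch) to confirm the swap is active and the swappiness value took effect:

```shell
swapon --show                  # the new /swapfile should be listed
free -h                        # the Swap row should show the 4G total
cat /proc/sys/vm/swappiness    # should print the value you set
```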

Holland backup setup

>> package:

yum install holland-mysqldump

>> Auto install script:

#!/bin/bash
if ! which yum > /dev/null 2>&1 ; then
    apt-get install -y python-setuptools python-mysqldb
else
    yum install -y MySQL-python
fi

if [ ! -f holland-1.0.10.tar.gz ] ; then
    wget http://hollandbackup.org/releases/stable/1.0/holland-1.0.10.tar.gz
fi
if [ ! -d holland-1.0.10 ] ; then
    tar zxf holland-1.0.10.tar.gz 
fi

cd holland-1.0.10
python setup.py install 

cd plugins/holland.lib.common/
python setup.py install

cd ../holland.lib.mysql/
python setup.py install

cd ../holland.backup.mysqldump/
python setup.py install

cd ../../config
mkdir -p /etc/holland/providers
mkdir -p /etc/holland/backupsets
cp holland.conf  /etc/holland/
cp providers/mysqldump.conf /etc/holland/providers/
cp backupsets/examples/mysqldump.conf /etc/holland/backupsets/default.conf

cd /etc/holland
sed -i 's=/var/spool/holland=/var/lib/mysqlbackup=g' holland.conf
sed -i 's/backups-to-keep = 1/backups-to-keep = 7/g' backupsets/default.conf
sed -i 's/file-per-database.*=.*no/file-per-database = yes/g' backupsets/default.conf
sed -i 's/file-per-database.*=.*no/file-per-database = yes/g' providers/mysqldump.conf

hour=$(python -c "import random; print random.randint(0, 4)")
minute=$(python -c "import random; print random.randint(0, 59)")
echo '# Dump mysql DB to files. Time was chosen randomly' > /etc/cron.d/holland
echo "$minute $hour * * * root /usr/local/bin/holland -q bk" >> /etc/cron.d/holland

mkdir -p /var/lib/mysqlbackup
mkdir -p /var/log/holland
chmod o-rwx /var/log/holland

echo 
echo
echo "Holland should be installed now."
echo "Test backup: holland bk"
echo "See cron job: cat /etc/cron.d/holland"
echo "See backups: find /var/lib/mysqlbackup/"

>> Where the backup-set conf files live:

/etc/holland/backupsets

>> When it runs

cat /etc/cron.d/holland

>> Dry-run to check that everything works without actually running the backup

holland bk --dry-run

>> /etc/holland/backupsets/default.conf

[mysql:client]
defaults-extra-file = /root/.my.cnf
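The /root/.my.cnf referenced above is a standard MySQL option file; a minimal sketch (YOURPASSWORD is a placeholder):

```ini
[client]
user = root
password = YOURPASSWORD
```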

 

Rackspace Cloud – Remove old System IDs via command line

Rough script/instructions 🙂

>> set your variables:
TOKEN=""
REGION="lon"
DDI=""    # this is the account number

>> Generate a list of backup agents
curl -sH  "X-Auth-Token: $TOKEN" -H "Content-type: application/json" -X GET https://$REGION.backup.api.rackspacecloud.com/v1.0/$DDI/user/agents | python -m json.tool | egrep "MachineName|MachineAgentId" | awk -F":" '{print $2}' | sed 's/ //g' | sed '{N;s/\n//}' > list.txt

>> Manually edit the list so that ONLY the agents you want to remove are left:
vim list.txt 

>> Generate remove list
awk -F, '{print $1}' list.txt > remove.txt


>> generate the exec file to review
for AGENTID in `cat remove.txt`; do echo curl -sH \"X-Auth-Token: $TOKEN\" -H \"Content-type: application/json\" -X POST https://$REGION.backup.api.rackspacecloud.com/v1.0/$DDI/agent/delete -d \'{\"MachineAgentId\": $AGENTID}\' ; done >> exec_me

>> exec the API calls
/bin/bash exec_me

 

Lsync monitoring on Rackspace Cloud

mkdir -p /usr/lib/rackspace-monitoring-agent/plugins/
cd /usr/lib/rackspace-monitoring-agent/plugins/
wget https://raw.githubusercontent.com/racker/rackspace-monitoring-agent-plugins-contrib/master/lsyncd-status.sh
chmod 755 lsyncd-status.sh

You can test the above script by calling it directly to see if it is working and reporting stats:

/usr/lib/rackspace-monitoring-agent/plugins/lsyncd-status.sh

 

Now we need to create the check itself (you will need a valid auth token for the API call):

curl -i -X POST \
-H 'X-Auth-Token: [AUTH_TOKEN]' \
-H 'Content-Type: application/json; charset=UTF-8' \
-H 'Accept: application/json' \
--data-binary \
'{"label": "Lsyncd", "type": "agent.plugin", "details": {"file": "lsyncd-status.sh","args": ["arg1","arg2"]}}' \
'https://monitoring.api.rackspacecloud.com/v1.0/[ACCOUNT_ID]/entities/[ENTITY_ID]/checks'

NOTE: ENTITY_ID is the Monitoring ID, NOT the server ID!!

Once the check has been created, you can add the alarm manually via the Control Panel. Note that the 95% test has to come before the 80% one, otherwise the WARNING branch would always match first and the CRITICAL one would never fire:

if (metric['lsyncd_status'] != 'running') {
return new AlarmStatus(CRITICAL, 'Lsyncd Service is NOT running.');
}
if (metric['lsyncd_status'] == 'running' && metric['percent_used_watches'] >= 95) {
return new AlarmStatus(CRITICAL, 'Lsyncd is running but the number of directories has reached 95% of notify watches.');
}
if (metric['lsyncd_status'] == 'running' && metric['percent_used_watches'] >= 80) {
return new AlarmStatus(WARNING, 'Lsyncd is running but the number of directories has reached 80% of notify watches.');
}
return new AlarmStatus(OK, 'Lsyncd Service is running.');

Make sure to test and save the alert.

Rackspace Cloud Driveclient not working

First of all, check the logs: /var/log/driveclient.log

You might find 403 errors and lines that are showing that the agent can’t connect properly.

In this case, the first step is to try re-registering the backup agent (for example, the customer may have changed the API key):

# /usr/local/bin/driveclient --configure
WARNING: Agent already configured. Overwrite? [Y/n]: Y
Username: My_Username
Password: My_APIKey

Desired Output:

Registration successful!
Bootstrap created at: /etc/driveclient/bootstrap.json

In case you get something like “ERROR: Registration failed: Could not authenticate user. Identity returned 401”, you probably need to force the registration, using the following command:

# driveclient -u USER_NAME -k API_KEY -t LON -l raxcloudserver -a lon.backup.api.rackspacecloud.com -c

 

Force reset/repush network configuration Rackspace Cloud server

Run the following command on the Cloud server (this works only on Linux servers):

UUID=`uuidgen`; xenstore-write data/host/$UUID '{"name":"resetnetwork","value":""}'; sleep 10; xenstore-read data/guest/$UUID; unset UUID

If completed successfully it will return something like this:

{"message": "", "returncode": "0"}

 

Apache ProxyPass for WordPress master-slave setup

Simple way

To ensure certain traffic (the wp-admin area, in this case) always goes to the master server, you can use this:

<LocationMatch "^/wordpress/wp-admin/?.*">
ProxyPreserveHost On
ProxyPass http://ip.of.master.server/
</LocationMatch>

 


For a better setup with Variables, just follow the… following steps 🙂

Step One: Configure Environment

We need to set up some environment variables to get this to work correctly.
Add the following to your environment on the slave server(s):

RHEL/CentOS: /etc/sysconfig/httpd

OPTIONS="-DSLAVE"
export MASTER_SERVER="SERVERIP HERE"

Ubuntu: /etc/apache2/envvars

export APACHE_ARGUMENTS="-DSLAVE"
export MASTER_SERVER="SERVERIP HERE"

Step Two: Configure your VirtualHost

In your VirtualHost configuration do something like the following.

<IfDefine SLAVE>
RewriteEngine On
ProxyPreserveHost On
ProxyPass /wp-admin/ http://${MASTER_SERVER}/wp-admin/
ProxyPassReverse /wp-admin/ http://${MASTER_SERVER}/wp-admin/

RewriteCond %{REQUEST_METHOD} =POST
RewriteRule . http://${MASTER_SERVER}%{REQUEST_URI} [P]
</IfDefine>