
GlusterFS

Example of a GlusterFS configuration on two servers with Block Storage attached.

This setup is suggested for TESTING purposes only. In a production environment, please verify performance first.

Create a separate network and map the servers' IPs to their names in /etc/hosts.
Append to /etc/hosts:
# GlusterFS
192.168.3.5     gfs01
192.168.3.6     gfs02

>> On BOTH nodes:

yum update
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum -y install parted lvm2 xfsprogs glusterfs glusterfs-fuse glusterfs-server
grep ^exclude /etc/yum.conf

[root@gfs01 sysconfig]# grep ^exclude /etc/yum.conf
exclude=gluster*
(this line is in the [main] section)

parted -s -- /dev/xvdb mktable gpt
parted -s -- /dev/xvdb mkpart primary 2048s 100%
parted -s -- /dev/xvdb set 1 lvm on
partx -a /dev/xvdb
pvcreate /dev/xvdb1 
vgcreate vggfs01 /dev/xvdb1 

lvcreate -l 100%VG -n gbrick1 vggfs01
mkfs.xfs -i size=512 /dev/vggfs01/gbrick1
echo '/dev/vggfs01/gbrick1 /data/gluster/gvol0 xfs inode64,nobarrier 0 0' >> /etc/fstab
mkdir -p /data/gluster/gvol0
mount /data/gluster/gvol0
mkdir -p /data/gluster/gvol0/brick1

/bin/systemctl start glusterd.service
/bin/systemctl status glusterd.service
systemctl enable glusterd.service

>> On NODE2
gluster peer probe gfs01
gluster peer status
gluster pool list

>> On NODE1
gluster peer probe gfs02
gluster peer status
gluster pool list

gluster volume create gvol0 replica 2 transport tcp gfs01:/data/gluster/gvol0/brick1 gfs02:/data/gluster/gvol0/brick1
gluster volume start gvol0
gluster volume info gvol0

gluster volume set gvol0 performance.cache-refresh-timeout 30
gluster volume set gvol0 performance.io-thread-count 32
gluster volume set gvol0 performance.cache-size 1073741824
gluster volume info gvol0



============================================================

TO MOUNT - Fuse (HA)
=> nodes need to be connected to the same Cloud Network

Append to /etc/hosts
# GlusterFS
192.168.3.5     gfs01
192.168.3.6     gfs02

yum -y install glusterfs glusterfs-fuse

modprobe fuse

echo 'gfs01:/gvol0 /mnt/gluster/gvol0 glusterfs defaults,backupvolfile-server=gfs02,_netdev 0 0' >> /etc/fstab

mkdir -p /mnt/gluster/gvol0

mount /mnt/gluster/gvol0
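
A quick way to verify that replication works, assuming the volume is mounted on two clients (or on the two nodes themselves): create a test file from one mount and list it from the other. The file name below is just an example.

>> On client 1
touch /mnt/gluster/gvol0/replication-test

>> On client 2
ls -l /mnt/gluster/gvol0/replication-test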


==========================================================
It seems that GlusterFS 3.7 is not available for Debian 7; packages are provided only from Debian 8 (jessie) onwards.
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.0/Debian/jessie/


==========================================================

Extra source: http://matty.digital/gluster

Extra commands
# gluster volume remove-brick gvol0 gfs02:/data/gluster/gvol0/brick1 force
# gluster peer detach gfs02
# gluster peer detach gfs01
# gluster peer probe gfs01

Rackspace Cloud – .localdomain added in /etc/hosts after reboot

There is an agent called “nova-agent” which runs on all Rackspace cloud virtualised servers. This agent handles all communication between the hypervisor and guest OS, and is used for decloning.

Because it is used during decloning, it owns the /etc/hosts file and many files related to DNS and networking (/etc/resolv.conf, /etc/sysconfig/network-scripts/ifcfg-eth0, etc.).

It is unlikely, but possible, that the host reboot triggered nova-agent to reset your hosts file.

To prevent nova-agent from overwriting your files, you can change the attributes of the file using the following command:

# chattr +i /etc/hosts

This will make the file unwritable even by root! To remove this restriction, use the following:

# chattr -i /etc/hosts
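
To check whether the immutable flag is currently set, you can list the file attributes with lsattr (part of e2fsprogs); an 'i' in the output means the flag is on:

# lsattr /etc/hosts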

Linux Cloud Server migration script

This script allows you to migrate a Linux server from one machine to another. It uses rsync and can be used, for example, when you need to resize a server down, or when you want to migrate to another cloud provider.

git clone git://github.com/cloudnull/InstanceSync.git
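
If you are curious about the idea behind the script, it is essentially a root-level rsync over SSH that copies the filesystem while excluding paths tied to the old machine. A minimal sketch, assuming a reachable destination at DEST_IP (the exclude list below is illustrative, not the script's exact one):

rsync -avxHAX --numeric-ids --progress \
  --exclude=/boot --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp \
  --exclude=/etc/fstab --exclude=/etc/sysconfig/network-scripts \
  / root@DEST_IP:/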

Source:
http://cloudnull.io/2012/07/cloud-server-migration/
https://github.com/cloudnull/InstanceSync

Rackspace – Cloud Monitoring – Ansible plugins

Install the required packages (Ubuntu/CentOS):

apt-get update && apt-get install python-apt python-pip build-essential python-dev git python-virtualenv -y

yum install python-pip git python-devel python-virtualenv gcc -y

Prepare the virtual environment

virtualenv /root/monitorenv
. /root/monitorenv/bin/activate
pip install paramiko PyYAML jinja2 httplib2 ansible

Download the playbook

git clone https://github.com/stevekaten/cloud-monitoring-plugin-deploy
cd cloud-monitoring-plugin-deploy

Install the required plugin(s):

ansible-playbook -i hosts holland_mysqldump.yml

	This will configure the holland_mysqldump plugin on the localhost.

ansible-playbook -i hosts mysql_slave.yml

	This will configure the mysql_slave plugin on the localhost.

ansible-playbook -i hosts port_check.yml

	This will fail with an error message informing you that you need to set a port.

ansible-playbook -i hosts port_check.yml -e port=8080

	This will configure the port_check plugin on the localhost checking if port 8080 is open.

ansible-playbook -i hosts port_check.yml -e '{"host":"rackspace.com","port":"80"}'

	This will configure the port_check plugin to check rackspace.com:80.

ansible-playbook -i hosts port_check.yml -e '{"host":"10.X.X.X","port":"3306"}'

	This will configure the port_check plugin to check mysql's port 3306 on the ServiceNet address.

ansible-playbook -i hosts lsyncd_status.yml

	This will configure the lsyncd_check plugin.

To UNINSTALL the monitoring, you need to delete the check, remove the related file from /etc/rackspace-monitoring-agent.conf.d/, and restart the Cloud Monitoring agent.
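
For example, assuming the holland_mysqldump plugin was the one deployed (the exact file name depends on the plugin):

rm /etc/rackspace-monitoring-agent.conf.d/holland_mysqldump.yaml
service rackspace-monitoring-agent restart

Deleting the check itself is done through the Cloud Monitoring control panel or API; the commands above only remove the local plugin configuration.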

Rackspace – Cloud server inaccessible after creation from custom image

It sometimes happens that a server built from a custom image is not accessible. Often the reason is that the Nova agent was not running (for various reasons) on the source server, so the networking wasn't set correctly during the build process. This means the new server still has the old IP and routes of the original server, the one used to create the image itself.

How to fix it?
Connect to the console and make sure xe-linux-distribution (xe-daemon) and the Nova agent are restarted and running.

Important: Make sure xe-linux-distribution is started BEFORE Nova Agent is.
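
For example, on a CentOS/RHEL guest using SysV init scripts (service names may differ on other distributions):

service xe-linux-distribution restart
service nova-agent restart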

Once this has been done, run the following command on the Cloud server to force the hypervisor to re-push the right configuration (this works only on Linux servers):

UUID=`uuidgen`; xenstore-write data/host/$UUID '{"name":"resetnetwork","value":""}'; sleep 10; xenstore-read data/guest/$UUID; unset UUID

# If completed successfully it will return something like this:
{"message": "", "returncode": "0"}

Rackspace – Cloud Server autokill script

#!/bin/bash

# This script deletes the current instance and asks the
# Autoscale group to replace the node


###########################################################

CRED_FILE=/opt/autoscale/.credentials
AS_GRP_ID=a17b08b3-0c04-48e8-84a9-3070c29a27fa

###########################################################

# Gather info from credential file
USERNAME=$(grep username $CRED_FILE | awk -F= '{print $2}' | sed 's/ //g')
APIKEY=$(grep api_key $CRED_FILE | awk -F= '{print $2}' | sed 's/ //g')
REGION=$(grep region $CRED_FILE | awk -F= '{print $2}' | sed 's/ //g' | tr '[:upper:]' '[:lower:]')


# Read this server's UUID from xenstore (the name is "instance-<uuid>")
SERVER_UID=$(xenstore-read name | sed 's/instance-//')

# Authenticate against the Identity API and extract the auth token and the tenant (account) ID
AUTH=$(
curl -sd \
"{
   \"auth\":
   {
        \"RAX-KSKEY:apiKeyCredentials\":
        {\"username\": \"$USERNAME\",
        \"apiKey\": \"$APIKEY\"}        }
}" \
-H 'Content-Type: application/json' \
'https://identity.api.rackspacecloud.com/v2.0/tokens' | python -m json.tool | grep -A 7 token | awk '/id/ { print $2 }' | tr -d '"' | tr -d ","
) 

TOKEN=$(echo $AUTH | awk '{print $1}')
ID=$(echo $AUTH | awk '{print $2}')


# Delete this server via the Autoscale API and ask for a replacement (?replace=true)
curl -sH "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -X DELETE \
    "https://$REGION.autoscale.api.rackspacecloud.com/v1.0/$ID/groups/$AS_GRP_ID/servers/$SERVER_UID?replace=true"
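
The script expects the credentials file referenced by CRED_FILE (/opt/autoscale/.credentials) to contain simple key=value pairs; an example with placeholder values:

username = myrackspaceuser
api_key  = 0123456789abcdef0123456789abcdef
region   = LON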