
Vim – remove the yellow highlight

Be honest: it has happened to you too. You open a file you edited a while ago with Vim and it is still showing that terribly annoying yellow highlight. And I'm sure you probably gave up, thinking that time would eventually remove it.

Wrong! 😛

Here is how to get rid of it:

  1. open the file
  2. press ESC
  3. type :nohl
  4. press Enter

Alternatively, the way I keep using (which seems easier to remember) is to search for some random string.
For example:

  1. open the file
  2. press ESC
  3. type /fkjsaddflkjasd;flka
    (type random gibberish)
  4. press Enter

Done 😉
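Bonus: if you would rather never see search highlighting at all, you can switch it off permanently (note this disables it for every search, not just the current one):

echo 'set nohlsearch' >> ~/.vimrc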

Vim – Comment multi lines

First way:

  1. press v and select whatever needs to be commented out
  2. type :
     (this will appear: :'<,'>)
  3. add s/^/#
     (the command line will now read :'<,'>s/^/#)
  4. press Enter

Second way:

For commenting out a block of text, do the following:

  1. hit CTRL + v (visual block mode)
  2. use the down arrow keys to select the lines you want (it will only highlight a thin column, not the whole lines)
  3. Shift + i (capital I)
  4. insert the character/text you want to add at the beginning of the line (e.g. #)
  5. Press ESC.

 

To uncomment a block:

  1. Go to the first line of code you want to start uncommenting from.
  2. Press 0 (To bring the cursor to the beginning of the line.)
  3. CTRL + v and select lines to be uncommented.
  4. press x: this will delete all the selected # characters vertically.
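If you already know the line numbers, a substitute command over a range does the same job without visual mode. For example, to comment out lines 10 to 20 and then uncomment them again:

:10,20s/^/#/
:10,20s/^#//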

 

Source: https://www.quora.com/How-can-I-un-comment-a-block-of-text-in-Vim

phpMyAdmin under Varnish

Add the below to the Varnish VCL file, under sub vcl_recv:

if (req.url ~ "(phpMyAdmin|phpmyadmin|phpMyAdmin/setup)") {
return (pass);
}

Then test the VCL

# varnishd -C -f /etc/varnish/default.vcl

…and if all is good

# service varnish reload

NOTE: sometimes, altering the phpMyAdmin auth method in /etc/phpMyAdmin/config.inc.php from cookie to http is also needed.
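For reference, that change is a single line in config.inc.php (a sketch; the exact server index $i depends on your setup):

$cfg['Servers'][$i]['auth_type'] = 'http'; // previously 'cookie'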

Varnish – basic notes

ACLs: /etc/varnish/default.vcl

Memory usage:
grep VARNISH_STORAGE_SIZE /etc/sysconfig/varnish

Check how much memory Varnish can use (check the last parameter in the output line):

# ps aux | grep varnish
root     27093  0.0  0.1 112304  1140 ?        Ss   16:28   0:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 50,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,256M
varnish  27094  0.1  0.9 21760528 9240 ?       Sl   16:28   0:02 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 50,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,256M
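If you need to raise the cache size on a RHEL/CentOS-style setup like the one above, something along these lines should do it (a sketch; pick a size that fits your RAM):

# sed -i 's/^VARNISH_STORAGE_SIZE=.*/VARNISH_STORAGE_SIZE=512M/' /etc/sysconfig/varnish
# service varnish restart    (restart, not reload, so the new storage size is applied)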


>> Test VCL
# varnishd -C -f /etc/varnish/default.vcl


>> Test if varnish works
# varnishstat

 

Sophos antivirus notes

Generic checks

ps aux |grep sav (check process)

/opt/sophos-av/bin/savdstatus --version (version, last update, threat data)
/opt/sophos-av/bin/savconfig -v (info about exclusions, which datacentre hosts that Sophos device, named scans, etc.)
/opt/sophos-av/bin/savconfig get TalpaOperations (check disabled mode)

cat /proc/sys/talpa/intercept-filters/VettingController/ops (check all modes)
/opt/sophos-av/bin/savconfig set TalpaOperations -- -open (set mode to disabled for open/read)
/opt/sophos-av/bin/savconfig get TalpaOperations

cat /proc/sys/talpa/intercept-filters/VettingController/ops
-open
+close
+exec
+mount
+umount

/opt/sophos-av/bin/savconfig query NamedScans (Check Scheduled Scans)
/opt/sophos-av/bin/savconfig query NamedScans SEC:FullSystemScan (Check Scheduled Scans with argument)
/opt/sophos-av/bin/savconfig add ExcludeFilePaths /home/user1/ (add an excluded file path)
/opt/sophos-av/bin/savconfig remove ExcludeFilePaths /home/user1/ (remove an excluded file path)


# Check Global exclusions 
/opt/sophos-av/bin/savconfig query ExcludeFileOnGlob && /opt/sophos-av/bin/savconfig query ExcludeFilePaths
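Assuming the add/remove syntax mirrors the ExcludeFilePaths examples above, a glob exclusion would look like this (untested sketch):

/opt/sophos-av/bin/savconfig add ExcludeFileOnGlob '*.tmp' (add a glob exclusion)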

/opt/sophos-av/bin/savdctl disable (disable on-access scanning)
/opt/sophos-av/bin/savdstatus (check)
Sophos Anti-Virus is active but on-access scanning is not running

To get ON-Access Scanning back, restart all Sophos related services:
for i in `chkconfig --list |grep sav |awk '{print $1}'`;do echo -e "\n\e[93mShow service $i restart \e[0m\n";service $i restart;done
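Depending on the version, simply re-enabling the scanner may be enough before resorting to a full service restart:

/opt/sophos-av/bin/savdctl enable (re-enable on-access scanning)
/opt/sophos-av/bin/savdstatus (check it is running again)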

Scan

>> Perform the scan -> this will create a log
savscan -nc -f -s --no-follow-symlinks --backtrack-protection --quarantine <path/to/scan> (manual scan)

>> Then check the log to see what was found by the manual scan
/opt/sophos-av/bin/savlog --today --utc | grep detected (check threats detected today)
grep INFECTED /opt/sophos-av/log/savd.log | grep -P -o '(?<=arg>)/[^<]*(?=</arg)' | sort -u (check all threats)
savscan --help

Example for multiple folders with final report:

(suggested to run in a screen session)

  1. Create a temporary folder:
    mkdir -p /tmp/scantmp/ && cd $_
  2. list all directories that you want to scan (full path) into a file called list_folder.txt within the temp folder;
  3. Run the following:
    for i in `cat list_folder.txt`; do nice -n 19 savscan -nc -f -s --no-follow-symlinks --backtrack-protection --quarantine $i >> scan.log 2>&1; done
    /opt/sophos-av/bin/savlog --today --utc | grep "Threat detected" | awk -F" " '{print $2}' > report.txt
    
  4. Check report.txt 

 

RPM – Yum notes

>> check for available updates for all packages, including those set as excluded in yum.conf
# yum check-update --disableexcludes=all

>> update only the security-related patches on the server
# yum update --security

>> how to roll back a recent package update
# rpm -Uvh --rollback '1 hour ago'
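The rpm rollback option is often unavailable on newer systems; there, yum's transaction history can undo a recent update instead (a sketch; the transaction ID comes from the list output):

# yum history list
# yum history undo <transaction-id>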

>> check if a package is patched against a particular CVE:
rpm -q --changelog {package-name} | grep CVE-NUMBER 
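For example, to check whether the installed openssl package includes the Heartbleed fix:

rpm -q --changelog openssl | grep CVE-2014-0160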

>> Check if a package is from repository or not
Example: httpd

1) Get the PID of httpd
# netstat -tnlp | grep httpd
tcp        0      0 :::80                       :::*                        LISTEN      7568/httpd          
tcp        0      0 :::443                      :::*                        LISTEN      7568/httpd        

# lsof -p 7568 | less
(and find the binary for httpd, which in this case is /usr/sbin/httpd)

# rpm -qf /usr/sbin/httpd
httpd-2.2.15-30.el6_5.x86_64

Verified: the package is part of the RH repositories.
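A quicker sanity check, if you only need to see who built and signed a package (a sketch):

# rpm -qi httpd | grep -E 'Vendor|Signature'
# yum info httpd | grep -i 'from repo'    (newer yum versions show the originating repository for installed packages)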

 

Develop your site using WordPress and publish a static version on Cloud Files

WordPress is a CMS widely used nowadays to build sites. Even though it wasn't created with this intent (blogging was actually the original purpose), its simplicity and popularity mean that loads of people use it to build their websites.

When the end goal is no longer a blog but a pure website, and you’re not using any ‘search’ or ‘comment’ features which still require php functions, it could be worth considering a completely “static’ed” site which is then published in a Cloud Files Public Container. You could take advantage of the built-in CDN capability and forget about scaling on demand and all the limitations that WordPress has in the Cloud.
Having a static site as opposed to a PHP site means you can bypass security patch installations and avoid the possible risk of compromised servers.
And last but not least: cost reduction!
When you serve a static site from Cloud Files you pay only for the space utilized and the bandwidth – no more charges for Cloud Server(s) and better performance! So… why not give it a try?! 🙂

Side note: this concept can be applied to any CMS and any Cloud Platform. In this post I will use WordPress and Rackspace Public Cloud as examples.

Here is a list of use cases:

  • An existing website built on WordPress, without search fields, comments or forms (dynamic content), which is still being updated on a regular basis with extra pages.
  • A brand new project where the end goal is to use WordPress as a Content Management System (CMS) to build a site, without any comments or search fields.
  • An old website built on an old version of WordPress that cannot be updated because some plugins won't work on newer versions of the CMS, with no dynamic functionality required.
  • A legacy site built on WordPress that is no longer being developed/updated but still needs to stay online, with no dynamic functionality required.

To be able to make a site built with a CMS completely static, any dynamic functionality needs to be disabled (e.g. comments, forms, search fields…).
It's possible to manually re-integrate some of it by relying on trusted external sources (e.g. Facebook comments) with the introduction of some JavaScript. But this requires editing the generated static pages, which is outside the scope of this article. I wanted to mention it anyway, as it could be a limitation when deciding whether to make the site static.

What do you need?

  • a Cloud Load Balancer: this is just to keep a static IP
  • a small Cloud Server LAMP stack (1/2GB max) where you will host your site
  • a Cloud Files Public container to host the static version of the site
  • access to your DNS
  • some basic Linux knowledge to install some packages and run a few commands via command line

How to set this up?

Imagine you have www.mywpsite.com built on WordPress.
It’s a site that you keep updating generally once or twice a week.
Comments are all disabled and you don’t have any search fields.
You found this post and you’re interested in trying to save some money, improve performance and forget about patching and updating your CMS.

So, how can you achieve this?

You will have to build a development infrastructure made up of a Cloud Load Balancer and a single Linux LAMP Cloud Server plus a Cloud Files Public Container for your production static site.

Your main domain www.mywpsite.com will need to point to the CNAME of the Cloud Files Container and no longer to your original site/server. You also need to create a new subdomain (e.g. admin.mywpsite.com) and point it to the development infrastructure: specifically, to the Cloud Load Balancer's IP. You will then use admin.mywpsite.com/wp-admin/ instead of www.mywpsite.com/wp-admin/ to manage your site.

The only reason to use a Load Balancer in a single-server solution is to keep the public IP, meaning you will no longer need to touch the DNS.
The goal is to image the server after you’ve completed making the changes and then delete it. In this way you won’t get charged for a resource that doesn’t serve any traffic and is there just for development. In this example, I’ve mentioned that changes are made a couple of times a week, so it’s cheaper and easier to always keep a Load Balancer with admin.mywpsite.com pointing to it, instead of a Cloud Server where you would be changing the DNS every time.
If you’re planning to keep the dev server up continuously, you could avoid using a Load Balancer entirely. However Cloud Servers might fail (we all know that Cloud is not designed to be resilient) so you may need to spin up a new server and have the DNS re-pointed again. In this case I do recommend keeping the DNS TTL as low as possible (e.g. 300 seconds).

Please note that the Site URL in your WordPress setup needs to be changed to admin.mywpsite.com. If you’re starting a new project you need to remember that the Site URL has to match the development subdomain that you have chosen. In our example, to admin.mywpsite.com.

Once you are happy with the site and everything works under admin.mywpsite.com, you will just need to run a command to make a static version of it, and another command to push the content onto the Cloud Files Container.

Once complete and verified, you can change the DNS for www.mywpsite.com to point to the Container, then image your dev server and delete it once the image has completed. You will create a new server from this image the next time you want to make a new change.

Here is a diagram that should help you to visualise the setup:

[diagram of the setup]

All clear?

So let’s do it!

How do you put this in practice?

Step by step guide (no downtime migration)
  1. Create a Cloud Server LAMP stack.
  2. Create a Cloud Load Balancer. Take note of its new IP.
  3. Add the server under the Load Balancer.
  4. Create a Cloud Files Public Container. Take note of its HTTP Public Link.
  5. Create the subdomain admin.mywpsite.com in your DNS and point it to the IP of your Cloud Load Balancer. Keep your www.mywpsite.com domain pointing to your live site to avoid downtime.
  6. Migrate the WordPress files and database onto the new server and make sure NO changes are made to the live site during the migration, to avoid data inconsistency.
  7. Change the Site URL to admin.mywpsite.com. Please note that this can be a bit tricky. Make sure all the references are properly updated. If it’s a new project, just install WordPress on it and set the Site URL directly to admin.mywpsite.com.
  8. Verify that you can access your site using admin.mywpsite.com and that the wp-admin panel also works correctly. Troubleshoot until this step is fully verified.

    NOTE: At this stage we have a new development site setup and the original live site still serving traffic. Next steps will complete the migration, having live traffic directly served from the Cloud Files Container.

  9. Install httrack and turbolift on your server.
    On a CentOS server you should be able to run the following:

    yum -y install python-pip httrack   # package names may vary and an extra repository (e.g. EPEL) may be needed
    pip install turbolift
  10. [OPTIONAL, but recommended to avoid useless bandwidth charges]
    Manually set /etc/hosts to resolve admin.mywpsite.com locally, to force httrack to pull the content without going via the public net:

    echo "127.0.0.1 admin.mywpsite.com' >> /etc/hosts
  11. Combine these two commands to locally generate a static version of your admin.mywpsite.com and push it to the Cloud Files Container.
    To achieve that, you can use a simple BASH script like the one below:
    Script project: https://bitbucket.org/thtieig/wp2static/

    #!/bin/bash
    
    #########################################################
    # Replace the quoted values with the right information: #
    # ------------------------------------------------------#
    
    USERNAME="yourusername"
    APIKEY="your_api_key"
    REGION="datacenter_region"
    CONTAINERNAME="mycontainer"
    DEVSITE="admin.mywpsite.com"
    
    #########################################################
    
    DEST=/tmp/$DEVSITE/ 
    USER=$(echo "$USERNAME" | tr '[:upper:]' '[:lower:]') 
    DC=$(echo "$REGION" | tr '[:upper:]' '[:lower:]')
    
    # Clean up / create destination directory
    rm -rf "$DEST" && mkdir -p "$DEST"
    
    # Create static copy of the site
    httrack $DEVSITE -w -O "$DEST" -q -%i0 -I0 -K0 -X -*/wp-admin/* -*/wp-login*
    
    # Upload the static copy into the container via local network SNET (-I)
    turbolift -u $USER -a $APIKEY --os-rax-auth $DC -I upload -s $DEST/$DEVSITE -c $CONTAINERNAME
    
    
  12. Now you can test if the upload went well.
    Just open the HTTP link of your Cloud Files Container in your browser and make sure the site is displayed correctly.
  13. If all works as expected, you can now make your www.mywpsite.com domain a CNAME of that HTTP link instead. Once done, your live traffic will be served by your Cloud Files Container and no longer by your old live site!
  14. And yes, you can now image your Cloud Server and delete it once completed. Next time you need to edit/add some content, just spin up another server from its latest image and add it under the Cloud Load Balancer.

Happy “static’ing“! 🙂

Plesk notes

 

>> Get FTP passwords
# mysql psa -e "select sys_users.login,sys_users.home,domains.name,accounts.password from sys_users,domains,accounts,hosting where sys_users.id=hosting.sys_user_id AND domains.id=hosting.dom_id AND accounts.id=sys_users.account_id"


>> Get email passwords
# /usr/local/psa/admin/sbin/mail_auth_view

>> Get the Plesk admin password
# /usr/local/psa/bin/admin --show-password <----- Plesk 10 and up
# cat /etc/psa/.psa.shadow <----- Plesk 6 and up


>> Check which MTA
# alternatives --display mta


>> check mailq (yum install pfHandle)
# pfHandle -s

!!! if you use qmail -> qmHandle


>> Check list of messages queued
# pfHandle -d

!!! If pfHandle does not work, just check inside /var/spool/postfix/



>> Connect to MySQL
mysql -uadmin -p`cat /etc/psa/.psa.shadow`
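Once connected you can query the psa database directly; for example, a sketch to list the domains Plesk knows about:

mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -e "select name from domains;"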


>> Check version
# cat /usr/local/psa/version 


>> Setup Holland
backupsets/default.conf

[mysql:client]
user = admin
password = file:/etc/psa/.psa.shadow 


>> Check license
/usr/bin/curl -s -k https://127.0.0.1:8443/enterprise/control/agent.php -H "HTTP_AUTH_LOGIN: admin" -H "HTTP_AUTH_PASSWD: `/usr/local/psa/bin/admin --show-password`" -H "HTTP_PRETTY_PRINT: true" -H "Content-Type: text/xml" -d "<packet> <server> <get> <key/> </get> </server> </packet>" | egrep -ohm 1 "PLSK\.[0-9]{8}"


>> Remove license (physically from the server)
[root@344668-web1 ~]# mv /etc/sw/keys/keys/keyXXNb8YmF  /home/user/
[root@344668-web1 ~]#


>> Plesk main logs
MAIL: /usr/local/psa/var/log/maillog
ACCESS LOGS: /var/www/vhosts/*/logs/access_log



>> One-liner to generate reports from the Access Logs

> General report
grep -h "04.Jun.2015" /var/www/vhosts/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -nr | head -n 20

> per site report
for i in `find /var/www/vhosts/*/logs/access_log -not -empty `;do echo -n "$i - " ; awk '{print $1}' $i | sort | uniq -c | sort -n | tail -1 ; done | sort -k3 -n | column -t




>> Add custom configuration to Apache under Plesk

# cd /var/www/vhosts/system/DOMAIN.com/conf        
If there is no vhost.conf file, create it and add the necessary custom configuration.

Then reconfigure the Plesk domain - this will include the custom vhost.conf file:
# /usr/local/psa/admin/sbin/httpdmng  -h
# /usr/local/psa/admin/sbin/httpdmng --reconfigure-domain DOMAIN.com



>> Disable SSLv3 on Plesk

If you need to disable SSLv3 on Plesk boxes, here is how to do it:

If nginx is running on port 443, use the following KB: http://kb.sp.parallels.com/en/120083
If Apache is configured on port 443, create /etc/httpd/conf.d/zz050-psa-disable-weak-ssl-ciphers.conf:

SSLHonorCipherOrder on
SSLProtocol -ALL +TLSv1
SSLCipherSuite ALL:!ADH:!LOW:!SSLv2:!EXP:+HIGH:+MEDIUM
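Then check the syntax and restart Apache so the new SSL settings take effect:

# apachectl -t && service httpd restart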


>> List / remove IP addresses in Plesk
# /usr/local/psa/bin/ipmanage -l
State Type IP                               Clients Hosting PublicIP 
1     S    eth0:172.54.10.212/255.255.252.0 0       0                
0     E    eth2:10.0.1.128/255.255.254.0 0       0                
0     S    eth0:172.54.10.27/255.255.252.0  0       161              
0     E    eth0:172.54.10.28/255.255.252.0  0       1  

# /usr/local/psa/bin/ipmanage -r 172.54.10.212
Error occured while sending feedback. HTTP code returned: 502
SUCCESS: Removal of IP '172.54.10.212' completed.

# /usr/local/psa/bin/ipmanage -l
State Type IP                               Clients Hosting PublicIP 
0     E    eth2:10.0.1.128/255.255.254.0 0       0                
0     S    eth0:172.54.10.27/255.255.252.0  0       161              
0     E    eth0:172.54.10.28/255.255.252.0  0       1  

 

PHP-FPM pool sizes explanation

If you see these settings in a PHP-FPM config file (a real-life example):

pm.max_children = 100
pm.start_servers = 30
pm.min_spare_servers = 30
pm.max_spare_servers = 100

What does it actually mean?

1) There will never be less than 30 processes (because it starts with 30, and minimum spare is 30)

2) There will always be at least 30 idle processes, except when more than 70 are in use.

3) Idle processes will never be killed off, unless they hit pm.max_requests (because the number of processes cannot possibly exceed the maximum spare)

Too many idle processes is bad, because they use up virtual memory. It is especially bad on a server with no swap, because then they can’t be shunted out of RAM to make room for active processes.
Also, remember that the settings are per-pool. On this particular server, there were three pools with identical settings. The result was that there were never less than 90 PHP processes, even if all were idle.

So please be mindful of the law of unintended consequences, especially when creating additional PHP FPM pools.
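For comparison, a more conservative pool on a small server could look something like this (illustrative values only; tune them against your real traffic and RAM):

pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500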

 

Credits to my ex colleague Danny 🙂