Fail2ban notes

General notes about Fail2ban

### Fail2Ban ###

Best practice:
- do NOT edit /etc/fail2ban/jail.conf BUT create a new /etc/fail2ban/jail.local file with your overrides

=============================================================
# Test fail2ban regex:
example: fail2ban-regex /var/log/secure /etc/fail2ban/filter.d/sshd.conf
example2: fail2ban-regex --print-all-matched /var/log/secure /etc/fail2ban/filter.d/sshd.conf
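
You can also test a single log line instead of a whole file. The sample line below is made up, but follows the standard sshd failure format:
example3: fail2ban-regex 'Jul 18 12:13:01 server sshd[2737]: Failed password for root from 203.0.113.5 port 2244 ssh2' /etc/fail2ban/filter.d/sshd.conf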

=============================================================
# Remove email notifications:

comment out 'sendmail-whois' from the action in [ssh-iptables]
NOTE: the # must be at the BEGINNING of the line, as in the example below, or it won't work!

[ssh-iptables]

enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
#           sendmail-whois[name=SSH, dest=root, [email protected], sendername="Fail2Ban"]
logpath  = /var/log/secure
maxretry = 5

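After changing any jail configuration, reload Fail2ban so the new settings take effect:

fail2ban-client reload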

=============================================================
# Wordpress wp-login - block POST attacks

/etc/fail2ban/jail.local

[apache-wp-login]
enabled = true
port = http,https
filter = apache-wp-login
logpath = /var/log/httpd/blog.tian.it-access.log
maxretry = 3
bantime = 604800 ; 1 week
findtime = 120

----------------------------------------------------------------------------------------------------------------------

/etc/fail2ban/filter.d/apache-wp-login.conf
[Definition]
failregex = <HOST>.*POST.*wp-login.php HTTP/1.1
ignoreregex =
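
You can verify the new filter against your access log with the same fail2ban-regex tool shown earlier:

fail2ban-regex /var/log/httpd/blog.tian.it-access.log /etc/fail2ban/filter.d/apache-wp-login.conf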

=============================================================

# Manually ban an IP:
fail2ban-client -vvv set <JAIL> banip <IP>

# Check status of sshd chain
fail2ban-client status sshd
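
# Manually unban an IP (the reverse of banip):
fail2ban-client set <JAIL> unbanip <IP>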

How to protect SSH from brute force attacks

If you want to make your remote server safer, it is good practice to combine a solid sshd setup with Fail2ban.

Firstly, you should set up your server to allow only key authentication, with no passwords. This drastically reduces the risk. It also means that you need to keep your SSH key safe, and you won't be able to access your server unless you have that key. Most of the time this is acceptable 🙂

For this reason, I’m explaining here how I configured my server.

SSHD

/etc/ssh/sshd_config

Have these settings in the config file (NOTE: the verbosity is for Fail2ban)

LogLevel VERBOSE

PasswordAuthentication no

(restart sshd)
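
For example, on a systemd-based distro (sshd -t simply validates the config before restarting):

sshd -t && systemctl restart sshd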

/etc/fail2ban/jail.local

[DEFAULT]
# Ban hosts for 
# one hour:
#bantime = 3600

# one day:
bantime = 86400

# A host is banned if it has generated "maxretry" failures during the last
# "findtime" seconds.
findtime  = 30

# "maxretry" is the number of failures before a host gets banned.
maxretry = 5

# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport

[sshd]
enabled = true
filter = sshd-aggressive
port     = ssh
logpath  = /var/log/secure
maxretry = 3
findtime = 30
bantime  = 86400

/etc/fail2ban/filter.d/sshd.conf

Add a custom definition after the ddos one:

custom = ^%(__prefix_line_sl)sDisconnected from <HOST> port [0-9]+ \[preauth\]$

This line matches whoever tries to connect without a proper ssh key.

Then add custom to the aggressive definition, so the sshd-aggressive filter picks it up:

aggressive = %(normal)s
             %(ddos)s
             %(custom)s
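
After editing the filter, reload Fail2ban and confirm the jail is active:

fail2ban-client reload
fail2ban-client status sshd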

 

Rsync – exclude

>> Exclude .txt files [! CASE SENSITIVE]
$ rsync -avz --exclude '*.txt' source/ destination/

>> Exclude from file list
$ cat exclude-list.txt
*.JPG
*.TMP
*.PDF
*.jpg
*.tmp
*.pdf
*.zip
relative/path1/
relative/path2/

$ rsync -avz --exclude-from 'exclude-list.txt' /source/path/ /dest/path/ | tee rsync-report.txt


>> Exclude directory 
$ rsync -avz --exclude 'folder1_within_source' --exclude 'folder2_within_source/subfolder2' source/ destination/

 

Screen – basic commands

>> Create a screen session (labelled)
screen -R 'myscreen'

>> Detach screen
Ctrl+A (hold), then D

>> Check current screen sessions
screen -ls

>>Re-attach screen session
screen -r <screen name, from screen -ls including the PID>
e.g. screen -r 4238.myscreen

>> Quit session
screen -X -S [session # you want to kill] quit
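
For example, using the session from screen -ls above:
screen -X -S 4238.myscreen quit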


=======================================

>> When it gets badly stuck

screen -ls | grep pts | cut -d. -f1 | awk '{print $1}' | xargs kill 
screen -ls | grep Attached | cut -d. -f1 | awk '{print $1}' | xargs kill 

ref: http://askubuntu.com/questions/356006/kill-a-screen-session

 

Lsyncd – basic setup

This is an example where you install Lsyncd on a CentOS master server and keep the folder /data in sync with a slave server with IP 10.0.0.3.

First of all, your master server needs an SSH key pair, AND the slave has to have the public key configured, to allow passwordless SSH connections.
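
A minimal sketch of that key setup (assuming you log in as root on the slave; adjust user and IP to your environment):

# on the master: generate a key pair if you don't have one yet
ssh-keygen -t rsa
# copy the public key to the slave
ssh-copy-id root@10.0.0.3
# test: this should log you in without a password prompt
ssh root@10.0.0.3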

Configure Lsyncd

/etc/lsyncd.conf

-- comments with "--"
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    statusInterval = 20
}

sync {
    default.rsync,
    source = "/data/",
    target = "10.0.0.3:/data/",
    rsync = {
        compress = true,
        archive = true,
        verbose = true,
        rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
    },
    -- excludeFrom = "/etc/lsyncd.exclusions"
}

 

Add the service and enable it

chkconfig --add lsyncd
chkconfig lsyncd on

On CentOS 7 use this instead:

systemctl enable lsyncd.service
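
Then start the service and check that it is running:

systemctl start lsyncd.service
systemctl status lsyncd.service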

 

Logrotate

Once installed, you also need to be sure that Lsyncd logs are managed by Logrotate.

Create/update this file: /etc/logrotate.d/lsyncd

/var/log/lsyncd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
    if [ -f /var/lock/lsyncd ]; then
      /sbin/service lsyncd restart > /dev/null 2>/dev/null || true
    fi
    endscript
}

 

On CentOS 7, you need to use systemctl instead of the service command:

/var/log/lsyncd/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
    if [ -f /var/lock/lsyncd ]; then
      /bin/systemctl restart lsyncd.service > /dev/null 2>/dev/null || true
    fi
    endscript
}

Test the logrotate config

You can test this using the command:

logrotate -d /etc/logrotate.d/lsyncd
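
If the dry run looks good, you can also force an immediate rotation to exercise the postrotate script:

logrotate -f /etc/logrotate.d/lsyncd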

 

For more advanced Lsyncd configuration, check this article 🙂

Space utilisation one-liners

# Current folder space
du -sh <path>

# 10 biggest folders
du -m <path> | sort -nr | head -n 10

# Check high directories usage.
du -hcx --max-depth=5 | grep '[0-9]G' | sort -nr

# Exclude a path from the final calculation
cd /path
du -sh --exclude=./relative/path/to/uploads

# Check APPARENT size
du -h --apparent-size /path/file



# Check how much space is "wasted":
lsof | grep deleted | sed 's/^.* \(REG.*deleted.*$\)/\1/' | awk '{print $5, $3}' | sort | uniq | awk '{sum += $2 } END { print sum }'

# >> *if* the number is like "1.5e+10", you might need to use this to see that converted in MB or GB
lsof | grep deleted | sed 's/^.* \(REG.*deleted.*$\)/\1/' | awk '{print $5, $3}' | sort | uniq | awk '{sum += $2 } END { print sum " bytes - " sum/1024**2 " MB - " sum/1024**3 " G" }'

# Check the biggest files:
lsof | grep deleted | sed 's/^.* \(REG.*deleted.*$\)/\1/' | awk '{print $5, $3}' | sort | uniq | awk '{print $2, $1}' | sort -nr

>> then you can grep the file name from the output of "lsof | grep deleted", find the PID that holds that file (second column)
>> and issue the following command:
kill -HUP <PID>
>> Then check again. This should release the space used by the deleted file.

 

Apparent size is the number of bytes your applications think are in the file. It’s the amount of data that would be transferred over the network (not counting protocol headers) if you decided to send the file over FTP or HTTP. It’s also the result of cat theFile | wc -c, and the amount of address space that the file would take up if you loaded the whole thing using mmap.

Disk usage is the amount of space that can’t be used for something else because your file is occupying that space.

In most cases, the apparent size is smaller than the disk usage because the disk usage counts the full size of the last (partial) block of the file, and apparent size only counts the data that’s in that last block. However, apparent size is larger when you have a sparse file (sparse files are created when you seek somewhere past the end of the file, and then write something there — the OS doesn’t bother to create lots of blocks filled with zeros — it only creates a block for the part of the file you decided to write to).

Source (clarification): http://stackoverflow.com/questions/5694741/why-is-the-output-of-du-often-so-different-from-du-b 

MySQL notes

MySQL backup – mysqldump
shell> mysqldump [options] db_name [tbl_name …] > db_name.sql
shell> mysqldump [options] --databases db_name … > multi_db.sql
shell> mysqldump [options] --all-databases > all_dbs.sql

Importing MySQL Table
To import the table run the following command from the command line:
shell> mysql -D dbname < tableName.sql

Check database space
SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema ;

MySQL Uptime
a) mysql> SHOW GLOBAL STATUS;
b) # mysqladmin version | grep -i uptime

innodb_open_files
mysql> show global variables like 'innodb_open_files'\G

Binary Logs
>> Enable:
> /etc/my.cnf
log-bin = /var/lib/mysql/bin-log

Enable the slow query log
slow-query-log = 1

Log queries that take longer than 2 seconds
long-query-time = 2
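
After a restart you can confirm the settings were picked up (inside MySQL the variable names use underscores):

mysql> show variables like 'slow_query%';
mysql> show variables like 'long_query_time';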

Set ‘max_connections’:
>> On the fly (GLOBAL variable: we can increase it on the fly without restarting the mysqld service)
[Check] select @@global.max_connections;
[Set] set @@global.max_connections=300;
[Re-Check] select @@global.max_connections;
(or mysql> set global max_connections=250;)

>> CHANGE in /etc/my.cnf (dashes and underscores are interchangeable in option names, so both spellings work)
max_connections = 50
max-connections = 50

>> Reset to the default value:
set @@global.max_connections=default;

Set the query_cache_size to 16MB, query_cache_type to 1 and query_cache_limit to 1MB

mysql> set global query_cache_size=16*1024*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> set global query_cache_type=1;
Query OK, 0 rows affected (0.00 sec)

mysql> set global query_cache_limit=1*1024*1024;
Query OK, 0 rows affected (0.00 sec)

Check variables
select @@global.max_connections;
OR
show variables;
show variables like '%max%';

Disable InnoDB
[mysqld]
skip-innodb
default-storage-engine = myisam

Check if Query Cache is enabled:
SHOW VARIABLES LIKE 'have_query_cache';

Check Query Cache statistics:
show status like 'Qcache%';

MySQL’s maximum memory usage is dangerously high
>> (read_buffer_size + read_rnd_buffer_size + sort_buffer_size + thread_stack + join_buffer_size) x max_connections
=> change max_connections

wait_timeout (global variable)
mysql> show processlist;
If there are too many queries, it might be a bug in the code (for example, connections are never closed). In this case, it would be safer to set a low wait_timeout, maybe 180 (seconds -> 3 mins), to make sure the sleeping connections get dropped at that point.
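
For example, on the fly and then persisted in /etc/my.cnf:

mysql> set global wait_timeout=180;

[mysqld]
wait_timeout = 180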

 

 

LVM for dummies

You have your disk /dev/sdc

You need to cfdisk/fdisk it to set the "Linux LVM" flag (type 8E in cfdisk).

After that, you need to make this partition/device a physical volume (pvcreate /dev/sdc1) to make this device "usable" in a Volume Group (VG).

The VG is basically a huge disk that can be partitioned into Logical Volumes (LVs).

Once that is done, you need to extend the VG to include this new device (PV) => vgextend vglocal00 /dev/sdc1

Now the space is available to the VG vglocal00 and can be used to create/extend Logical Volumes (LV), which are some sort of “partitions” of the VG.

The LV is your “new device to format”.

DISK --> 8E flag --> PV ---> VG ---> LV1
			      |_____ LV2
			      |_____ LV3
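
Putting it all together, a minimal sketch (the device /dev/sdc and VG vglocal00 come from the example above; the LV name, size and filesystem are just placeholders):

pvcreate /dev/sdc1
vgextend vglocal00 /dev/sdc1
lvcreate -L 10G -n lvdata00 vglocal00
mkfs.ext4 /dev/vglocal00/lvdata00
mkdir -p /mnt/data && mount /dev/vglocal00/lvdata00 /mnt/data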

 

GIT basic commands

Create a new branch

git branch <new branch name>

Show changes after your last commit

git diff

Roll back a specific file to the latest commit

git checkout -- testfile

Delete branch

git branch -D <branch name>

Push new branch to the origin (my ‘git space’)

git push -u origin <branch name>

Restore file from upstream

git checkout upstream/master -- <filename>

Commit changes in one single line

git commit -a -m "comment"

If you want to merge the recent changes committed on the master branch into your dev branch

git checkout dev      # gets you "on branch dev"
git fetch origin        # gets you up to date with origin
git merge origin/master

If you want to reset ALL from the version ‘on the web’

git fetch origin
git reset --hard origin/<branch>

Source: http://rogerdudler.github.io/git-guide/

Docker basic commands

Check containers

# docker ps -a

Connect to a container

# docker start <ID>
# docker attach <ID>

Exit from a container

-> type 'exit' (note: this stops the container; to detach and leave it running, press Ctrl+P then Ctrl+Q)

Remove all Docker containers:

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)