Windows 10 – VMware disk cleanup and shrink

Simple batch script to run as Administrator in order to clean up the disk, defrag and shrink it.

Please note that the shrink works only if VMware Tools is installed on the guest VM.

@ECHO OFF
REM Make sure to run the script as Administrator
whoami /groups | find "S-1-16-12288" > nul

if %errorlevel% == 0 (
 echo Welcome, Admin
) else (
 echo You must run this script as Administrator. Aborting...
 goto EOF
)

REM Enable components to cleanup
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Active Setup Temp Folders" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\BranchCache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Downloaded Program Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameNewsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameStatisticsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameUpdateFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Internet Cache Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Memory Dump Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Offline Pages Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Old ChkDsk Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Previous Installations" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Recycle Bin" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Service Pack Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Setup Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error memory dump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error minidump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Setup Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Sync Files" /V StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Thumbnail Cache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Update Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Upgrade Discarded Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\User file versions" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Defender" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows ESD installation files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Upgrade Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
 
REM Run cleanup
IF EXIST %SystemRoot%\SYSTEM32\cleanmgr.exe START /WAIT cleanmgr /sagerun:100

echo *** DEFRAGGING DRIVES ***
echo DEFRAG C:
REM Windows 10's defrag no longer accepts the old XP-style -f flag
defrag C: /U /V

if not exist "C:\Program Files\VMware\VMware Tools" goto NOVMWARE
echo.
echo *** SHRINKING DRIVES ***
cd "C:\Program Files\VMware\VMware Tools\"
VMwareToolboxCmd.exe disk shrink c:\

if not exist D:\ goto NODDRIVESHRINK
VMwareToolboxCmd.exe disk shrink D:\
:NODDRIVESHRINK
:NOVMWARE


echo *** SHUTTING DOWN ***
shutdown -s -t 30

:EOF
echo Terminating script
pause

Save this content in a .bat file and… enjoy it!

Manage PDF files

Merge multiple files into a single PDF

I’m sure we’ve all needed to send back a single PDF file, maybe a signed contract. Yes, those 20 or more pages that you need to return, probably with just two of them filled in and signed.

Some PDFs let you sign them digitally. But in my experience, most of them aren’t so modern.

So, what do I do?

I print ONLY the pages that I need to sign, scan them and here I am, with the need to “rebuild” the PDF, replacing the signed pages.

Example.
You have the file contract.pdf, with 20 pages and you need to sign page 10 and page 20.
The scan has a different resolution (or, even worse, it’s a different format, like jpg).

Here’s the command to make the magic happen:

convert contract.pdf[0-8] mypage10.jpg contract.pdf[10-18] mypage20.jpg -resize 1240x1753 -extent 1240x1753 -gravity center -units PixelsPerInch -density 150x150 contract_signed.pdf

The bit before -resize is pretty self-explanatory: pages 1-9 of the original, the scanned page 10, pages 11-19, then the scanned page 20 (ImageMagick page indices start at 0). The bit after is a way to have all pages fit an A4 format with a good printable resolution: A4 is 8.27 x 11.69 inches, so at 150 DPI that’s roughly 1240 x 1753 pixels.

Of course, to make this happen, you need Linux (or WSL on Windows 10) with ImageMagick installed.

Another way is using ghostscript.

A simple Ghostscript command to merge two PDFs into a single file is shown below:

gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=combine.pdf -dBATCH 1.pdf 2.pdf

What about a quick one-liner to reduce your PDF and convert it to grayscale?

ghostscript -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

Reduce PDF size

Sometimes, instead, you need to reduce the size of an existing PDF. Here’s a handy one-liner, using Ghostscript:

ghostscript -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

Other options for PDFSETTINGS:

  • /screen selects low-resolution output similar to the Acrobat Distiller “Screen Optimized” setting.
  • /ebook selects medium-resolution output similar to the Acrobat Distiller “eBook” setting.
  • /printer selects output similar to the Acrobat Distiller “Print Optimized” setting.
  • /prepress selects output similar to Acrobat Distiller “Prepress Optimized” setting.
  • /default selects output intended to be useful across a wide variety of uses, possibly at the expense of a larger output file.
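If you’re unsure which preset fits, a quick loop like this (a sketch; output_<setting>.pdf are hypothetical names, assuming input.pdf is in the current directory) produces one file per preset so you can compare sizes:

for s in screen ebook printer prepress default; do
  gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/$s -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output_$s.pdf input.pdf
done
du -h output_*.pdf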

Happy PDF’ing 🙂


Sources:
https://stackoverflow.com/questions/23214617/imagemagick-convert-image-to-pdf-with-a4-page-size-and-image-fit-to-page
https://www.shellhacks.com/merge-pdf-files-linux-command-line/

https://gist.github.com/firstdoit/6390547

Migrate Linux Subsystem from one PC to another

Are you enjoying your favorite Linux distro running within the Windows 10 Linux Subsystem?

Have you got everything configured nicely?

What happens if you get a new PC and you’d like to migrate your VM across?

This is what happened to me. And looking around, I found this post that gave me this kinda-dirty way, which did work!

After that, I decided to review the steps, and I’ve added these directories to the exclude list, to make the export/import process clearer:

/dev
/proc
/sys
/run
/tmp
/media
/mnt
/var/cache
/var/run

Of course, if you have important data in these folders and you want to move it across too, just update the one-liner below accordingly. 😉

On your OLD PC

  • Open your Linux VM
  • Get inside your Downloads directory (replace <user> with your username):

    cd /mnt/c/Users/<user>/Downloads
  • Make sure to be root (sudo su -)
  • Run:

    tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run --exclude=/tmp --exclude=/media --exclude=/mnt --exclude=/var/cache --exclude=/var/run --one-file-system /

    NOTE: you could achieve the same using the option --exclude-from=file.txt, keeping the list of exclusions in that file (see the sketch after this list). I used a one-liner as it’s quicker to copy and paste.
  • Once done, close your Linux VM
  • Verify that you have a new file called backup.tar.gz in Downloads
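For reference, the --exclude-from variant mentioned in the NOTE could look roughly like this (a sketch; exclude.txt is a hypothetical name):

cd /mnt/c/Users/<user>/Downloads
printf '%s\n' /backup.tar.gz /dev /proc /sys /run /tmp /media /mnt /var/cache /var/run > exclude.txt
tar -cvpzf backup.tar.gz --exclude-from=exclude.txt --one-file-system /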

On your NEW PC

  • Install from the Microsoft Store the same Linux VM (or reinstall it in the same way you did originally on your old PC)
  • Copy your backup.tar.gz across into your new Downloads folder
  • Open the VM that you’ve just installed (minimal setup – this will be completely overwritten, so don’t be bothered too much)
  • Once you’re inside and your backup.tar.gz is in Downloads, run the following (replace <user> with your username):

    sudo tar -xpzf /mnt/c/Users/<user>/Downloads/backup.tar.gz -C / --numeric-owner
  • Ignore the errors
  • Close and re-open the VM: DONE! 🙂

Happy migration! 😉

Auto mount an encrypted IMG file stored on NFS share

Yes, here we are again.
Now that I have a NAS at home, it’s about time to get rid of all these single USB disks connected to my Raspberry Pis.

I have a share called nfsshare available from my NAS (IP: 192.168.1.10). The full share path is 192.168.1.10:/volume1/nfsshare. My NAS handles NFS version 4.

So, here’s what I’ve done to set up my Banana Pro with Armbian based on Debian 10 (buster).

Configure NFS client

First of all, we need to create the mount point where we’re going to access the NFS share (let’s use /nfs) and install the packages for NFS.

mkdir /nfs
apt-get install nfs-common

Once done, a minimal tuning of idmapd.conf, if you have defined a domain/workgroup within your network. In this example I’m using mydomain.loc.

sed -i 's/#Domain = local.domain.edu/Domain = mydomain.loc/' /etc/idmapd.conf
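Before touching fstab, it doesn’t hurt to sanity-check what the NAS actually exports (showmount ships with nfs-common; note that some NFSv4-only servers won’t answer it):

showmount -e 192.168.1.10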

Update our /etc/fstab file, to make sure it mounts at boot, and test if all works as expected:

192.168.1.10:/volume1/nfsshare /nfs nfs4 auto,_netdev,nofail,noatime,nolock 0 0

I have used _netdev to make sure that the system understands that this needs the network up before trying to mount, and, if something goes wrong, the boot continues (nofail). This is very handy on systems without a proper monitor where you rely on ssh connections.

Now, with a simple mount /nfs command, you should be able to get the share mounted. df -Th or mount are the commands I would use to verify.

Cool, we now have the share mounted. Issue a quick shutdown -r now to see if all works as expected. Once your device is back online, ssh into it and check with df -Th or mount again. Hopefully, you can see your NFS share mounted on /nfs.

Create and configure your Encrypted “space”

I have already discussed something about encrypted devices in another post. This will be a revised version of the previous post, without custom scripts, but simply using what Debian offers out of the box.

Create an empty IMG file to host our encrypted space

I have decided to create 500GB of encrypted space to store my data. To do so, I did the following:

  • install the required software for encryption
  • create a sparse file (on my /nfs share)
  • encrypt it
  • format it (ext4)
  • setup the auto mount
apt-get install cryptsetup

dd of=/nfs/file_container.img bs=1 count=0 seek=500G

cryptsetup -y luksFormat /nfs/file_container.img
cryptsetup luksOpen /nfs/file_container.img cryptcontainer

mkfs.ext4 -L cryptarchive /dev/mapper/cryptcontainer

During the above steps, you will be asked to set a passphrase, and then to use it to open the IMG file. Pretty straightforward, isn’t it?

Cool. Now we have a 500GB sparse file called file_container.img stored on our share /nfs, ready to be mounted somewhere and utilised.
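You can convince yourself that the file is really sparse by comparing the blocks actually used with the apparent size (the first should be tiny, the second should report the full 500G):

du -h /nfs/file_container.img
du -h --apparent-size /nfs/file_container.img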

To make sure we can mount at boot, we need a secret key that we are going to use to decrypt the IMG file without typing any passphrase.

Let’s create this key, stored under /root (in this example). You can store it wherever you want, as long as it’s accessible before the decryption starts. Another good place is /boot.

dd if=/dev/urandom of=/root/keyfile bs=1024 count=4
chmod 0400 /root/keyfile

Now we need to add this key to the IMG file:

cryptsetup luksAddKey /nfs/file_container.img /root/keyfile
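To double-check that the key was added, a quick look at the LUKS header should now show two key slots in use:

cryptsetup luksDump /nfs/file_container.img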

The next step is to instruct /etc/crypttab with the details about our encrypted file and its key.
Just add the following line at the end of the /etc/crypttab file:

cryptcontainer /nfs/file_container.img /root/keyfile luks

Now, there is a problem. Your OS needs to know that the IMG file isn’t stored locally and has a dependency on the NFS share. In fact, if the OS tries to decrypt the IMG file before mounting the NFS share, it will fail, and you might get stuck in a never-ending boot, sometimes forcing you to fetch your mini monitor for troubleshooting, a spare keyboard, and to assume anti-human positions to reach your small Pi etc etc… basically, a nightmare!

So, here’s a trick that seems to work.
In Debian, there is a file called /etc/default/cryptdisks.
Within this file, we are going to make sure that CRYPTDISKS_ENABLE is set to yes and CRYPTDISKS_MOUNT is set to our NFS mount (/nfs). In this way, the service that handles the encryption/decryption will wait for /nfs to be mounted before starting.
IMPORTANT: this must be a mountpoint within /etc/fstab

Here’s the content of my /etc/default/cryptdisks file:

# Run cryptdisks initscripts at startup? Default is Yes.
CRYPTDISKS_ENABLE=Yes

# Mountpoints to mount, before cryptsetup is invoked at initscripts. Takes
# mountpoints which are configured in /etc/fstab as arguments. Separate
# mountpoints by space.
# This is useful for keyfiles on removable media. Default is unset.
CRYPTDISKS_MOUNT="/nfs"

# Default check script. Takes effect, if the 'check' option is set in crypttab
# without a value.
CRYPTDISKS_CHECK=blkid

Amazing! Now, just the last bit: update /etc/fstab with a reference to our device. So far we have set up everything necessary to open the encrypted IMG file and associate it with a mountable device, but we haven’t mounted it yet!

Create the mount point:

mkdir /cryptoarchive

Update /etc/fstab, appending this line:

/dev/mapper/cryptcontainer /cryptoarchive ext4 defaults,nofail 0 2

Again, note the nofail, as for the NFS share, to avoid the boot process getting stuck in case of errors, and to allow you to ssh into the device and troubleshoot.

Now we’re ready to try a mount /cryptoarchive, the usual df -Th and mount checks, and also a shutdown -r now, to verify that the NFS share gets mounted and the encrypted IMG disk mounted and available too.

Happy playing! 😉

Reduce fail2ban.sqlite3 file

You might face an increase in the size of the file /var/lib/fail2ban/fail2ban.sqlite3.

Here are a few commands that allow you to dig within the DB and clean up some rows, reducing its size.

Open the db:
sqlite3 /var/lib/fail2ban/fail2ban.sqlite3

Now, check all the tables available:
sqlite> .tables
bans fail2banDb jails logs

Generally, the “bans” table is the one that uses the most space. You can check the content of this table using a SELECT statement like:
sqlite> SELECT * FROM bans LIMIT 1;
With this, you can check a single row, with all its fields and content.
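Before deleting anything, a couple of plain SQLite queries (nothing fail2ban-specific here) give you an idea of how many rows there are and how old they get:

sqlite> SELECT COUNT(*) FROM bans;
sqlite> SELECT MIN(DATE(timeofban, 'unixepoch')), MAX(DATE(timeofban, 'unixepoch')) FROM bans;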

If you identify, for example, that there are very old entries (in my case, entries from 2 years ago, from 2018 and 2019), you can trim all those entries with this command:
sqlite> DELETE FROM bans WHERE DATE(timeofban, 'unixepoch') < '2020-01-01'; VACUUM;

After running the above command, my DB shrank.
A restart of the fail2ban service will reload the DB and release the space used by the previous one.
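On a systemd-based distro, that would be, for example:

systemctl restart fail2ban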

Sources:
https://jim-zimmerman.com/?p=1234
https://serverfault.com/questions/1002315/fail2bans-database-is-too-large-over-500mb-how-do-i-get-it-to-a-reasonable-s

Linux WiFi manual setup

You might have found your laptop no longer booting into its nice GUI, with Network Manager handling your WiFi connection. Maybe due to a failed update or a broken package.

Well, it happened to me for exactly that reason: some issues with an upgrade. And how can you fix a broken package or dependency without an internet connection?

Oooh yes, that’s a nightmare! Thankfully, I found this handy article, from which I will list some handy commands that helped me restore the connection on my laptop, allowing me to fix the upgrade and restore its functionality.

NOTE: I had iwconfig and wpasupplicant already installed. If not, I would have had to download the packages and all their dependencies and manually install them with the dpkg -i command.
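For the record, the offline install would look roughly like this (a sketch: run the download on a machine with a working connection, repeat it for every dependency dpkg complains about, then copy the .deb files across):

apt-get download wpasupplicant
# ...repeat for each missing dependency, copy the .deb files over, then:
dpkg -i *.deb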

Identify the name of your WiFi interface

iwconfig

This should return something like wlp4s0

Assuming that you already know the SSID of your WiFi (e.g. HomeFancyWiFi) and the password (e.g. myWiFiPassw0rd), you can run this command directly:

wpa_passphrase HomeFancyWiFi myWiFiPassw0rd | sudo tee /etc/wpa_supplicant.conf
wpa_supplicant -c /etc/wpa_supplicant.conf -i wlp4s0

This will generate the config file and connect to the WiFi. Once you see that all works as expected, you can use the -B flag to put wpa_supplicant in the background and release the terminal.

wpa_supplicant -B -c /etc/wpa_supplicant.conf -i wlp4s0

Alternatively, you can move to another tab (ALT+F1, F2, F3… in the text mode console), and run dhclient to get an IP and the DNS set.

dhclient wlp4s0

Once done, you can run ip addr (iwconfig shows the wireless status, not the IP) to verify that the interface has an address, do some basic network troubleshooting like ping etc. to make sure all works, and then go back to fixing your broken upgrade 🙂
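A minimal post-connection check (assuming your interface is wlp4s0) could be:

ip addr show wlp4s0
ping -c 3 8.8.8.8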

MySQL Replication

This is a copy and paste of some old notes about MySQL replication. I have never fully reviewed this content, nor finished the script. I’m saving it anyway, in case I need some of this info in the future 😉
MySQL Replication NOTES

Master Setup – /etc/my.cnf changes

# The following items need to be set:
log-bin=/var/lib/mysqllogs/ServerID-theServerShortName-binary-log
binlog-format=MIXED
expire_logs_days=7
server-id=<server_number>

# replication user
PASS=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w12 | head -n1)
echo "This is the password (take note): $PASS"
mysql -e "GRANT REPLICATION SLAVE ON *.* to repl_user IDENTIFIED BY '$PASS'"

# Dump and copy across
mysqldump -A --flush-privileges --master-data=1 | gzip -1 > ~myuser/master.sql.gz
scp ~myuser/master.sql.gz $SLAVEIP:/home/myuser

# Restart Master
service mysqld restart

# === take notes of the following ====
# Get replication POSITION
zgrep -m 1 -P 'CHANGE MASTER' ~myuser/master.sql.gz | sed 's/^.*\(MASTER_LOG_FILE=.*\)$/\1/'

# Get new MySQL password to set on the slave
grep password /root/.my.cnf | awk -F= '{print $2}'

Slave Setup

# Verify timezones match between master and slave!

# /etc/my.cnf changes

# The following items need to be set:
relay-log=/var/lib/mysqllogs/ServerID-theServerShortName-relay-log
relay-log-space-limit = 16G
read-only=1
server-id=<server_number>
report-host=<server_number> #This allows show slave hosts; to work on the master.

# Import the data
echo "zcat /home/myuser/master.sql.gz | mysql"

# Update /root/.my.cnf with password set in the Master (importing ALL the db will overwrite users and passwords too)

# Restart Slave
service mysqld restart

# Enable replication (replace accordingly with the position from the Master's steps above)
mysql
mysql> CHANGE MASTER TO MASTER_HOST = '$MASTERIP', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = '$PASS', MASTER_LOG_FILE='752118-Db01A-binary-log.000001', MASTER_LOG_POS=107;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G

==========================================================================
Trying to automate: ****DRAFT*****

#>>> On MASTER <<<#

MASTERIP=""
SLAVEIP=""

# On DEDICATED:
MYHOST=$(hostname -a)
SERVERID=$(echo $MYHOST| awk -F- '{print $1}')

# On CLOUD:
MYHOST=$(hostname)
SERVERID=$(echo $MYHOST| awk -F- '{print $1}')

#>> Create a dump and copy across
mysqldump -A --flush-privileges --master-data=1 | gzip -1 > ~myuser/master.sql.gz
scp ~myuser/master.sql.gz $SLAVEIP:/home/myuser/

#>> Set my.cnf

#> Unset possible pre-sets
for LINE in log-bin binlog-format expire_logs_days server-id ; do sed -i "/^.*$LINE.*=.*$/ s/^/#/" -i /etc/my.cnf ; done

#> Make sure all are commented out
for LINE in log-bin binlog-format expire_logs_days server-id ; do grep $LINE /etc/my.cnf ; done

#> Apply new parameters
PASS=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w12 | head -n1)
sed -i "/\[mysqld\]/a \#REPLICATION\nlog-bin=\/var\/lib\/mysqllogs\/$SERVERID-binary-log\nbinlog-format=MIXED\nexpire_logs_days=7\nserver-id=$SERVERID" /etc/my.cnf

service mysqld restart

#>> Set replication user
mysql -e "GRANT REPLICATION SLAVE ON *.* to repl_user IDENTIFIED BY '$PASS'"

#>> Get output to run on the SLAVE

echo "zcat /home/myuser/master.sql.gz | mysql"

POSITION=$(zgrep -m 1 -P 'CHANGE MASTER' ~myuser/master.sql.gz | sed 's/^.*\(MASTER_LOG_FILE=.*\)$/\1/')
echo "mysql -e \"CHANGE MASTER TO MASTER_HOST = '$MASTERIP', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = '$PASS', $POSITION\""

# Example of what $POSITION looks like:
# MASTER_LOG_FILE='752118-Db01A-binary-log.000001', MASTER_LOG_POS=107;

#>>> On SLAVE <<<#
for LINE in relay-log relay-log-space-limit read-only server-id report-host ; do grep $LINE /etc/my.cnf ; done

relay-log=/var/lib/mysqllogs/ServerID-theServerShortName-relay-log
relay-log-space-limit = 16G
read-only=1
server-id=<server_number>

report-host=<server_number> #This allows show slave hosts; to work on the master.

 

Docker How to

This is a collection of notes extracted from the Udemy course Docker Mastery.

 

Install docker

$ sudo curl -sSL https://get.docker.com/ | sh

 

  • Docker now has a versioning scheme like Ubuntu's YY.MM
  • prev Docker Engine => Docker CE (Community Edition)
  • prev Docker Data Center => Docker EE (Enterprise edition) -> includes paid product and support
  • 2 versions:
    • Edge: released monthly and supported for a month.
    • Stable: released quarterly and support for 4 months (extend support via Docker EE)

 

$ docker version
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:54 2017
OS/Arch: linux/amd64

Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:54 2017
OS/Arch: linux/amd64
Experimental: false

 

Client -> the CLI installed on your current machine
Server -> the Engine, always on; it's the one that receives commands from the Client via API

New format:

docker <command> <subcommands> [opts]

 

Let’s play with Containers

Create a Nginx container:

$ docker container run --publish 80:80 --detach nginx

=> publish: connect the local machine (host) port 80 to the container's port 80
=> detach: run the container in background
=> nginx: this is the image we want to run. Docker will look locally to see if there is a cached image; if not, it will get the default public 'nginx' image from Docker Hub, using nginx:latest (unless you specify a version/tag)

NOTE: every time you do 'run', the Docker Engine won't clone the image; it will run an extra layer on top of the image, assign a virtual IP, do the port binding (if requested) and run whatever is specified under CMD in the Dockerfile

$ docker container run --publish 80:80 --detach nginx
c984b4231c5bf532efece5ee1a0d553182c4526e792a772d68d2c68204491d4e

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c984b4231c5b nginx "nginx -g 'daemon ..." 12 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp jolly_edison

$ docker container stop c98
c98

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c984b4231c5b nginx "nginx -g 'daemon ..." 27 seconds ago Exited (0) 4 seconds ago jolly_edison
bf3de98723a2 nginx "nginx -g 'daemon ..." 2 minutes ago Exited (0) 2 minutes ago angry_agnesi
957a1a710145 nginx "nginx -g 'daemon ..." 5 minutes ago Exited (0) 4 minutes ago infallible_colden

CURIOSITY: if not specified, the name gets automatically created from a random open source list of adjectives and famous scientists (e.g. jolly_edison above)

Check what’s happening within a container

$ docker container top <container_name>

$ docker container logs <container_name>

$ docker container inspect <container_name>

$ docker container stats # global live view of containers' stats
$ docker container stats <container_name> # live view of a specific container

 

$ docker container run --publish 80:80 --detach --name webhost nginx
5f8314d5d4e0907025578b696d5d1f5df3633620ee64575bfee5b8441054e168

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f8314d5d4e0 nginx "nginx -g 'daemon ..." 5 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp webhost

$ docker container logs webhost
172.17.0.1 - - [06/Jun/2017:11:15:38 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0" "-"
172.17.0.1 - - [06/Jun/2017:11:15:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0" "-"
172.17.0.1 - - [06/Jun/2017:11:15:40 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0" "-"

$ docker container top webhost
UID PID PPID C STIME TTY TIME CMD
root 4467 4449 0 12:15 ? 00:00:00 nginx: master process nginx -g daemon off;
systemd+ 4497 4467 0 12:15 ? 00:00:00 nginx: worker process

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f8314d5d4e0 nginx "nginx -g 'daemon ..." 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp webhost
c984b4231c5b nginx "nginx -g 'daemon ..." 6 minutes ago Exited (0) 6 minutes ago jolly_edison
bf3de98723a2 nginx "nginx -g 'daemon ..." 8 minutes ago Exited (0) 8 minutes ago angry_agnesi
957a1a710145 nginx "nginx -g 'daemon ..." 11 minutes ago Exited (0) 10 minutes ago infallible_colden

$ docker container rm 5f8 c98 bf3 957
c98
bf3
957
Error response from daemon: You cannot remove a running container 5f8314d5d4e0907025578b696d5d1f5df3633620ee64575bfee5b8441054e168. Stop the container before attempting removal or force remove

=> Safety measure. You can't remove running containers, unless you use -f to force it.

 

The process that runs in the container is clearly visible and listed on the main host by simply running ps aux.
In fact, a process running in a container is a process that runs on the host machine, but just in a separate user space.

$ docker container run --publish 80:80 --detach --name webhost nginx
6d6700cf4d98745cdb51a2267a0b1a69d33a60ed5a23786316af9f4993af793d

$ docker top webhost
UID PID PPID C STIME TTY TIME CMD
root 5455 5436 0 12:33 ? 00:00:00 nginx: master process nginx -g daemon off;
systemd+ 5487 5455 0 12:33 ? 00:00:00 nginx: worker process

$ ps aux | grep nginx
root 5455 0.0 0.0 32412 5168 ? Ss 12:33 0:00 nginx: master process nginx -g daemon off;
systemd+ 5487 0.0 0.0 32916 2500 ? S 12:33 0:00 nginx: worker process
user 5547 0.0 0.0 14224 968 pts/1 S+ 12:33 0:00 grep --color=auto nginx

 

Change default container’s command

$ docker container run -it --name proxy nginx bash

=> -t -> allocate a pseudo-TTY; -i -> interactive
=> ‘bash‘ -> command we want to run once the container starts
When you create this container, you change the default command to run.
This means that the nginx container started ‘bash’ instead of the default ‘nginx’ command.
Once you exit, the container stops. Why? Because a container runs only as long as its main process runs.
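A quick way to see this behaviour (a sketch; lifetime-test is a hypothetical name, using the alpine image):

$ docker container run --name lifetime-test alpine echo "bye"
bye
$ docker container ls -a
# lifetime-test shows as Exited (0): the container stopped together with its main process (echo)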

Instead, if you want to run 'bash' as an ADDITIONAL command, you need to use this on an EXISTING/RUNNING container:

$ docker container exec -it <container_name> bash

 

How to run a minimal CentOS image (container)

$ docker container run -d -it --name centos centos:7
$ docker container attach centos

 

Quick cleanup [DANGEROUS!]

$ docker rm -f $(docker container ls -a -q)

 


Run CentOS container

$ docker container run -it --name centos centos
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
d5e46245fe40: Pull complete
Digest: sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9
Status: Downloaded newer image for centos:latest
[root@8bdc267ea364 /]#

 

List running containers

$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
86004f16905f nginx "nginx -g 'daemon ..." 12 minutes ago Up 12 minutes 80/tcp nginx2
53c2610e1caa nginx "nginx -g 'daemon ..." 14 minutes ago Up 14 minutes 80/tcp nginx

 

List ALL containers (running and stopped)

$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bdc267ea364 centos "/bin/bash" About a minute ago Exited (127) 6 seconds ago centos
c6edf5df433d nginx "bash" 8 minutes ago Exited (127) 4 minutes ago proxy
86004f16905f nginx "nginx -g 'daemon ..." 12 minutes ago Up 12 minutes 80/tcp nginx2
53c2610e1caa nginx "nginx -g 'daemon ..." 14 minutes ago Up 14 minutes 80/tcp nginx

 

Start existing container and get prompt

$ docker container start -ai centos
[root@8bdc267ea364 /]#

 

ALPINE – minimal image (less than 4MB)

$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
2aecc7e1714b: Pull complete
Digest: sha256:0b94d1d1b5eb130dd0253374552445b39470653fb1a1ec2d81490948876e462c
Status: Downloaded newer image for alpine:latest

$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
centos latest 3bee3060bfc8 19 hours ago 193MB
nginx latest 958a7ae9e569 6 days ago 109MB
alpine latest a41a7446062d 11 days ago 3.97MB <<<<<<
httpd latest e0645af13ada 3 weeks ago 177MB
mysql latest e799c7f9ae9c 3 weeks ago 407MB

 

Alpine has NO bash in it. It comes with just sh. You can use apk to install packages.

NOTE: you can ONLY run commands that already exist in the image.


Docker NETWORK

Docker daemon creates a bridged network – using NAT (docker0/bridge).
Each container will get an interface that is part of this network => by default, each container can communicate with the others without the need to expose ports using -p. The -p / --publish flag is there to "connect" the host's port with the container's port.

You can anyway create new virtual networks and/or add multiple interfaces, if needed.

Some commands:

$ docker container run -p 80:80 --name web -d nginx
39a1f4db967edb1bbfa2d15f2ad0bf0394c2ae40bb22266ac0c3873db2cbea7d

$ docker container port web
80/tcp -> 0.0.0.0:80

$ docker container inspect --format '{{ .NetworkSettings.IPAddress }}' web
172.17.0.2

$ docker network ls
NETWORK ID NAME DRIVER SCOPE
fb59a42ff104 bridge bridge local
25eda154bf6f host host local
effb256fdda7 none null local

=> Bridge – the network interface where containers get connected by default
=> Host – allows a container to attach DIRECTLY to the host’s network, bypassing the Bridge network
=> none – removes eth0 in the container, leaving only ‘localhost’ interface

$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "fb59a42ff104945c8e41510f51d8007f97a30734b64f862f342d1739bec721a7",
"Created": "2017-06-06T11:59:26.589409813+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"Containers": {
"39a1f4db967edb1bbfa2d15f2ad0bf0394c2ae40bb22266ac0c3873db2cbea7d": {
"Name": "web",
"EndpointID": "723b80d9709e7fb89612d6f16af4223867971a1070db740ff0a4ce4ad497d044",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

 

$ docker network create my_vnet 
b0a0a4e6e529681dd6437a55a5495e928a7cb3af42d3e38298cb36b54c9892e0

=> by default it uses the ‘bridge’ driver

$ docker network inspect my_vnet --format '{{ .Containers }}'
map[9faec11e14697854b51275930817b03eb648baea0e2508195c2bf758d909d503:{nginx2 f889852f5c86bea984b28237f376e8ad2d1aa86335eed307209d25d44dfdba91 02:42:ac:12:00:02 172.18.0.2/16 }]

$ docker network connect my_vnet web

=> adds a new network interface, part of my_vnet, to the container 'web'

$ docker container inspect web | less
[...]
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "fb59a42ff104945c8e41510f51d8007f97a30734b64f862f342d1739bec721a7",
"EndpointID": "723b80d9709e7fb89612d6f16af4223867971a1070db740ff0a4ce4ad497d044",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
},
"my_vnet": {
"IPAMConfig": {},
"Links": null,
"Aliases": [
"39a1f4db967e"
],
"NetworkID": "b0a0a4e6e529681dd6437a55a5495e928a7cb3af42d3e38298cb36b54c9892e0",
"EndpointID": "344fd404d81f0e7df86984c3f856d70600eebe8109c6bdcb852577005e5ee5e1",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:03"
}
[...]


DNS

Because of the nature of containers (created/destroyed all the time), you cannot rely on IPs.
Docker uses the containers' names as hostnames. This feature is NOT enabled by default if you
use the standard bridge, but it gets enabled if you create a new network.

Example where we run two Elasticsearch containers on my_vnet, using the alias feature:

$ docker container run -d --network my_vnet --net-alias search --name els1 elasticsearch:2
$ docker container run -d --network my_vnet --net-alias search --name els2 elasticsearch:2
=> --net-alias <name> helps in setting the SAME name (Round Robin DNS), for example, if you want to run a pool of search servers

 

To quickly test, you can use this command to hit the “search” DNS name, automatically created:

$ docker container run --rm --net my_vnet centos curl -s search:9200

-> an example where you run a specific command from a specific image, and all the data related to the container gets removed afterwards (quick check). In this case, the default CentOS image has curl, so you can run it.
Please note the --rm flag: it creates a container that gets removed as soon as it exits. Very handy to quickly test a container.

Running it multiple times, you should be able to see the two Elasticsearch nodes replying in turn.

 


Docker IMAGES

An image is the app binaries + all the required dependencies + metadata.
There is NO kernel/drivers (these are shared with the host OS).

Official images have:

  • only ‘official’ in the description
  • NO ‘/’ in the name
  • extensive documentation

NON-official images generally have this format: <organisationID>/<appname>
(e.g. mysql/mysql-server => this is not officially maintained by Docker but by the MySQL team.)

 

Images are identified by TAGs.
You can use tags to get the image that you want.
Images can have multiple tags, so you might end up getting the same image using
different tags.

 

IMAGE Layers

Images are designed to use a union file system.

$ docker image history <image>

=> shows the changes in layers

 

Each layer has a unique SHA.

When you create an image you start with a base layer.
For example, if you pull two images based on Ubuntu 16.04, when you get the second image you will only fetch the extra missing layers, as you have already downloaded and cached the base Ubuntu 16.04 layer (same SHA).
=> you will never store the same image more than once on the filesystem
=> you won’t upload/download the layer that exists already on the other side

It's like the concept of a VM snapshot.
The original image is read-only. Whatever you change/add/modify/remove in the running container is stored in a rw layer.
If you run multiple containers from the same image, you will get an extra layer created per container, which stores just the differences from the original image.

# Tag an image from nginx to myusername/nginx

$ docker image tag nginx myusername/nginx
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: myusername
Password:
Login Succeeded

=> creates a file here: ~/.docker/config.json

Make sure to do docker logout on untrusted machines, to remove this file.

 

# Push the image

$ docker push myusername/nginx
The push refers to a repository [docker.io/myusername/nginx]
a552ca691e49: Mounted from library/nginx
7487bf0353a7: Mounted from library/nginx
8781ec54ba04: Mounted from library/nginx
latest: digest: sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268 size: 948

 

# Change tag and re-push

$ docker image tag myusername/nginx myusername/nginx:justtestdontuse

 

$ docker push myusername/nginx:justtestdontuse
The push refers to a repository [docker.io/myusername/nginx]
a552ca691e49: Layer already exists
7487bf0353a7: Layer already exists
8781ec54ba04: Layer already exists
justtestdontuse: digest: sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268 size: 948

=> it understands that the image already in the hub as myusername/nginx is the same as myusername/nginx:justtestdontuse, so it doesn't upload any content (space saving), but it creates a new entry in the hub.

 


Dockerfile

This file describes how your container image should be built. It generally starts from a base image, on top of which you add your customisation. This is also best practice.

 

FROM -> use this as the initial layer to build the rest on top of.
Best practice is to use an official image supported by Docker Hub, so you can be
sure that it is always up to date (security as well).

Any extra line in the file is an extra layer in your image. The use of && among commands helps to keep multiple commands on the same layer.

ENV -> variables injected into the container (best practice, as you don't want any sensitive information stored within the image).

RUN -> generally commands to install software / configure.
Generally there is a RUN for logging, to redirect logs to stdout/stderr. This is best practice. No syslog etc.

EXPOSE -> sets which ports can be published, i.e. which ports I allow the container to receive traffic on. You still need the --publish (-p) option to actually expose the port.

CMD -> final command that will be executed (generally the main binary)

 

To build the image from the Dockerfile (in the directory where the Dockerfile exists):

$ docker image build -t myusername/mynginx .

 

Every time one step changes, everything from that step to the end will be re-created.
This means that you should keep the bits that change less frequently at the top, and put at the bottom the ones that change more frequently, to speed up image builds.
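You can watch the cache at work when rebuilding (a sketch, assuming a Dockerfile like the nginx example below):

$ docker image build -t myusername/mynginx .
# edit index.html only, then run the build again:
$ docker image build -t myusername/mynginx .
# every step before 'COPY index.html' is reported as "Using cache";
# only the COPY layer (and anything after it) gets re-created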

 

# Dockerfile Example

# How extend/change an existing official image from Docker Hub

FROM nginx:latest
# highly recommend you always pin versions for anything beyond dev/learn

WORKDIR /usr/share/nginx/html
# change working directory to root of nginx webhost
# using WORKDIR is preferred to using 'RUN cd /some/path'

COPY index.html index.html
# replace index.html in /usr/share/nginx/html with the one currently stored
# in the directory where the Dockerfile is present

# There is no need to use CMD because it is already specified in the image
# nginx:latest, in FROM
# This container will inherit ALL (but ENVs) from the upstream image.

 

Example: CentOS container with Apache and custom index.html file:

# Dockerfile Example

FROM centos:7

RUN yum -y update && \
    yum -y install httpd && \
    yum clean all

EXPOSE 80 443

RUN ln -sf /dev/stdout /var/log/httpd/access.log \
        && ln -sf /dev/stderr /var/log/httpd/error.log

WORKDIR /var/www/html

COPY index.html index.html

CMD ["/usr/sbin/httpd","-DFOREGROUND"]

 

Example: Using Alpine HTTPD image and run custom index.html file:

# Dockerfile Example

FROM httpd:alpine

WORKDIR /usr/local/apache2/htdocs/

COPY index.html index.html

 

Copy all the content of the current directory into the WORKDIR directory 

COPY . .

 


A container should be immutable and ephemeral, which means that you can remove/delete/re-deploy it without affecting the data (database, config files, key files etc…).

Unique data should be somewhere else => Data Volumes and Bind Mounts

 

Volumes

Volumes need manual deletion -> they preserve the data.

In the Dockerfile, the VOLUME command specifies that the container will create a new volume location on the host and assign it to the specified path in the container. All the files will be preserved if the container gets removed.

 

Let's try using the mysql container:

$ docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql
b33d251935b0d22ae56ab569a32dd709ab2102359cf5f8297a3162383f226704

$ docker container inspect mysql
[...]
 "Mounts": [
            {
                "Type": "volume",
                "Name": "57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a",
                "Source": "/var/lib/docker/volumes/57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a/_data",
                "Destination": "/var/lib/mysql",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],

[...]
"Volumes": {
                "/var/lib/mysql": {}
            },
[...]

This container was created using the VOLUME /var/lib/mysql command in the Dockerfile.
Once the container got created, a new volume got created as well and mounted. Using inspect we can see those details.

$ docker container inspect mysql | less
$ docker volume ls
DRIVER              VOLUME NAME
local               57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a
$ docker volume inspect 57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a 
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a/_data",
        "Name": "57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a",
        "Options": {},
        "Scope": "local"
    }
]
$ 

 

Every time you create a container, it will create a new volume, unless you specify one.

You can create/assign a specific volume to multiple containers using the -v <volume_name>:<container_path> option flag.

$ docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=true -v mysql-dbdata:/var/lib/mysql  mysql
3f36be229a857b0898812c1614903d8bcf7000d8717510300c175bf2968a7829
$ docker container run -d --name mysql3 -e MYSQL_ALLOW_EMPTY_PASSWORD=true -v mysql-dbdata:/var/lib/mysql  mysql
5f12cf8f4fd3d7606682474b130d0560197704a79c8ee56af54359ff79fbb555
$ docker volume ls
DRIVER              VOLUME NAME
local               57fec8ec83c2cb32d4fbcfbcbacc2a6f84ae978e35d7ac0918aec8f8dbd8565a
local               mysql-dbdata
$ docker volume inspect mysql-dbdata
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/mysql-dbdata/_data",
        "Name": "mysql-dbdata",
        "Options": {},
        "Scope": "local"
    }
]
$ 

Checking the mysql2 and mysql3 containers:

$ docker container inspect mysql2 
[...]
        "Mounts": [
            {
                "Type": "volume",
                "Name": "mysql-dbdata",
                "Source": "/var/lib/docker/volumes/mysql-dbdata/_data",
                "Destination": "/var/lib/mysql",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],
[...]
            "Volumes": {
                "/var/lib/mysql": {}
            },
[...]



$ docker container inspect mysql3 
[...]
        "Mounts": [
            {
                "Type": "volume",
                "Name": "mysql-dbdata",
                "Source": "/var/lib/docker/volumes/mysql-dbdata/_data",
                "Destination": "/var/lib/mysql",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],
[...]
            "Volumes": {
                "/var/lib/mysql": {}
            },
[...]


 

Bind Mounting

Mount a host directory on a specific container path.

It uses the same flag as volumes, -v, but it starts with a path and not a name.
Use the -v <host_path>:<container_path> option flag.

This can be handy for a webserver, for example, that shares the /var/www folder stored locally on the host.
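A minimal sketch of that webserver case (web-bind and the local html/ directory are hypothetical names; /usr/share/nginx/html is the default nginx docroot):

$ docker container run -d --name web-bind -p 80:80 -v $(pwd)/html:/usr/share/nginx/html nginx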

 


Docker Compose

  • YAML file (replaces the shell script where you would save all the docker run commands), describing:
    1. containers
    2. network
    3. volumes
  • CLI docker-compose (locally)

This tool is ideal for local development and testing – not for production.

By default, Compose prints logs on stdout.

On Linux, you need to install the binary. It is available at https://github.com/docker/compose/releases.
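A sketch of the classic manual install (replace <version> with a release picked from that page):

$ sudo curl -L https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose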

 

 

Fail2ban Debian 9

Scratch pad with conf files to configure Fail2ban on Debian 9

This setup will configure Fail2ban to monitor SSH and keep track of the bad guys. Every time an IP gets banned, it will be stored in /etc/fail2ban/ip.blacklist.
This file gets processed every time Fail2ban restarts.
A cron will sanitise the file daily.

HOW TO

1) Create a custom action file:

/etc/fail2ban/action.d/iptables-allports-CUSTOM.conf 
# Fail2Ban configuration file

[INCLUDES]

before = iptables-common.conf


[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
actionstart = <iptables> -N f2b-<name>
              <iptables> -A f2b-<name> -j <returntype>
              <iptables> -I <chain> -p <protocol> -j f2b-<name>
              # Persistent banning of IPs
              cat /etc/fail2ban/ip.blacklist | grep -v ^\s*#|awk '{print $1}' | while read IP; do <iptables> -I f2b-<name> 1 -s $IP -j DROP; done

# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
#
actionstop = <iptables> -D <chain> -p <protocol> -j f2b-<name>
             <iptables> -F f2b-<name>
             <iptables> -X f2b-<name>

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck = <iptables> -n -L <chain> | grep -q 'f2b-<name>[ \t]'

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionban = <iptables> -I f2b-<name> 1 -s <ip> -j <blocktype>
            # Persistent banning of IPs
            echo '<ip>' >> /etc/fail2ban/ip.blacklist

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionunban = <iptables> -D f2b-<name> -s <ip> -j <blocktype>

[Init]

2) Create /etc/fail2ban/jail.local:
# Fail2Ban custom configuration file.


[DEFAULT]

# "ignoreip" can be an IP address, a CIDR mask or a DNS host
ignoreip = 127.0.0.1 192.168.1.0/24 192.168.2.0/24

# Ban forever => -1
#bantime=-1

# Ban 3 days => 259200
bantime = 259200 

# A host is banned if it has generated "maxretry" during the last "findtime" seconds.
findtime = 30

banaction = iptables-allports-CUSTOM

[sshd]
enabled = true
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

3) Remove the default Debian jail configuration (it is integrated in the above custom jail.local file):

rm -f /etc/fail2ban/jail.d/defaults-debian.conf

4) Set this cron:

# Daily rotate of ip.blacklist
0 20 * * * tail -100 /etc/fail2ban/ip.blacklist | sort | uniq > /tmp/ip.blacklist ; cat /tmp/ip.blacklist > /etc/fail2ban/ip.blacklist ; rm -f /tmp/ip.blacklist > /dev/null 2>&1

5) Run the cron command manually once, just to be sure all works AND to have the ip.blacklist file in place

6) Restart Fail2ban … and good luck 😉

 

 

Ubuntu 16.04 – Wake on LAN

I have struggled a bit trying to understand why my Ubuntu 16.04 wasn't waking up with the common etherwake command.

I found the solution on this link:

You should disable the Default option in the Network-Manager GUI and enable only the Magic option. If you try this, then you should check that everything is OK by opening a shell and issuing this command:

sudo ethtool <your_eth_device_here>

You should see the line:

Wake-on: g

If it’s not g but d or something else, something could be wrong.
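You can also set it on the fly with ethtool, just to test (wol g enables magic-packet wake; this is not persistent across reboots):

sudo ethtool -s <your_eth_device_here> wol g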

Once that was done, and verified with the command ethtool <myNetinterface> | grep "Wake-on:", all started to work again 🙂