GIT tricks when you mess up a bit

Sometimes, especially at the beginning, a developer makes loads of mistakes, and here are some workarounds that can help. 😉

Rewrite master’s history (TO AVOID)

git log --pretty=oneline

git rebase -i --root => choose what to keep ('pick') and what to squash into the previous commit ('s')

-> for each group of squashed commits, git will ask you to write the combined commit message in the editor
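For reference, the todo list you edit looks something like this (hashes and messages are illustrative):

pick a1b2c3d Initial commit
s d4e5f6a Fix typo
s 789abcd More fixes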

Rewrite history of the current branch (squash)

Firstly, check how many commits your branch is ahead of/behind the reference branch (e.g. master/main).
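For example, assuming the reference branch is origin/master (adjust to your setup), you can count behind/ahead like this:

git fetch origin
git rev-list --left-right --count origin/master...HEAD   # prints: <behind> <ahead>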

STEP 1

git rebase -i HEAD~<number_of_commits> (e.g. git rebase -i HEAD~3) => use the number of “ahead” commits that you found.

Make the required changes (pick/s) and save.

Then:
git push origin +mybranch

STEP 2

git rebase master (or main – your reference branch)
Any conflicts?

YES -> fix them, then git add . and git rebase --continue
NO -> git push origin +mybranch (forced push using +)
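A slightly safer alternative to forcing the push with + is --force-with-lease, which refuses to overwrite the remote branch if someone else pushed to it in the meantime:

git push --force-with-lease origin mybranch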

Fix a wrong merge on master

git revert -m <parent_number> <merge_commit_ID>

NOTE: -m does NOT take a number of commits: it selects which parent of the merge to keep as the mainline (1 is usually the branch you merged into).

A -\
    K
    |
    L
    |
B -/
|
...

Here is an example.
You want to take master back to commit B.
A, B, K, L are commit IDs – you can find them using the git log command.

A is the merge commit that brought the branch containing K and L into master: its first parent is B (the mainline) and its second parent is K (the tip of the merged branch).

If you want to go back to the state of B, revert the merge keeping the mainline parent:

git revert -m 1 A
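If you are not sure about the parent order, you can print the parents of a merge commit (parent 1 first):

git log -1 --format='%P' A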

Happy fixing! 🙂

Reverse SSH Tunnel

To allow a LOCAL_SERVER behind a firewall/NAT/home router to be accessible via SSH from a REMOTE_SERVER, you can use an SSH tunnel (reverse tunnel).

Basically, from your LOCAL_SERVER you forward port 22 (ssh) to another port on REMOTE_SERVER, for example 8000, and then you can ssh into your LOCAL_SERVER from the public IP of the REMOTE_SERVER via port 8000.

To do so, you need to run the following from LOCAL_SERVER (-f backgrounds ssh, -N skips running any remote command, and -R creates the remote/reverse forward from port 8000 on REMOTE_SERVER to localhost:22 on LOCAL_SERVER):

 local-server: ~ ssh -fNR 8000:localhost:22 <user>@<REMOTE_SERVER>

On REMOTE_SERVER you can use netstat -nlpt to check if there is a service listening on port 8000.

Example:

remote-server ~# netstat -nplt | grep 8000
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      1396/sshd: root
tcp6       0      0 :::8000                 :::*                    LISTEN      1396/sshd: root

In this case, the REMOTE_SERVER accepts connections on ALL its interfaces (0.0.0.0) to port 8000.
This means that, if the REMOTE_SERVER has IP 217.160.150.123, you can connect to LOCAL_SERVER from a THIRD_SERVER using the following:

third-server: ~ ssh -p 8000 <user_local_server>@217.160.150.123

NOTE. If you see that the LISTEN socket on REMOTE_SERVER is bound to 127.0.0.1 and not to 0.0.0.0, it is probably because the setting GatewayPorts is set to no in /etc/ssh/sshd_config on REMOTE_SERVER.
The best setting is clientspecified (rather than yes), as per this post.

Set this value in /etc/ssh/sshd_config and restart the sshd service.
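For example, on REMOTE_SERVER (assuming GatewayPorts isn't already set somewhere in the file; if it is, edit the existing line instead):

echo 'GatewayPorts clientspecified' >> /etc/ssh/sshd_config
systemctl restart sshd   # the service may be called "ssh" depending on the distro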

With that setting, the client can choose the bind address of the forwarded port, so you can allow connections to LOCAL_SERVER only from the REMOTE_SERVER itself, to increase security.
To do so, you need to use the following ssh command from LOCAL_SERVER:

 local-server: ~ ssh -fNR 127.0.0.1:8000:localhost:22 <user>@<REMOTE_SERVER>

With netstat, you'll now see this:

remote-server:~# netstat -nplt | grep 8000
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      1461/sshd: root

With this forward, you will be able to access LOCAL_SERVER ONLY from the REMOTE_SERVER itself:

remote-server: ~ ssh -p 8000 <user_local_server>@localhost

I hope this helps 🙂

Happy tunnelling!

Virtualhost and Letsencrypt

A quick guideline on how to host multiple sites on a single server using virtual hosting, with the SSL certificates installed and automatically renewed using Letsencrypt.

There are plenty of how-tos online, but I wanted to have a quick reference page for myself 🙂

Firstly, this has been tested on Debian 12, but it should work on previous Debian versions and on Ubuntu too.

Apache setup and virtualhosts

Firstly, install Apache and the other packages that you will most likely need, especially if you run WordPress or any PHP based framework:

apt-get install apache2 php php-mysql libapache2-mod-php php-gd php-curl net-tools telnet dos2unix

Now, you should create the folder structure to host your sites. I used /var/www/virtualhosts/<site>/public_html
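For example, for site1 (repeat for each site):

mkdir -p /var/www/virtualhosts/site1/public_html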

I made sure permissions were set correctly too:

chown -R www-data:www-data /var/www/
find /var/www -type d -exec chmod 775 {} \;

Now, create a virtualhost file for each site. In the following example we are going to create the conf file for site1.

Create /etc/apache2/sites-available/site1.conf

<VirtualHost *:80>
    ServerName site1.com
    ServerAlias www.site1.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/virtualhosts/site1/public_html

    <Directory /var/www/virtualhosts/site1/public_html>
        Options -Indexes +FollowSymLinks
        AllowOverride All
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/site1-error.log
    CustomLog ${APACHE_LOG_DIR}/site1-access.log combined
</VirtualHost>

Do the same for all the sites you have.

Once done, upload the content of your sites into the public_html folders.

Disable all the default Apache sites and enable the ones you have created. You can use the a2dissite and a2ensite commands (see the example below) or manually create symbolic links in /etc/apache2/sites-enabled/
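For example (000-default is the default site shipped with Apache on Debian):

a2dissite 000-default
a2ensite site1
systemctl reload apache2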

Check that all the virtualhosts are properly loaded:

source /etc/apache2/envvars
apache2 -S

You should see all your sites under the *:80 section.
Right now Apache is serving our sites only on port 80. No 443 yet.

Now, you can use curl to do some tests to see if the virtual hosts are responding correctly.

~ curl -IH'Host: site1.com' http://<server_IP>  # to get the header of site1.com
~ curl -H'Host: site1.com' http://<server_IP>  # to get the full page of site1.com

Hopefully all works (if not, troubleshoot it heheh). Now let's point our DNS to our server and test directly using the domain names.

All good? Cool!

Make sure now that your firewall allows ports 80 and 443. Even if you're planning to serve your site ONLY over SSL (port 443), the certbot tool that does the auto-renewal of the certificate needs port 80 open.

Installation and configuration of certbot – Letsencrypt

As root, issue the below commands:

apt-get install snapd
snap install core
snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot

You now have the certbot tool installed.

Following the above example of site1.com, we are now going to get the SSL certificate for that site (including www.site1.com), and let the tool install and configure everything automatically.

certbot --apache -d site1.com -d www.site1.com

Hopefully all goes well 🙂 Repeat for each of your sites accordingly.

Once done with all the sites, just to make sure the auto-renewal works, you can also issue a dry-run check:

certbot renew --dry-run

Letsencrypt certificates last 90 days, but certbot installed in this way renews them automatically.
If you're curious where this is configured, you might look for a cron job and find nothing (like it happened to me).
In that case, run this command and you should find a certbot timer listed:

systemctl list-timers

More information is available on the official website at this address.

You can now test using curl again, but hitting https instead of http. Note that the certificate won't match the bare server IP, so add -k to skip certificate verification:

~ curl -k -IH'Host: site1.com' https://<server_IP>  # to get the header of site1.com
~ curl -k -H'Host: site1.com' https://<server_IP>  # to get the full page of site1.com
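If you want curl to actually validate the certificate, use --resolve to map the hostname to your server IP, so the request carries the proper SNI name:

~ curl -I --resolve site1.com:443:<server_IP> https://site1.com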

Oh, one note.
By default, at least at the time I'm writing this article, once you install the certificate, the *:80 virtualhost of your site will be modified with the following lines, which force a permanent (301) redirect from http to https.

RewriteEngine on
RewriteCond %{SERVER_NAME} =www.site1.com [OR]
RewriteCond %{SERVER_NAME} =site1.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]

If that's what you want – cool.
If you still want to serve your site over http AND https, comment out (or delete) those new lines.

Happy virtualhosting and ssl’ing! 🙂

Windows 10 – VMWare Disk cleanup and shrink

A simple batch script to run as Administrator in order to clean up the disk, defrag and shrink it.

Please note that the shrink works only if the VMware Tools are installed on the guest VM.

@ECHO OFF
REM Make sure to run the script as Administrator
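REM (S-1-16-12288 is the SID of the High integrity level, i.e. an elevated prompt)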
whoami /groups | find "S-1-16-12288" > nul

if %errorlevel% == 0 (
 echo Welcome, Admin
) else (
 echo You must run this script as Administrator. Aborting...
 goto EOF
)

REM Enable components to cleanup
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Active Setup Temp Folders" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\BranchCache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Downloaded Program Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameNewsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameStatisticsFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\GameUpdateFiles" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Internet Cache Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Memory Dump Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Offline Pages Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Old ChkDsk Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Previous Installations" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Recycle Bin" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Service Pack Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Setup Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error memory dump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\System error minidump files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Setup Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Temporary Sync Files" /V StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Thumbnail Cache" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Update Cleanup" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Upgrade Discarded Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\User file versions" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Defender" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Archive Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Error Reporting System Queue Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows ESD installation files" /v StateFlags0100 /d 2 /t REG_DWORD /f
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\Windows Upgrade Log Files" /v StateFlags0100 /d 2 /t REG_DWORD /f
 
REM Run cleanup
IF EXIST %SystemRoot%\SYSTEM32\cleanmgr.exe START /WAIT cleanmgr /sagerun:100

echo *** DEFRAGGING DRIVES ***
echo DEFRAG C:
defrag c: -f

if not exist "C:\Program Files\VMware\VMware Tools" goto NOVMWARE
echo.
echo *** SHRINKING DRIVES ***
cd "C:\Program Files\VMware\VMware Tools\"
VMwareToolboxCmd.exe disk shrink c:\

if not exist D: goto NODDRIVESHRINK
VMwareToolboxCmd.exe disk shrink D:\
:NODDRIVESHRINK
:NOVMWARE


echo *** SHUTTING DOWN ***
shutdown -s -t 30

:EOF
echo Terminating script
pause

Save this content in a .bat file and… enjoy it!

Manage PDF files

Merge multiple files into single PDF

I'm sure we've all had the need to send back a single PDF file, maybe a signed contract. Yes, those 20 or more pages that you need to return, probably with just two of them filled in and signed.

Some PDFs give you the ability to sign them digitally. But in my experience, most of them aren't so modern.

So, what do I do?

I print ONLY the pages that I need to sign, scan them, and here I am, needing to “rebuild” the PDF, replacing the signed pages.

Example.
You have the file contract.pdf, with 20 pages, and you need to sign page 10 and page 20.
The scan has a different resolution (or, even worse, it's in a different format, like jpg).

Here is the command to make the magic happen:

convert contract.pdf[0-8] mypage10.jpg contract.pdf[10-18] mypage20.jpg -resize 1240x1753 -extent 1240x1753 -gravity center -units PixelsPerInch -density 150x150 contract_signed.pdf

The bit before -resize is pretty self-explanatory; just note that imagemagick page indices are 0-based, so contract.pdf[0-8] selects pages 1 to 9. The bit after is a way to have all pages fit an A4 format, with a good printable resolution.

Of course, to make this happen, you need Linux (or WSL on Windows 10) and imagemagick installed.

Another way is to use Ghostscript.

A simple Ghostscript command to merge two PDFs into a single file is shown below:

gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=combine.pdf -dBATCH 1.pdf 2.pdf

What about a quick one-liner to reduce your pdf and convert it to grayscale?

ghostscript -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

PDF size reduce

Sometimes, instead, you need to reduce the size of an existing PDF. Here is a handy one-liner, using ghostscript:

ghostscript -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/printer -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

Other options for PDFSETTINGS:

  • /screen selects low-resolution output similar to the Acrobat Distiller “Screen Optimized” setting.
  • /ebook selects medium-resolution output similar to the Acrobat Distiller “eBook” setting.
  • /printer selects output similar to the Acrobat Distiller “Print Optimized” setting.
  • /prepress selects output similar to Acrobat Distiller “Prepress Optimized” setting.
  • /default selects output intended to be useful across a wide variety of uses, possibly at the expense of a larger output file.

Happy PDF’ing 🙂


Sources:
https://stackoverflow.com/questions/23214617/imagemagick-convert-image-to-pdf-with-a4-page-size-and-image-fit-to-page
https://www.shellhacks.com/merge-pdf-files-linux-command-line/

https://gist.github.com/firstdoit/6390547

Migrate Linux Subsystem from one PC to another

Are you enjoying your favorite Linux distro running within the Windows 10 Linux Subsystem?

Have you configured all nicely?

What happens when you get a new PC and you'd like to migrate your VM across?

This is what happened to me. And looking around, I found this post that gave me a kinda-dirty way, but it did work!

After that, I decided to review the steps, and I've added these directories to the exclude list, to make the export/import process clearer:

/dev
/proc
/sys
/run
/tmp
/media
/mnt
/var/cache
/var/run

Of course, if you have important data in these folders that you want to move across too, just update the one-liner below accordingly. 😉

On your OLD PC

  • Open your Linux VM
  • Get inside your Downloads directory (replace <user> with your username): cd /mnt/c/Users/<user>/Downloads
  • Make sure to be root (sudo su -)
  • Run:
    tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run --exclude=/tmp --exclude=/media --exclude=/mnt --exclude=/var/cache --exclude=/var/run --one-file-system /
    NOTE: you could achieve the same using the option --exclude-from=file.txt, keeping the list of exclusions in that file (see the sketch after this list). I used a one-liner as it's quicker to copy and paste.
  • Once done, close your Linux VM
  • Verify that you have a new file called backup.tar.gz in Downloads
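For reference, the --exclude-from variant mentioned above would look something like this (exclusions.txt is just an example filename):

cat > exclusions.txt <<'EOF'
/backup.tar.gz
/dev
/proc
/sys
/run
/tmp
/media
/mnt
/var/cache
/var/run
EOF
tar -cvpzf backup.tar.gz --exclude-from=exclusions.txt --one-file-system /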

On your NEW PC

  • Install from Microsoft Store the same Linux VM (or reinstall in the same way you have done originally on your old pc)
  • Copy your backup.tar.gz across into your new Downloads folder
  • Open the VM that you’ve just installed (minimal setup – this will be completely overwritten, so don’t be bothered too much)
  • Once you're inside and your backup.tar.gz is in Downloads, run the following (replace <user> with your username):
    sudo tar -xpzf /mnt/c/Users/<user>/Downloads/backup.tar.gz -C / --numeric-owner
  • Ignore the errors
  • Close and re-open the VM: DONE! 🙂

Happy migration! 😉

Auto mount an encrypted IMG file stored on NFS share

Yes, here we are again.
Now that I have a NAS at home, it's about time to get rid of all these single USB disks connected to my Raspberry Pis.

I have a share called nfsshare available from my NAS (IP: 192.168.1.10). The full share path is 192.168.1.10:/volume1/nfsshare. My NAS handles NFS version 4.

So, here is what I've done to set up my Banana Pro Pi with Armbian, based on Debian 10 (buster).

Configure NFS client

First of all, we need to create the mount point where we’re going to access the nfs share (let’s use /nfs) and install the packages for NFS.

mkdir /nfs
apt-get install nfs-common

Once done, do a minimal tuning of idmapd.conf if you have defined a domain/workgroup within your network. In this example I'm using mydomain.loc.

sed -i 's/#Domain = local.domain.edu/Domain = mydomain.loc/' /etc/idmapd.conf

Update our /etc/fstab file, to make sure it mounts at boot, and test if all works as expected:

192.168.1.10:/volume1/nfsshare /nfs nfs4 auto,_netdev,nofail,noatime,nolock 0 0

I have used _netdev to make sure that the system understands that this needs the network up before trying to mount, and, if something goes wrong, the boot continues (nofail). This is very handy on systems without a proper monitor where you rely on ssh connections.

Now, with a simple mount /nfs command, you should be able to get the share mounted. df -Th or mount are the commands I would use to verify.

Cool, we now have the share mounted. Issue a quick shutdown -r now to see if all works as expected. Once your device is back online, ssh into it and check with df -Th or mount again. Hopefully, you can see your nfs share mounted on /nfs.

Create and configure your Encrypted “space”

I have already discussed something about encrypted devices in another post. This will be a revised version of the previous post, without custom scripts, but simply using what Debian offers out of the box.

Create an empty IMG file to host our encrypted space

I have decided to create 500GB of encrypted space to store my data. To do so, I did the following:

  • install the required software for encryption
  • create a sparsefile (on my /nfs share)
  • encrypt it
  • format it (ext4)
  • set up the auto mount
apt-get install cryptsetup

dd of=/nfs/file_container.img bs=1 count=0 seek=500G

cryptsetup -y luksFormat /nfs/file_container.img
cryptsetup luksOpen /nfs/file_container.img cryptcontainer

mkfs.ext4 -L cryptarchive /dev/mapper/cryptcontainer

During the above steps, you will be asked to set a passphrase, and then to use it to open the IMG file. Pretty straightforward, isn't it?

Cool. Now we have a 500GB sparse file called file_container.img stored on our share /nfs, ready to be mounted somewhere and utilised.
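Being a sparse file, it doesn't consume 500GB upfront; you can compare allocated vs apparent size with:

du -h /nfs/file_container.img                   # space actually allocated
du -h --apparent-size /nfs/file_container.img   # apparent size (500G)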

To make sure we can mount at boot, we need a secret key that we are going to use to decrypt the IMG file without typing any passphrase.

Let's create this key, stored under /root (in this example). You can store it wherever you want, as long as it's accessible before the decryption starts. Another good place is /boot.

dd if=/dev/urandom of=/root/keyfile bs=1024 count=4
chmod 0400 /root/keyfile

Now we need to add this key to the IMG file:

cryptsetup luksAddKey /nfs/file_container.img /root/keyfile

The next step is to instruct /etc/crypttab with the details about our encrypted file and its key.
Just add the following line at the end of the /etc/crypttab file.

cryptcontainer /nfs/file_container.img /root/keyfile luks
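On Debian you can test the crypttab entry without rebooting, using the cryptdisks_start helper shipped with the cryptsetup package:

cryptdisks_start cryptcontainer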

Now, there is a problem. Your OS needs to know that the IMG file isn't stored locally and depends on the NFS share. In fact, if the OS tries to decrypt the IMG file before mounting the NFS share, it will fail, and you might get stuck in a never-ending boot, sometimes forcing you to fetch your mini monitor for troubleshooting, a spare keyboard, and to assume anti-human positions to reach your small Pi... basically, a nightmare!

So, here is a trick that seems to work.
In Debian, there is a file called /etc/default/cryptdisks.
Within this file, we are going to make sure that CRYPTDISKS_ENABLE is set to yes and CRYPTDISKS_MOUNT is set to our NFS mount (/nfs). In this way, the service that handles the encryption/decryption will wait for /nfs to be mounted before starting.
IMPORTANT: this must be a mountpoint defined in /etc/fstab

Here is the content of my /etc/default/cryptdisks file

# Run cryptdisks initscripts at startup? Default is Yes.
CRYPTDISKS_ENABLE=Yes

# Mountpoints to mount, before cryptsetup is invoked at initscripts. Takes
# mountpoints which are configured in /etc/fstab as arguments. Separate
# mountpoints by space.
# This is useful for keyfiles on removable media. Default is unset.
CRYPTDISKS_MOUNT="/nfs"

# Default check script. Takes effect, if the 'check' option is set in crypttab
# without a value.
CRYPTDISKS_CHECK=blkid

Amazing! Now, just the last bit: update /etc/fstab with a reference to our device. We have now set up everything necessary to open the encrypted IMG file and associate it with a mountable device, but we haven't mounted it yet!

Create the mount point:

mkdir /cryptoarchive

Update /etc/fstab, appending this line:

/dev/mapper/cryptcontainer /cryptoarchive ext4 defaults,nofail 0 2

Again, note the nofail, as for the NFS share, to avoid the boot process getting stuck in case of errors, and to allow you to ssh into the device and troubleshoot.

Now we're ready to try mount /cryptoarchive, check with df -Th and mount, and also run a shutdown -r now, to verify that the NFS share gets mounted and the encrypted IMG disk comes up mounted and available too.

Happy playing! 😉

Reduce fail2ban.sqlite3 file

You might face a steady increase in size of the file /var/lib/fail2ban/fail2ban.sqlite3

Here are a few commands that allow you to dig into the db and clean up some rows, reducing its size.

Open the db:
sqlite3 /var/lib/fail2ban/fail2ban.sqlite3

Now, check all the tables available:
sqlite> .tables
bans fail2banDb jails logs

Generally, the “bans” table is the one that uses the most space. You can check the content of this table using some SELECT statements like:
sqlite> SELECT * FROM bans LIMIT 1;
With this, you can check a single row, with all its fields and their content.

If you identify, for example, that there are very old entries (in my case, entries from 2 years back, from 2018 and 2019), you can trim all those entries with this command:
sqlite> DELETE FROM bans WHERE DATE(timeofban, 'unixepoch') < '2020-01-01'; VACUUM;

After running the above command, my db shrank.
A restart of the fail2ban service will reload the db and release the space used by the previous db.
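For example:

systemctl restart fail2ban
ls -lh /var/lib/fail2ban/fail2ban.sqlite3   # verify the new size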

Sources:
https://jim-zimmerman.com/?p=1234
https://serverfault.com/questions/1002315/fail2bans-database-is-too-large-over-500mb-how-do-i-get-it-to-a-reasonable-s

Linux WiFi manual setup

You might have faced the situation where your laptop doesn't boot into your nice GUI, with Network Manager handling your wifi connection, maybe due to a failed update or a broken package.

Well, it happened to me for exactly that reason: some issues with an upgrade. And how can you fix a broken package or dependency without an internet connection?

Oooh yes, that's a nightmare! Thankfully, I found this handy article, from which I'll list some commands that helped me restore the connection on my laptop, allowing me to fix the upgrade and restore its functionality.

NOTE: I had iwconfig and wpasupplicant already installed. If not, I would have had to download the packages with all their dependencies and install them manually with the dpkg -i command.

Identify the name of your wifi interface

iwconfig

This should return something like wlp4s0

Assuming that you already know the SSID of your wifi (e.g. HomeFancyWiFi) and the password (e.g. myWiFiPassw0rd), you can directly run this command:

wpa_passphrase HomeFancyWiFi myWiFiPassw0rd | sudo tee /etc/wpa_supplicant.conf
wpa_supplicant -c /etc/wpa_supplicant.conf -i wlp4s0
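For reference, wpa_passphrase writes something like this into /etc/wpa_supplicant.conf (the psk hash below is just a placeholder):

network={
	ssid="HomeFancyWiFi"
	#psk="myWiFiPassw0rd"
	psk=59e0d07fa4c7741797a4e394f38a5c321e3bed51d54ad5fcbd3f84bc7415d73d
}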

The first command generates the config file, the second connects to the wifi. Once you see that all works as expected, you can use the -B flag to put wpa_supplicant in the background and release the terminal.

wpa_supplicant -B -c /etc/wpa_supplicant.conf -i wlp4s0

Alternatively, you can move to another tab (ALT+F1, F2, F3… in the text mode console) and run dhclient to get an IP and the DNS set.

dhclient wlp4s0

Once done, you can run ip addr (or ifconfig) to verify that the interface got an IP, do some basic network troubleshooting (ping, etc.) to make sure all works, and then go back to fixing your broken upgrade 🙂

MySQL Replication

This is a copy and paste of some old notes about MySQL replication. I have never fully reviewed this content, nor finished the script. I'm saving it anyway, in case I need some of this info in the future 😉

MySQL Replication NOTES

Master Setup

/etc/my.cnf changes

# The following items need to be set:
log-bin=/var/lib/mysqllogs/ServerID-theServerShortName-binary-log
binlog-format=MIXED
expire_logs_days=7
server-id=<server_number>

# replication user
PASS=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w12 | head -n1)
echo "This is the password (take note): $PASS"
mysql -e "GRANT REPLICATION SLAVE ON *.* to repl_user IDENTIFIED BY '$PASS'"

# Dump and copy across
mysqldump -A --flush-privileges --master-data=1 | gzip -1 > ~myuser/master.sql.gz
scp ~myuser/master.sql.gz $SLAVEIP:/home/myuser

# Restart Master
service mysqld restart

# === take notes of the following ====
# Get replication POSITION
zgrep -m 1 -P 'CHANGE MASTER' ~myuser/master.sql.gz | sed 's/^.*\(MASTER_LOG_FILE=.*\)$/\1/'

# Get new MySQL password to set on the slave
grep password /root/.my.cnf | awk -F= '{print $2}'

Slave Setup

# Verify timezones match between master and slave!

/etc/my.cnf changes

# The following items need to be set:
relay-log=/var/lib/mysqllogs/ServerID-theServerShortName-relay-log
relay-log-space-limit = 16G
read-only=1
server-id=<server_number>
report-host=<server_number> #This allows show slave hosts; to work on the master.

# Import the data
zcat /home/myuser/master.sql.gz | mysql

# Update /root/.my.cnf with password set in the Master (importing ALL the db will overwrite users and passwords too)

# Restart Slave
service mysqld restart

# Enable replication (replace accordingly with the position from the latest Master's steps)
mysql
mysql> CHANGE MASTER TO MASTER_HOST = '$MASTERIP', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = '$PASS', MASTER_LOG_FILE='752118-Db01A-binary-log.000001', MASTER_LOG_POS=107;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G

==========================================================================
Trying to automate: ****DRAFT*****

#>>> On MASTER <<<#

MASTERIP=""
SLAVEIP=""

# On DEDICATED:
MYHOST=$(hostname -a)
SERVERID=$(echo $MYHOST| awk -F- '{print $1}')

# On CLOUD:
MYHOST=$(hostname)
SERVERID=$(echo $MYHOST| awk -F- '{print $1}')

#>> Create a dump and copy across
mysqldump -A --flush-privileges --master-data=1 | gzip -1 > ~myuser/master.sql.gz
scp ~myuser/master.sql.gz $SLAVEIP:/home/myuser/

#>> Set my.cnf

#> Unset possible pre-sets
for LINE in log-bin binlog-format expire_logs_days server-id ; do sed -i "/^.*$LINE.*=.*$/ s/^/#/" /etc/my.cnf ; done

#> Make sure all are commented out
for LINE in log-bin binlog-format expire_logs_days server-id ; do grep $LINE /etc/my.cnf ; done

#> Apply new parameters
PASS=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w12 | head -n1)
sed -i "/\[mysqld\]/a \#REPLICATION\nlog-bin=\/var\/lib\/mysqllogs\/$SERVERID-binary-log\nbinlog-format=MIXED\nexpire_logs_days=7\nserver-id=$SERVERID" /etc/my.cnf

service mysqld restart

#>> Set replication user
mysql -e "GRANT REPLICATION SLAVE ON *.* to repl_user IDENTIFIED BY '$PASS'"

#>> Get output to run on the SLAVE

echo "zcat /home/myuser/master.sql.gz | mysql"

POSITION=$(zgrep -m 1 -P 'CHANGE MASTER' ~myuser/master.sql.gz | sed 's/^.*\(MASTER_LOG_FILE=.*\)$/\1/')
echo "mysql -e \"CHANGE MASTER TO MASTER_HOST = '$MASTERIP', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = '$PASS', $POSITION\""

# Example of what $POSITION expands to:
# MASTER_LOG_FILE='752118-Db01A-binary-log.000001', MASTER_LOG_POS=107;

#>>> On SLAVE <<<#
for LINE in relay-log relay-log-space-limit read-only server-id report-host ; do grep $LINE /etc/my.cnf ; done

relay-log=/var/lib/mysqllogs/ServerID-theServerShortName-relay-log
relay-log-space-limit = 16G
read-only=1
server-id=<server_number>

report-host=<server_number> #This allows show slave hosts; to work on the master.