SSH keys and Windows – easy way

How many times have I helped customers and friends with this? I’ve lost count.

So, I thought I’d write a little article to help whoever faces the same thing in the future. This might clean up my karma a bit too heheh 😀

Jokes aside, at work we most likely get a Windows laptop, and you might need to work on remote Linux servers. And yes, you might also have Putty pre-installed or shining on your desktop. And it’s perfectly fine, until you realise that the remote server is accessible ONLY using SSH keys. And here is when the fun starts.

Sounds familiar?

You can configure that with Putty, but generating the keys, setting up the application etc. can be messy.

Since Windows 10 (recent versions), Microsoft has added WSL, which is basically a minimal Linux virtual machine running on your Windows PC/laptop. This means that you can SUPER EASILY create ssh keys, use them, and avoid all the potential issues that you can face while configuring Putty.

So, ready to do it?

You can follow the official documentation here: https://learn.microsoft.com/en-us/windows/wsl/install

In short:

  1. Open CMD as Administrator
  2. run wsl --install

Yes, that easy!

The process will require you to reboot the Windows machine – most likely – but after that, you’ll have a proper Linux shell available.

At that point, you can use the ssh-keygen command to generate your keys.
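For example, here is a minimal run – just press Enter to accept the default path and (optionally) an empty passphrase; the -t/-b options are only my suggestion, use whatever your provider requires:

ssh-keygen -t rsa -b 4096
# creates ~/.ssh/id_rsa (private key – keep it safe!) and ~/.ssh/id_rsa.pub (public key)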

You will be able to see the content of the public key simply using the cat command:

cat .ssh/id_rsa.pub

And yes, you will use the output of that command to configure your cloud provider or your remote server to accept ssh connections using the key.

Quick note: if you want to connect from your shining brand new WSL shell to your remote server, you need to be sure that the content of .ssh/id_rsa.pub is stored in .ssh/authorized_keys under the user’s home, on the remote server.

For example, to connect to myserver.example.com (public IP 213.045.046.32), as root, you need to:

  1. create the ssh key on the local WSL user account
  2. have the content of .ssh/id_rsa.pub appended/added into /root/.ssh/authorized_keys (see the example after this list)
  3. from WSL shell run ssh myserver.example.com -l root or ssh 213.045.046.32 -l root
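By the way, if you still have password access to the remote server, the ssh-copy-id tool (shipped with OpenSSH) can append the public key for you – otherwise just paste it into authorized_keys manually:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@myserver.example.com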

And yes, super easy, isn’t it? 🙂

Enjoy! 😉

Ping on WSL

Do we really need to be root to simply do a ping from WSL on Windows?
Apparently yes.
You have probably faced the following error:

ping: socktype: SOCK_RAW
ping: socket: Operation not permitted
ping: => missing cap_net_raw+p capability or setuid?

Wanna fix it?

sudo setcap cap_net_raw+ep /bin/ping

Run this command once, and the command ping will be “usable” again 😉
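Small caveat: on some distributions the ping binary lives in /usr/bin rather than /bin. If the command above complains that the file doesn’t exist, check the actual path first and use that:

command -v ping                                  # e.g. /usr/bin/ping
sudo setcap cap_net_raw+ep "$(command -v ping)"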

Happy ping’ing! 😉

Restore root access on Linux server

I have been working in IT for a while now, and I have come across multiple customers who accidentally lost the root password or the ssh key. In general, their servers are nicely up and running, but they can’t connect anymore.

In the Cloud era, 99% of the time it is going to be a virtual server, which makes things much easier. Of course, the same approach can be used with physical servers, but the “move disk” step that I’m going to explain in a bit requires… literally… plugging/unplugging the disk 🙂

Firstly, you can connect from your pc to a remote Linux server using:

  • username and password
  • ssh key

In both cases, the Linux server has “something stored” in it – a file (everything in Linux is a file), or more than one – that we can potentially edit/replace if we have access to the disk.

Getting there, aren’t we? 🙂

Cool. So, if we lose the password or the ssh key, and we still desperately need to access that server because we didn’t think about backing up (veeery bad – slap on your hands now!) or our laptop with the ssh key broke (didn’t you back that up either?? Really? Another slap!), one option is actually the following:

  1. Spin up a new server (we’re going to call it Saviour), and verify we can connect to it
  2. Remove the OS disk from the inaccessible server (called Desperate) and connect to Saviour
  3. Modify/replace files from Saviour onto Desperate’s disk
  4. Move back the Desperate’s disk into Desperate server
  5. Test if you can now connect to Desperate
  6. Delete Saviour

As I said before, this also works with physical servers. The only bit that changes is that you literally need to remove the disk, install it in the new server, and then put it back. If you don’t have another server, you can use your laptop and a USB adapter, but you still need to be running Linux. Anyway, I’m sure you can figure out how to adapt these instructions.

Now, let’s start, but DO IT AT YOUR OWN RISK.

I assume we have Saviour up and running, and that we can connect via ssh. If not, stop reading and sort that out, come on! 🙂

Works? Niiice!
Next, we move Desperate’s disk to Saviour and we make sure it’s visible.

You can use fdisk -l to see if there is a new disk (it’s generally the latest entry).
Here is an example:

root@saviour:~# fdisk -l
Disk /dev/vda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6CBB44F1-D559-9B42-A076-7C0EA2B76310

Device      Start      End  Sectors  Size Type
/dev/vda1  262144 20971486 20709343  9.9G Linux root (x86-64)
/dev/vda14   2048     8191     6144    3M BIOS boot
/dev/vda15   8192   262143   253952  124M EFI System

Partition table entries are not in disk order.


Disk /dev/vdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6CBB44F1-D559-9B42-A076-7C0EA2B76310

Device      Start      End  Sectors  Size Type
/dev/vdb1  262144 20971486 20709343  9.9G Linux root (x86-64)
/dev/vdb14   2048     8191     6144    3M BIOS boot
/dev/vdb15   8192   262143   253952  124M EFI System

Partition table entries are not in disk order.

We need to identify the device. Using the example above, I can see that Desperate’s disk is /dev/vdb1. How do I know? Well, a bit of experience, I guess. But mainly, in this case, disks are listed as vdX, where “X” starts from “a” and continues to “z”. Saviour has one single disk of 10GB, which is the first one (a) – of course. The second one (b) has to be our Desperate’s disk.

Let’s create a temporary mount point /desperate to mount that disk and let’s mount it!
I assume that our disk was formatted as ext4. If not, you can skip the -t option and let the mount command guess the filesystem, or pass the right type yourself.

root@saviour:~# mkdir /desperate
root@saviour:~# mount -t ext4 /dev/vdb1 /desperate
root@saviour:~# ls /desperate/
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

If the ls command worked, it means we’re good to continue!

Restore SSH KEY connectivity

If we were able to connect to Saviour as root using an ssh key, it means that the root user on Saviour is properly set up. So, we can copy the same configuration onto Desperate’s disk to restore ssh key connectivity!

SSH key authentication works by storing the public key in a file called authorized_keys, inside the .ssh directory of the user you’re connecting as – in this case root.

Simply, let’s copy that file onto Desperate’s disk, in the same path!

root@saviour:~# cp /root/.ssh/authorized_keys /desperate/root/.ssh/authorized_keys
root@saviour:~#
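If the copy complains that the destination path doesn’t exist (i.e. root on Desperate never had a .ssh directory), create it first with the usual strict permissions – a quick sketch, assuming the same paths as above:

mkdir -p /desperate/root/.ssh
chmod 700 /desperate/root/.ssh
cp /root/.ssh/authorized_keys /desperate/root/.ssh/authorized_keys
chmod 600 /desperate/root/.ssh/authorized_keys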

To be extremely sure, we can verify the copy using md5sum (OPTIONAL) and check that the hash generated from both files is identical:

root@saviour:~# md5sum /root/.ssh/authorized_keys
17c1bba0ef42de1899b650b60dede12b  /root/.ssh/authorized_keys
root@saviour:~# md5sum /desperate/root/.ssh/authorized_keys
17c1bba0ef42de1899b650b60dede12b  /desperate/root/.ssh/authorized_keys

If we just want to restore ssh key connectivity, we should be good to go: simply turn off Saviour, move Desperate’s disk back into the Desperate server and, once it’s up and running, try to ssh to it using Desperate’s IP/FQDN.
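A small note for when you detach the disk (now, or after the password fix below): unmount it cleanly first, so all pending writes are flushed:

umount /desperate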

If you also have username and password connectivity to restore, here is an example for the root user only. The same approach works for any other user, but I won’t cover that here, as I would recommend restoring root and making all the other changes from the restored server, to avoid misconfigurations.

Restore SSH root password access

The password of a user is stored in /etc/shadow – not in clear text, of course.

If you have forgotten the root password, let’s set a root password on Saviour, and use what we can find in that file to update the one on Desperate.

Using the command passwd, as root, we can immediately set/update the password – this time, take a note of it! 😉

And after having set it, we can grab the line where it is stored using grep, for example (or by simply opening the file).

root@saviour:~# passwd
New password:
Retype new password:
passwd: password updated successfully
root@saviour:~# grep root /etc/shadow
root:$y$j9T$usUs5.xlf7HQj90AaeYYN.$SR2YO6yamYA1L59bUa193ndPgiyt1nEgCfkgjXEAxJ9:19986:0:99999:7:::

In this example we need to replace the line that starts with root in /desperate/etc/shadow with this one: root:$y$j9T$usUs5.xlf7HQj90AaeYYN.$SR2YO6yamYA1L59bUa193ndPgiyt1nEgCfkgjXEAxJ9:19986:0:99999:7:::
You can use your favourite editor to do so. Make sure you change ONLY that line and save.
You can use the same grep command to verify that it matches.
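If you prefer a one-liner to an editor, something like the following should do the trick – it’s only a sketch, so take a backup first (it assumes a standard crypt hash, which contains no characters clashing with the | delimiter used here):

cp /desperate/etc/shadow /desperate/etc/shadow.bak
ROOTLINE=$(grep '^root:' /etc/shadow)
sed -i "s|^root:.*|${ROOTLINE}|" /desperate/etc/shadow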

root@saviour:~# grep root /etc/shadow
root:$y$j9T$usUs5.xlf7HQj90AaeYYN.$SR2YO6yamYA1L59bUa193ndPgiyt1nEgCfkgjXEAxJ9:19986:0:99999:7:::
root@saviour:~# grep root /desperate/etc/shadow
root:$y$j9T$usUs5.xlf7HQj90AaeYYN.$SR2YO6yamYA1L59bUa193ndPgiyt1nEgCfkgjXEAxJ9:19986:0:99999:7:::

At this stage, the root user on Desperate should have the same password that you have set on Saviour.

Turn off Saviour, move the disk back to Desperate, turn it on and test! If all works, let’s thank Saviour, and delete it.

I hope this helps and yes… time to think about backup strategies 😉

Happy restoring! 😉

qBittorrent on Raspberry Pi

…in case you don’t want to spend too much time and want a nice torrent client with a web interface on your Raspberry Pi, qBittorrent is probably a good option (and it’s full of settings, too).

Of course, we need to install the package (we install the one with NO GUI, only web interface):

apt install qbittorrent-nox

We also want to create a user with minimum privileges to run this software.
We are going to create a user called torrent, part of a new group called torrent as well, with its home directory in /var/torrent.
After that, we will add our own user (e.g. user1) to the torrent group, so that we can access the downloaded files.
We are also creating a downloads folder, setting the right permissions.

adduser --system --group --home /var/torrent torrent
adduser user1 torrent
mkdir -p /var/torrent/downloads
chown torrent:torrent /var/torrent/downloads
chmod 770 /var/torrent/downloads

Now we’re going to create a new systemd service file: /etc/systemd/system/qbittorrent-nox.service

[Unit]
Description=BitTorrent Client
After=network.target

[Service]
Type=forking
User=torrent
Group=torrent
UMask=002
ExecStart=/usr/bin/qbittorrent-nox -d --webui-port=8080
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start the service:

systemctl daemon-reload
systemctl enable qbittorrent-nox
systemctl start qbittorrent-nox
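Optionally, you can check that the service came up and that something is listening on the WebUI port (ss should already be available on your Pi; the grep is just a quick filter):

systemctl status qbittorrent-nox
ss -tlnp | grep 8080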

You should now be able to connect to your raspberry pi’s IP, on port 8080.

You will be asked to log in.
Note: the default username is admin, and the password is adminadmin.
Once logged in, under Settings > WebUI, you can whitelist your local network (e.g. 192.168.10.0/24) to skip authentication.

…and yes, that’s it 😉

I told you it was a quick one!

Enjoy!

rTorrent – ruTorrent – Lighttpd

Messy article, but let’s try to summarise a bit.

We’re going to run a very lightweight Torrent client setup, with graphical interface, on a Debian server (an old raspberry pi, to be precise).

Install rtorrent

apt-get install rtorrent

Configure rtorrent to run as a service

Configure user

Let’s create a user torrent to run the rtorrent service

useradd -m -d /var/torrent torrent

Create the service

Create the file /etc/systemd/system/rtorrent.service

[Unit]
Description=rTorrent Daemon
After=network.target

[Service]
Type=simple
KillMode=process
User=torrent
ExecStartPre=/bin/bash -c 'if test -e /var/torrent/rtorrent/.session/rtorrent.lock && ! pidof rtorrent >/dev/null; then rm -f /var/torrent/rtorrent/.session/rtorrent.lock; fi'
ExecStart=/usr/bin/rtorrent
WorkingDirectory=/var/torrent/rtorrent
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable the service

systemctl daemon-reload
systemctl enable rtorrent

Configure rtorrent for user torrent

As root, run su - torrent to become the torrent user. Then run the following one-liner (ref. CONFIG Template · rakshasa/rtorrent Wiki · GitHub):

curl -Ls "https://raw.githubusercontent.com/wiki/rakshasa/rtorrent/CONFIG-Template.md" \
    | sed -ne "/^######/,/^### END/p" \
    | sed -re "s:/home/USERNAME:$HOME:" >~/.rtorrent.rc
mkdir -p ~/rtorrent/

Edit your new .rtorrent.rc file, adding/adjusting the parameters below:

# Some extra parameters
dht.mode.set = auto
dht_port = 6881
protocol.pex.set = yes
trackers.use_udp.set = yes

# This will run XMLRPC interface (disabling manual/text mode version)
system.daemon.set = true
scgi_port = 127.0.0.1:5000

Note that 50000 is the default listening port set by the template (network.port_range), so that’s the one you should open/forward on your router.

Install Lighttpd

Let’s install the basic packages and set the right permissions:

apt-get install lighttpd php-cgi
lighty-enable-mod fastcgi 
lighty-enable-mod fastcgi-php

# Get latest ruTorrent package and install
cd /var/www/html/
git clone https://github.com/Novik/ruTorrent.git

chown -R www-data:www-data /var/www/
service lighttpd force-reload

Verify by creating a page called test.php in /var/www/html/ with the content <?php phpinfo(); ?> and check in the browser if it works. It should. If not… tough life 😉
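If you prefer the command line, the same check looks roughly like this (test.php is a throwaway file, delete it once you’re done):

echo '<?php phpinfo(); ?>' > /var/www/html/test.php
curl -sI http://localhost/test.php   # expect HTTP/1.1 200 OK with Content-Type: text/html
rm /var/www/html/test.php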

Now create the web configuration file for rTorrent: /etc/lighttpd/conf-available/20-rtorrent.conf (keep the .conf extension, since lighttpd only includes *.conf files from conf-enabled)

server.modules += ( "mod_fastcgi" )
server.modules += ( "mod_scgi" )

mimetype.assign             = (
      ".rpm"          =>      "application/x-rpm",
      ".pdf"          =>      "application/pdf",
      ".sig"          =>      "application/pgp-signature",
      ".spl"          =>      "application/futuresplash",
      ".class"        =>      "application/octet-stream",
      ".ps"           =>      "application/postscript",
      ".torrent"      =>      "application/x-bittorrent",
      ".dvi"          =>      "application/x-dvi",
      ".gz"           =>      "application/x-gzip",
      ".pac"          =>      "application/x-ns-proxy-autoconfig",
      ".swf"          =>      "application/x-shockwave-flash",
      ".tar.gz"       =>      "application/x-tgz",
      ".tgz"          =>      "application/x-tgz",
      ".tar"          =>      "application/x-tar",
      ".zip"          =>      "application/zip",
      ".mp3"          =>      "audio/mpeg",
      ".m3u"          =>      "audio/x-mpegurl",
      ".wma"          =>      "audio/x-ms-wma",
      ".wax"          =>      "audio/x-ms-wax",
      ".ogg"          =>      "application/ogg",
      ".wav"          =>      "audio/x-wav",
      ".gif"          =>      "image/gif",
      ".jar"          =>      "application/x-java-archive",
      ".jpg"          =>      "image/jpeg",
      ".jpeg"         =>      "image/jpeg",
      ".png"          =>      "image/png",
      ".xbm"          =>      "image/x-xbitmap",
      ".xpm"          =>      "image/x-xpixmap",
      ".xwd"          =>      "image/x-xwindowdump",
      ".css"          =>      "text/css",
      ".html"         =>      "text/html",
      ".htm"          =>      "text/html",
      ".js"           =>      "text/javascript",
      ".asc"          =>      "text/plain",
      ".c"            =>      "text/plain",
      ".cpp"          =>      "text/plain",
      ".log"          =>      "text/plain",
      ".conf"         =>      "text/plain",
      ".text"         =>      "text/plain",
      ".txt"          =>      "text/plain",
      ".dtd"          =>      "text/xml",
      ".xml"          =>      "text/xml",
      ".mpeg"         =>      "video/mpeg",
      ".mpg"          =>      "video/mpeg",
      ".mov"          =>      "video/quicktime",
      ".qt"           =>      "video/quicktime",
      ".avi"          =>      "video/x-msvideo",
      ".asf"          =>      "video/x-ms-asf",
      ".asx"          =>      "video/x-ms-asf",
      ".wmv"          =>      "video/x-ms-wmv",
      ".bz2"          =>      "application/x-bzip",
      ".tbz"          =>      "application/x-bzip-compressed-tar",
      ".tar.bz2"      =>      "application/x-bzip-compressed-tar",
      # default mime type
      ""              =>      "application/octet-stream",
     )

scgi.server = ( "/RPC2" =>
    ( "127.0.0.1" =>
        (
            "host" => "127.0.0.1",
            "port" => 5000,
            "check-local" => "disable"
        )
    )
)

fastcgi.server = ( ".php" => ((
                 "bin-path" => "/usr/bin/php-cgi",
                 "socket" => "/tmp/php.socket"
)))

Enable the configuration by linking it into conf-enabled, then reload lighttpd

ln -sf /etc/lighttpd/conf-available/20-rtorrent.conf /etc/lighttpd/conf-enabled/20-rtorrent.conf
systemctl reload lighttpd

Now, if all went well, you should be able to start the rtorrent service

systemctl start rtorrent
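If it doesn’t start, the usual suspects are the session directory or the SCGI port; these two quick checks (assuming the configuration above) usually tell you what’s wrong:

systemctl status rtorrent
ss -tlnp | grep 5000    # the SCGI socket that ruTorrent talks to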

…and theoretically, even open ruTorrent in your browser.

ruTorrent is currently configured by default to connect to 127.0.0.1:5000. You can change these parameters within its config.php file.

However, since we used the default settings in this how-to, you should be able to reach it simply at:

http://<ip_of_your_raspberry_pi>/ruTorrent/

Give it a try… and good luck! 😉

Mount Windows Network drives in WSL

In WSL, you can access the local disks by navigating under /mnt/ – for example /mnt/c/ for the C: drive.

Sometimes, network drives mounted on boot aren’t automatically mounted within your WSL Linux shell. You can do it manually using the following commands:

# For a drive already mapped in Windows (e.g. Z: drive)
$ sudo mkdir /mnt/z
$ sudo mount -t drvfs Z: /mnt/z

# For a network drive accessible via \\myserver\dir1 in Explorer
$ sudo mkdir /mnt/dir1
$ sudo mount -t drvfs '\\myserver\dir1' /mnt/dir1
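If you want the network drive to be mounted automatically every time WSL starts, an /etc/fstab entry (inside WSL) like the one below should do it – a sketch based on the Z: example above, assuming the default automount settings (which process /etc/fstab on startup) are in place:

# /etc/fstab (inside WSL)
Z: /mnt/z drvfs defaults 0 0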

Convert FLV to MP4

Quick and dirty lines to convert FLV files into MP4.

To do so, you need the package ffmpeg.
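On Debian/Ubuntu and derivatives it’s just:

$ sudo apt-get install ffmpeg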

# Temporary move all the .flv files into a TMP directory
$ find . -type f -name "*.flv" -exec mv {} ../TMP/ \;

# Remove all spaces from the filename
$ cd ../TMP/
$ for file in *' '* ; do mv -- "$file" "${file// /_}" ; done

# Now, convert all the files from FLV to MP4 using ffmpeg (stream copy, no re-encoding)
$ for file in *.flv ; do ffmpeg -i "$file" -c copy "${file%.flv}.mp4" ; done

During the process, you might get some errors: you will see some .mp4 files with a size of 0 bytes.

You can then remove the broken files and try converting them again, this time re-encoding the video stream:

# Find and delete files size 0
$ find . -type f -size 0 -print0 | xargs -0 rm

# Try re-encoding the video (the -n flag skips files whose .mp4 already exists)
$ for file in *.flv ; do ffmpeg -n -i "$file" -vcodec libx264 -acodec copy "${file%.flv}.mp4" ; done

GIT tricks when you mess up a bit

Sometimes, especially at the beginning, a developer makes loads of mistakes, and here are some workarounds that can help. 😉

Rewrite master’s history (TO AVOID)

git log --pretty=oneline

git rebase -i --root => choose which commits to keep as they are (pick) and which to squash into the previous one (s)

-> for each resulting commit you will be asked to write/adjust the commit message in the editor

Rewrite history of the current branch (squash)

Firstly, check how many commits ahead/behind your branch is compared to the reference one (e.g. master/main).

STEP 1

git rebase -i HEAD~<number_of_commits> (e.g. git rebase -i HEAD~3) => use the number of commits “ahead” that you found.

Make the required changes (pick/s) and save.
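The todo list that opens looks roughly like this (commit IDs and messages here are made up); keep the first commit as pick and mark the ones to meld into it with s:

pick a1b2c3d Add login endpoint
s d4e5f6a Fix typo in login endpoint
s 0f9e8d7 More login fixes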

Then:
git push origin +mybranch

STEP 2

git rebase (rebase your branch on top of the updated reference branch)
Any conflicts?

YES -> fix them, git add . , then git rebase --continue
NO -> git push origin +mybranch (the + makes it a forced push)

Fix a wrong merge on master

git revert -m <parent_number> <merge_commit_ID>

NOTE: -m does NOT take a number of commits; it selects which parent of the merge commit to treat as the mainline (the side whose state you want to keep).

A -\
    K
    |
    L
    |
B -/
|
...

Here an example.
You want to go back to commit B (master).
A, B, K, L are commit IDs – you can find them using the git log command.

From B to A (A is the last merge that we want to delete), there are 2 commits, K and L.

If you want to go back to B:

git revert -m2 A
where the 2 after -m is the parent number of merge commit A to treat as the mainline – it is not a count of commits. You can list A’s parents with git show A and pick the number that corresponds to the state you want to return to (B in this example).

Happy fixing! 🙂

Reverse SSH Tunnel

To allow LOCAL_SERVER behind a firewall/NAT/Home Router to be accessible via SSH from a REMOTE_SERVER you can use a ssh tunnel (reverse).

Basically, from your LOCAL_SERVER you forward port 22 (ssh) to another port on REMOTE_SERVER, for example 8000, and then you can ssh into your LOCAL_SERVER from the public IP of the REMOTE_SERVER via port 8000.

To do so, you need to run the following from LOCAL_SERVER:

 local-server: ~ ssh -fNR 8000:localhost:22 <user>@<REMOTE_SERVER>
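For reference, this is what those flags do (same command as above, just commented):

# -f  go to the background after authentication
# -N  do not run a remote command (we only want the port forwarding)
# -R  remote forward: listen on port 8000 on REMOTE_SERVER and relay it to localhost:22
ssh -fNR 8000:localhost:22 <user>@<REMOTE_SERVER>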

On REMOTE_SERVER you can use netstat -nlpt to check if there is a service listening on port 8000.

Example:

remote-server ~# netstat -nplt | grep 8000
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      1396/sshd: root
tcp6       0      0 :::8000                 :::*                    LISTEN      1396/sshd: root

In this case, REMOTE_SERVER accepts connections to port 8000 from ALL interfaces (0.0.0.0).
This means that, if REMOTE_SERVER has IP 217.160.150.123, you can connect to LOCAL_SERVER from a THIRD_SERVER using the following:

third-server: ~ ssh -p 8000 <user_local_server>@217.160.150.123

NOTE. If you see that the LISTEN socket on REMOTE_SERVER is bound to 127.0.0.1 and not to 0.0.0.0, it is probably because GatewayPorts is set to no in /etc/ssh/sshd_config on REMOTE_SERVER.
The best setting is clientspecified (rather than yes), as per this post.

Set GatewayPorts to clientspecified (or yes) and restart the sshd service.

With clientspecified, you can also choose to allow connections to the forwarded port only from the REMOTE_SERVER itself, to increase security.
To do so, you need to use the following ssh command from LOCAL_SERVER:

 local-server: ~ ssh -fNR 127.0.0.1:8000:localhost:22 <user>@<REMOTE_SERVER>

With netstat, you’ll see now this:

remote-server:~# netstat -nplt | grep 8000
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      1461/sshd: root

With this forward, you will be able to access LOCAL_SERVER ONLY from the REMOTE_SERVER itself:

remote-server: ~ ssh -p 8000 <user_local_server>@localhost

I hope this helps 🙂

Happy tunnelling!

Virtualhost and Letsencrypt

Quick guideline about how to install multiple sites on a single server using Virtualhosting, and have the SSL certificate installed and automatically renewed using Letsencrypt.

There are plenty of how-tos online, but I wanted to have a quick reference page for myself 🙂

Firstly, this has been tested on Debian 12, but it should work on previous Debian versions and Ubuntu too.

Apache setup and virtualhosts

Firstly, install Apache and other packages that you will most likely need, especially if you run WordPress or any PHP-based framework:

apt-get install apache2 php php-mysql libapache2-mod-php php-gd php-curl net-tools telnet dos2unix

Now, you should create the folder structure to host your sites. I used /var/www/virtualhosts/<site>/public_html

I made sure permissions were set correctly too:

chown -R www-data:www-data /var/www/
find /var/www -type d -exec chmod 775 {} \;

Now, create a virtualhost file for each site. In the following example we are going to create the conf file for site1.

Create /etc/apache2/sites-available/site1.conf

<VirtualHost *:80>
    ServerName site1.com
    ServerAlias www.site1.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/virtualhosts/site1/public_html

    <Directory /var/www/virtualhosts/site1/public_html>
        Options -Indexes +FollowSymLinks
        AllowOverride All
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/site1-error.log
    CustomLog ${APACHE_LOG_DIR}/site1-access.log combined
</VirtualHost>

Do the same for all the sites you have.

Once done, upload the content of your sites into the respective public_html folders.

Disable all the default Apache sites and enable the ones you have created. You can use the commands a2dissite and a2ensite or manually create symbolic links into /etc/apache2/sites-enabled/
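For example, with the default Debian site and our site1 (adjust the names to match your own .conf files):

a2dissite 000-default
a2ensite site1
systemctl reload apache2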

Check that all the virtualhosts are properly loaded:

source /etc/apache2/envvars
apache2 -S

You should see all your sites under the *:80 section.
Right now Apache serves the hosted sites only on port 80 – no 443 yet.

Now, you can use curl to do some tests to see if the virtual hosts are responding correctly.

~ curl -IH'Host: site1.com' http://<server_IP>  # to get the header of site1.com
~ curl -H'Host: site1.com' http://<server_IP>  # to get the full page of site1.com

Hopefully all works (if not, troubleshoot it heheh). Now let’s point our DNS to our server and test directly using the domain names.

All good? Cool!

Make sure now that your firewall allows ports 80 and 443. Even if you’re considering serving your site ONLY over SSL (port 443), the certbot tool that handles the automatic renewal of the certificates needs port 80 open.

Installation and configuration of certbot – Letsencrypt

As root, issue the below commands:

apt-get install snapd
snap install core
snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot

You now have the certbot tool installed.

Following the above example of site1.com, we are going now to get the SSL certificate for that site (even the www.site1.com one), and let the tool install and configure everything automatically.

certbot --apache -d site1.com -d www.site1.com

Hopefully all goes well 🙂 Repeat for each of your sites accordingly.

Once done with all the sites, just to make sure the auto-renewal works, you can also issue a dry-run check:

certbot renew --dry-run

Letsencrypt certificates last 90 days, but the certbot tool installed in this way handles the renewal automatically.
If you’re curious where this is configured (you might think about cron and be unable to find anything – like it happened to me), run this command and you should find certbot listed:

systemctl list-timers
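With the snap-based install, the timer is usually called snap.certbot.renew.timer, so something like this should show it (the name may differ with other install methods):

systemctl list-timers | grep -i certbot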

More information is available on the official website at this address.

You can now test using curl again, but hitting https instead of http:

~ curl -IH'Host: site1.com' https://<server_IP>  # to get the header of site1.com
~ curl -H'Host: site1.com' https://<server_IP>  # to get the full page of site1.com

Oh, one note.
By default, at least at the time I’m writing this article, once you install the certificate the *:80 virtualhost of your site will be modified, adding the following lines, which force a permanent (301) redirect from http to https.

RewriteEngine on
RewriteCond %{SERVER_NAME} =www.site1.com [OR]
RewriteCond %{SERVER_NAME} =site1.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]

If it’s what you want – cool.
If you still want to serve your site on http AND https, comment out (or delete) those new lines.

Happy virtualhosting and ssl’ing! 🙂