Category Archives: Linux

Reduce fail2ban.sqlite3 file

You might notice that the file /var/lib/fail2ban/fail2ban.sqlite3 keeps growing.

Here are a few commands that allow you to dig into the db and clean up some rows, reducing its size.

Open the db:
sqlite3 /var/lib/fail2ban/fail2ban.sqlite3

Now, check all the tables available:
sqlite> .tables
bans fail2banDb jails logs

Generally, the “bans” table is the one that uses the most space. You can check the content of this table using a SELECT statement like:
sqlite> SELECT * FROM bans limit 1;
With this, you can inspect a single row, with all its fields and content.
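To get an idea of how many rows you would remove, you can first count the old entries (same timeofban filter used by the DELETE below):

sqlite> SELECT COUNT(*) FROM bans WHERE DATE(timeofban, 'unixepoch') < '2020-01-01';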

If you identify, for example, that there are very old entries (in my case, entries from two years back, 2018 and 2019), you can trim all those entries with this command:
sqlite> DELETE FROM bans WHERE DATE(timeofban, 'unixepoch') < '2020-01-01'; VACUUM;

After running the above command, my db shrank.
A restart of the fail2ban service will reload the db and release the space used by the previous file.

Sources:
https://jim-zimmerman.com/?p=1234
https://serverfault.com/questions/1002315/fail2bans-database-is-too-large-over-500mb-how-do-i-get-it-to-a-reasonable-s

Linux WiFi manual setup

You might have faced this: your laptop no longer boots into your nice GUI, with Network Manager handling your wifi connection – maybe due to a failed update or a broken package.

Well, it happened to me for exactly that reason: some issues with an upgrade. And how can you fix a broken package or dependency without an internet connection?

Oooh yes, that’s a nightmare! Thankfully, I found a handy article, from which I list below the commands that helped me restore the connection on my laptop, allowing me to fix the upgrade and restore its functionality.

NOTE: I had iwconfig and wpasupplicant already installed. If not, I would have had to download the packages and all their dependencies and install them manually with the dpkg -i command.

Identify the name of your wifi interface

iwconfig

This should return something like wlp4s0

Assuming you already know the SSID (e.g. HomeFancyWiFi) of your wifi and the password (e.g. myWiFiPassw0rd), you can directly run these commands:

wpa_passphrase HomeFancyWiFi myWiFiPassw0rd | sudo tee /etc/wpa_supplicant.conf
wpa_supplicant -c /etc/wpa_supplicant.conf -i wlp4s0

This will generate the config file and connect to the wifi. Once you see that all works as expected, you can use the -B flag to put wpa_supplicant in the background and release the terminal.

wpa_supplicant -B -c /etc/wpa_supplicant.conf -i wlp4s0

Alternatively, you can move to another tab (ALT+F1, F2, F3… in the text-mode console) and run dhclient to get an IP and the DNS set.

dhclient wlp4s0

Once done, you can run iwconfig to verify that the interface has an IP, and do some basic network troubleshooting (ping etc.) to make sure all works; then you can go back to fixing your broken upgrade 🙂
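For example (interface name from above; any reachable public IP works as a ping target):

ip addr show wlp4s0
ping -c 3 8.8.8.8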

MySQL Replication

This is a copy and paste of some old notes about MySQL replication. I have never fully reviewed this content, nor finished the script. I am saving it anyway, in case I need some of this info in the future 😉
MySQL Replication NOTES

Master Setup

/etc/my.cnf changes

# The following items need to be set:
log-bin=/var/lib/mysqllogs/ServerID-theServerShortName-binary-log
binlog-format=MIXED
expire_logs_days=7
server-id=<server_number>

# replication user
PASS=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w12 | head -n1)
echo "This is the password (take note): $PASS"
mysql -e "GRANT REPLICATION SLAVE ON *.* to repl_user IDENTIFIED BY '$PASS'"

# Dump and copy across
mysqldump -A --flush-privileges --master-data=1 | gzip -1 > ~myuser/master.sql.gz
scp ~myuser/master.sql.gz $SLAVEIP:/home/myuser

# Restart Master
service mysqld restart

# === take notes of the following ====
# Get replication POSITION
zgrep -m 1 -P 'CHANGE MASTER' ~myuser/master.sql.gz | sed 's/^.*\(MASTER_LOG_FILE=.*\)$/\1/'

# Get new MySQL password to set on the slave
grep password /root/.my.cnf | awk -F= '{print $2}'

Slave Setup

# Verify timezones match between master and slave!

/etc/my.cnf changes

# The following items need to be set:
relay-log=/var/lib/mysqllogs/ServerID-theServerShortName-relay-log
relay-log-space-limit = 16G
read-only=1
server-id=<server_number>
report-host=<server_name> # This allows 'show slave hosts;' to work on the master.

# Import the data
echo "zcat /home/myuser/master.sql.gz | mysql"

# Update /root/.my.cnf with password set in the Master (importing ALL the db will overwrite users and passwords too)

# Restart Slave
service mysqld restart

# Enable replication (replace accordingly with the position from the Master's steps above)
mysql
mysql> CHANGE MASTER TO MASTER_HOST = '$MASTERIP', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = '$PASS', MASTER_LOG_FILE='752118-Db01A-binary-log.000001', MASTER_LOG_POS=107;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
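A quick health check (not in the original notes): both replication threads should say Yes and the lag should trend to zero:

mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'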

==========================================================================
Trying to automate: ****DRAFT*****

#>>> On MASTER <<<#

MASTERIP=""
SLAVEIP=""

# On DEDICATED:
MYHOST=$(hostname -a)
SERVERID=$(echo $MYHOST| awk -F- '{print $1}')

# On CLOUD:
MYHOST=$(hostname)
SERVERID=$(echo $MYHOST| awk -F- '{print $1}')

#>> Create a dump and copy across
mysqldump -A --flush-privileges --master-data=1 | gzip -1 > ~myuser/master.sql.gz
scp ~myuser/master.sql.gz $SLAVEIP:/home/myuser/

#>> Set my.cnf

#> Unset possible pre-sets
for LINE in log-bin binlog-format expire_logs_days server-id ; do sed -i "/^.*$LINE.*=.*$/ s/^/#/" /etc/my.cnf ; done

#> Make sure all are commented out
for LINE in log-bin binlog-format expire_logs_days server-id ; do grep $LINE /etc/my.cnf ; done

#> Apply new parameters
PASS=$(tr -cd '[:alnum:]' < /dev/urandom | fold -w12 | head -n1)
sed -i "/\[mysqld\]/a \#REPLICATION\nlog-bin=\/var\/lib\/mysqllogs\/$SERVERID-binary-log\nbinlog-format=MIXED\nexpire_logs_days=7\nserver-id=$SERVERID" /etc/my.cnf

service mysqld restart

#>> Set replication user
mysql -e "GRANT REPLICATION SLAVE ON *.* to repl_user IDENTIFIED BY '$PASS'"

#>> Get output to run on the SLAVE

echo "zcat /home/myuser/master.sql.gz | mysql"

POSITION=$(zgrep -m 1 -P 'CHANGE MASTER' ~myuser/master.sql.gz | sed 's/^.*\(MASTER_LOG_FILE=.*\)$/\1/')
echo "mysql -e \"CHANGE MASTER TO MASTER_HOST = '$MASTERIP', MASTER_PORT = 3306, MASTER_USER = 'repl_user', MASTER_PASSWORD = '$PASS', $POSITION\""

# Example of what $POSITION expands to:
# MASTER_LOG_FILE='752118-Db01A-binary-log.000001', MASTER_LOG_POS=107;

#>>> On SLAVE <<<#
for LINE in relay-log relay-log-space-limit read-only server-id report-host ; do grep $LINE /etc/my.cnf ; done

relay-log=/var/lib/mysqllogs/ServerID-theServerShortName-relay-log
relay-log-space-limit = 16G
read-only=1
server-id=<server_number>

report-host=<server_number> #This allows show slave hosts; to work on the master.

 

Docker How to

This is a collection of notes extracted by the Udemy course Docker Mastery.

 

Install docker

 

  • Docker now has Ubuntu-style YY.MM versioning
  • prev Docker Engine => Docker CE (Community Edition)
  • prev Docker Data Center => Docker EE (Enterprise Edition) -> includes paid products and support
  • 2 release channels:
    • Edge: released monthly and supported for a month.
    • Stable: released quarterly and supported for 4 months (extended support via Docker EE)

 

 

Client -> the CLI installed on your current machine
Server -> the Engine, always on, which receives commands via the API from the Client

New format: docker <command> <subcommands> [opts]

 

Let’s play with Containers

Create a Nginx container:
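docker container run --publish 80:80 --detach nginx   # sketch: flags as explained below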

=> publish: connect local machine port (host) 80 to the port 80 of the container
=> detach: run the container in background
=> nginx: this is the image we want to run. Docker will check locally if there is a cached image; if not, it will pull the default public ‘nginx’ image from Docker Hub, using nginx:latest (unless you specify a version/tag)

NOTE: every time you do ‘run’, the Docker Engine won’t clone the image: it will run an extra layer on top of the image, assign a virtual IP, do the port binding (if requested) and run whatever is specified under CMD in the Dockerfile.

CURIOSITY: the name gets automatically created if not specified, picking from a random open source list of adjectives and famous scientists’ names.

Check what’s happening within a container
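A few of the commands this covers (a sketch; <name> is a placeholder):

docker container top <name>       # list the processes running inside the container
docker container inspect <name>   # container metadata/config in JSON
docker container stats            # live resource usage for all containers
docker container logs <name>      # show the container’s output
docker container rm <name>        # remove a stopped container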

 

=> Safety measure: you can’t remove running containers, unless you use -f to force.

 

The process that runs in the container is clearly visible and listed on the main host by simply running ps aux.
In fact, a process running in a container is a process running on the host machine, just in a separate user space.

 

Change default container’s command
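docker container run -it nginx bash   # sketch: flags as explained below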

=> -t -> allocates a pseudo-TTY; -i -> interactive
=> ‘bash‘ -> the command we want to run once the container starts
When you create this container, you change the default command to run.
This means that the nginx container started ‘bash’ instead of the default ‘nginx’ command.
Once you exit, the container stops. Why? Because a container runs only as long as its main process runs.

Instead, if you want to run ‘bash’ as an ADDITIONAL command, you need to use this, on an EXISTING/RUNNING container:
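docker container exec -it <name> bash   # sketch: runs bash as an extra process in the running container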

 

Run a minimal CentOS image (container)

 

Quick cleanup [DANGEROUS!]
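docker container rm -f $(docker container ls -aq)   # sketch: force-removes ALL containers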

 


Run CentOS container
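docker container run -it --name centos centos bash   # sketch: image and name assumed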

 

List running containers
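docker container ls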

 

List ALL container (running and stopped)
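docker container ls -a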

 

Start existing container and get prompt
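docker container start -ai <name>   # sketch: -a attach, -i interactive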

 

ALPINE – minimal image (less than 4MB)
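docker pull alpine
docker container run -it alpine sh   # sketch: sh, since there is no bash (see below)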

 

Alpine has NO bash in it. It comes with just sh .
You can use apk to install packages.

NOTE: you can only run commands that already exist in the image.


Docker NETWORK

The Docker daemon creates a bridged network, using NAT (docker0/bridge).
Each container gets an interface on this network => by default, each container can communicate with the others without the need to expose ports using -p. The -p / --publish flag is there to “connect” a host port to a container port.

You can anyway create new virtual networks and/or add multiple interfaces, if needed.

Some commands:
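For example (a sketch):

docker network ls                 # list the networks (bridge, host, none, …)
docker network inspect bridge     # subnet, attached containers, etc.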

=> Bridge – the network containers get connected to by default
=> Host – attaches a container DIRECTLY to the host’s network, bypassing the bridge network
=> none – removes eth0 from the container, leaving only the ‘localhost’ interface
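docker network create my_vnet   # sketch: the name is arbitrary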

 

=> by default it uses the ‘bridge’ driver
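docker network connect my_vnet web   # sketch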

=> adds a new network interface, part of my_vnet, to the container ‘web’


DNS

Because of the ephemeral nature of containers (create/destroy), you cannot rely on IPs.
Docker uses the containers’ names as hostnames. This feature is NOT enabled by default on the
standard bridge network, but it is on any newly created network.

Example where we run two Elasticsearch containers, on mynet using the alias feature:

--net-alias <name>
=> this helps in setting the SAME name (Round Robin DNS), for example, if you want to run a pool of search servers
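A sketch of the example (image/tag assumed; any two identical nodes would do):

docker network create mynet
docker container run -d --net mynet --net-alias search elasticsearch:2
docker container run -d --net mynet --net-alias search elasticsearch:2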

 

To quickly test, you can use this command to hit “search” DNS name, automatically created:
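docker container run --rm --net mynet centos curl -s search:9200   # sketch: flags as explained below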

-> example of running a specific command from a specific image, removing all the container’s data afterwards (quick check). In this case, CentOS has curl by default, so you can run it.
Please note the --rm flag: it creates a container that gets removed as soon as it exits. Very handy to quickly test a container.

Running it multiple times, you should see the two Elasticsearch nodes replying.

 


Docker IMAGES

An image is the app binaries + all the required dependencies + metadata.
There is NO kernel and there are no drivers (these are shared with the host OS).

Official images have:

  • only ‘official’ in the description
  • NO ‘/’ in the name
  • extensive documentation

NON-official images generally have the format <organisationID>/<appname>
(e.g. mysql/mysql-server => this is not officially maintained by Docker but by the MySQL team.)

 

Images are identified by TAGs.
You can use tags to get the image that you want.
Images can have multiple tags, so you might end up getting the same image using
different tags.
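For example (these specific tags are purely illustrative):

docker pull nginx            # implies nginx:latest
docker pull nginx:1.11.9     # a specific version; may resolve to the same image ID as another tag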

 

IMAGE Layers

Images are designed to use a union file system.
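docker image history nginx   # sketch: any local image name works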

=> shows the changes in layers

 

unique SHA per layer.

When you create an image, you start from a base layer.
For example, if you pull two images based on Ubuntu 16.04, when you get the second image you will only download the extra missing layers, as you have already downloaded and cached the base Ubuntu 16.04 layer (same SHA).
=> you never store the same layer more than once on the filesystem
=> you don’t upload/download layers that already exist on the other side

It’s like the concept of a VM snapshot.
The original image is read-only. Whatever you change/add/modify/remove in the running container is stored in a rw layer.
If you run multiple containers from the same image, you get an extra layer created per container, which stores just the differences from the original image.

# Tag an image from nginx to myusername/nginx
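docker image tag nginx myusername/nginx   # sketch: username is a placeholder
docker login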

=> creates a file here: ~/.docker/config.json
Make sure to run docker logout on untrusted machines, to remove this file.

 

# Push the image
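docker image push myusername/nginx   # sketch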

 

# Change tag and re-push
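docker image tag myusername/nginx myusername/nginx:justtestdontuse   # sketch
docker image push myusername/nginx:justtestdontuse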

 

=> it understands that the image already in the hub as myusername/nginx is the same as myusername/nginx:justtestdontuse, so it doesn’t upload any content (space saving), but it creates a new entry in the hub.

 


Dockerfile

This file describes how your image should be built. It generally starts from a base image, on top of which you add your customisations. This is also best practice.

 

FROM -> use this as the initial layer on which to build the rest.
Best practice is to use an official image supported by Docker Hub, so you can be
sure that it is always up to date (security-wise as well).

Any extra line in the file is an extra layer in your image. The use of && among commands helps keep multiple commands on the same layer.

ENV -> variables injected into the container (best practice, as you don’t want any sensitive information stored within the image).

RUN -> generally commands to install software / configure.
There is often a RUN for logging, to redirect logging to stdout/stderr. This is best practice: no syslog etc.

EXPOSE -> sets which ports can be published, i.e. which ports I allow the container to receive traffic on. You still need the --publish (-p) option to actually expose the port.

CMD -> final command that will be executed (generally the main binary)
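Putting those directives together, a minimal sketch (values are illustrative, loosely modelled on the official nginx image):

FROM nginx:latest
ENV NGINX_VERSION 1.13.6
# best practice: redirect logs to stdout/stderr
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]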

 

To build the container from the Dockerfile (in the directory where Dockerfile exists):
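docker image build -t customnginx .   # sketch: the tag name is arbitrary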

 

Every time one step changes, that step and every step after it are re-created.
This means that you should keep the bits that change less frequently at the top, and put the ones that change more frequently at the bottom, to make building the image quicker.

 

 

Example: CentOS container with Apache and custom index.html file:
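A sketch of what it could look like (package names and paths assumed):

FROM centos:7
RUN yum -y update && yum -y install httpd
COPY index.html /var/www/html/index.html
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]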

 

Example: using the Alpine httpd image with a custom index.html file:
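A sketch, assuming the official httpd:alpine image (which serves from /usr/local/apache2/htdocs):

FROM httpd:alpine
WORKDIR /usr/local/apache2/htdocs
COPY . .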

 

COPY . . copies all the content of the current directory into the WORKDIR directory.

 


A container should be immutable and ephemeral, which means that you could remove/delete/re-deploy it without affecting the data (database, config files, key files etc…).

Unique data should be somewhere else => Data Volumes and Bind Mounts

 

Volumes

Need manual deletion -> preserve the data

In the Dockerfile, the VOLUME command specifies that the container will create a new volume location on the host and mount it at the specified path in the container. All the files will be preserved if the container gets removed.

 

Let’s try using mysql container:
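docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql   # sketch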

This container was created using VOLUME /var/lib/mysql  command in the Dockerfile.
Once the container got created, a new volume got created as well and mounted. Using  inspect we can see those details.
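docker container inspect mysql      # sketch: look at the "Mounts" section
docker volume ls
docker volume inspect <volume_id>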

 

Every time you create a container, it will create a new volume, unless you specify one.

You can attach a specific named volume to multiple containers using the -v <volume_name>:<container_path> option flag.
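For example (a sketch; ‘mysql-db’ is an arbitrary volume name):

docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker container run -d --name mysql3 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql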

Checking the mysql2 and mysql3 containers:
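docker container inspect -f '{{ .Mounts }}' mysql2   # sketch: both should show the same mysql-db volume
docker container inspect -f '{{ .Mounts }}' mysql3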

 

Bind Mounting

Mount a directory of the host on a specific container’s path.

Same flag as volumes, -v, but it starts with a path and not a name.
Use the -v <host_path>:<container_path> option flag.

This can be handy for a webserver, for example, that shares the /var/www folder stored locally on the host.
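For example (a sketch; run it from the directory you want to share, nginx docroot assumed):

docker container run -d --name webhost -p 80:80 -v $(pwd):/usr/share/nginx/html nginx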

 


Docker Compose

  • YAML file (replaces the shell script where you would otherwise save all your docker run commands)
    1. containers
    2. network
    3. volumes
  • CLI docker-compose (locally)

This tool is ideal for local development and testing – not for production.

By default, Compose prints logs to stdout.

On Linux, you need to install the binary separately. It is available at https://github.com/docker/compose/releases.
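A minimal docker-compose.yml sketch (service name and image are just examples):

version: '3'
services:
  web:
    image: nginx
    ports:
      - '80:80'

Then docker-compose up -d starts everything in the background, and docker-compose down tears it all down.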

 

 

Fail2ban Debian 9

Scratch pad with conf files to configure Fail2ban on Debian 9

This setup will configure Fail2ban to monitor SSH and keep track of the bad guys. Every time an IP gets banned, it will be stored in /etc/fail2ban/ip.blacklist.
This file gets processed every time Fail2ban restarts.
A cron will sanitise the file daily.

HOW TO

1) Create a custom action file: /etc/fail2ban/action.d/iptables-allports-CUSTOM.conf

2) Create /etc/fail2ban/jail.local

3) Remove the default Debian jail configuration (it is integrated in the above custom jail.local file):
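rm /etc/fail2ban/jail.d/defaults-debian.conf   # sketch: the jail file shipped by the Debian package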

4) Set this cron:

5) Run the cron manually once, just to be sure all works AND to have an empty file

6) Restart Fail2ban … and good luck 😉
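systemctl restart fail2ban   # or: service fail2ban restart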

 

 

Ubuntu 16.04 – Wake on LAN

I have struggled a bit trying to understand why my Ubuntu 16.04 wasn’t waking up with the common etherwake command.

I found the solution on this link:

You should disable the Default option in the Network Manager GUI and enable only the Magic option. If you try this, then you should check that everything is OK by opening a shell and issuing this command:
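sudo ethtool <myNetinterface>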

You should see the line:
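Wake-on: g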

If it’s not g but d or something else, something could be wrong.

Once done that, and verified with the command  ethtool <myNetinterface> | grep "Wake-on:" , all started to work again 🙂

 

Ubuntu 16.04 with Office 2010, Photoshop CS2, Spotify and Skype

I can finally decommission my Windows VM!

Yes. I was keeping a Windows VM to use Office and Photoshop. LibreOffice and GIMP are alternative options that were not sufficient – at least for me. On top of that, Skype and Spotify were another couple of programs that weren’t really working well or available (at least a while ago).

Now, I have a full working-workstation based on Ubuntu 16.04 LTS – MATE!

Desktop Screenshot

How to?

Well, here are some easy instructions.

What you need?

  • Office Pro 2010 license
  • Office Pro 2010 installer (here is where to download it if you have lost it – 32-bit version)
  • Photoshop installer: Adobe has now released version CS2 for free. You need an Adobe account. They provide installer and serial. For the installer, here is the direct link
  • Spotify account
  • Skype account
  • Ubuntu 16.04 LTS 64 bit installed 🙂

Let’s install!

Spotify

For Spotify, I’ve just simply followed this: https://www.spotify.com/it/download/linux/

Skype

For Skype, I have downloaded the deb from https://www.skype.com/en/get-skype/

 

Office 2010 – Photoshop CS2

Installing Office 2010 and Photoshop is a bit more complicated… but not too much 🙂
Just follow these instructions.

Firstly, we need to enable the i386 architecture:
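sudo dpkg --add-architecture i386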

Then, add the WineHQ repositories and install the latest stable version:
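A sketch, based on the WineHQ instructions for xenial at the time (the repo layout has changed since):

wget -nc https://dl.winehq.org/wine-builds/Release.key
sudo apt-key add Release.key
sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'
sudo apt-get update
sudo apt-get install --install-recommends winehq-stable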

Install some extra packages, including winbind and the winetricks utility, and create some symlinks
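sudo apt-get install winbind winetricks   # sketch: plus any symlinks your setup needs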

NOTE: the winbind package is very important. Don’t miss it or Office won’t install.

Create the environment (assuming your user is called user)
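env WINEARCH=win32 WINEPREFIX=/home/user/.wine32 winecfg   # sketch: 32-bit prefix; prefix path assumed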

Install some required packages, using winetricks
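winetricks msxml6   # sketch: at minimum msxml6, per the Libraries tab note below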

After that, let’s make some changes to Wine conf.

As described in this post, add the riched20 and gdiplus libraries (snippet below):

Click the Libraries tab. Currently, there will be only a single entry for *msxml6 (native,built-in).
Now click in the ‘New override for library’ combo box and type ‘rich’. Click the down-arrow. That should now display an item called riched20. Click [Add].
In the same override combo box, now type ‘gdip’. Click the down-arrow. You should now see an item called gdiplus. Click on it and then click [Add]

Now… let’s install!

This command is valid for both software: Office and Photoshop.
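wine setup.exe   # run from the directory containing the installer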

With this configuration, you should be able to complete the setup and see the installed apps under the “Others” menu (in Ubuntu MATE). Please note that you might need to reboot your box to actually see the apps there.

During the Office setup, I chose the Custom setup, as I just wanted Word, Excel and PowerPoint. I selected “Run all from My Computer” to be sure there wouldn’t be anything extra to install while using the software, and afterwards I de-selected/excluded what I didn’t want.

 

Once the setup is completed, if you don’t see the apps under the “Others” menu, you can run them via the command line (e.g. to run Excel):
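env WINEPREFIX=/home/user/.wine32 wine 'C:\Program Files\Microsoft Office\Office14\EXCEL.EXE'   # sketch: Office14 = Office 2010; prefix path assumed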

Office will ask you to activate. I wasn’t able to activate it via the Internet, so I called the number found at this page.

The only issue I experienced was that Word was showing “Configuring Office 2010…” and taking time to start. After that, I was getting a pop-up asking to reboot. Saying “yes” made everything crash. Saying “no” allowed me to use Word with no issues.

I found this patch that worked perfectly:
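It was along these lines (the commonly cited NoReReg registry fix; exact key assumed):

reg add HKCU\Software\Microsoft\Office\14.0\Word\Options /v NoReReg /t REG_DWORD /d 1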

Just do wine cmd and paste the above command, or wine regedit and add the key manually.

Apart from this… all went smoothly. I have also been able to install the language packs, using the same procedure (wine setup.exe), and I’m very happy now! 🙂

Have fun!

Grub console how to

I’m sure it has happened to you too: migrating a Linux server, maybe in a slightly dirty way (rsync’ing), or having some issues with the boot loader.

And when you reach the point where, at boot, all you get is a bare grub> (or grub rescue>) prompt…

…and you start to cry (or almost) 🙂

Well, here some steps that helped me to boot the server and restore grub.

Use  ls to see the list of available partitions. Find the one where you know (or think) the kernel is installed. In my case it was  (hd0,msdos1) , which is basically /dev/sda1

After that, use the following:
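A sketch (kernel/initrd paths vary; you may need the full /boot/vmlinuz-<version> names):

set root=(hd0,msdos1)
linux /vmlinuz root=/dev/sda1
initrd /initrd.img
boot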

With these commands, I have been able to boot into my OS.

After that, I re-installed grub:
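grub-install /dev/sda   # sketch: run from the booted OS
update-grub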

NOTE: UUIDs could be a cause of a failed boot too.
Under Debian/Ubuntu there is a file /etc/default/grub where you can disable the UUID format. This can generate issues if you have swapped disks, so it might be good to check this config file, eventually enable GRUB_DISABLE_LINUX_UUID=true and re-run update-grub. Remember as well that UUIDs are set in /etc/fstab; you can replace them with /dev/sdXy accordingly.

I hope this will help someone else that, like me, got stuck in restoring a VM.

 



TOP – memory explanation

(just a few notes – so I don’t forget)

  • VIRT: not really relevant nowadays. It’s the memory that the process could use, but the OS loads only what is needed, so it is rarely all actually used. On a 32-bit OS it could be the only case where you need to keep an eye on it, as the OS can only allocate up to 2-3GB per process.
  • RES: Resident Set Size memory – this is the actual memory in RAM. On lightly used machines, it might still show high usage even if not utilised, as freeing the memory costs more than leaving it. In fact, Linux tends to use as much memory as is available (“unused memory is wasted memory“).
  • SHR: this is the shared memory, which generally contains libraries etc.
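A quick way to eyeball these fields, sorted by memory usage (a sketch):

top -o %MEM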

Kernel space – User space – Containers – Virtualisation

How many times have I heard “well, a container is like a super light-weight virtual machine“. And yes, I admit it: I was one of them.

But I wasn’t happy with this answer, so I did some research, and I think I now have a better understanding. I also feel the pain of the friends to whom I was simplistically (and wrongly) saying exactly that – public apologies 😛 🙂

 

So… let’s start…

 

Concept 1: Virtual memory.

Virtual memory is the collective memory used by processes (RAM, disk swap, etc).

Of this virtual memory, we generally have a separation between two types:

  • kernel space: reserved for the kernel and, generally, drivers
  • user space: for the applications, including libraries

This separation serves to provide memory protection and hardware protection from malicious or errant software behavior.

NOTE1: user space is not the same thing as a namespace.

 

NOTE2: FUSE is not really related to this topic, but it could confuse someone, so just to clarify: FUSE (Filesystem in Userspace) is a software interface for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running the file system code in user space, while the FUSE module provides only a “bridge” to the actual kernel interfaces.

Modern kernels have cgroups and namespace capabilities.

  • Cgroups restrict what you can USE -> CPU, memory, storage, network, devices, etc. They also allow you to ‘freeze’ processes.
  • Namespaces restrict what you can SEE -> PID, mnt, UID/GID, etc…

Container runtimes (like LXC, Docker, etc…) use cgroups and namespaces to create separate, isolated user-space entities called ‘containers‘.
Containers have basically no overhead, because they use the same system calls to the host kernel => no need for emulation or a virtual machine.
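A quick way to see namespaces in action from a shell (util-linux tools; a minimal sketch):

sudo unshare --fork --pid --mount-proc bash   # new PID namespace: inside, ‘ps aux’ shows almost nothing
lsns                                          # list the namespaces in use on the host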

They use the same kernel of the host (this is a key difference with virtualisation). So, currently, you cannot run Windows containers on a Linux host. But you can still run different versions of Linux, as they all share the same kernel.

Virtualisation: fully isolated OS, running its own kernel.

  • Fully virtualised: (e.g. VMware, VirtualBox, ESXi…). The OS in the VM is not aware that it is a VM. The hypervisor emulates the hardware platform for the guest OS and then translates the hardware access requests to the physical hardware. The hypervisor provides the drivers to the guest OS.
    => higher overhead because of hardware virtualisation BUT best isolation and security
  • Para-virtualised: (Xen, KVM) the OS in the VM knows it is virtualised. Drivers send instructions directly to the hardware of the host, via the hypervisor. The hardware is not virtualised BUT the OS runs in isolation.
    => better performance and the ability to use recent hardware drivers directly BUT the guest OS needs to be modified to use paravirtualised devices

NOTE: Emulation is not platform virtualisation (e.g. QEMU).
With emulation you can emulate different architectures (e.g. ARM/RISC…) on a host that has a different instruction set (e.g. i386). Performance is clearly not ideal.

