
MySQL Replication

This is a copy and paste of some old notes about MySQL replication. I have never fully reviewed this content, nor finished the script. I'm saving it anyway, in case I need some of this info in the future 😉

 

Docker How to

This is a collection of notes extracted from the Udemy course Docker Mastery.

 

Install docker

 

  • Docker now uses a versioning scheme similar to Ubuntu's: YY.MM
  • prev Docker Engine => Docker CE (Community Edition)
  • prev Docker Data Center => Docker EE (Enterprise Edition) -> includes paid products and support
  • 2 release channels:
    • Edge: released monthly and supported for a month.
    • Stable: released quarterly and supported for 4 months (extended support via Docker EE)

 

 

Client -> the CLI installed on your current machine
Server -> the Docker Engine, always running; it receives commands from the Client via the API

New format: docker <command> <subcommands> [opts]
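For example, these two are equivalent (the second uses the newer management-command format):

docker run -d nginx
docker container run -d nginx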

 

Let’s play with Containers

Create a Nginx container:
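Likely something like:

docker container run --publish 80:80 --detach nginx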

=> publish: connect the local machine's (host) port 80 to port 80 of the container
=> detach: run the container in the background
=> nginx: this is the image we want to run. Docker will first check for a locally cached image; if there isn't one, it will pull the default public 'nginx' image from Docker Hub, using nginx:latest (unless you specify a version/tag)

NOTE: every time you do 'run', the Docker Engine won't clone the image; it will run an extra layer on top of the image, assign a virtual IP, do the port binding (if requested) and
run whatever is specified under CMD in the Dockerfile

CURIOSITY: if you don't specify a name, one gets automatically generated, combining words from an open source list of adjectives/emotions and famous scientists.

Check what’s happening within a container
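Likely commands along these lines (container name is a placeholder):

docker container top <name>       # processes running inside the container
docker container logs <name>      # container logs
docker container inspect <name>   # container config/details (JSON)
docker container stats            # live resource usage of running containers
docker container rm -f <name>     # remove a container (-f forces removal of a running one)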

 

=> Safety measure. You can't remove running containers, unless you use -f to force it

 

The process that runs in the container is clearly visible and listed on the main host by simply running ps aux .
In fact, a process running in a container is a process that runs on the host machine, just in a separate user space.
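Quick check (a sketch; container name is a placeholder):

docker container run -d --name web nginx
ps aux | grep nginx     # the nginx process shows up on the host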

 

Change default container’s command

=> t -> pseudo-TTY; i -> interactive
=> 'bash' -> command we want to run once the container starts
When you create this container, you change the default command to run.
This means that the nginx container started 'bash' instead of the default 'nginx' command.
Once you exit, the container stops. Why? Because a container runs only as long as its main process runs.
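Likely something like:

docker container run -it nginx bash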

Instead, if you want to run ‘bash’ as ADDITIONAL command, you need to use this, on an EXISTING/RUNNING container:
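Likely something like (container name is a placeholder):

docker container exec -it <container_name> bash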

 

How to run a minimal CentOS image (container)

 

Quick cleanup [DANGEROUS!]
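A sketch of what this could be (removes ALL containers, hence dangerous):

docker rm -f $(docker ps -aq)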

 


Run CentOS container
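Likely something like:

docker container run -it centos bash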

 

List running containers
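Likely:

docker container ls     (or the older: docker ps)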

 

List ALL containers (running and stopped)
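Likely:

docker container ls -a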

 

Start existing container and get prompt
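Likely something like (name is a placeholder):

docker container start -ai <container_name>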

 

ALPINE – minimal image (less than 4MB)

 

Alpine has NO bash in it. It comes with just sh .
You can use apk to install packages.

NOTE: you can ONLY run commands that already exist in the image.
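A quick sketch:

docker pull alpine
docker container run -it alpine sh
# inside the container, packages can be added with apk, e.g.:
apk add --no-cache bash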


Docker NETWORK

The Docker daemon creates a bridged network – using NAT (docker0/bridge).
Each container gets an interface that is part of this network => by default, containers can communicate with each other without the need to expose ports using -p . The -p / --publish flag is there to “connect” the host’s port with the container’s port.

You can anyway create new virtual networks and/or add multiple interfaces, if needed.

Some commands:
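Likely along these lines:

docker network ls               # list networks (bridge, host, none by default)
docker network inspect bridge   # details of a network, attached containers etc.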

=> Bridge – network interface where containers get connected by default
=> Host – allows a container to attach DIRECTLY to the host’s network, bypassing the Bridge network
=> none – removes eth0 in the container, leaving only ‘localhost’ interface
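Likely:

docker network create my_vnet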

 

=> by default it uses the ‘bridge’ driver
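Likely:

docker network connect my_vnet web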

=> add a new network interface, part of my_vnet, to the container ‘web’


DNS

Because of the nature of containers (created/destroyed all the time), you cannot rely on IPs.
Docker uses the containers’ names as hostnames. This feature is NOT enabled by default on the
standard bridge network, but it gets enabled on any new network you create.

Example where we run two Elasticsearch containers, on mynet using the alias feature:

--net-alias <name>
=> this helps in setting the SAME name (Round Robin DNS), for example, if you want to run a pool of search servers
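A sketch of the two run commands (the image tag is an assumption, not from the original notes):

docker container run -d --net mynet --net-alias search elasticsearch:2
docker container run -d --net mynet --net-alias search elasticsearch:2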

 

To quickly test, you can use this command to hit “search” DNS name, automatically created:

-> example where you run a specific command from a specific image and remove all the data related to the container afterwards (quick check). In this case, the default CentOS image has curl, so you can run it.
Please note the --rm flag. It creates a container that gets removed as soon as it exits (e.g. when you CTRL+C). Very handy to quickly test a container.
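A sketch of that test command:

docker container run --rm --net mynet centos curl -s search:9200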

Running it multiple times, you should see the two Elasticsearch nodes replying in turn.

 


Docker IMAGES

An image is the app binaries + all the required dependencies + metadata.
There is NO kernel or drivers (these are shared with the host OS).

Official images have:

  • only ‘official’ in the description
  • NO ‘/’ in the name
  • extensive documentation

Non-official images generally have this format: <organisationID>/<appname>
(e.g. mysql/mysql-server => this is not officially maintained by Docker but by the MySQL team.)

 

Images are identified by TAGs.
You can use tags to get the image that you want.
An image can have multiple tags, so you might end up getting the same image using
different tags.
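For example (whether two tags point to the same build depends on the repo at that moment, so this is just an illustration):

docker pull nginx:latest
docker pull nginx:mainline
docker image ls nginx    # if two tags point to the same build, they show the same IMAGE ID and the image is stored only once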

 

IMAGE Layers

Images are designed to use a union file system.
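Likely:

docker history nginx:latest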

=> shows the changes in layers

 

Each layer has a unique SHA.

When you create an image you start with a base layer.
For example, if you pull two images based on Ubuntu 16.04, when you get the second image you will only download the extra missing layers, as you have already downloaded and cached the base Ubuntu 16.04 layer (same SHA).
=> you will never store the same layer more than once on the filesystem
=> you won’t upload/download a layer that already exists on the other side

It’s like the concept of a VM snapshot.
The original image is read-only. Whatever you change/add/modify/remove in the running container is stored in a rw (read-write) layer.
If you run multiple containers from the same image, an extra rw layer gets created per container, which stores just the differences from the original image.

# Tag an image from nginx to myusername/nginx
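Likely something like:

docker image tag nginx myusername/nginx
docker login     # this is what creates ~/.docker/config.json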

=> docker login creates a file here: ~/.docker/config.json
Make sure to do docker logout on untrusted machines, to remove this file.

 

# Push the image
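Likely:

docker image push myusername/nginx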

 

# Change tag and re-push
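Likely:

docker image tag myusername/nginx myusername/nginx:justtestdontuse
docker image push myusername/nginx:justtestdontuse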

 

=> it understands that the image already in the hub, myusername/nginx, is the same as myusername/nginx:justtestdontuse, so it doesn’t upload any content (space saving), but it creates a new entry in the hub.

 


Dockerfile

This file describes how your image should be built. It generally starts from a base image, on top of which you add your customisations. This is also best practice.

 

FROM -> use this as the initial layer on which to build the rest.
Best practice is to use an official image supported by Docker Hub, so you can be
sure that it is always up to date (security as well).

Any extra line in the file is an extra layer in your image. The use of && between commands helps to keep multiple commands in the same layer.

ENV -> variables injected into the container (best practice, as you don’t want sensitive information hard-coded inside the image).

RUN -> generally commands to install software / configure things.
Generally there is a RUN for logging, to redirect logs to stdout/stderr. This is best practice. No syslog etc.

EXPOSE -> sets which ports can be published, i.e. which ports the container is allowed to receive traffic on. You still need the --publish (-p) option to actually publish the port.

CMD -> final command that will be executed (generally the main binary)
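A minimal sketch putting these directives together (package names and log paths are illustrative only, this is not the original file):

FROM nginx:latest
ENV NGINX_PORT 80
RUN apt-get update && apt-get install -y curl \
    && ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]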

 

To build the image from the Dockerfile (run this in the directory where the Dockerfile exists):
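Likely something like (the tag name is a placeholder):

docker image build -t customnginx .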

 

Every time one step changes, that step and all the following ones get re-built.
This means that you should keep the bits that change less frequently at the top, and put at the bottom the ones that change more frequently, to make building the image quicker.

 

 

Example: CentOS container with Apache and custom index.html file:
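A possible sketch of such a Dockerfile (not the original one; paths and packages are assumptions):

FROM centos:7
RUN yum -y update && yum -y install httpd && yum clean all
COPY index.html /var/www/html/
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]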

 

Example: using the Alpine HTTPD image to serve a custom index.html file:
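A possible sketch (the docroot path is assumed from the official httpd image; not the original file):

FROM httpd:alpine
WORKDIR /usr/local/apache2/htdocs
COPY index.html index.html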

 

COPY . . -> copies all the content of the current directory into the WORKDIR directory.

 


A container should be immutable and ephemeral, which means that you can remove/delete/re-deploy it without affecting the data (database, config files, key files etc…)

Unique data should be somewhere else => Data Volumes and Bind Mounts

 

Volumes

Need manual deletion -> preserve the data

In the Dockerfile, the VOLUME command specifies that the container will create a new volume location on the host and map it to the specified path in the container. All the files will be preserved if the container gets removed.

 

Let’s try this with the mysql container:

The mysql image was built with the VOLUME /var/lib/mysql command in its Dockerfile.
Once the container gets created, a new volume gets created and mounted as well. Using inspect we can see those details.
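A sketch of those commands (container name is a placeholder; MYSQL_ALLOW_EMPTY_PASSWORD is just to let the container start without a password):

docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
docker volume ls
docker container inspect mysql     # check the Mounts section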

 

Every time you create a container, it creates a new volume, unless you specify one.

You can attach a specific named volume to multiple containers using the -v <volume_name>:<container_path> option flag.
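A sketch (the volume name mysql-db is a placeholder):

docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker container run -d --name mysql3 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
docker volume ls     # only one mysql-db volume, shared by both containers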

Checking the mysql2 and mysql3 containers:

 

Bind Mounting

Mount a directory of the host on a specific container’s path.

Same flag as Volumes ( -v ) but it starts with a path instead of a name.
Use the -v <host_path>:<container_path> option flag.

This can be handy for a webserver, for example, that shares the /var/www folder stored locally on the host.
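A sketch for the webserver example (the container path is an assumption, nginx's default docroot):

docker container run -d --name web -p 80:80 -v /var/www:/usr/share/nginx/html nginx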

 


Docker Compose

  • YAML file (replaces a shell script where you would otherwise save all the docker run commands), describing:
    1. containers
    2. networks
    3. volumes
  • CLI docker-compose (used locally)

This tool is ideal for local development and testing – not for production.

By default, Compose prints logs to stdout.

On Linux, you need to install the binary separately. It is available here: https://github.com/docker/compose/releases
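A minimal docker-compose.yml sketch (service name and paths are illustrative only):

version: '3'
services:
  web:
    image: nginx
    ports:
      - '80:80'
    volumes:
      - ./html:/usr/share/nginx/html

Then, from the same directory: docker-compose up -d (and docker-compose down to clean up).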

 

 

Docker and Kubernetes notes

[Raw notes from this free course: https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615 ]


Docker is one of the most famous container technologies in use nowadays.

Docker container features/best practices:

  • it is portable because you keep everything your application needs inside it (libraries etc.) – it always runs the same, regardless of the environment;
  • it reduces conflicts between teams running different software on the same infrastructure;
  • minimal: best practice is to keep its content as minimal as possible;
  • you can ‘freeze’ it and move it to another host, if required (using the cgroup capability);
  • no hard-coded values in it: variables are passed during the deploy or pulled from an externally mounted file;
  • you can mount external storage;
  • you can expose a port -> for example you can have a web app listening on port 80. You can expose port 80 of your container so that when you connect to the host’s port 80, traffic gets redirected to the container. This “port forwarding” is the container runtime’s job;
  • ‘Dockerfile’ is the configuration file for the container. You can specify the image that you want to use (FROM …), which port to expose, the storage to mount etc;

COMMANDS:

 
Dockerfile

 

Push container to repository
Dockerhub -> default public (you can also have private)

docker tag -h
Add tag – then login and push
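Likely along these lines (image and user names are placeholders):

docker tag myapp myusername/myapp:1.0
docker login
docker push myusername/myapp:1.0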

 


Create/Package container (only ~5% of the work); the rest is:

  • App configuration
  • Service Discovery
  • Managing Updates
  • Monitoring

Kubernetes -> treats a cluster like a single machine.
You need to describe the apps and how they interact with each other.

POD
– collection of containers (possibly multiple apps in different containers)
– shares a network namespace (IP)
– shares storage volumes

=> created with config files
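A minimal pod config sketch (names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: myusername/myapp:1.0
      ports:
        - containerPort: 8080

Created with: kubectl create -f pod.yaml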

Monitoring
Readiness -> container ready
Liveness -> app not working / restart app
Configmaps
Secrets

Services -> labels

Deployments
Desired state

Scaling -> increase “replicas”

Rolling updates – CTO roll => deploy the new version, it starts getting traffic, stop traffic to the previous version, remove the previous version (this happens for each POD)

Compromised Email troubleshooting notes

Here are some notes about how to troubleshoot a server that got compromised by a PHP script.

Check email queue

  • Qmail -> qmHandle
  • Postfix -> pmHandle / postqueue

Get some email IDs

Check for X-PHP header in the mail message
Look for the UID and script that sent the message

Find the script and UID

=> permissions issue??

Move away the file(s) and chown 000
!! if the file name starts with - , you need to use: chown -- 000 filename

Disable PHP execution following this how-to

Delete all the messages containing that header


Extra notes:

Check the queue:
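Likely (Postfix):

postqueue -p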

See the content of a message:
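Likely (the queue ID is a placeholder):

postcat -q <QUEUE_ID>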

Check for the “X-PHP-Originating-Script” header, which generally gives you the name of the script that generated the email

If they are sent to a specific domain, you can block some domains in Postfix following this guide

Redis – basic checks

Example:
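Probably checks along these lines:

redis-cli ping       # should reply PONG
redis-cli info       # general stats (memory, clients, keyspace...)
redis-cli --stat     # live stats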

 

Linux resource checks notes

atop utility
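Basic usage sketch (the log path may vary by distro):

atop                                    # live view; 'm' memory, 'd' disk, 'c' full command line
atop -r /var/log/atop/atop_YYYYMMDD     # replay a recorded day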

 

 

Fail2ban notes

General notes about Fail2ban

How to “SSH” brute force

If you want to make your remote server safer, it is good practice to use a good combination of sshd configuration and Fail2ban.

Firstly, you should set up your server to allow only key authentication, no passwords. This will drastically reduce the risk. It also means that you need to keep your SSH key safe, and you won’t be able to access your server unless you have that key. Most of the time this is acceptable 🙂

For this reason, I’m explaining here how I configured my server.

SSHD

/etc/ssh/sshd_config

Have these settings in the config file (NOTE: the verbosity is for Fail2ban)
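Probably settings along these lines (a sketch, not the original file; LogLevel VERBOSE is the verbosity Fail2ban relies on):

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
LogLevel VERBOSE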

(restart sshd)

/etc/fail2ban/jail.local
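A generic sketch of what the jail could look like (log path assumes Debian/Ubuntu; not the original file):

[sshd]
enabled  = true
port     = ssh
logpath  = /var/log/auth.log
maxretry = 3
bantime  = 3600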

/etc/fail2ban/filter.d/sshd.conf

Add a custom section after the ddos one:

This line matches whoever tries to connect without a proper ssh key.

Add this line to include the custom section in the sshd-aggressive setup:

 

MySQL notes

MySQL backup – mysqldump
shell> mysqldump [options] db_name [tbl_name …] > db_name.sql
shell> mysqldump [options] --databases db_name … > multi_db.sql
shell> mysqldump [options] --all-databases > all_dbs.sql

Importing MySQL Table
To import the table run the following command from the command line:
shell> mysql -D dbname < tableName.sql

Check database space
SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema;

MySQL Uptime
a) mysql> SHOW GLOBAL STATUS;
b) # mysqladmin version | grep -i uptime

innodb_open_files
mysql> show global variables like "innodb_open_files"\G

Binary Logs
>> Enable:
> /etc/my.cnf
log-bin = /var/lib/mysql/bin-log

Enable the slow query log
slow-query-log = 1

Log queries that take longer than 2 seconds
long-query-time = 2

Set ‘max_connections’:
>> On the fly (GLOBAL variable. we can increase it on the fly without restarting mysqld service)
[Check] select @@global.max_connections;
[Set] set @@global.max_connections=300;
[Re-Check] select @@global.max_connections;
(or mysql> set global max_connections=250;)

>> CHANGE on /etc/my.cnf
max_connections = 50
max-connections = 50

set @@global.max_connections=default;

Set the query_cache_size to 16MB, query_cache_type to 1 and query_cache_limit to 1MB

mysql> set global query_cache_size=16*1024*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> set global query_cache_type=1;
Query OK, 0 rows affected (0.00 sec)

mysql> set global query_cache_limit=1*1024*1024;
Query OK, 0 rows affected (0.00 sec)

Check variables
select @@global.max_connections;
OR
show variables;
show variables like '%max%';

Disable InnoDB
[mysqld]
skip-innodb
default-storage-engine = myisam

Check if Query Cache is enabled:
SHOW VARIABLES LIKE 'have_query_cache';

Check Query Cache statistics:
show status like 'Qcache%';

MySQL’s maximum memory usage is dangerously high
>> (read_buffer_size + read_rnd_buffer_size + sort_buffer_size + thread_stack + join_buffer_size) x max_connections
=> change max_connections

wait_timeout (global variable)
mysql> show processlist;
If there are too many sleeping connections, it might be a bug in the code (for example, missing "close connection" calls). In this case, it would be safer to set a low wait_timeout, maybe 180 (seconds -> 3 mins), to make sure sleeping connections get dropped after that time.
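For example:

mysql> set global wait_timeout=180;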