LXC attach extra block device to container on CentOS/RHEL 7.x

You might ask: why would I need an extra block device?
Most of the time you do not, but special cases do occur and I’ll give an example: we want to test GlusterFS with distributed/replicated bricks using plain block devices inside containers.

First, make sure you have LXC installed
yum install lxc lxc-templates

Setup the container
lxc-create --template=centos --name=gluster01 --bdev=lvm --vgname=cached_raid1 --fssize=4G

Create 2 new extra block devices
lvcreate -V 10G --thin -n gluster01_brick1 raid_10/thin_pool_secondary
lvcreate -V 10G --thin -n gluster01_brick2 raid_10/thin_pool_secondary

Grab the major and minor numbers of the new block devices
ls -al /dev/mapper/raid_10-gluster01_brick1
ls -al /dev/mapper/raid_10-gluster01_brick2
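
On CentOS 7 the /dev/mapper entries are usually symlinks to /dev/dm-N, so dereference them when reading the numbers; a quick way (illustrative output, your numbers will differ):

ls -alL /dev/mapper/raid_10-gluster01_brick1
# brw-rw---- 1 root disk 253, 37 Mar  1 10:00 /dev/mapper/raid_10-gluster01_brick1

The "253, 37" pair is the major and minor number used in the container config below.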

Configure the container with the new block devices
vi /var/lib/lxc/gluster01/config

add

# Mounting extra disk
lxc.aa_profile = lxc-container-default-with-mounting
lxc.cgroup.devices.allow = b 253:37 rwm # 1st brick
lxc.cgroup.devices.allow = b 253:38 rwm # 2nd brick
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/gluster01/autodev_block.sh

Create /var/lib/lxc/gluster01/autodev_block.sh
with the following content

#!/bin/bash
cd ${LXC_ROOTFS_MOUNT}/dev
mknod -m 666 sdb b 253 37
mknod -m 666 sdc b 253 38

then make it executable

chmod +x /var/lib/lxc/gluster01/autodev_block.sh

Start the container and check everything
lxc-start --name=gluster01

and run inside the container:

fdisk -l

You should see 2 disks of 10GB.
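
Something along these lines (illustrative output only; fdisk reports sizes in decimal GB):

fdisk -l 2>/dev/null | grep 'Disk /dev/sd'
# Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
# Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors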

libvirt useful commands – Part 1

In each part I will present 5 useful libvirt commands that are not very popular.

  • Increase memory
  • Attach an extra disk
  • Extra network interface
  • Take screenshot
  • Change vCPU number

 

Increase memory

virsh setmaxmem <vm_name> 1024M --config && virsh setmem <vm_name> 1024M --config

 

Attach an extra disk

virsh attach-disk gluster01 /dev/raid_0/gluster01_brick1 vdb --cache none --config 

 

Extra network interface

virsh attach-interface --domain <vm_name> --type network --source <network> --model virtio --mac 12:23:34:45:56:67 --config

 

Take screenshot

virsh screenshot <vm_name>

 

Change vCPU number

virsh setvcpus <vm_name> 4 --config

Highly Available RabbitMQ messaging cluster

What is RabbitMQ? Robust messaging for applications; it is easy to use, runs on all major OSes, and is open source and commercially supported.

Why HA? For redundancy: if one node fails, we would like to preserve the queues.

This time I will take another approach and deploy and configure everything with Ansible.

On your Ansible machine make sure you have the following layout in place and the 2 LXC containers already installed (DNS records included).

[root@ansible01 ansible]# ls
files hosts tasks

[root@ansible01 ansible]# less hosts

[rabbitmqs]
rabbitmq01.example.com
rabbitmq02.example.com

[root@ansible01 tasks]# ls
install-rabbitmq-cluster.yml

[root@ansible01 tasks]# less install-rabbitmq-cluster.yml

---
#
# Config RabbitMQ Cluster
#
- hosts: rabbitmqs
  sudo: no
  tasks:

    - name: Configure basic OpenStack packages
      yum: name=centos-release-openstack-mitaka state=latest

    - name: Force a yum upgrade
      shell: yum -y upgrade

    - name: Install python-openstackclient
      yum: name=python-openstackclient state=latest

    - name: Install openstack-selinux
      yum: name=openstack-selinux state=latest

    - name: Install mariadb
      yum: name=mariadb state=latest

    - name: Install python2-PyMySQL
      yum: name=python2-PyMySQL state=latest

    - name: Install EPEL repo
      yum: name=epel-release state=latest

    - name: Install net-tools - prerequisite for RabbitMQ
      yum: name=net-tools state=latest

    - name: Install Erlang - prerequisite for RabbitMQ
      yum: name=erlang state=latest

    - name: Install RabbitMQ
      yum: name=rabbitmq-server state=latest

    - name: Enable RabbitMQ
      shell: systemctl enable rabbitmq-server

- hosts: rabbitmq01.example.com
  sudo: no
  tasks:

    - name: Copy erlang cookie to secondary node
      shell: scp /var/lib/rabbitmq/.erlang.cookie root@rabbitmq02.example.com:/var/lib/rabbitmq/.erlang.cookie

- hosts: rabbitmqs
  sudo: no
  tasks:

    - name: Start RabbitMQ
      shell: systemctl start rabbitmq-server

- hosts: rabbitmq02.example.com
  sudo: no
  tasks:

    - name: Join secondary node to cluster - stage 1
      shell: rabbitmqctl stop_app

    - name: Join secondary node to cluster - stage 2
      shell: rabbitmqctl join_cluster --ram rabbit@rabbitmq01

    - name: Join secondary node to cluster - stage 3
      shell: rabbitmqctl start_app

- hosts: rabbitmq01.example.com
  sudo: no
  tasks:

    - name: Ensure that all queues are mirrored across all running nodes
      shell: rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

 

Then you execute the playbook like this:

ansible-playbook ansible/tasks/install-rabbitmq-cluster.yml -i /root/ansible/hosts --ask-pass

 

And voilà! Your RabbitMQ cluster is up.

HAProxy active/backup load balancer setup on CentOS/RHEL 7.x

These days everyone wants High Availability, and this is a good thing: no more single points of failure.
I will demonstrate how this can be achieved using the following:
– lxc containers
– haproxy
– pacemaker
– corosync
– CentOS 7.x

Steps to follow:

If you are using lxc containers, “which” is not installed by default and it is needed
yum -y install which

Install pacemaker and pcs
yum -y install pacemaker pcs

Enable and restart the services
systemctl enable pcsd.service && systemctl start pcsd.service
systemctl enable pacemaker.service && systemctl start pacemaker.service
systemctl enable corosync.service && systemctl start corosync.service

Make sure you remove the FQDN from the line containing "127.0.0.1" in /etc/hosts
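
For example (hostname is just an illustration), change a line like:

127.0.0.1 node01.example.com node01 localhost localhost.localdomain

into:

127.0.0.1 localhost localhost.localdomain

otherwise the cluster node name may resolve to the loopback address.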

Add node01 and node02 to /etc/hosts or as DNS records
vi /etc/hosts

And add:

192.168.100.2 node01
192.168.100.3 node02

Setup a password for hacluster user
passwd hacluster

Very important: permit traffic in firewalld
firewall-cmd --add-service=high-availability && \
   firewall-cmd --permanent --add-service=high-availability

Authentication between cluster members and cluster creation
pcs cluster auth node01 node02
pcs cluster setup --name haproxy_cluster node01 node02
pcs cluster start --all

Nodes must be restarted before moving on!

Check everything is right
pcs status
pcs status corosync
pcs status nodes

Disable stonith or your resources will not start
pcs property set stonith-enabled=false

Create the VIP resource for HAProxy
pcs resource create virtual_ip_haproxy ocf:heartbeat:IPaddr2 ip=192.168.100.100 cidr_netmask=24 op monitor interval=10s

Check again if the resource has started
pcs status resources

And:

ip a sh

Setup bash alias to copy haproxy.cfg to secondary node before restarting the service on primary
vi ~/.bashrc

And add:

alias haproxy_restart="haproxy -f /etc/haproxy/haproxy.cfg -c && \
    scp /etc/haproxy/haproxy.cfg root@node02:/etc/haproxy/haproxy.cfg && sleep 5 && systemctl restart haproxy"

Get haproxy ocf resource from web
cd /usr/lib/ocf/resource.d/heartbeat && \
    curl -O https://raw.githubusercontent.com/thisismitch/cluster-agents/master/haproxy && \
    chmod +x haproxy

Add resource for haproxy in cluster
pcs resource create haproxy ocf:heartbeat:haproxy op monitor interval=10s

Create a clone of haproxy resource that must run on both nodes
pcs resource clone haproxy

Now everything should be right; the only thing that remains to be configured is /etc/haproxy/haproxy.cfg, which is outside the scope of this post.

GlusterFS replicated-distributed storage pool on CentOS/RHEL 7.x

GlusterFS is a storage technology stack that has gained momentum lately and is mature enough to be used in production. The setup I will explain here is similar to RAID10, but over the network, using two machines.
If we were using only one machine with disk redundancy, at some point we would hit hardware limits while still having a single point of failure: the machine itself. This is where GlusterFS comes in, designed to run on commodity hardware and scale out.
It allows the creation of different storage configurations: replicated, distributed, striped, or combinations of them.

The scenario:
– distributed-replicated (similar to RAID10)
– number of bricks: 4 (2 per machine)
– accessing the cluster: native

Steps to follow:

Prepare two machines with latest CentOS 7.x
The machines must have 2 disks each; remember that you can also use VMs for a test setup, but for production it is recommended to use bare metal.

Make sure all involved nodes have DNS records
Alternatively, you can use /etc/hosts
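
For example (addresses are placeholders, adjust to your network):

192.168.200.11 gluster01.storage.domain.tld gluster01
192.168.200.12 gluster02.storage.domain.tld gluster02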

Install GlusterFS repository
yum -y install epel-release centos-release-gluster38.noarch

Install glusterfs-server on both nodes
yum -y install glusterfs-server policycoreutils-python

Start and enable the service
systemctl start glusterd &&  systemctl enable glusterd

Make sure the nodes know about each other
You only need to run the command on gluster01.storage.domain.tld
gluster peer probe gluster02.storage.domain.tld

and you should receive a message:

peer probe: success

Check again the status:

gluster peer status

a response similar to the following should be printed:

Number of Peers: 1

Hostname: gluster02.storage.domain.tld
Port: 24007
Uuid: 9fba506-3a7a-4c5e-94fa-1aaf83f7429t
State: Peer in Cluster (Connected)

Prepare the bricks on each node
We assume 2 local disks, one for each brick.
mkfs.xfs /dev/sdb && mkfs.xfs /dev/sdc && \
mkdir -p /gluster/brick1 /gluster/brick2 && \
mount /dev/sdb /gluster/brick1 && \
mount /dev/sdc /gluster/brick2 && \
mkdir /gluster/brick1/test && \
mkdir /gluster/brick2/test

and extend the /etc/fstab with:

/dev/sdb /gluster/brick1 xfs defaults 0 0
/dev/sdc /gluster/brick2 xfs defaults 0 0

Make sure it can traverse firewall
firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent && firewall-cmd --reload

for bricks:

firewall-cmd --zone=public --add-port=24009-24010/tcp --permanent && firewall-cmd --reload

and for the clients (Glusterfs/NFS/CIFS Clients):

firewall-cmd --zone=public --add-service=nfs --add-service=samba --add-service=samba-client --permanent

 

firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp \
--add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp --permanent

 

firewall-cmd --reload

Creation of the volume
gluster volume create test-volume replica 2 transport tcp gluster01.storage.domain.tld:/gluster/brick1/test gluster02.storage.domain.tld:/gluster/brick1/test gluster01.storage.domain.tld:/gluster/brick2/test gluster02.storage.domain.tld:/gluster/brick2/test

Start and verify the new volume
gluster volume start test-volume

gluster volume info all

Access the volume from a 3rd machine
For this you need to install client packages:

yum -y install glusterfs glusterfs-fuse attr

and then mount the volume:

mount -t glusterfs gluster01.storage.domain.tld:/test-volume /mnt/test-volume

 

How to add to /etc/fstab with backup server:

mkdir /gluster && echo "gluster01.storage.domain.tld:/test-volume /gluster glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log,backupvolfile-server=gluster02.storage.domain.tld 0 0" >> /etc/fstab && mount -a

 

Galera Cluster for MySQL – True Multi-Master on CentOS/RHEL 7.x

Galera Cluster for MySQL – The True Multi-Master

Advantages:

  • High Availability
  • Multi-Master – read and write on any cluster node, anytime
  • No slave lag
  • No lost transactions
  • Row level replication
  • Highly Scalable

 

Steps to setup a Cluster:

Install 3 containers with at least 10 GB of disk space

 

Package pre-requisites on all nodes
yum -y install rsync which socat net-tools

 

Add MariaDB repository
for host in {galera01,galera02,galera03}; do ssh $host \
"tee /etc/yum.repos.d/MariaDB.repo << mariadbrepo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
mariadbrepo"; done

 

Install MariaDB
for host in {galera01,galera02,galera03}; do ssh $host \
    "yum -y install MariaDB-Galera-server"; done

 

Start MySQL individually and set up the SST user before configuring the cluster
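
MariaDB must be running before mysql_secure_installation can do its job, so if it is not up yet, start it on each node first (the same init script is used later in this post):

/etc/init.d/mysql start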

mysql_secure_installation


mysql -u root -pXXXXXXXX

Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4827
Server version: 5.5.48-MariaDB-wsrep MariaDB Server, wsrep_25.14.r9949137

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> grant all privileges on *.* to sst_user@'192.168.1.%' identified by 'kjxhdgewu6dsc3f';

Query OK, 0 rows affected (0.04 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.05 sec)

 

After this step is done stop mysql:

/etc/init.d/mysql stop

 

Configure your nodes

vi /etc/my.cnf.d/server.cnf

 

And change to:

# this is read by the standalone daemon and embedded servers
[server]

# this is only for the mysqld standalone daemon
[mysqld]

#
# * Galera-related settings
#
[galera]
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
bind-address=0.0.0.0
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.1.101,192.168.1.102,192.168.1.103"
wsrep_cluster_name='Galera_Cluster'
wsrep_node_address='192.168.1.101'
wsrep_node_name='galera01'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:kjxhdgewu6dsc3f

# this is only for embedded server
[embedded]

# This group is only read by MariaDB-5.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mysqld-5.5]

# These two groups are only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]

[mariadb-5.5]

 

Start the nodes

1st node is always special:
/etc/init.d/mysql start --wsrep-new-cluster

The other 2 nodes will just join the cluster:
/etc/init.d/mysql start

 

Check the status of your cluster

mysql -u root -pXXXXXXXX

Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4827
Server version: 5.5.48-MariaDB-wsrep MariaDB Server, wsrep_25.14.r9949137

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'wsrep_%';
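
In the output (trimmed, illustrative values), the variables worth checking are:

wsrep_cluster_size           3
wsrep_local_state_comment    Synced
wsrep_ready                  ON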

 

HAProxy monitoring for Galera 

Steps below must be done on all Galera nodes:

wget -O /usr/bin/clustercheck https://raw.githubusercontent.com/olafz/percona-clustercheck/master/clustercheck

chmod +x /usr/bin/clustercheck

yum -y install xinetd

echo "mysqlchk   9200/tcp" | tee -a /etc/services

tee /etc/xinetd.d/mysqlchk << hapcheck
# default: on
# description: mysqlchk
service mysqlchk
{
disable = no
flags = REUSE
socket_type = stream
port = 9200
wait = no
user = nobody
server = /usr/bin/clustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
per_source = UNLIMITED
}
hapcheck

 

Setup a cluster monitoring MySQL user

[root@galera01 ~]# mysql -u root -pXXXXXXXX
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5615
Server version: 5.5.48-MariaDB-wsrep MariaDB Server, wsrep_25.14.r9949137

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> grant select on *.* to cmon@'192.168.1.%' identified by 'dshdj4fh44nofd';

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.05 sec)
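
Note that clustercheck ships with its own default MySQL credentials; depending on the version of the script you downloaded, you may need to hand it the cmon user. One way (an assumption, check the usage header of the script) is via server_args in the /etc/xinetd.d/mysqlchk stanza created earlier:

server_args = cmon dshdj4fh44nofd

and then restart xinetd:

systemctl restart xinetd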

 

HAProxy configuration

vi /etc/haproxy/haproxy.cfg

And add the lines below:

listen galera *:3306
         mode tcp
         option httpchk
         balance leastconn
         server galera01 192.168.1.101:3306 check port 9200
         server galera02 192.168.1.102:3306 check port 9200
         server galera03 192.168.1.103:3306 check port 9200

 

Now you can connect to 192.168.1.100:3306 (the Virtual IP of the HAProxy Load Balancer)
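
A quick end-to-end test through the load balancer (using the monitoring user created above; the VIP is an example):

mysql -u cmon -pdshdj4fh44nofd -h 192.168.1.100 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"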

Fast content serving WordPress HA with Caching, Galera & GlusterFS

Everyone wants to make the most with the least, and for this we must make smart choices. After many searches, test-driving solutions and complex combinations of memcached, redis, tmpfs ramdisks and various WordPress plugins, I found a decent combination that everyone can apply.

For prerequisites I will point to other posts of mine and will concentrate on optimizations for this one.

Goals to achieve:

  • High Availability
  • Load balancing
  • Scalability
  • Centralized storage
  • Reduced server overhead and fast loading content

Prerequisites:

 

First, if you have a classic WordPress installation, test its performance here: http://gtmetrix.com; you will get a detailed report along with suggestions on what you should improve.

I was running a poorly performing WP until now and decided to take action.

 

Steps:

Set up a Galera cluster and migrate your database into it

Then point wp-config.php to use the floating IP of Galera Cluster.
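
A minimal sketch of the relevant wp-config.php line, assuming 192.168.1.100 is the Galera/HAProxy virtual IP:

define('DB_HOST', '192.168.1.100');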

 

Set up your new WP instances

for host in {wordpress01,wordpress02}; do ssh $host \
   "yum -y install httpd"; done

for host in {wordpress01,wordpress02}; do ssh $host \
   "rm -fv /etc/httpd/conf.d/welcome.conf"; done

for host in {wordpress01,wordpress02}; do ssh $host \
   'tee /etc/httpd/conf.d/wordpress.conf << wp_alfa_beta
<Directory "/wordpress">
    AllowOverride All
</Directory>

<filesMatch "\.(jpg|jpeg|js|html|css)$">
    SetOutputFilter DEFLATE
</filesMatch>

<VirtualHost *:80>
    ServerAdmin user@domain.com
    ServerName domain.tld
    ServerAlias www.domain.tld

    DocumentRoot /wordpress
</VirtualHost>
wp_alfa_beta'; done

 

Make sure you have a running GlusterFS on decent hardware

On every WP instance do:

mkdir /wordpress

yum -y install nfs-utils attr

systemctl enable rpcbind && systemctl start rpcbind

mount -t nfs gluster01.stor.domain.tld:/wordpress /wordpress

A small comment here: if you are running the WP instances and GlusterFS nodes as virtual machines on common hypervisors, point every WP instance to the closest GlusterFS node (the one on the same hypervisor).

Do not use glusterfs-fuse here, because it is slow and does not cope well with this workload.

Example:

hv01: wordpress01, gluster01

hv02: wordpress02, gluster02

 

[root@wordpress01 ~] mount -t nfs gluster01.stor.domain.tld:/wordpress /wordpress

echo "gluster01.stor.domain.tld:/wordpress /wordpress nfs defaults,_netdev 0 0" >> /etc/fstab

[root@wordpress02 ~] mount -t nfs gluster02.stor.domain.tld:/wordpress /wordpress

echo "gluster02.stor.domain.tld:/wordpress /wordpress nfs defaults,_netdev 0 0" >> /etc/fstab

 

Sync your old content to the GlusterFS storage

[root@wordpress01_old ~]# rsync -avzr /var/www/html/* root@wordpress01.domain.tld:/wordpress/

[root@wordpress01 ~]# chown -R apache.apache /wordpress/*

 

Start your WP instances and get ready

for host in {wordpress01,wordpress02}; do ssh $host \
   "systemctl enable httpd && systemctl start httpd"; done

 

High Availability and Load Balancing will be achieved via HAProxy

We presume HAProxy is already installed.

vi /etc/haproxy/haproxy.cfg

and add something similar:

# Frontends
listen http *:80
    option http-server-close # very important for keep-alive
    option forwardfor except 127.0.0.0/8
    acl wordpress hdr(host) -i domain.tld
    use_backend wp_cluster if wordpress

# Backends
backend wp_cluster
    server wordpress01 192.168.100.101:80 check
    server wordpress02 192.168.100.102:80 check

 

Cache validator inside .htaccess

Make sure your .htaccess contains the ETag header

<FilesMatch "\.(html|htm)$">
AddDefaultCharset UTF-8
<ifModule mod_headers.c>
Header unset Pragma
FileETag INode MTime Size
Header set Cache-Control "public, max-age=604800, must-revalidate"
Header set Connection keep-alive
</ifModule>
</FilesMatch>

<FilesMatch "\.(?i:ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|x-html|css|xml|js|woff|woff2|ttf|svg|eot)(\.gz)?$">
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault A0
ExpiresByType image/gif A2592000
ExpiresByType image/png A2592000
ExpiresByType image/jpg A2592000
ExpiresByType image/jpeg A2592000
ExpiresByType image/ico A2592000
ExpiresByType image/svg+xml A2592000
ExpiresByType text/css A2592000
ExpiresByType text/javascript A2592000
ExpiresByType application/javascript A2592000
ExpiresByType application/x-javascript A2592000
</IfModule>
<IfModule mod_headers.c>
Header unset Pragma
FileETag INode MTime Size
Header set Cache-Control "public, max-age=2592000"
Header set Connection keep-alive
</IfModule>
</FilesMatch>

 

Once you have your WP running it’s time to optimize the content

The best plugins I could find until now are:

Most efficient .css & .js minify solution until now

wget https://github.com/yui/yuicompressor/releases/download/v2.4.8/yuicompressor-2.4.8.jar

yum install java-1.8.0-openjdk

cd /wordpress/wp-content/themes/<theme>

for i in `ls . |grep .css`; do cp -p "$i" "$i"_orig && java -jar /root/yuicompressor-2.4.8.jar --type css -o "$i" "$i"_orig; done

cd /wordpress/wp-content/themes/<theme>/css

for i in `ls . |grep .css`; do cp -p "$i" "$i"_orig && java -jar /root/yuicompressor-2.4.8.jar --type css -o "$i" "$i"_orig; done

cd /wordpress/wp-content/themes/<theme>/js

for i in `ls . |grep .js`; do cp -p "$i" "$i"_orig && java -jar /root/yuicompressor-2.4.8.jar --type js -o "$i" "$i"_orig; done

 

WP Fastest Cache (generates persistent static html content for every dynamic page) – download

Features:

  1. Mod_Rewrite which is the fastest method is used in this plugin
  2. All cache files are deleted when a post or page is published
  3. Admin can delete all cached files from the options page
  4. Admin can delete minified css and js files from the options page
  5. Block cache for specific page or post with Short Code
  6. Cache Timeout – All cached files are deleted at the determined time
  7. Cache Timeout for specific pages
  8. Enable/Disable cache option for mobile devices
  9. Enable/Disable cache option for logged-in users
  10. SSL support
  11. CDN support
  12. Preload Cache – Create the cache of all the site automatically

Performance Optimization:

  1. Generating static html files from your dynamic WordPress blog
  2. Minify Html – You can decrease the size of page
  3. Minify Css – You can decrease the size of css files
  4. Enable Gzip Compression – Reduce the size of files sent from your server to increase the speed to which they are transferred to the browser.
  5. Leverage browser caching – Reduce page load times for repeat visitors
  6. Combine CSS – Reduce number of HTTP round-trips by combining multiple CSS resources into one
  7. Combine JS

WP Smush – Image Optimization (does Lossless image optimization) – download

Features:

  • Optimize your images using advanced lossless compression techniques.
  • Process JPEG, GIF and PNG image files.
  • Auto-smush your attachments on upload.
  • Manually smush your attachments individually in the media library, or in bulk 50 attachments at a time.
  • Smush all standard web-sized images 1MB or smaller.
  • Smush images with no slowdown using WPMU DEV’s fast, reliable Smush API.
  • View advanced compression stats per-attachment and library totals.

WP Deferred JavaScripts (delays the load of JavaScripts for a faster response)

This plugin defers the loading of all JavaScripts added by way of wp_enqueue_script(), using LABJS. The result is a significant optimization of loading time.

It is compatible with all WordPress JavaScript functions (wp_localize_script(), js in header, in footer…) and works with all well coded plugins.

ShortPixel Image Optimizer

ShortPixel makes your website load faster by reducing the size of your images and helps you rank better in Google search. Both lossy and lossless image compression are available for all common image types (PNG, JPG, GIF), plus PDF files. The plugin is free to use for 100 images/month.

How does it work?

Both new and old images can be optimized with ShortPixel. Once activated, the plugin instantly processes new images uploaded to your website. Bulk optimization will automatically process all your past image gallery with one click. Images and thumbnails are processed in the cloud, and replaced back into your website. It’s simple to use, yet incredibly powerful.

Plugin features. What you see is what you get:

  • supports PNG, JPG, GIF (still and animated) images and PDF documents
  • thumbnails and featured images are also optimized
  • CMYK to RGB conversion
  • free 100 image credits/month; images that are optimized by less than 5% are a bonus
  • no file size limit
  • originals are saved in a backup folder and can be manually restored
  • ‘Bulk’ optimize past gallery with one click
  • 40 days optimization report with all image details and overall statistics
  • works great for eCommerce websites using WooCommerce plugin
  • multisite support for a single API key
  • compatible with WP Engine hosted websites
  • compatible with WPML and WPML Media plugins.

 

 

Test the performance of your WP again

Run the performance test again here: http://gtmetrix.com and compare the detailed report for more suggestions on what you could improve.

HA ELK Stack – Elasticsearch, Logstash & Kibana on CentOS/RHEL 7.x

ELK‘s purpose is to gather logs from multiple sources, store them in a centralized location and use Kibana to visualize them.
Short description of the involved components:

  • Elasticsearch – open source tool for storing logs for future use
  • Logstash – collecting and parsing logs
  • Kibana – web interface used for searching and viewing of the indexed logs
  • Filebeats – plays the role of the agent that streams logs from clients to Logstash

Centralized logs can be used to identify complex problems that need a bird’s-eye view: problems that occur in distributed systems, multi-backend applications and other such use cases.

For this setup we will be using 6 LXC containers with the following roles:

  • 2 for Kibana
  • 2 for Logstash
  • 2 for Elasticsearch

 

And here are the steps needed for the setup:

Install Java 8 on all containers
for host in {kibana01,kibana02,logstash01,logstash02,elastic01,elastic02}; do \
    ssh $host "wget --no-cookies --no-check-certificate \
    --header 'Cookie:gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie' 'http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm' && \
    yum -y localinstall jdk-8u73-linux-x64.rpm"; done

 

Install Elasticsearch on all containers
We will be using Elasticsearch in client mode on kibana0{1,2} and logstash0{1,2}, and in master mode on elastic0{1,2}

for host in {kibana01,kibana02,logstash01,logstash02,elastic01,elastic02}; do ssh $host \
"tee /etc/yum.repos.d/elasticsearch.repo << elasticrepo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
elasticrepo";done

for host in {kibana01,kibana02,logstash01,logstash02,elastic01,elastic02}; do ssh $host "yum -y install elasticsearch"; done

 

Give distinct names to all hosts, and set up node roles and IPs

vi /etc/elasticsearch/elasticsearch.yml

Append lines similar to the following, matched to each host:

# kibana01
node.name: kibana01
network.host: 192.168.100.101
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# kibana02
node.name: kibana02
network.host: 192.168.100.102
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# logstash01
node.name: logstash01
network.host: 192.168.100.103
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# logstash02
node.name: logstash02
network.host: 192.168.100.104
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# elastic01
node.name: elastic01
network.host: 192.168.100.105
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]

# elastic02
node.name: elastic02
network.host: 192.168.100.106
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]

 

Enable Elasticsearch services and start the cluster


for host in {elastic01,elastic02,logstash01,logstash02,kibana01,kibana02}; do ssh $host \
    "systemctl enable elasticsearch && systemctl start elasticsearch"; done

 

Install Kibana and configure it
for host in {kibana01,kibana02}; do ssh $host \
"tee /etc/yum.repos.d/kibana.repo << kibanarepo
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
kibanarepo"; done

for host in {kibana01,kibana02}; do ssh $host "yum -y install kibana && chkconfig kibana on"; done

vi /opt/kibana/config/kibana.yml

Locate server.host and replace “0.0.0.0” with “localhost”
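
so that the line ends up reading:

server.host: "localhost"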

for host in {kibana01,kibana02}; do ssh $host "systemctl start kibana"; done

 

Configure Kibana with Filebeat index template:

for host in {kibana01,kibana02}; do ssh $host \
"curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json"; done

for host in {kibana01,kibana02}; do ssh $host "curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json"; done

 

Setup nginx reverse proxy on Kibana containers

for host in {kibana01,kibana02}; do ssh $host "yum -y install epel-release"; done
for host in {kibana01,kibana02}; do ssh $host "yum -y install nginx httpd-tools"; done

Make sure /etc/nginx/nginx.conf looks like this:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

And /etc/nginx/conf.d/kibana.conf looks like this:

server {
    listen 80;

    server_name kibana01.mgmt.zeding.ro;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

 

for host in {kibana01,kibana02}; do ssh $host "systemctl enable nginx && systemctl start nginx"; done

 

Install Logstash and do a basic configuration

for host in {logstash01,logstash02}; do ssh $host \
"tee /etc/yum.repos.d/logstash.repo << logstashrepo
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
logstashrepo"; done

for host in {logstash01,logstash02}; do ssh $host "yum -y install logstash"; done

 

Generate a multi-domain/multi-IP SSL certificate for Logstash

The multi-IP certificate steps will be done on the 1st logstash container.

curl -LO https://raw.githubusercontent.com/driskell/log-courier/1.x/src/lc-tlscert/lc-tlscert.go

yum -y install golang-pkg-linux-amd64-1.3.3-2.el7_0.noarch

go build lc-tlscert.go

./lc-tlscert

cp selfsigned.crt /etc/pki/tls/certs/logstash-forwarder.crt

cp selfsigned.key /etc/pki/tls/private/logstash-forwarder.key

chmod 644 /etc/pki/tls/private/logstash-forwarder.key

systemctl restart logstash.service

 

When this step is done, make sure you use this multi-IP certificate on all Logstash instances and Filebeat clients.
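
For example, to push it to the second Logstash node (the clients only need the .crt):

scp /etc/pki/tls/certs/logstash-forwarder.crt root@logstash02:/etc/pki/tls/certs/
scp /etc/pki/tls/private/logstash-forwarder.key root@logstash02:/etc/pki/tls/private/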

 

Configure beats input for Logstash

for host in {logstash01,logstash02}; do ssh $host \
'tee /etc/logstash/conf.d/02-beats-input.conf << beat_input
input {
   beats {
      port => 5044
      ssl => true
      ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
      ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
   }
}
beat_input'; done

 

for host in {logstash01,logstash02}; do ssh $host \
'tee /etc/logstash/conf.d/10-syslog-filter.conf << syslog_filter
filter {
   if [type] == "syslog" {
      grok {
         match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
         add_field => [ "received_at", "%{@timestamp}" ]
         add_field => [ "received_from", "%{host}" ]
      }
      syslog_pri { }
      date {
         match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
   }
}
syslog_filter'; done

 

for host in {logstash01,logstash02}; do ssh $host \
'tee /etc/logstash/conf.d/30-elasticsearch-output.conf << elk_out
output {
   elasticsearch {
      hosts => ["localhost:9200"]
      sniffing => true
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
   }
}
elk_out'; done

 

Test the configurations with:

service logstash configtest

and if everything returns OK:

chkconfig logstash on && systemctl restart logstash

 

Setup Filebeat on clients

tee /etc/yum.repos.d/elastic-beats.repo << 'beatsrepo'
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1
beatsrepo

 

Install Filebeats:
yum -y install filebeat

 

Configure Filebeats:

vi /etc/filebeat/filebeat.yml

find document_type and make it look like this:
document_type: syslog

change "localhost" to the private IP address (or hostname, if you went with that option) of any Logstash container.

find the "tls" section and make it look like this:
# Optional TLS. By default is off.
tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]
certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Start and enable Filebeats service:

systemctl start filebeat && systemctl enable filebeat

 

High Availability for Kibana will be achieved via HAProxy

We presume HAProxy is already installed.

vi /etc/haproxy/haproxy.cfg

and add something similar:

# Frontends
listen http *:80
    option http-server-close
    option forwardfor except 127.0.0.0/8
    acl kibana_internal hdr(host) -i kibana.domain.tld
    use_backend kibana_cluster if kibana_internal

# Backends
backend kibana_cluster
    server kibana01 192.168.100.101:80 check
    server kibana02 192.168.100.102:80 check

 

High Availability for Logstash will be achieved via DNS round robin

Make sure you add this to your DNS zone (round robin needs two A records for the same name; multiple CNAMEs on one name are not valid DNS):

logstash01.domain.tld. IN A 192.168.100.103
logstash02.domain.tld. IN A 192.168.100.104
logstash.domain.tld.   IN A 192.168.100.103
logstash.domain.tld.   IN A 192.168.100.104

DHCP HA Setup on CentOS/RHEL 7.x

On primary server

#dhcpd.conf

authoritative;
option domain-name "subdomain.domain.tld";
option domain-name-servers 8.8.8.8, 8.8.4.4;

default-lease-time 600;
max-lease-time 7200;

# fail over configuration

failover peer "dhcp02" {
    primary; # This is the primary
    address 192.168.1.103; # This DHCP Server IP Address
    port 647;
    peer address 192.168.1.104; # Secondary DHCP Server ip address
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    load balance max seconds 3;
    mclt 3600;
    split 128;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name "subdomain.domain.tld";
    option domain-name-servers 8.8.8.8, 8.8.4.4;
    option broadcast-address 192.168.1.255;
    option routers 192.168.1.1;
    pool {
        failover peer "dhcp02";
        range 192.168.1.150 192.168.1.250;
        default-lease-time 6000;
        max-lease-time 72000;
    }
}

On secondary server

#dhcpd.conf

authoritative;
option domain-name "subdomain.domain.tld";
option domain-name-servers 8.8.8.8, 8.8.4.4;

default-lease-time 600;
max-lease-time 7200;

# fail over configuration

failover peer "dhcp02" {
    secondary; # This is the secondary
    address 192.168.1.104; # This DHCP Server IP Address
    port 647;
    peer address 192.168.1.103; # Primary DHCP Server ip address
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    load balance max seconds 3;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name "subdomain.domain.tld";
    option domain-name-servers 8.8.8.8, 8.8.4.4;
    option broadcast-address 192.168.1.255;
    option routers 192.168.1.1;
    pool {
        failover peer "dhcp02";
        range 192.168.1.150 192.168.1.250;
        default-lease-time 6000;
        max-lease-time 72000;
    }
}
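
With the configuration in place on both servers, enable and start the service:

systemctl enable dhcpd && systemctl start dhcpd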

Customize CRUSH MAP for Ceph – Mixing SSD with SATA on same hosts

Someone might wonder why this is needed. It’s one step further towards hyper-convergence, mixing different types of disks in the same boxes.
CRUSH is flexible and topology-aware; we will create two pools, “ssd” and “sata”.

Prerequisites: a Ceph cluster must be up and running

 

Back up your existing crushmap

Exporting the compiled crushmap

ceph osd getcrushmap -o default.coloc

Decompile

crushtool -d default.coloc -o default.txt

 

Customizing the crushmap

Copy default crush map and start customizing it

cp default.txt customized.txt

vi customized.txt

 

Your new map should look similar:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# OSD SATAs

host ceph01-sata {
   id -2 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.1 weight 1.000
   item osd.2 weight 1.000
}

host ceph02-sata {
   id -3 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.4 weight 1.000
   item osd.5 weight 1.000
}

host ceph03-sata {
   id -4 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.7 weight 1.000
   item osd.8 weight 1.000
}

# OSD SSDs

host ceph01-ssd {
   id -22 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.0 weight 1.000
}

host ceph02-ssd {
   id -23 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.3 weight 1.000
}

host ceph03-ssd {
   id -24 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.6 weight 1.000
}

# SATA ROOT

root sata {
   id -1 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item ceph01-sata weight 2.000
   item ceph02-sata weight 2.000
   item ceph03-sata weight 2.000
}

# SSD ROOT

root ssd {
   id -21 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item ceph01-ssd weight 2.000
   item ceph02-ssd weight 2.000
   item ceph03-ssd weight 2.000
}

# SSD RULE

# rules
rule ssd {
   ruleset 0
   type replicated
   min_size 1
   max_size 10
   step take ssd
   step chooseleaf firstn 0 type host
   step emit
}

# SATA RULE

rule sata {
   ruleset 1
   type replicated
   min_size 1
   max_size 10
   step take sata
   step chooseleaf firstn 0 type host
   step emit
}

 

Compile and import the new customized map

crushtool -c customized.txt -o customized.coloc

ceph osd setcrushmap -i customized.coloc

Now check your new topology:

ceph osd tree

 

Create the new pools and apply rules

ceph osd pool create ssd 128 128

ceph osd pool create sata 128 128

ceph osd pool set ssd crush_ruleset 0

ceph osd pool set sata crush_ruleset 1

 

Testing your new pools

Create a block device in each pool

rbd create test_sata --size 500G --pool sata --image-feature layering

rbd --image test_sata -p sata info

rbd create test_ssd --size 100G --pool ssd --image-feature layering

rbd --image test_ssd -p ssd info
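
To confirm the images really landed in the intended pools, check the per-pool usage:

ceph df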