HA ELK Stack – Elasticsearch, Logstash & Kibana on CentOS/RHEL 7.x

ELK's purpose is to gather logs from multiple sources, store them in a centralized location and use Kibana to visualize them.
Short description of the involved components:

  • Elasticsearch – open-source engine that stores the logs for later search
  • Logstash – collects and parses the logs
  • Kibana – web interface for searching and viewing the indexed logs
  • Filebeat – the agent that ships logs from clients to Logstash

Centralized logs help identify complex problems that need a bird's-eye view: failures in distributed systems, multi-backend applications and similar use cases.

For this setup we will be using 6 LXC containers with the following roles:

  • 2 for Kibana
  • 2 for Logstash
  • 2 for Elasticsearch


And here are the steps needed for the setup:
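The same six containers appear in every loop below. As a small convenience (a sketch, not required by the steps that follow), the list can be kept in one shell array:

```shell
# All six LXC containers used in this setup, in one place.
hosts=(kibana01 kibana02 logstash01 logstash02 elastic01 elastic02)

# Any of the per-host loops below can then be written as:
for host in "${hosts[@]}"; do
    echo "would run setup on ${host}"
done
```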

Install Java 8 on all containers
for host in {kibana01,kibana02,logstash01,logstash02,elastic01,elastic02}; do \
    ssh $host "wget --no-cookies --no-check-certificate \
    --header 'Cookie:gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie' 'http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm' && \
    yum -y localinstall jdk-8u73-linux-x64.rpm"; done


Install Elasticsearch on all containers
We will be using Elasticsearch in client mode (node.master and node.data disabled) on kibana0{1,2} and logstash0{1,2}, and in master/data mode on elastic0{1,2}

for host in {kibana01,kibana02,logstash01,logstash02,elastic01,elastic02}; do ssh $host \
"tee /etc/yum.repos.d/elasticsearch.repo << elasticrepo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
elasticrepo"; done

for host in {kibana01,kibana02,logstash01,logstash02,elastic01,elastic02}; do ssh $host "yum -y install elasticsearch"; done


Give distinct names to all hosts and set up node roles and IPs

vi /etc/elasticsearch/elasticsearch.yml

Append lines similar to the following, matched to each host:

# kibana01
node.name: kibana01
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# kibana02
node.name: kibana02
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# logstash01
node.name: logstash01
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# logstash02
node.name: logstash02
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: false
node.data: false

# elastic01
node.name: elastic01
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: true
node.data: true

# elastic02
node.name: elastic02
discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]
node.master: true
node.data: true
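The per-host snippets above follow one pattern: client nodes disable the master and data roles, while the elastic pair keeps them. A throwaway helper (hypothetical, purely to show the rule) that emits the right lines for a given hostname:

```shell
# Emit the elasticsearch.yml lines for a host, following the split above:
# elastic* keep master/data roles, everything else is a client node.
es_node_config() {
    local host="$1"
    echo "node.name: ${host}"
    echo 'discovery.zen.ping.unicast.hosts: ["elastic01", "elastic02"]'
    case "$host" in
        elastic*) ;;  # master/data defaults (true) are kept
        *)
            echo "node.master: false"
            echo "node.data: false"
            ;;
    esac
}

es_node_config kibana01   # prints the client-node snippet
```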


Enable Elasticsearch services and start the cluster

for host in {elastic01,elastic02,logstash01,logstash02,kibana01,kibana02}; do ssh $host \
    "systemctl enable elasticsearch && systemctl start elasticsearch"; done
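Once the services are up, any member can report whether all six nodes joined via `curl -s localhost:9200/_cluster/health`. A small sketch of checking the node count – the sample response below is illustrative, trimmed to the fields that matter:

```shell
# Pull "number_of_nodes" out of a _cluster/health JSON response.
count_nodes() {
    grep -o '"number_of_nodes":[0-9]*' | cut -d: -f2
}

# In production: curl -s 'http://localhost:9200/_cluster/health' | count_nodes
sample='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":6,"number_of_data_nodes":2}'
echo "$sample" | count_nodes    # prints 6 once all containers have joined
```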


Install Kibana and configure it
for host in {kibana01,kibana02}; do ssh $host \
"tee /etc/yum.repos.d/kibana.repo << kibanarepo
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
kibanarepo"; done

for host in {kibana01,kibana02}; do ssh $host "yum -y install kibana && chkconfig kibana on"; done

vi /opt/kibana/config/kibana.yml

Locate server.host and set its value to "localhost" – Kibana will then only listen locally, with nginx proxying external requests to it.

for host in {kibana01,kibana02}; do ssh $host "systemctl start kibana"; done


Configure Kibana with Filebeat index template:

for host in {kibana01,kibana02}; do ssh $host \
"curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json"; done

for host in {kibana01,kibana02}; do ssh $host "curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json"; done
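A successful _template PUT answers with acknowledged:true. A tiny sanity check, sketched against a canned response (in practice, pipe the curl output straight into the grep):

```shell
# Canned copy of what a successful _template PUT returns:
resp='{"acknowledged":true}'

# The pattern tolerates ?pretty spacing around the colon.
echo "$resp" | grep -q '"acknowledged" *: *true' && echo "filebeat template loaded"
```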


Setup nginx reverse proxy on Kibana containers

for host in {kibana01,kibana02}; do ssh $host "yum -y install epel-release"; done
for host in {kibana01,kibana02}; do ssh $host "yum -y install nginx httpd-tools"; done
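The kibana.conf further down references /etc/nginx/htpasswd.users, which has to exist before nginx will accept logins. httpd-tools provides htpasswd -c for this; the sketch below uses openssl instead so it runs anywhere (the user name kibanaadmin and the password are placeholders, pick your own):

```shell
# Build the basic-auth file referenced by auth_basic_user_file in kibana.conf.
# "kibanaadmin" / "S3cretPass" are placeholders.
user=kibanaadmin
hash=$(openssl passwd -apr1 'S3cretPass')
echo "${user}:${hash}" > htpasswd.users

# Then place it on both containers:
#   scp htpasswd.users kibana01:/etc/nginx/htpasswd.users
#   scp htpasswd.users kibana02:/etc/nginx/htpasswd.users
```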

Make sure /etc/nginx/nginx.conf looks like this:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

And /etc/nginx/conf.d/kibana.conf looks like this:

server {
    listen 80;

    server_name kibana01.mgmt.zeding.ro;   # kibana02.mgmt.zeding.ro on the second container

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}


for host in {kibana01,kibana02}; do ssh $host "systemctl enable nginx && systemctl start nginx"; done


Install Logstash and do the basic configuration

for host in {logstash01,logstash02}; do ssh $host \
"tee /etc/yum.repos.d/logstash.repo << logstashrepo
[logstash-2.2]
name=Logstash repository for 2.2.x packages
baseurl=http://packages.elastic.co/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
logstashrepo"; done

for host in {logstash01,logstash02}; do ssh $host "yum -y install logstash"; done


Generate a multi-domain/multi-IP SSL certificate for Logstash

The multi-IP certificate steps will be done on the first Logstash container.

curl -O https://raw.githubusercontent.com/driskell/log-courier/1.x/src/lc-tlscert/lc-tlscert.go

yum -y install golang

go build lc-tlscert.go

Run the resulting binary and answer the prompts with the hostnames and IPs of both Logstash containers (and the round-robin name, if you use one). lc-tlscert writes selfsigned.crt and selfsigned.key, used below:

./lc-tlscert


cp selfsigned.crt /etc/pki/tls/certs/logstash-forwarder.crt

cp selfsigned.key /etc/pki/tls/private/logstash-forwarder.key

chmod 644 /etc/pki/tls/private/logstash-forwarder.key

systemctl restart logstash.service


When this step is done make sure you use this multi-IP certificate on all logstash instances and filebeats clients.


Configure beats input for Logstash

for host in {logstash01,logstash02}; do ssh $host \
"tee /etc/logstash/conf.d/02-beats-input.conf << beat_input
input {
   beats {
      port => 5044
      ssl => true
      ssl_certificate => \"/etc/pki/tls/certs/logstash-forwarder.crt\"
      ssl_key => \"/etc/pki/tls/private/logstash-forwarder.key\"
   }
}
beat_input"; done


for host in {logstash01,logstash02}; do ssh $host \
"tee /etc/logstash/conf.d/10-syslog-filter.conf << syslog_filter
filter {
   if [type] == \"syslog\" {
      grok {
         match => { \"message\" => \"%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}\" }
         add_field => [ \"received_at\", \"%{@timestamp}\" ]
         add_field => [ \"received_from\", \"%{host}\" ]
      }
      syslog_pri { }
      date {
         match => [ \"syslog_timestamp\", \"MMM  d HH:mm:ss\", \"MMM dd HH:mm:ss\" ]
      }
   }
}
syslog_filter"; done
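To make the grok pattern above concrete: given a typical syslog line, it splits out a timestamp, hostname, program, optional pid and message. The sed below only mimics that split for illustration (grok itself runs inside Logstash; this is not part of the setup):

```shell
# A syslog line like the ones Filebeat will ship:
line='Feb 10 12:34:56 web01 sshd[4321]: Accepted publickey for root'

# Rough shell equivalent of the grok capture groups above:
echo "$line" | sed -E \
  's/^([A-Z][a-z]{2} +[0-9]+ [0-9:]+) ([^ ]+) ([^:[]+)(\[([0-9]+)\])?: (.*)/timestamp=\1 host=\2 program=\3 pid=\5 message=\6/'
# → timestamp=Feb 10 12:34:56 host=web01 program=sshd pid=4321 message=Accepted publickey for root
```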


for host in {logstash01,logstash02}; do ssh $host \
"tee /etc/logstash/conf.d/30-elasticsearch-output.conf << elk_out
output {
   elasticsearch {
      hosts => [\"localhost:9200\"]
      sniffing => true
      manage_template => false
      index => \"%{[@metadata][beat]}-%{+YYYY.MM.dd}\"
      document_type => \"%{[@metadata][type]}\"
   }
}
elk_out"; done


Test the configurations with:

service logstash configtest

and if everything returns OK:

chkconfig logstash on && systemctl restart logstash


Setup Filebeat on clients

tee /etc/yum.repos.d/elastic-beats.repo << 'beatsrepo'
[beats]
name=Elastic Beats Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
beatsrepo


Install Filebeat:
yum -y install filebeat


Configure Filebeats:

vi /etc/filebeat/filebeat.yml

In the prospectors section, find document_type and make it look like this:
document_type: syslog

In the logstash output section, change "localhost" to the private IP address (or hostname, if you went with that option) of a Logstash container, keeping port 5044.

Find the "tls" section and make it look like this:
# Optional TLS. By default is off.
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]
certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
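Put together, the relevant Filebeat 1.x pieces look roughly like the heredoc below. This is a sketch written to a local file for review; merge the values into /etc/filebeat/filebeat.yml rather than overwriting it. The log paths are examples, and logstash.domain.tld stands for the round-robin name from the DNS section (or a concrete Logstash host):

```shell
# Sketch of the filebeat.yml sections discussed above (Filebeat 1.x layout).
tee filebeat.yml.sketch << 'filebeat_cfg'
filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
        - /var/log/secure
      document_type: syslog
output:
  logstash:
    hosts: ["logstash.domain.tld:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
filebeat_cfg
```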

Start and enable the Filebeat service:

systemctl start filebeat && systemctl enable filebeat


High Availability for Kibana will be achieved via HAProxy

We presume HAProxy is already installed.

vi /etc/haproxy/haproxy.cfg

and add something similar:

# Frontends
frontend http
    bind *:80
    option http-server-close
    option forwardfor except 127.0.0.0/8
    acl kibana_internal hdr(host) -i kibana.domain.tld
    use_backend kibana_cluster if kibana_internal

# Backends
backend kibana_cluster
    server kibana01 kibana01:80 check
    server kibana02 kibana02:80 check


High Availability for Logstash will be achieved via DNS round robin

Make sure you add records like these to your DNS zone. Round robin means multiple A records under the same name; two CNAME records with the same owner name, as is sometimes attempted, would be invalid DNS:

logstash01.domain.tld. IN A <logstash01 IP>
logstash02.domain.tld. IN A <logstash02 IP>
logstash.domain.tld.   IN A <logstash01 IP>
logstash.domain.tld.   IN A <logstash02 IP>
