When it comes to centralizing logs from various sources (operating systems, databases, webservers, etc.) the ELK stack is becoming more and more popular in the open source world. ELK stands for Elasticsearch, Logstash and Kibana. Elasticsearch is based on Apache Lucene and its primary goal is to provide distributed search and analytics. Logstash is responsible for collecting logs from a variety of systems and is able to forward them to Elasticsearch. Kibana is the data visualization platform on top of Elasticsearch. Nowadays the term ELK seems not to be used anymore and people speak about the Elastic Stack instead. In this post I’ll look at how you can use these tools to centralize your PostgreSQL log file(s) in the Elastic Stack.

As usual my VMs are running CentOS, although that should not be very important for what follows (except for the yum commands). As Elasticsearch, Kibana and Logstash are all based on Java you’ll need to install Java before starting. There are yum and apt repositories available, but I’ll use the manual way of getting the pieces up and running.

The goal is to have one VM running Elasticsearch, Logstash and Kibana, and another VM which will run the PostgreSQL instance and Filebeat. Let’s start with the first one by installing Java and setting up a dedicated user for running the stack:

[root@elk ~]# yum install -y java-1.8.0-openjdk
[root@elk ~]# groupadd elk
[root@elk ~]# useradd -g elk elk
[root@elk ~]# passwd elk
[root@elk ~]# mkdir -p /opt/elk
[root@elk ~]# chown elk:elk /opt/elk
[root@elk ~]# su - elk
[elk@elk ~]$ cd /opt/elk

The first of the products we’re going to install is Elasticsearch, which is quite easy (we’ll not set up a distributed mode, only a single node for the scope of this post). All we need to do is download the tar file, extract it and start:

[elk@elk ~]$ wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.4/elasticsearch-2.3.4.tar.gz
[elk@elk elk]$ tar -axf elasticsearch-2.3.4.tar.gz
[elk@elk elk]$ ls -l
total 26908
drwxrwxr-x. 6 elk elk     4096 Jul 27 09:17 elasticsearch-2.3.4
-rw-rw-r--. 1 elk elk 27547169 Jul  7 15:05 elasticsearch-2.3.4.tar.gz
[elk@elk elk]$ cd elasticsearch-2.3.4
[elk@elk elasticsearch-2.3.4]$ bin/elasticsearch &

This will start up Elasticsearch and print some messages to the screen:

[2016-07-27 09:20:10,529][INFO ][node                     ] [Shinchuko Lotus] version[2.3.4], pid[10112], build[e455fd0/2016-06-30T11:24:31Z]
[2016-07-27 09:20:10,534][INFO ][node                     ] [Shinchuko Lotus] initializing ...
[2016-07-27 09:20:11,090][INFO ][plugins                  ] [Shinchuko Lotus] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-07-27 09:20:11,114][INFO ][env                      ] [Shinchuko Lotus] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [46.8gb], net total_space [48.4gb], spins? [unknown], types [rootfs]
[2016-07-27 09:20:11,115][INFO ][env                      ] [Shinchuko Lotus] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-07-27 09:20:11,115][WARN ][env                      ] [Shinchuko Lotus] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-07-27 09:20:12,637][INFO ][node                     ] [Shinchuko Lotus] initialized
[2016-07-27 09:20:12,637][INFO ][node                     ] [Shinchuko Lotus] starting ...
[2016-07-27 09:20:12,686][INFO ][transport                ] [Shinchuko Lotus] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-07-27 09:20:12,690][INFO ][discovery                ] [Shinchuko Lotus] elasticsearch/zc26XSa5SA-f_Kvm_jfthA
[2016-07-27 09:20:15,769][INFO ][cluster.service          ] [Shinchuko Lotus] new_master {Shinchuko Lotus}{zc26XSa5SA-f_Kvm_jfthA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-27 09:20:15,800][INFO ][gateway                  ] [Shinchuko Lotus] recovered [0] indices into cluster_state
[2016-07-27 09:20:15,803][INFO ][http                     ] [Shinchuko Lotus] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-07-27 09:20:15,803][INFO ][node                     ] [Shinchuko Lotus] started

The default port is 9200 and you should now be able to talk to Elasticsearch:

[elk@elk elasticsearch-2.3.4]$ curl -X GET http://localhost:9200/
{
  "name" : "Shinchuko Lotus",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.4",
    "build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
    "build_timestamp" : "2016-06-30T11:24:31Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

Looks good. The next product we’ll need to install is Kibana. The setup itself is as easy as setting up Elasticsearch:

[elk@elk elasticsearch-2.3.4]$ cd /opt/elk/
[elk@elk elk]$ wget https://download.elastic.co/kibana/kibana/kibana-4.5.3-linux-x64.tar.gz
[elk@elk elk]$ tar -axf kibana-4.5.3-linux-x64.tar.gz
[elk@elk elk]$ cd kibana-4.5.3-linux-x64
[elk@elk kibana-4.5.3-linux-x64]$ grep elasticsearch.url config/kibana.yml 
elasticsearch.url: "http://localhost:9200"
[elk@elk kibana-4.5.3-linux-x64]$ bin/kibana &

Similar to Elasticsearch the startup messages are written to the screen:

  log   [09:27:30.208] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [09:27:30.237] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [09:27:30.239] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [09:27:30.242] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [09:27:30.253] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [09:27:30.257] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [09:27:30.261] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [09:27:30.263] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [09:27:30.270] [info][listening] Server running at http://0.0.0.0:5601
  log   [09:27:35.320] [info][status][plugin:elasticsearch] Status changed from yellow to yellow - No existing Kibana index found
[2016-07-27 09:27:35,513][INFO ][cluster.metadata         ] [Shinchuko Lotus] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config]
[2016-07-27 09:27:35,938][INFO ][cluster.routing.allocation] [Shinchuko Lotus] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
  log   [09:27:38.746] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready

To check if Kibana is really working point your browser to http://[hostname]:5601 (192.168.22.173 in my case):
kibana_01

The third product we’ll need is Logstash. It is almost the same procedure for getting it up and running:

[elk@elk kibana-4.5.3-linux-x64]$ cd /opt/elk/
[elk@elk elk]$ wget https://download.elastic.co/logstash/logstash/logstash-all-plugins-2.3.4.tar.gz
[elk@elk elk]$ tar -axf logstash-all-plugins-2.3.4.tar.gz
[elk@elk elk]$ cd logstash-2.3.4

To test if Logstash is running fine start a very simple pipeline:

[elk@elk logstash-2.3.4]$ bin/logstash -e 'input { stdin { } } output { stdout {} }'
Settings: Default pipeline workers: 1
Pipeline main started

Once this is up type something on the command line to check if Logstash is responding:

yipphea
2016-07-27T07:52:43.607Z elk yipphea

Looks good as well. For now we can stop Logstash again with Ctrl-C:

^CSIGINT received. Shutting down the agent. {:level=>:warn}
stopping pipeline {:id=>"main"}
Received shutdown signal, but pipeline is still waiting for in-flight events
to be processed. Sending another ^C will force quit Logstash, but this may cause data loss. {:level=>:warn}

Pipeline main has been shutdown

Now we need to do some configuration to prepare Logstash for receiving our PostgreSQL log file(s) through Filebeat. Filebeat will be responsible for forwarding the PostgreSQL log file(s) to Logstash.

[elk@elk logstash-2.3.4]$ pwd
/opt/elk/logstash-2.3.4
[elk@elk logstash-2.3.4]$ mkdir conf.d
[elk@elk logstash-2.3.4]$ cat conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => false
  }
}

What this does is tell Logstash to create a new input of type beats and listen on port 5044 for incoming data. In addition to this input plugin Logstash needs an output plugin to know what it shall do with the data coming in. As we want to send all the data to Elasticsearch we need to specify this:

[elk@elk logstash-2.3.4]$ cat conf.d/10-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
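
Optionally, a filter section can parse the raw PostgreSQL log lines into structured fields before they are indexed. The snippet below is only a sketch: the file name (05-postgresql-filter.conf) and the grok pattern are assumptions derived from the log line format shown further down in this post (timestamp with time zone - line number - pid - client - user@database LEVEL: message), so you’ll need to adapt it to your log_line_prefix:

```
[elk@elk logstash-2.3.4]$ cat conf.d/05-postgresql-filter.conf
filter {
  grok {
    # timestamp tz - line - pid - client - user@db LEVEL:  message
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{WORD:timezone} - %{NUMBER:line_num} - %{NUMBER:session_pid} - %{DATA:client} - %{DATA:user_db} %{WORD:level}:%{SPACE}%{GREEDYDATA:log_message}" }
  }
}
```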

Let’s test if the configuration is fine (Logstash reads all configuration files in the directory in alphabetical order):

[elk@elk logstash-2.3.4]$ bin/logstash --config /opt/elk/logstash-2.3.4/conf.d/ --configtest
Configuration OK

As all seems fine we can start Logstash with our new configuration:

[elk@elk logstash-2.3.4]$ bin/logstash --config /opt/elk/logstash-2.3.4/conf.d/ &
Settings: Default pipeline workers: 1
Pipeline main started

To be able to easily use the Filebeat index patterns in Kibana we’ll load the sample dashboards provided by Elastic:

[elk@elk logstash-2.3.4]$ cd /opt/elk/
[elk@elk elk]$ wget http://download.elastic.co/beats/dashboards/beats-dashboards-1.2.3.zip
[elk@elk elk]$ unzip beats-dashboards-1.2.3.zip
[elk@elk elk]$ cd beats-dashboards-1.2.3
[elk@elk beats-dashboards-1.2.3]$ ./load.sh

Time to switch to the PostgreSQL VM to install Filebeat:

postgres@centos7:/u01/app/postgres/product/ [PG1] pwd
/u01/app/postgres/product
postgres@centos7:/u01/app/postgres/product/ [PG1] wget https://download.elastic.co/beats/filebeat/filebeat-1.2.3-x86_64.tar.gz
postgres@centos7:/u01/app/postgres/product/ [PG1] tar -axf filebeat-1.2.3-x86_64.tar.gz

Filebeat comes with an index template for Elasticsearch, which we now need to transfer to the host Elasticsearch runs on so that we can load it there:

postgres@centos7:/u01/app/postgres/product/filebeat-1.2.3-x86_64/ [PG1] ls *template*
filebeat.template.json
postgres@centos7:/u01/app/postgres/product/filebeat-1.2.3-x86_64/ [PG1] scp filebeat.template.json [email protected]:/var/tmp/
[email protected]'s password: 
filebeat.template.json                                                      100%  814     0.8KB/s   00:00  

On the host where Elasticsearch runs we can now load the template into Elasticsearch:

[elk@elk elk]$ curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/var/tmp/filebeat.template.json
{"acknowledged":true}
[elk@elk elk]$ 

Back on the PostgreSQL VM we need to configure Filebeat itself by adapting the filebeat.yml configuration file:

postgres@centos7:/u01/app/postgres/product/filebeat-1.2.3-x86_64/ [PG1] ls
filebeat  filebeat.template.json  filebeat.yml

There are only a few important points to configure. The first one is to tell Filebeat where to look for the PostgreSQL log files:

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure not file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /u02/pgdata/PG1/pg_log/*.log
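
This path assumes the PostgreSQL logging collector writes its log files below pg_log. A sketch of the corresponding postgresql.conf settings (the values here are only what would match file names like postgresql-Sun.log seen further down; adapt them to your setup):

```
logging_collector = on              # have PostgreSQL write its log to files
log_directory = 'pg_log'            # relative to the data directory
log_filename = 'postgresql-%a.log'  # one file per weekday, e.g. postgresql-Sun.log
```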

Afterwards make sure you disable the Elasticsearch output plugin by commenting it out under the “output” section:

  ### Elasticsearch as output
  #elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify and additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    #hosts: ["localhost:9200"]

Finally enable the Logstash output plugin in the same section (provide the host and port where your Logstash is running):

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["192.168.22.173:5044"]

The host and port specified here must match what we specified in 02-beats-input.conf when we configured Logstash above. This should be sufficient to start up Filebeat:

postgres@centos7:/u01/app/postgres/product/filebeat-1.2.3-x86_64/ [PG1] ./filebeat &

If everything is working fine we should now be able to ask Elasticsearch for our data from the PostgreSQL log file(s):

[elk@elk elk]$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 971,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1F",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 06:57:34.030 CEST - 2 - 20831 -  - @ LOG:  MultiXact member wraparound protections are now enabled",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "type" : "log",
        "input_type" : "log",
        "fields" : null,
        "count" : 1,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "offset" : 112,
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1M",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 07:20:46.060 CEST - 2 - 20835 -  - @ LOG:  autovacuum launcher shutting down",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "fields" : null,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "offset" : 948,
        "type" : "log",
        "count" : 1,
        "input_type" : "log",
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1P",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 07:20:46.919 CEST - 5 - 20832 -  - @ LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.000 s, sync=0.000 s, total=0.678 s; sync files=0, longest=0.000 s, average=0.000 s; distance=10908 kB, estimate=10908 kB",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "input_type" : "log",
        "count" : 1,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "type" : "log",
        "fields" : null,
        "offset" : 1198,
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1R",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:36:34.600 CEST - 1 - 2878 -  - @ LOG:  database system was shut down at 2016-05-15 07:20:46 CEST",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "offset" : 1565,
        "input_type" : "log",
        "count" : 1,
        "fields" : null,
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "type" : "log",
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1X",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:39:21.313 CEST - 3 - 3048 - [local] - postgres@postgres STATEMENT:  insert into t1 generate_series(1,1000000);",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "offset" : 2216,
        "type" : "log",
        "input_type" : "log",
        "count" : 1,
        "fields" : null,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1e",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:43:24.366 CEST - 3 - 3397 - [local] - postgres@postgres CONTEXT:  while updating tuple (0,1) in relation \"t1\"",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "offset" : 3165,
        "type" : "log",
        "input_type" : "log",
        "count" : 1,
        "fields" : null,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1l",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:43:46.776 CEST - 10 - 3397 - [local] - postgres@postgres CONTEXT:  while updating tuple (0,1) in relation \"t1\"",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "offset" : 4045,
        "type" : "log",
        "count" : 1,
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "input_type" : "log",
        "fields" : null,
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1r",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:45:39.837 CEST - 9 - 3048 - [local] - postgres@postgres ERROR:  type \"b\" does not exist at character 28",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "input_type" : "log",
        "count" : 1,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "offset" : 4799,
        "type" : "log",
        "fields" : null,
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1w",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:45:49.843 CEST - 14 - 3048 - [local] - postgres@postgres ERROR:  current transaction is aborted, commands ignored until end of transaction block",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "type" : "log",
        "fields" : null,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "offset" : 5400,
        "input_type" : "log",
        "count" : 1,
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    }, {
      "_index" : "filebeat-2016.07.27",
      "_type" : "log",
      "_id" : "AVYrey_1IfCoJOBMaP1x",
      "_score" : 1.0,
      "_source" : {
        "message" : "2016-05-15 09:45:49.843 CEST - 15 - 3048 - [local] - postgres@postgres STATEMENT:  alter table t1 add column b int;",
        "@version" : "1",
        "@timestamp" : "2016-07-27T08:31:44.940Z",
        "offset" : 5559,
        "count" : 1,
        "fields" : null,
        "beat" : {
          "hostname" : "centos7.local",
          "name" : "centos7.local"
        },
        "source" : "/u02/pgdata/PG1/pg_log/postgresql-Sun.log",
        "type" : "log",
        "input_type" : "log",
        "host" : "centos7.local",
        "tags" : [ "beats_input_codec_plain_applied" ]
      }
    } ]
  }
}

Quite a lot of information, so it is really working 🙂 If we can ask Elasticsearch, we should be able to use Kibana on the same data, too, shouldn’t we? Fire up your browser and point it to your Kibana URL (192.168.22.173:5601 in my case). You should see the “filebeat-*” index pattern in the upper left:

kibana_02

Select the “filebeat-*” index pattern:
kibana_03

To make this index the default one click on the green star:
kibana_04

Time to discover our data by using the “Discover” menu on the top:
kibana_05

The result of that should be that you can see all the PostgreSQL log messages on the screen:
kibana_06

Try to search something, e.g. “checkpoint complete”:
kibana_07
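
The same search can also be run against Elasticsearch directly. As a sketch, a query body like the following could be sent to the _search endpoint of the filebeat-* indices (match_phrase requires the words to appear next to each other):

```
{
  "query": {
    "match_phrase": { "message": "checkpoint complete" }
  }
}
```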

Not much is happening on my instance, so let’s trigger some checkpoints:

postgres@centos7:/u01/app/postgres/product/filebeat-1.2.3-x86_64/ [PG1] sqh
psql (9.6beta1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.403 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.379 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.364 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.321 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.370 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.282 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.411 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 101.166 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.392 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.322 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.367 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.320 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.328 ms
(postgres@[local]:5432) [postgres] > checkpoint;
CHECKPOINT
Time: 100.285 ms

This is what the picture looks like now:
kibana_08

Isn’t that great? All the information in near real time, just by using a browser. In my case the logs are coming from a single PostgreSQL instance, but they could be coming from hundreds of instances. Logs could also be coming from webservers, application servers, operating systems, the network and more, all centralized in one place, ready to analyze. Even better, you could use Watcher to alert on changes in your data.

Ah, I can already hear it: but I need to see my performance metrics as well. No problem, there is the jdbc input plugin for Logstash. What can you do with that? Once configured, you can query whatever you want from your database. Let’s do a little demo.

As we downloaded Logstash with all plugins included, the jdbc input plugin is already there:

[elk@elk logstash-2.3.4]$ pwd
/opt/elk/logstash-2.3.4
[elk@elk logstash-2.3.4]$ find . -name *jdbc*
./vendor/bundle/jruby/1.9/gems/jdbc-sqlite3-3.8.11.2
./vendor/bundle/jruby/1.9/gems/jdbc-sqlite3-3.8.11.2/jdbc-sqlite3.gemspec
./vendor/bundle/jruby/1.9/gems/jdbc-sqlite3-3.8.11.2/lib/jdbc
./vendor/bundle/jruby/1.9/gems/jdbc-sqlite3-3.8.11.2/lib/sqlite-jdbc-3.8.11.2.jar
./vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.1.0
./vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.1.0/lib/logstash/inputs/jdbc.rb
./vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.1.0/lib/logstash/plugin_mixins/jdbc.rb
./vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.1.0/logstash-input-jdbc.gemspec
./vendor/bundle/jruby/1.9/gems/sequel-4.36.0/lib/sequel/adapters/jdbc
./vendor/bundle/jruby/1.9/gems/sequel-4.36.0/lib/sequel/adapters/jdbc/jdbcprogress.rb
./vendor/bundle/jruby/1.9/gems/sequel-4.36.0/lib/sequel/adapters/jdbc.rb
./vendor/bundle/jruby/1.9/specifications/jdbc-sqlite3-3.8.11.2.gemspec
./vendor/bundle/jruby/1.9/specifications/logstash-input-jdbc-3.1.0.gemspec

What you’ll need to provide in addition is the JDBC driver for the database you want to connect to. In my case, for PostgreSQL:

[elk@elk logstash-2.3.4]$ cd /opt/elk/
[elk@elk elk]$ mkdir jdbc
[elk@elk elk]$ cd jdbc/
[elk@elk jdbc]$ wget https://jdbc.postgresql.org/download/postgresql-9.4.1209.jar
[elk@elk jdbc]$ ls
postgresql-9.4.1209.jar

All we need to do from here on is to configure another input and output plugin for Logstash:

[elk@elk conf.d]$ pwd
/opt/elk/logstash-2.3.4/conf.d
[elk@elk conf.d]$ cat 03-jdbc-postgres-input.conf
input {
    jdbc {
        jdbc_connection_string => "jdbc:postgresql://192.168.22.99:5432/postgres"
        jdbc_user => "postgres"
        jdbc_password => "postgres"
        jdbc_validate_connection => true
        jdbc_driver_library => "/opt/elk/jdbc/postgresql-9.4.1209.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        statement => "SELECT * from pg_stat_activity"
    }
}
output {
    stdout { codec => json_lines }
}

Re-test if the configuration is fine:

[elk@elk logstash-2.3.4]$ bin/logstash --config /opt/elk/logstash-2.3.4/conf.d/ --configtest
Configuration OK

And then kill and restart Logstash. You should see data from pg_stat_activity right on the screen:

[elk@elk logstash-2.3.4]$ bin/logstash --config /opt/elk/logstash-2.3.4/conf.d/ 
Settings: Default pipeline workers: 1
Pipeline main started
{"datid":13322,"datname":"postgres","pid":3887,"usesysid":10,"usename":"postgres","application_name":"","client_addr":{"type":"inet","value":"192.168.22.173"},"client_hostname":null,"client_port":58092,"backend_start":"2016-07-27T13:15:25.421Z","xact_start":"2016-07-27T13:15:25.716Z","query_start":"2016-07-27T13:15:25.718Z","state_change":"2016-07-27T13:15:25.718Z","wait_event_type":null,"wait_event":null,"state":"active","backend_xid":null,"backend_xmin":{"type":"xid","value":"1984"},"query":"SELECT * from pg_stat_activity","@version":"1","@timestamp":"2016-07-27T13:15:26.712Z"}
{"message":"2016-07-27 15:15:25.422 CEST - 1 - 3887 - 192.168.22.173 - [unknown]@[unknown] LOG:  connection received: host=192.168.22.173 port=58092","@version":"1","@timestamp":"2016-07-27T13:15:32.054Z","source":"/u02/pgdata/PG1/pg_log/postgresql-Wed.log","offset":84795,"type":"log","input_type":"log","count":1,"beat":{"hostname":"centos7.local","name":"centos7.local"},"fields":null,"host":"centos7.local","tags":["beats_input_codec_plain_applied"]}
{"message":"2016-07-27 15:15:25.454 CEST - 2 - 3887 - 192.168.22.173 - postgres@postgres LOG:  connection authorized: user=postgres database=postgres","@version":"1","@timestamp":"2016-07-27T13:15:32.054Z","source":"/u02/pgdata/PG1/pg_log/postgresql-Wed.log","fields":null,"beat":{"hostname":"centos7.local","name":"centos7.local"},"count":1,"offset":84932,"type":"log","input_type":"log","host":"centos7.local","tags":["beats_input_codec_plain_applied"]}

As we want to have this data in Elasticsearch and analyze it with Kibana, adjust the configuration to look like this and then restart Logstash:

input {
    jdbc {
        jdbc_connection_string => "jdbc:postgresql://192.168.22.99:5432/postgres"
        jdbc_user => "postgres"
        jdbc_password => "postgres"
        jdbc_validate_connection => true
        jdbc_driver_library => "/opt/elk/jdbc/postgresql-9.4.1209.jar"
        jdbc_driver_class => "org.postgresql.Driver"
        statement => "SELECT * from pg_stat_activity"
        schedule => "* * * * *"
    }
}
output {
    elasticsearch {
        index => "pg_stat_activity"
        document_type => "pg_stat_activity"
        document_id => "%{uid}"
        hosts => ["localhost:9200"]
    }
}
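
The schedule parameter uses a cron-like syntax (the jdbc input plugin relies on the rufus-scheduler gem for this), so the statement above is executed once every minute. A few variants as a sketch:

```
schedule => "* * * * *"      # every minute (as above)
schedule => "*/5 * * * *"    # every five minutes
schedule => "0 * * * *"      # at the start of every hour
```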

Once you have restarted Logstash, head over to Kibana and create a new index pattern:
kibana_09

When you “Discover” you should see the data from pg_stat_activity:
kibana_10

Have fun with your data …