Monday, 29 October 2018

Cross-Cluster Search with ElasticSearch and Kibana

Code

The seeds configuration is copied from a local file when Docker builds the image:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.2
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/

A Compose file which starts two clusters with two nodes each:
version: '2.2'
services:
  kra-1:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: kra-1
    environment:
      - cluster.name=Krakow
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  kra-2:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: kra-2
    environment:
      - cluster.name=Krakow
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    ports:
      - 9202:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  lon-1:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: lon-1
    environment:
      - cluster.name=London
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    ports:
      - 9203:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  lon-2:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: lon-2
    environment:
      - cluster.name=London
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata4:/usr/share/elasticsearch/data
    ports:
      - 9204:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    container_name: kibana
    environment:
      SERVER_NAME: kibana.localhost
      ELASTICSEARCH_URL: http://kra-1:9200
    ports:
      - 5601:5601
    depends_on:
      - "kra-1"
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
  esdata4:
    driver: local

networks:
  esnet:


The seeds configuration (elasticsearch.yml) for cross-cluster search:

network.host: 0.0.0.0
search:
  remote:
    Krakow:
      seeds:
        - kra-1:9300
        - kra-2:9300
    London:
      seeds:
        - lon-1:9300
        - lon-2:9300
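
For reference (this alternative is my addition, not part of the original setup), the same seeds can also be registered at runtime through the cluster settings API instead of elasticsearch.yml, e.g. registering the London remote on the Krakow cluster:
curl -X PUT -H 'Content-Type: application/json' localhost:9201/_cluster/settings -d '{ "persistent": { "search": { "remote": { "London": { "seeds": [ "lon-1:9300", "lon-2:9300" ] } } } } }'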

The bulk data requires an additional empty line (a trailing newline) at the end of each NDJSON file. The first block below is london.json, the second is krakow.json:

{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "11" } }
{ "address": "20, Old Street", "rating": 2, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "12" } }
{ "address": "1, High Street", "rating": 5, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "13" } }
{ "address": "New Corner Place", "rating": 3, "shop": "Costa" }

{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "11" } }
{ "address": "Nowa 1", "rating": 4, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "12" } }
{ "address": "Rynek Glowny 4", "rating": 5, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "13" } }
{ "address": "Slawkowska 22", "rating": 0, "shop": "Costa" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "14" } }
{ "address": "Warszawska 8", "rating": 4, "shop": "Costa" }

Commands

To build and start the containers (in the folder where docker-compose.yml exists):
docker-compose stop && docker-compose build && docker-compose up
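
Once all containers report healthy, a quick way to confirm that each cluster formed with two nodes (these checks are my addition; the ports come from the compose file above):
curl -s "localhost:9201/_cat/nodes?v"
curl -s "localhost:9203/_cat/nodes?v"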

To create the indices (one per cluster):
curl -X PUT "localhost:9201/point-of-interest"
curl -X PUT "localhost:9203/point-of-interest"

To check whether the cross-cluster seeds are attached and connected:
curl -XGET -H 'Content-Type: application/json' localhost:9201/_remote/info?pretty

To put data (per cluster):
curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9201/_bulk --data-binary "@krakow.json"; echo
curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9203/_bulk --data-binary "@london.json"; echo
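
To verify the documents were indexed (a quick sanity check of my own, not from the original post):
curl -s "localhost:9201/point-of-interest/_count?pretty"
curl -s "localhost:9203/point-of-interest/_count?pretty"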

Kibana

This is purely a proof of concept and no specific data has been provided, so the sample documents contain no date field. That means they are not visible in Kibana's default, time-filtered view. However, because Kibana uses the kra-1 node as its search node, and that node already has the cross-cluster seeds configured, cross-cluster search is available out of the box and no additional setup is needed before searching. To prove that cross-clustering works in Kibana, follow these steps:
  • Go to http://localhost:5601 once docker started all containers
  • Go to "Discover" tab
  • In the search bar, change "*" to "*:*" and press Enter
  • Scroll down to see results

Looking for results

There are many ways to target results across indices and clusters. You can use wildcards, specific names and commas to narrow things down, which gives you complete freedom in how you search: you can query different indices across different clusters, use wildcards to cover several similar clusters or indices, give just part of a name or be fully specific, and combine multiple clusters by separating them with commas. This applies both to Elasticsearch itself and, later, to Kibana:

London:point-of-interest 
*:* 
Krakow:point-of-* 
*:point-of-interest 
Krakow:point-of-interest,London:point-of-interest 
London*:*
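
For example, the following curl (my addition; the match query is only illustrative) runs a cross-cluster search through the Krakow cluster's HTTP port against the London cluster's index:
curl -s -H "Content-Type: application/json" "localhost:9201/London:point-of-interest/_search?pretty" -d '{ "query": { "match": { "shop": "Costa" } } }'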


Friday, 6 March 2015

Cleaning old backups in bash while keeping others for a specific amount of time

If you write your own scripts to manage backups on Linux, I believe you have wondered how to manage them by time period - so you neither delete too much nor fill your backup partition too fast. Usually there is no need to use backups older than 1-2 weeks, but sometimes you need to check changes that were made 3-4 months ago, especially when you are storing databases that keep changing (or tables were being added and deleted).

In this scenario, let's say we want to keep our backups for up to 9 months. Since there is usually no need to keep daily backups older than 7-10 days, we decided to keep daily backups for the last 10 days, then keep the Sunday backups for 3 months, and also keep the backups from the first Sunday of each month for 9 months... Sounds crazy? Not at all! Many times in the past I have had to restore pretty old backups to important production servers. What is the simplest solution? I prefer not to use Perl (don't know why, I just don't like it), so I wrote this small solution with bash and awk:

# find /path/to/backup/ -daystart -mtime +10  -printf "%Te %Ta %h/%f\n" | awk '($2!="Sun"){print $3}' | xargs rm -rf
# find /path/to/backup/ -daystart -mtime +91  -printf "%Te %Ta %h/%f\n" | awk '($2=="Sun" && $1>7){print $3}'  | xargs rm -rf
# find /path/to/backup/ -daystart -mtime +273 -printf "%Te %Ta %h/%f\n" | awk '($2=="Sun" && $1<=7){print $3}' | xargs rm -rf

Indeed, it is very simple. Each line searches the backup path and deletes files matching a specific condition: files older than 10 days that were not made on a Sunday; files older than 3 months made on a Sunday, but not in the first week of the month; and finally files older than 9 months made on the first Sunday of the month. Strictly speaking, the awk filter in the last command and the $2=="Sun" test in the second one are redundant (the earlier rules have already removed everything else), but keeping them makes the retention logic a bit easier to follow :).
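
If you want to run the three rules from cron, a minimal wrapper script could look like this (my sketch, assuming backup file names contain no whitespace; /path/to/backup is a placeholder):

#!/usr/bin/env bash
# cleanup-backups.sh - the retention rules described above; xargs -r skips rm when nothing matched
set -euo pipefail
BACKUP_DIR="/path/to/backup"

# older than 10 days and not made on a Sunday
find "$BACKUP_DIR" -daystart -mtime +10  -printf "%Te %Ta %h/%f\n" | awk '($2!="Sun"){print $3}' | xargs -r rm -rf
# older than ~3 months, made on a Sunday, but not in the first week of the month
find "$BACKUP_DIR" -daystart -mtime +91  -printf "%Te %Ta %h/%f\n" | awk '($2=="Sun" && $1>7){print $3}' | xargs -r rm -rf
# older than ~9 months, made on the first Sunday of the month
find "$BACKUP_DIR" -daystart -mtime +273 -printf "%Te %Ta %h/%f\n" | awk '($2=="Sun" && $1<=7){print $3}' | xargs -r rm -rf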


Tuesday, 23 April 2013

Checking Dell server Service Tag and BIOS version remotely

Because I maintain some Dell servers on my network, it's really useful to know a few (perhaps obvious to most of you) commands. For example, some of you have probably lost the notepad where all of your Dell Service Tags (serial numbers) were written down. Here is how simple it can be to check some important things on your server:

- How to check service tag?
- The simplest way to check the Service Tag on your (Linux) server is to use dmidecode:

dmidecode | grep -i serial
Service Tag will be available as 'Serial Number'.
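
Most dmidecode versions can also print just this field (this shortcut is my addition, not from the original post):
dmidecode -s system-serial-number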

- How to check BIOS version on server?
- Again, use dmidecode!
dmidecode | grep -i BIOS
A line similar to 'BIOS Revision' should appear.
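
Similarly, dmidecode can print just the BIOS version directly (again, my addition):
dmidecode -s bios-version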

Wednesday, 17 April 2013

Upgrading CentOS to version 5.9 with DAHDI drivers

I support some servers that run Asterisk (in this case for call recording). Because this company had installed the DAHDI drivers for a Digium card, I didn't expect any big issues when upgrading from an older CentOS 5.x to the newest 5.9. Unfortunately, I was wrong. Right after upgrading the packages and rebooting the server, I was unable to load the DAHDI kernel modules. How do you solve this? It's really easy: just upgrade the DAHDI drivers to the newest version. Sounds nice? OK, let's do it:

cd /usr/src/
wget http://downloads.asterisk.org/pub/telephony/dahdi-linux/dahdi-linux-2.6.2.tar.gz
tar zxvf dahdi-linux-2.6.2.tar.gz
cd dahdi-linux-2.6.2
make all
make install
# stop Asterisk before restarting the DAHDI drivers
asterisk -rx "core stop now"
/etc/init.d/dahdi restart
# start Asterisk again and make sure DAHDI starts on boot
asterisk
chkconfig dahdi on
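
To confirm the new modules actually load after the upgrade, a quick sanity check (my addition, not part of the original post):
lsmod | grep dahdi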

And that's all - a really nice and quick job. I hope this helps if you run into the same kind of problem.

Saturday, 6 April 2013

Qlikview reload - on client side

About a year ago I was implementing QlikView for one of my clients. He gave me an interesting problem to solve: the ability to trigger a reload from a website. I spent a few days on this problem without any result, but here it is - my solution. I wrote it up a few months ago on the QV forum and now I am sharing this method with you.

There is no direct way to do something like this, but you can build a "reload" button in a roundabout way.

1. Install IIS Server + CGI and PHP on the same machine that hosts the QV Server - but of course you'll need to run it on a port other than 80.

2. Configure IIS by changing anonymous access from IUSR to Admin or another local user that has privileges to run programs.

3. Build a website on the new IIS server that calls exec to run the batch file (from point 5).

4. Download psexec

5. In the batch file, call psexec with the command that reloads the report as the admin user.

6. In the report, add a reload button that points to the new website.

7. Finally, on the new website you can add an automatic redirect back to the QV reports.

Please note that this method is potentially insecure - someone could use it to run other programs on the server.