Code
The Dockerfile copies the seed configuration (elasticsearch.yml) from the local directory into the image at build time:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.4.2
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
A Compose file that starts two clusters with two nodes each:
version: '2.2'
services:
  kra-1:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: kra-1
    environment:
      - cluster.name=Krakow
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  kra-2:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: kra-2
    environment:
      - cluster.name=Krakow
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    ports:
      - 9202:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  lon-1:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: lon-1
    environment:
      - cluster.name=London
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    ports:
      - 9203:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  lon-2:
    image: docker.elastic.co/elastic/elastic:6.4.2
    build:
      context: .
    container_name: lon-2
    environment:
      - cluster.name=London
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=lon-1,lon-2,kra-1,kra-2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata4:/usr/share/elasticsearch/data
    ports:
      - 9204:9200
    networks:
      - esnet
    healthcheck:
      test: curl --silent --fail localhost:9200/_cat/health || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    container_name: kibana
    environment:
      SERVER_NAME: kibana.localhost
      ELASTICSEARCH_URL: http://kra-1:9200
    ports:
      - 5601:5601
    depends_on:
      - "kra-1"
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
  esdata4:
    driver: local

networks:
  esnet:
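Once the stack is up, each node is reachable on its host-mapped port (9201-9204, per the Compose file above). A quick way to poll them all from the host is a loop like the sketch below; the fallback message is just illustrative:

```shell
# Poll the health endpoint of every node through its host-mapped port.
# Prints the _cat/health line when a node responds, or a warning when
# the node is not (yet) reachable.
for port in 9201 9202 9203 9204; do
  curl --silent --fail "localhost:$port/_cat/health" || echo "node on $port not ready"
done
```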
The elasticsearch.yml with the seed configuration for cross-cluster search:
network.host: 0.0.0.0
search:
  remote:
    Krakow:
      seeds:
        - kra-1:9300
        - kra-2:9300
    London:
      seeds:
        - lon-1:9300
        - lon-2:9300
Bulk data files require an additional empty line (a trailing newline) at the end. london.json:
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "11" } }
{ "address": "20, Old Street", "rating": 2, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "12" } }
{ "address": "1, High Street", "rating": 5, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "13" } }
{ "address": "New Corner Place", "rating": 3, "shop": "Costa" }
krakow.json:
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "11" } }
{ "address": "Nowa 1", "rating": 4, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "12" } }
{ "address": "Rynek Glowny 4", "rating": 5, "shop": "Starbucks" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "13" } }
{ "address": "Slawkowska 22", "rating": 0, "shop": "Costa" }
{ "index" : { "_index" : "point-of-interest", "_type" : "coffee", "_id" : "14" } }
{ "address": "Warszawska 8", "rating": 4, "shop": "Costa" }
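The trailing-newline requirement can be checked before POSTing. The helper below is a sketch (the check_trailing_newline name is made up); it relies on command substitution stripping a final newline, so the result is empty exactly when the file ends with one:

```shell
# Hypothetical helper: warn when a bulk file lacks the trailing newline
# that the _bulk API requires. tail -c1 prints the file's last byte;
# "$( ... )" strips a trailing newline, so it is empty iff the byte is one.
check_trailing_newline() {
  if [ -z "$(tail -c1 "$1")" ]; then
    echo "ok: $1"
  else
    echo "missing trailing newline: $1"
  fi
}

printf '{"index":{"_id":"1"}}\n{"shop":"Costa"}\n' > /tmp/good.json
printf '{"index":{"_id":"1"}}\n{"shop":"Costa"}'   > /tmp/bad.json
check_trailing_newline /tmp/good.json   # ok: /tmp/good.json
check_trailing_newline /tmp/bad.json    # missing trailing newline: /tmp/bad.json
```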
Commands
To build and start the containers (run in the folder containing docker-compose.yml):
docker-compose stop && docker-compose build && docker-compose up
To create the index in each cluster:
curl -X PUT "localhost:9201/point-of-interest"
curl -X PUT "localhost:9203/point-of-interest"
To check that the cross-cluster seeds are connected and healthy:
curl -s "localhost:9201/_remote/info?pretty"
To load the data into each cluster:
curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9201/_bulk --data-binary "@krakow.json"; echo
curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9203/_bulk --data-binary "@london.json"; echo
Kibana
Since this is purely a proof of concept, the sample documents carry no timestamp field, so they are not picked up by Kibana's default time-filtered views. However, Kibana connects to the kra-1 node, which already has cross-cluster search configured, so cross-cluster search works out of the box and no extra setup is needed before searching. To verify that cross-cluster search works in Kibana:
- Go to http://localhost:5601 once docker started all containers
- Go to "Discover" tab
- In the search filter, change "*" to "*:*" and press Enter
- Scroll down to see results
Looking for results
There are many ways to search across indices and clusters. Wildcards, specific names, and comma-separated lists can all be combined, which gives you complete freedom: search a single index in one remote cluster, apply wildcards to similar cluster or index names, match part of a name, or mix several clusters by separating them with commas. The same patterns work in both Elasticsearch and, later, Kibana:
London:point-of-interest
*:*
Krakow:point-of-*
*:point-of-interest
Krakow:point-of-interest,London:point-of-interest
London*:*
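These patterns go straight into the search URL path. A small sketch (the ccs_url helper is hypothetical) that assembles such URLs against kra-1's mapped port from the Compose file:

```shell
# Hypothetical helper: build a cross-cluster search URL from a cluster
# pattern and an index pattern, targeting kra-1's host-mapped port (9201).
ccs_url() {
  echo "localhost:9201/$1:$2/_search"
}

ccs_url London point-of-interest    # → localhost:9201/London:point-of-interest/_search
ccs_url 'Krakow' 'point-of-*'       # → localhost:9201/Krakow:point-of-*/_search
```

With the stack running, the result can be passed to curl, e.g. `curl -s "$(ccs_url London point-of-interest)?q=shop:Costa&pretty"`.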