Run, Secure, and Deploy Elastic Stack on Docker 🐳

Compose & Configure Elastic Stack (ELK) on Docker for smaller-scale Production deployments and Development

Sherif Abdel-Naby
Towards Data Science


What is Elastic Stack?

Elastic Stack (aka ELK) is the current go-to stack for centralized structured logging for your organization. It collects, ingests, and stores your services' logs (and metrics) while making them searchable, aggregatable, and observable, and later lets you build alerts and dashboards on top of this data.

Elastic Stack in Action.

Running Elastic Stack

In this post, we'll compose, configure, secure, and deploy Elastic Stack using Docker & Docker-Compose. What we'll build can be used for development and a small-scale production deployment on a Docker host.

  • Building an image for each component.
  • Parameterizing configuration & avoiding hardcoded credentials.
  • Setting up Elasticsearch as a production single-node cluster ready to be scaled.
  • Setting up security and encryption.
  • Persisting secrets, certificates, and data outside containers.
  • Composing everything together in a Docker-Compose file.

To skip the step-by-step tutorial, you can check out my Elastdocker template repository to spin up Elastic Stack on Docker with a single command.

git clone https://github.com/sherifabdlnaby/elastdocker.git
cd elastdocker
make setup && make elk

Elastdocker template has a more polished configuration and better security, as well as monitoring and tools that are often used with Elastic Stack. These extra additions were omitted from this tutorial to keep it simpler.

Steps

  1. Create Docker Images for our stack (with our configuration).
  2. Make a setup container to generate Elasticsearch Keystore & Certificates.
  3. Put the Stack together using Docker-Compose.
  4. Start shipping logs! 🎉

1. Create Docker Images for our stack.

Let's start by making our root directory elk and creating five separate folders inside it: the elasticsearch, logstash, kibana, setup, and secrets directories.

Also, create a .env file in the root directory that will hold our parameters.
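On a Unix-like shell, one way to bootstrap this layout (including the config subdirectories we'll use shortly) is:

mkdir -p elk/{elasticsearch/config,logstash/config,logstash/pipeline,kibana/config,setup,secrets}
cd elk && touch .env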

Elasticsearch

Inside the ./elasticsearch directory, create a Dockerfile and add the following:
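A minimal sketch of this Dockerfile, assuming we pass the stack version as an ELK_VERSION build argument (we'll set it from .env via Docker-Compose later):

ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}

# Bake our configuration into the image.
COPY config/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml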

We then also add elasticsearch.yml to the ./elasticsearch/config/ directory; this file is loaded on container startup to configure Elasticsearch.
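A sketch of what this file can hold; the ${ELASTIC_XXX} names are placeholders of our own choosing and must match what we pass at runtime:

cluster.name: ${ELASTIC_CLUSTER_NAME}
node.name: ${ELASTIC_NODE_NAME}
cluster.initial_master_nodes: ${ELASTIC_INIT_MASTER_NODE}
network.host: 0.0.0.0

# Security and transport-layer (node-to-node) encryption,
# using the certificates we generate in step 2.
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12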

  • Notice that the ${ELASTIC_XXX} settings will be set from the container's environment variables, which we will pass later at runtime.

Logstash

Inside the ./logstash directory, create a Dockerfile and add the following:
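Again a minimal sketch, mirroring the Elasticsearch image:

ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}

# Bake configuration and the pipeline definition into the image.
COPY config/logstash.yml /usr/share/logstash/config/logstash.yml
COPY pipeline/logstash.conf /usr/share/logstash/pipeline/logstash.conf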

Add logstash.yml to logstash/config directory.
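A minimal version will do (self-monitoring is left disabled here for simplicity):

http.host: "0.0.0.0"
# Logstash's own self-monitoring; keep off unless you need it.
xpack.monitoring.enabled: false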

Add logstash.conf to logstash/pipeline directory.

A simple pipeline that forwards data from Beats to Elasticsearch.
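Something along these lines, listening for Beats on port 5044 and authenticating against Elasticsearch (Logstash expands ${ELASTIC_PASSWORD} from the environment):

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
  }
}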

Kibana

Inside the ./kibana directory, create a Dockerfile and add the following:
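Same pattern once more:

ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}

COPY config/kibana.yml /usr/share/kibana/config/kibana.yml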

Add kibana.yml to the kibana/config directory:
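A minimal sketch; Kibana also expands ${...} references from the environment:

server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
elasticsearch.username: elastic
elasticsearch.password: ${ELASTIC_PASSWORD}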

2. Make a setup container to generate Elasticsearch Keystore & Certificates

We need to create a container that will generate the Elasticsearch Keystore, which holds Elasticsearch's users' passwords (including the superuser's) and other credentials (such as an AWS key, if you're using the S3 plugin, for example).
This Keystore is created by the elasticsearch-keystore tool present in the Elasticsearch Docker image.

2.1 Add setup-keystore.sh to /setup

This script creates a Keystore and writes it to /secrets/elasticsearch.keystore, then adds $ELASTIC_PASSWORD as the password of the default superuser elastic. We will pass $ELASTIC_PASSWORD to the setup container using docker-compose later.
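A sketch of the script, assuming /secrets is mounted into the container (which we do in docker-compose.setup.yml below):

#!/bin/bash
set -e

# Create a fresh keystore and seed the bootstrap password
# for the default superuser `elastic`.
elasticsearch-keystore create
echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password'

# Move the keystore out of the container into the mounted secrets directory.
mv /usr/share/elasticsearch/config/elasticsearch.keystore /secrets/elasticsearch.keystore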

2.2 Add setup-certs.sh to /setup

This script creates a self-signed single PKCS#12 keystore that includes the node certificate, node key, and CA certificate.
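A sketch using elasticsearch-certutil, which ships with the Elasticsearch image (blank passphrases keep the example simple; use real ones in production):

#!/bin/bash
set -e

# Generate a CA, then a node certificate signed by it. The resulting PKCS#12
# bundle contains the node certificate, node key, and CA certificate.
elasticsearch-certutil ca --silent --pass '' --out /tmp/elastic-stack-ca.p12
elasticsearch-certutil cert --silent \
  --ca /tmp/elastic-stack-ca.p12 --ca-pass '' \
  --pass '' --out /secrets/elastic-certificates.p12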

2.3 Create docker-compose.setup.yml in the root directory that will start the setup container.

This docker-compose creates an ephemeral container that generates the keystore and certs using the scripts we wrote earlier.
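A sketch of this file; user: "0" lets the containers write to the mounted ./secrets directory:

version: '3.5'

services:
  keystore:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    command: bash /setup/setup-keystore.sh
    user: "0"
    volumes:
      - ./setup:/setup
      - ./secrets:/secrets
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}

  certs:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    command: bash /setup/setup-certs.sh
    user: "0"
    volumes:
      - ./setup:/setup
      - ./secrets:/secrets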

Run using the following commands:

docker-compose -f docker-compose.setup.yml run --rm keystore
docker-compose -f docker-compose.setup.yml run --rm certs

This creates the containers and removes them as they finish (to save space), leaving their output in the /secrets directory.

3. Put the Stack together using Docker-Compose.

Finally, create the main docker-compose.yml file that will put everything together.
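A sketch of the file, trimmed for readability; the variable names match the config files above, and your real file may carry more settings:

version: '3.5'

services:
  elasticsearch:
    build:
      context: ./elasticsearch
      args:
        ELK_VERSION: ${ELK_VERSION}
    environment:
      ELASTIC_CLUSTER_NAME: ${ELASTIC_CLUSTER_NAME}
      ELASTIC_NODE_NAME: ${ELASTIC_NODE_NAME}
      ELASTIC_INIT_MASTER_NODE: ${ELASTIC_INIT_MASTER_NODE}
      ES_JAVA_OPTS: "-Xmx${ELASTICSEARCH_HEAP} -Xms${ELASTICSEARCH_HEAP}"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    secrets:
      - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
      - source: elastic.certificates
        target: /usr/share/elasticsearch/config/elastic-certificates.p12
    ports:
      - "9200:9200"
    ulimits:
      memlock:
        soft: -1
        hard: -1

  logstash:
    build:
      context: ./logstash
      args:
        ELK_VERSION: ${ELK_VERSION}
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      LS_JAVA_OPTS: "-Xmx${LOGSTASH_HEAP} -Xms${LOGSTASH_HEAP}"
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: ./kibana
      args:
        ELK_VERSION: ${ELK_VERSION}
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  elasticsearch-data:

secrets:
  elasticsearch.keystore:
    file: ./secrets/elasticsearch.keystore
  elastic.certificates:
    file: ./secrets/elastic-certificates.p12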

What's in the Docker-Compose file?

  1. We declare Elasticsearch's volume that will persist the data.
  2. We declare the secret keystore passed to Elasticsearch that contains the credentials (currently only the superuser password, but it can later hold many other credentials by extending the setup-keystore.sh script).
  3. We declare the 3 services of ELK (Elasticsearch, Logstash, and Kibana),
    passing the environment variables required for building the images as well as running the containers. Recall all the config files that contained ${ELASTIC_XXX} settings; you can see them passed here. Note that we don't hardcode their values in the Docker-Compose file either; everything is declared in the .env file so that it holds all the parameters.
  4. We also declare the recommended settings for Elasticsearch, such as ulimit and ES_JAVA_OPTS, as well as the HEAP size for the stack components.

Add all the parameters to the .env file:
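For example (values here are illustrative; pick your own version and passwords):

ELK_VERSION=7.9.1
ELASTIC_PASSWORD=changeme
ELASTIC_CLUSTER_NAME=elk-cluster
ELASTIC_NODE_NAME=elk-node-0
ELASTIC_INIT_MASTER_NODE=elk-node-0
ELASTICSEARCH_HEAP=1024m
LOGSTASH_HEAP=512m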

Start the stack

Run the following commands:

Setup (Run only once)

docker-compose -f docker-compose.setup.yml run --rm keystore
docker-compose -f docker-compose.setup.yml run --rm certs

Start Elastic Stack

docker-compose up -d

Go to localhost:5601
Username: elastic
Password: changeme (or whatever you set $ELASTIC_PASSWORD to in .env)

Login To Elastic Kibana

4. Start shipping logs! 🎉

You can now start shipping logs & metrics to Elastic Stack.
1. Use Filebeat to ship logs from your services.
2. Use Metricbeat to ship metrics from your services.
Send them either directly to Elasticsearch or ingest them using Logstash, then browse them in Kibana's Discover window.
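For example, a minimal filebeat.yml that tails local log files and ships them to the Logstash port we exposed (the paths are illustrative):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log

output.logstash:
  hosts: ["localhost:5044"]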

There are numerous guides to ship logs to Elastic Stack once it's up.

Where to go from here?

We can still enhance this composition further: enable self-monitoring, add Prometheus exporters to monitor the stack with Grafana, enable SSL for the HTTP layer in addition to the inter-node transport encryption we already enabled, add health checks to the images, use tools like ElastAlert or Curator to leverage the stack, and add more nodes to the cluster.

Use Elastdocker instead.

Elastdocker is a template repository that contains a more sophisticated version of the Elastic Stack setup we just made, including all the points mentioned right above.

Scaling Elastic Stack

This setup can handle a small production workload, but it will eventually need to scale.

  • You will need to decouple the stack components to be independently deployable to separate/different docker hosts.
  • You may need to add extra Elasticsearch nodes.
  • You may need to scale out Logstash.
  • You may want to translate the current setup to a Kubernetes setup with Pods and Charts instead (only if you already have K8S in your organization).
Possible Elastic Stack decomposed into pods for a K8S deployment.

You can take a look at managed Elastic Stack offerings.

Code used in this post: https://github.com/sherifabdlnaby/medium-elastic-stack
