One of the things I love about DigitalOcean is their applications. You can spin up a droplet to run a number of popular stacks: from LAMP, Redmine and Ghost, to Docker, Django and WordPress. My favourite, of course, is the ELK stack application. It’s an Ubuntu box with the latest versions of Elasticsearch, Logstash and Kibana already set up and ready to run. Read on to learn how to get it up and running properly.
Preparation
The first thing you’ll need is a DigitalOcean account. If you don’t have one, feel free to use this referral link to sign up. You’ll get a $10 credit if you use it!
Once you’ve set up your account, click on the big green “Create Droplet” button to create your first server instance, or droplet. You need to specify a name and a size for the droplet. The recommended size for the ELK Stack droplet is the $20 per month one with 2GB of RAM and 2 CPUs. However, I have managed to run this on the smallest $5 per month droplet. You also need to specify the region where you want your server to run; choose the one closest to you or your client base. Under Select Image, choose the Applications tab and then the ELK Logging Stack button. All that’s left to do now is to either add an SSH key or choose to have the root password emailed to you, and to click on the Create Droplet button.
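If you prefer the command line to the control panel, the same droplet can be created with DigitalOcean’s doctl tool. This is only a sketch: the droplet name, region, size slug, image slug and SSH key fingerprint below are placeholders you’ll need to swap for real values (doctl compute size list and doctl compute image list will show you what’s available).

# Sketch: create the ELK droplet from the command line.
# The size, image slug, region and SSH key fingerprint are placeholders.
doctl compute droplet create elk-01 \
  --size 2gb \
  --image <elk-application-image-slug> \
  --region nyc3 \
  --ssh-keys <your-ssh-key-fingerprint>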
DigitalOcean will now set up your ELK box for you. It will take about a minute. Once it’s done, you can log in to it using the SSH key you chose or the password you were emailed. The username is root.
I start out by stopping all the services and cleaning out unwanted log files so that I can start with a clean slate. If you plan on changing the cluster name, cleaning out the log files can prevent some confusion later on:
# Stop all the ELK services
service elasticsearch stop
service logstash stop
service logstash-web stop
service logstash-forwarder stop
# Clean out the old log files
rm /var/log/elasticsearch/*
rm /var/log/logstash/*
Configuration
Both Nginx and Kibana are set up in such a way that they need no tweaking. We’ll be focusing on Elasticsearch and Logstash.
The service defaults:
The fact that both Elasticsearch and Logstash run on the Java VM means that the runtime environment for both of them can be tweaked to a great extent. For Elasticsearch it’s important to set the heap size and prevent the memory from being swapped out.
vi /etc/default/elasticsearch
Ensure that the following lines are uncommented and contain the correct value:
ES_HEAP_SIZE=750m
MAX_LOCKED_MEMORY=unlimited
The heap size for Elasticsearch should generally be half the memory of the server, and not more than 32GB. Since we’re also running Logstash on the server, I’ve decreased the heap size a bit to ensure there’s room for Logstash as well.
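As a quick sanity check of that arithmetic on the recommended 2GB droplet (a sketch; adjust the numbers for whatever size you chose):

# A 2GB droplet reports roughly 2000MB of total memory
free -m
# Half of that is about 1000m; leaving ~250m of headroom for Logstash
# is where the 750m heap size above comes from.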
For Logstash we just set the heap size.
vi /etc/default/logstash
LS_HEAP_SIZE=256m
Elasticsearch
Once the service has been set up properly, we can now tweak Elasticsearch to meet our needs. This includes setting the node and cluster name, as well as deciding on the number of shards and replicas.
vi /etc/elasticsearch/elasticsearch.yml
# Set the node and cluster name
cluster.name: eagerelk
node.name: node01
# Set the number of shards and replicas. This is a development setting!
index.number_of_shards: 1
index.number_of_replicas: 0
# Turn off multicast and provide a list of possible masters
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]
# Ensure that memory isn't swapped. This might prevent Elasticsearch from starting on boxes with less than 1GB of memory
bootstrap.mlockall: true
Logstash
Next up is Logstash. In general I don’t need the Logstash web interface or the forwarder on an Elasticsearch node, so I disable them both so that they don’t start up with the server.
sudo vi /etc/init/logstash-web.conf
# Was: start on virtual-filesystems
start on never
sudo update-rc.d -f logstash-forwarder remove
The default logstash configuration is set up to push events coming in through Lumberjack to Elasticsearch. Ensure that you’ve set it up the way you like it before continuing.
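As a reference point, here is a minimal sketch of such a Lumberjack-to-Elasticsearch pipeline. The port, certificate paths and output options are assumptions, so compare them against the config file that actually ships with the droplet (usually under /etc/logstash/conf.d/) rather than copying this verbatim.

# Minimal Lumberjack -> Elasticsearch pipeline (sketch only)
input {
  lumberjack {
    port => 5000                                                   # assumed port
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt" # assumed path
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"       # assumed path
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}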
Run the services
Everything is now set up as we’d like it. We can now start up the different services and monitor their logs to ensure that everything is fine:
Elasticsearch
sudo service elasticsearch start
tail -f /var/log/elasticsearch/eagerelk.log  # the log file is named after the cluster
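Once Elasticsearch is up, a quick sanity check against its HTTP API confirms that the cluster name took effect and that mlockall is actually enabled (these endpoints are from the standard Elasticsearch 1.x API; the exact response fields may vary slightly between versions):

# The node should respond and report cluster_name "eagerelk"
curl http://localhost:9200/?pretty
# Look for "mlockall" : true in the process info
curl http://localhost:9200/_nodes/process?pretty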
Logstash
service logstash start
tail -f /var/log/logstash/logstash.log
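To confirm that events shipped to Logstash actually end up in Elasticsearch, list the indices once some data has come in; logstash-YYYY.MM.DD is the default index naming pattern, so seeing those indices grow is a good sign:

# Daily logstash-* indices should appear once events start flowing
curl http://localhost:9200/_cat/indices?v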
And that’s it!