Using Filebeat to ingest Apache logs

December 7, 2018

This tutorial on using Filebeat to ingest Apache logs will show you how to create a working system in a jiffy. I will not go into minute details, since I want to keep this post short and sweet; I will show only the bare minimum needed to make the system work.


Apache logs are everywhere. Even Buzz LightYear knew that.
A growing number of users rely on the ELK stack to handle their logs. Sooner or later you will end up with Apache logs that you want to push into an Elasticsearch cluster.

There are two popular ways of getting logs into an Elasticsearch cluster: Filebeat and Logstash. Filebeat is a lightweight application, whereas Logstash is a big, heavy application with a correspondingly richer feature set.


Filebeat has been made highly configurable so that it can handle a large variety of log formats. In the real world, however, a few industry-standard log formats are very common. To make life easier, Filebeat ships with modules: each standard logging format has its own module, and all you have to do is enable it. No messing around in the config files, no edge cases to handle; everything has been taken care of. Since I am using Filebeat to ingest Apache logs, I will enable the apache2 module.

First, install and start Elasticsearch and Kibana. Then you have to install some Elasticsearch plugins.
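The apache2 module's ingest pipeline uses GeoIP and user-agent processors, which in Elasticsearch 6.x live in separate plugins (they come bundled with Elasticsearch from 6.7 onwards). Assuming a package-based install with Elasticsearch under /usr/share/elasticsearch, the plugin installation looks like this:

```shell
# Install the ingest plugins that the apache2 module's pipeline depends on
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
```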

If you have a multi-node cluster, install these plugins on all the nodes. This might be a bug as of now, but I had to restart all the nodes for the changes to take effect.

Then install Filebeat.
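A sketch, assuming a Debian/Ubuntu box with the Elastic APT repository already configured (adjust accordingly for yum/zypper or a tarball install):

```shell
sudo apt-get update
sudo apt-get install filebeat
```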

Then edit the /etc/filebeat/filebeat.yml file to specify the connections. Since I am not using security, this section is easy.
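A minimal sketch of the relevant sections; the hosts below are assumptions, so point them at your own Elasticsearch and Kibana instances:

```yaml
# /etc/filebeat/filebeat.yml -- only the connection settings, no security
output.elasticsearch:
  hosts: ["localhost:9200"]

setup.kibana:
  host: "localhost:5601"
```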

Then you enable the apache2 module.
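Module management is done through the filebeat binary itself:

```shell
sudo filebeat modules enable apache2

# Confirm that apache2 now shows up under "Enabled":
sudo filebeat modules list
```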

The settings for this module live in /etc/filebeat/modules.d/apache2.yml. If you open it, you will see an option to provide the paths to the access and error logs. If your logs are in a custom location rather than the usual place (for a given logging format and OS), you can set those paths here.

Best practice is to leave it as it is and let Filebeat figure out the location based on the OS you are using, and I will do the same.
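For reference, the file looks roughly like this in Filebeat 6.x; the commented-out var.paths lines are where a custom location would go:

```yaml
# /etc/filebeat/modules.d/apache2.yml
- module: apache2
  access:
    enabled: true
    #var.paths:    # uncomment and set only if your access logs live elsewhere
  error:
    enabled: true
    #var.paths:    # likewise for the error logs
```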

With that done, the next command to run is:
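In Filebeat 6.x that is:

```shell
sudo filebeat setup
```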

Setup makes sure that the mapping of the fields in Elasticsearch is right for the fields present in the given log format.

Before we start using Filebeat to ingest Apache logs, we should check that things are OK.
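Filebeat ships with a test subcommand that checks both the config file and the connection to the configured output:

```shell
sudo filebeat test config
sudo filebeat test output
```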

You want to see all OK there.

Once that is done, start Filebeat.
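With a package install, Filebeat runs as a service:

```shell
sudo systemctl start filebeat
```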

To stop it:
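Stopping is the mirror image:

```shell
sudo systemctl stop filebeat
```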

However, since I do not have an Apache server running, I downloaded some logs for demo purposes and will pass their location on the command line. Hence I need to run Filebeat in the foreground.
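A sketch of that invocation; the sample-log paths are assumptions, so substitute wherever you saved the downloaded logs. Here -e logs to stderr, --once makes Filebeat exit after reading the files, and -M overrides module variables from the command line:

```shell
sudo filebeat -e --once --modules apache2 \
  -M "apache2.access.var.paths=[/tmp/sample-logs/access.log*]" \
  -M "apache2.error.var.paths=[/tmp/sample-logs/error.log*]"
```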

And that is it.
Filebeat will by default create an index whose name starts with filebeat-. Check your cluster to see whether the logs were indexed, or better still, use Kibana to visualize them.
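A quick way to check from the shell (assuming Elasticsearch is listening on localhost:9200):

```shell
curl 'localhost:9200/_cat/indices/filebeat-*?v'
```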

From Kibana 6.5.2 onwards you get the Logs view (it is still in beta), which supports infinite scroll, something the community has been asking for for a long time. Do try it, since you already have Apache logs in the cluster now.
