Creating a box plot in Kibana using Vega is what this tutorial is all about. To keep it simple, I will use hard-coded data. In the second part of this tutorial, however, the data will come from an Elasticsearch aggregation and be plotted. Why? Box plots are very useful, and manufacturing engineers especially love them.
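As a starting point, a box plot with hard-coded data can be sketched in a Kibana Vega visualization using the Vega-Lite boxplot composite mark. This is only a sketch, and it assumes a Kibana version whose Vega plugin bundles Vega-Lite v3 or later; the `machine` and `thickness` fields and their values are made-up illustrations, not data from this tutorial.

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v4.json",
  "title": "Thickness by machine (hard-coded sample)",
  "data": {
    "values": [
      {"machine": "A", "thickness": 2.1},
      {"machine": "A", "thickness": 2.4},
      {"machine": "A", "thickness": 2.2},
      {"machine": "B", "thickness": 1.9},
      {"machine": "B", "thickness": 2.6},
      {"machine": "B", "thickness": 2.3}
    ]
  },
  "mark": {"type": "boxplot", "extent": 1.5},
  "encoding": {
    "x": {"field": "machine", "type": "nominal"},
    "y": {"field": "thickness", "type": "quantitative"}
  }
}
```

Pasting a spec like this into a Vega visualization renders one box per machine; `"extent": 1.5` draws whiskers at 1.5 × IQR, the usual convention.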
This tutorial on adding a Painless scripted field in Kibana will give you a quick start on this rather useful feature. If you ever inherit an Elasticsearch index and find yourself wishing for some extra fields, scripted fields can save you a lot of effort. As usual, we will start with why, followed by how.
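A Kibana scripted field is just a Painless expression evaluated per document, and the same expression can be prototyped in Dev Tools via the `script_fields` option of the search API before saving it in the index pattern. A minimal sketch, assuming an index with a numeric `bytes` field (the index name here is the Kibana sample logs data set, used purely as an illustration):

```json
POST kibana_sample_data_logs/_search
{
  "size": 1,
  "script_fields": {
    "bytes_kb": {
      "script": {
        "lang": "painless",
        "source": "doc['bytes'].value / 1024.0"
      }
    }
  }
}
```

Once the expression returns what you expect, the `source` string is what you paste into the scripted-field editor under the index pattern's management page.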
“Microsystems was unexpected at this time.” is a frustrating error which appears when you run Logstash 7.6.1 from a command-line window on a Windows machine. Here is a workaround to get around it. Environment: Logstash 7.6.1; Windows 10; Java: openjdk version “11.0.5” 2019-10-15, OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.5+10), OpenJDK…
We got some sample data for Elasticsearch: 1.5 million records, to be precise. We will use Filebeat and Elasticsearch ingest pipelines to load the data into the cluster. The data has text, numbers, and even geo points! The data size on disk will be around 640 MB (Windows environment). So let's get on with it.
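One common step when sample data carries geo points is an ingest pipeline that assembles a `geo_point`-compatible value from separate latitude and longitude fields. A minimal sketch, assuming the incoming documents have hypothetical `lat` and `lon` fields and the index mapping declares `location` as `geo_point` (which accepts a `"lat,lon"` string):

```json
PUT _ingest/pipeline/sample-data
{
  "description": "Sketch: build a geo_point value from separate lat/lon fields",
  "processors": [
    {
      "set": {
        "field": "location",
        "value": "{{lat}},{{lon}}"
      }
    },
    {
      "remove": {
        "field": ["lat", "lon"],
        "ignore_missing": true
      }
    }
  ]
}
```

Filebeat can then route events through it by setting `pipeline: sample-data` under `output.elasticsearch` in filebeat.yml.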
This short tutorial on Elasticsearch dynamic templates using match_mapping_type will teach you how to control the mappings of dynamically added fields in Elasticsearch. If you are reading this, it means you already understand the importance of mappings and how to manage them using templates. Dynamic templates are a natural progression of index templates.
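The classic use of `match_mapping_type` is forcing every dynamically detected string field to be mapped as `keyword` instead of the default `text` plus `keyword` multi-field. A minimal sketch, assuming Elasticsearch 7.x typeless mappings and a hypothetical index name:

```json
PUT my-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}
```

Any new string field indexed into `my-index` after this will come out as a plain `keyword`, which is usually what you want for IDs, codes, and enum-like values.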
In this post on the Elasticsearch Update by Query API, I will show real-world uses of this API, what it can do and what it can't, and why you should strive not to be in a situation which warrants the use of this API in the first place.
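For orientation, the basic shape of the API is a query selecting the documents plus a script mutating each one. A minimal sketch with hypothetical index and field names; `conflicts=proceed` tells Elasticsearch to skip documents that changed mid-operation instead of aborting:

```json
POST my-index/_update_by_query?conflicts=proceed
{
  "query": {
    "term": { "status": "stale" }
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source.status = 'archived'"
  }
}
```

Under the hood this reads, modifies, and reindexes every matching document, which is exactly why it gets expensive at scale.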
This post will show how to extract the filename from Filebeat-shipped logs using Elasticsearch ingest pipelines and grok. I will also show how to deal with the failures usually seen in real life. With that said, let's get started.
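The core idea can be sketched as a pipeline with a single grok processor. Filebeat records the source path in `log.file.path`; because `GREEDYDATA` is greedy, the first capture consumes everything up to the last path separator, leaving the basename in `filename`. The pipeline name, target field name, and the `on_failure` fallback value are illustrative choices, not part of the original post:

```json
PUT _ingest/pipeline/extract-filename
{
  "processors": [
    {
      "grok": {
        "field": "log.file.path",
        "patterns": ["%{GREEDYDATA}[/\\\\]%{GREEDYDATA:filename}"],
        "on_failure": [
          { "set": { "field": "filename", "value": "parse_failure" } }
        ]
      }
    }
  ]
}
```

The character class `[/\\\\]` (a `/` or `\` after JSON unescaping) lets the same pattern handle both Windows and Unix paths, and `on_failure` keeps a grok miss from dropping the whole document.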
In this tutorial on indexing CSV files using Elasticsearch pipelines, we will use a Painless script to ingest a CSV file. The script will run in an Elasticsearch ingest pipeline. This problem of ingesting CSV logs shipped from Filebeat directly into Elasticsearch can be solved in many ways. I will discuss the usual method as well as…
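The pipeline-plus-Painless approach can be sketched as a script processor that splits the raw `message` field Filebeat ships and assigns the pieces to named fields. The pipeline name, column names, and column order here are hypothetical; a real CSV with quoting or embedded commas needs more care than `splitOnToken` provides:

```json
PUT _ingest/pipeline/parse-csv
{
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": "String[] cols = ctx.message.splitOnToken(','); ctx.product = cols[0].trim(); ctx.quantity = Integer.parseInt(cols[1].trim());"
      }
    }
  ]
}
```

For simple, well-behaved files this is often enough; the alternative worth knowing is the dedicated `csv` ingest processor available in recent Elasticsearch versions.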
This Elasticsearch error, No handler for type [string] declared on field, is often seen after an “innocent” upgrade from Elasticsearch 5.x to 6.x. The classic sign is that new indices do not get created. I faced this error when using Serilog to push data into the Elasticsearch cluster after the upgrade. It is frustrating as it…
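The root cause is that the `string` mapping type was removed in Elasticsearch 6.x: any template or mapping still declaring `"type": "string"` must be rewritten to `text` (for full-text search) or `keyword` (for exact matches and aggregations). A sketch of the corrected mapping fragment, with hypothetical field names standing in for whatever your 5.x template declared as `string`:

```json
{
  "properties": {
    "message":  { "type": "text" },
    "hostname": { "type": "keyword" }
  }
}
```

Once the offending templates are updated (or the client library, such as the Serilog Elasticsearch sink, is upgraded to emit 6.x-compatible templates), index creation starts working again.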
This tutorial on using Filebeat to ingest Apache logs will show you how to create a working system in a jiffy. I will not go into minute details, since I want to keep this post simple and sweet. I will just show the bare minimum which needs to be done to make the system work.
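The bare minimum usually amounts to enabling Filebeat's Apache module and pointing the output at the cluster. A filebeat.yml sketch, assuming Filebeat 7.x, a local Elasticsearch on the default port, and an illustrative log path:

```yaml
# filebeat.yml sketch: module name, paths, and hosts are assumptions
filebeat.modules:
  - module: apache
    access:
      enabled: true
      var.paths: ["/var/log/apache2/access.log*"]

output.elasticsearch:
  hosts: ["localhost:9200"]
```

Running `filebeat setup` once installs the index template and dashboards, after which starting Filebeat begins shipping parsed Apache access logs.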