Category Archives: ElasticSearch

Indexing csv files using Elasticsearch pipelines

In this tutorial on indexing csv files using Elasticsearch pipelines we will use a Painless script to ingest a csv file. The Painless script will run inside an Elasticsearch ingest pipeline. The problem of ingesting csv logs shipped from Filebeat directly into Elasticsearch can be solved in many ways. I will discuss the usual method as well as… Read More »
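
For a flavour of the approach, here is a minimal sketch (in Python, against the plain REST API) of an ingest pipeline whose Painless script splits Filebeat's message field into named columns. The host, pipeline id, and column names are placeholders, not the ones from the tutorial.

import requests

# Ingest pipeline with a Painless script processor that splits a CSV line
# from Filebeat's "message" field into named fields. splitOnToken avoids
# having to enable Painless regex support; on older clusters you may need
# split() together with script.painless.regex.enabled instead.
pipeline = {
    "description": "Parse CSV lines shipped by Filebeat",
    "processors": [
        {
            "script": {
                "lang": "painless",
                "source": """
                    String[] cols = ctx.message.splitOnToken(',');
                    ctx.user_id = cols[0];
                    ctx.action = cols[1];
                    ctx.amount = cols[2];
                """,
            }
        }
    ],
}

resp = requests.put("http://localhost:9200/_ingest/pipeline/csv-parser", json=pipeline)
print(resp.json())

Filebeat can then be told to send events through this pipeline via the pipeline option of its Elasticsearch output.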

Fixing Elasticsearch error: No handler for type [string] declared on field

This Elasticsearch error: No handler for type [string] declared on field is often seen after an “innocent” upgrade from Elasticsearch 5.x to 6.x. The classic sign is that new indices do not get created. I faced this error when using Serilog to push data into the Elasticsearch cluster after the upgrade. It is frustrating as it… Read More »
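
As a rough illustration of the usual fix (not necessarily the one the post settles on): the error means a mapping or index template still declares "type": "string", which 6.x no longer accepts, so new indices fail to be created. Re-declaring those fields as text or keyword in the template resolves it. The template name and fields in this Python sketch are made up.

import requests

# 6.x index template: fields that used to be "string" become "text"
# (full-text, analyzed) or "keyword" (exact value, aggregatable).
template = {
    "index_patterns": ["serilog-*"],
    "mappings": {
        "_doc": {  # 6.x templates still nest mappings under a type name
            "properties": {
                "message": {"type": "text"},   # was "string"
                "level": {"type": "keyword"},  # was "string", not_analyzed
            }
        }
    },
}

resp = requests.put("http://localhost:9200/_template/serilog", json=template)
print(resp.json())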

Managing Elasticsearch aliases using Curator

This tutorial on managing Elasticsearch aliases using Curator will help you manage your Elasticsearch aliases better. There are not many detailed tutorials on this topic, hence this post. I hope that by the end of this tutorial you will appreciate the power Curator puts in your hands.
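
Curator drives the ordinary alias API for you; as a point of reference, the equivalent raw call (sketched in Python, with placeholder index and alias names) atomically moves an alias from an old index to a new one.

import requests

# A single _aliases request: both actions are applied atomically, so
# readers of the alias never see a moment where it points at no index.
actions = {
    "actions": [
        {"remove": {"index": "logs-2017.12", "alias": "logs-current"}},
        {"add": {"index": "logs-2018.01", "alias": "logs-current"}},
    ]
}

resp = requests.post("http://localhost:9200/_aliases", json=actions)
print(resp.json())

Curator's alias action lets you express the same thing declaratively, using filters instead of hard-coded index names.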

Taking Elasticsearch snapshots using Curator

This tutorial on taking Elasticsearch snapshots using Curator is divided into sections. One obvious section covers how to take snapshots. The other, less obvious part covers configuring a shared directory using NFS (Network File System) on Linux. I will be using a RHEL 7 based cluster of three machines for this tutorial. Once you… Read More »
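
To set expectations before the tutorial proper, this is roughly what Curator automates once the shared directory is in place (a Python sketch against the snapshot API; the mount point, repository and snapshot names are placeholders, and the NFS path must be mounted identically on all nodes and listed under path.repo in elasticsearch.yml).

import requests

# 1. Register a shared-filesystem snapshot repository on the NFS mount.
repo = {"type": "fs", "settings": {"location": "/mnt/es-backups"}}
requests.put("http://localhost:9200/_snapshot/nfs_repo", json=repo)

# 2. Snapshot all indices and block until the snapshot completes.
resp = requests.put(
    "http://localhost:9200/_snapshot/nfs_repo/snapshot_1",
    params={"wait_for_completion": "true"},
    json={"indices": "*"},
)
print(resp.json())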

Authentication in Elasticsearch without shield or x-pack

Authentication in Elasticsearch without using X-Pack or Shield. Possible? Yes. In this post I will show you how to do it using the excellent readonlyrest plugin written by sscarduzio. The reason I chose this plugin was its ease of use as well as the way it works. That it is listed on the Elastic website itself as… Read More »
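
Once the plugin is in place and enforcing basic authentication, a quick sanity check could look like the following Python snippet; the credentials are hypothetical placeholders for whatever your readonlyrest rules define, not values from the post.

import requests

url = "http://localhost:9200/_cluster/health"

# Anonymous request: should be rejected once authentication is enforced.
anonymous = requests.get(url)
print(anonymous.status_code)  # expect 401 or 403

# Authenticated request with placeholder credentials: should succeed.
authed = requests.get(url, auth=("readonly_user", "changeme"))
print(authed.status_code, authed.json())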

ElasticSearch Sample Data

This ElasticSearch Sample Data is to be used for learning purposes only. It is randomly generated, but care has been taken to make it look like real-world data.
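
If you want to load the sample data into a cluster to practise with, one straightforward way (purely illustrative; the index name and fields below are placeholders, not the actual sample schema) is the _bulk API.

import json
import requests

docs = [
    {"name": "Asha Rao", "age": 34, "city": "Pune"},
    {"name": "John Smith", "age": 41, "city": "Austin"},
]

# _bulk expects newline-delimited JSON: an action line, then the document.
# "_type" is required on 6.x; drop it on 7.x and later.
lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "sample-data", "_type": "_doc"}}))
    lines.append(json.dumps(doc))
body = "\n".join(lines) + "\n"

resp = requests.post(
    "http://localhost:9200/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json()["errors"])  # False means every document was indexed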