Bobcares

DigitalOcean Elasticsearch Cluster | Easy Method

Nov 16, 2022

This DigitalOcean Elasticsearch cluster article will guide you through installing Elasticsearch, configuring it for your use case, and securing the installation, with help from our DigitalOcean Managed Services.

DigitalOcean Elasticsearch cluster on Ubuntu


Elasticsearch is a popular platform for real-time, distributed search and analysis of data, thanks to its usability, powerful features, and scalability.

 

Prerequisites

 

An Ubuntu server with 2GB RAM and 2 CPUs set up with a non-root sudo user.

 

We will start with the minimum amount of CPU and RAM required to run Elasticsearch. The amount of CPU, RAM, and storage your Elasticsearch server requires depends on the volume of logs you expect.
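
If you want to confirm what resources your server has available before you begin, the standard Ubuntu commands below will show them. This is only an optional check, and the exact figures will depend on your server:

 
# Show total and available memory
free -h
# Show the number of CPU cores
nproc
# Show free disk space
df -h
 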

 

Step 1 — Installing and Configuring Elasticsearch

 

To begin, use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Pipe the output to the gpg --dearmor command, which converts the key into a format that APT can use to verify downloaded packages.

 
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg
 

Next, add the Elastic source list to the sources.list.d directory, where APT searches for new sources:

 
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
 

The [signed-by=/usr/share/keyrings/elastic.gpg] portion of the file instructs APT to use the key you downloaded to verify repository and package information for Elasticsearch.
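
If you want to confirm that the key was imported correctly, you can optionally list the keys stored in the new keyring file. On a reasonably recent GnuPG (2.1.23 or later), this should print the Elasticsearch signing key:

 
gpg --show-keys /usr/share/keyrings/elastic.gpg
 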

 

Next, update your package lists so that APT will read the new Elastic source:

 
sudo apt update
 

Then install Elasticsearch:

 
sudo apt install elasticsearch
 

Press Y when prompted to confirm the installation. If you are prompted to restart any services, press ENTER to accept the defaults and continue. Elasticsearch is now installed and ready to be configured.
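
As an optional sanity check, you can ask APT which Elasticsearch version it installed and confirm that the package came from the Elastic repository:

 
apt-cache policy elasticsearch
 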

 

Step 2 — Configuring Elasticsearch

 

Next, we will configure Elasticsearch by editing its main configuration file, elasticsearch.yml, where most of its configuration options are stored. This file is located in the /etc/elasticsearch directory.

 

Edit Elasticsearch’s configuration file:

 
sudo nano /etc/elasticsearch/elasticsearch.yml
 

The file elasticsearch.yml provides configuration options for your cluster, node, memory, network, and gateway. Most of these options are preconfigured. However, you can make changes according to your needs.

 

Elasticsearch listens for traffic on port 9200. To restrict access and increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost so that it reads as shown below:

 
/etc/elasticsearch/elasticsearch.yml
 
. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .
 

We have specified localhost so that Elasticsearch listens only on the loopback interface and is not reachable from other hosts. If you want it to listen on a specific interface instead, you can specify its IP in place of localhost, as sketched below. When you are done, save and close elasticsearch.yml.
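
As a rough sketch, binding to a specific interface would look like the excerpt below, where 10.0.0.5 stands in for a hypothetical private IP of your server. Note that in Elasticsearch 7.x, binding to a non-loopback address triggers the production bootstrap checks, so a single-node setup would typically also need discovery.type set to single-node:

 
/etc/elasticsearch/elasticsearch.yml
 
. . .
network.host: 10.0.0.5
discovery.type: single-node
. . .
 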

 

These are the minimum settings you need to start using Elasticsearch. You can now start Elasticsearch for the first time.

 

Start the Elasticsearch service with systemctl:

 
sudo systemctl start elasticsearch
 

Next, run the below command to enable Elasticsearch to start automatically every time the server boots:

 
sudo systemctl enable elasticsearch
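 

To confirm the service came up cleanly, you can check its state with systemctl; the output should report the unit as active (running):

 
sudo systemctl status elasticsearch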
 

Step 3 — Securing Elasticsearch

 

If you want to allow remote access to the HTTP API, you should limit the network exposure with Ubuntu’s default firewall, UFW. Note that this firewall needs to be enabled.

 

We will now configure the firewall to allow access to the default Elasticsearch HTTP API port, 9200, for a trusted remote host, generally the machine you will be using in a single-server setup, such as 198.51.100.1. To allow access, type the below command:

 
sudo ufw allow from 198.51.100.1 to any port 9200
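 

If you administer this server over SSH, it is also a good idea to make sure SSH traffic is allowed before turning the firewall on, so you do not lock yourself out. The rule below assumes the default OpenSSH application profile that ships with Ubuntu's openssh-server package:

 
sudo ufw allow OpenSSH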
 

Once completed, you can enable UFW with the command:

 
sudo ufw enable
 

Finally, check the status of UFW with the command:

 
sudo ufw status
 

If the rules are specified properly, you will receive output as shown below:

 
Output
Status: active

To                         Action      From
--                         ------      ----
9200                       ALLOW       198.51.100.1
22                         ALLOW       Anywhere
22 (v6)                    ALLOW       Anywhere (v6)
 

UFW is now enabled and set up to protect Elasticsearch’s port 9200.

 

Step 4 — Testing Elasticsearch

 

By now, Elasticsearch should be running on port 9200. You can test it with cURL and a GET request.

 
curl -X GET 'http://localhost:9200'
 

You should receive the following response:

Output
{
  "name" : "elastic-22",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "DEKKt_95QL6HLaqS9OkPdQ",
  "version" : {
    "number" : "7.17.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "e5acb99f822233d62d6444ce45a4543dc1c8059a",
    "build_date" : "2022-02-23T22:20:54.153567231Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
 

If you have received a similar response to the above, then Elasticsearch is working properly.

 

To perform a more thorough check of Elasticsearch, execute the below command:

 
curl -X GET 'http://localhost:9200/_nodes?pretty'
 

From the output of this command you can verify all the current settings for the node, cluster, application paths, modules, and more.
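
Another quick check is the cluster health endpoint, which reports the cluster status, node count, and shard state. On a fresh single-node install, the status will typically be green or yellow:

 
curl -X GET 'http://localhost:9200/_cluster/health?pretty'
 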

 

Step 5 — Using Elasticsearch

 

To start using Elasticsearch, let’s first add some data. Elasticsearch uses a RESTful API, which responds to the usual CRUD commands: create, read, update, and delete. To work with it, we will use the cURL command again.

You can add your first entry like:

 
curl -XPOST -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello, World!" }'
 

You will receive a response like this:

 
Output
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}
 

You can check the first entry with an HTTP GET request.

 
curl -X GET -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1'
 

This should give the following output:

 
Output
{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"found":true,"_source":{ "message": "Hello, World!" }}
 

To modify an existing entry, you can use an HTTP PUT request:

 
curl -X PUT -H "Content-Type: application/json"  'localhost:9200/tutorial/helloworld/1?pretty' -d '
{
  "message": "Hello, People!"
}'
 

Elasticsearch will acknowledge the modification like this:

 
Output
{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 1
}
 

In the above example, we modified the message of the first entry to “Hello, People!”. With this update, the version number automatically increased to 2.

 

Notice the extra pretty argument in the above request. It enables a human-readable format, writing each data field on a new row. You can also “prettify” your results when retrieving data, to get more readable output, with the following command:

 
curl -X GET -H "Content-Type: application/json" 'http://localhost:9200/tutorial/helloworld/1?pretty'
 

Now the response will be formatted like this:

 
Output
{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 2,
  "_seq_no" : 1,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "message" : "Hello, People!"
  }
}
 

We have now successfully added, retrieved, and updated data in Elasticsearch.
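
To round out the create, read, update, and delete cycle, you can also remove the entry with an HTTP DELETE request. This sketch reuses the same tutorial/helloworld/1 path as above; Elasticsearch should answer with a "result" of "deleted":

 
curl -X DELETE 'http://localhost:9200/tutorial/helloworld/1?pretty'
 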

 

[Looking for a solution to another query? We are just a click away.]

 

Conclusion

 

To sum up, in this article you have installed, configured, and begun to use Elasticsearch.

 
