Elasticsearch on Cloud

1-click AWS Deployment | 1-click Azure Deployment | 1-click Google Deployment

Overview

Elasticsearch is an Apache Lucene-based search server. It was developed by Shay Banon and first released in 2010, and it is now maintained by Elastic (formerly Elasticsearch BV); at the time of writing, its latest version is 7.0.0. Elasticsearch is a real-time, distributed, open source full-text search and analytics engine, often used in Single Page Application (SPA) projects. It is developed in Java, used by many large organizations around the world, and licensed under the Apache License, version 2.0. It is accessible through a RESTful web service interface and uses schema-less JSON (JavaScript Object Notation) documents to store data. Because it is built on Java, Elasticsearch can run on many different platforms, and it enables users to explore very large amounts of data at very high speed.

Elasticsearch runs in a clustered environment. A cluster can consist of one or more servers, and each server in the cluster is a node. As with all document databases, records are called documents (I’ll often refer to them as records because I’m stuck in my ways). Documents are stored in indexes, which can be sharded, or split into smaller pieces. Elasticsearch can run those shards on separate nodes to distribute the load across servers. You can and should replicate shards onto other servers in case of network or server issues (trust me, they happen).
Elasticsearch uses Apache Lucene to index documents for fast searching. Lucene has been around for nearly two decades and is still being improved! Although this search engine has been ported to other languages, its mainstay is Java. Thus, Elasticsearch is also written in Java and runs on the JVM.

General Features
The general features of Elasticsearch are as follows −
• Elasticsearch is scalable up to petabytes of structured and unstructured data.
• Elasticsearch can be used as a replacement for document stores like MongoDB and RavenDB.
• Elasticsearch uses denormalization to improve the search performance.
• Elasticsearch is one of the popular enterprise search engines, and is currently being used by many big organizations like Wikipedia, The Guardian, StackOverflow, GitHub etc.
• Elasticsearch is open source and available under the Apache License, version 2.0.

Key Concepts
The key concepts of Elasticsearch are as follows −
Node
It refers to a single running instance of Elasticsearch. A single physical or virtual server can accommodate multiple nodes, depending on the capabilities of its physical resources such as RAM, storage, and processing power.
Cluster
It is a collection of one or more nodes. A cluster provides collective indexing and search capabilities across all the nodes for the entire data set.
Index
It is a collection of different types of documents and their properties. An index also uses the concept of shards to improve performance. For example, a set of documents might contain the data of a social networking application.
Document
It is a collection of fields defined in JSON format. Every document belongs to a type and resides inside an index. Every document is associated with a unique identifier, called the UID.
Shard
Indexes are horizontally subdivided into shards. Each shard contains all the properties of a document, but holds fewer JSON objects than the full index. This horizontal separation makes each shard an independent unit that can be stored on any node. A primary shard is the original horizontal part of an index; primary shards are then replicated into replica shards.
Replicas
Elasticsearch allows a user to create replicas of their indexes and shards. Replication not only helps in increasing the availability of data in case of failure, but also improves the performance of searching by carrying out a parallel search operation in these replicas.
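The shard and replica counts described above are set when an index is created. Here is a minimal sketch of the JSON settings body such a request would carry (the "products" index name and the counts are illustrative assumptions, not recommendations):

```python
import json

# Hypothetical settings for a "products" index: three primary shards,
# each with one replica (values are illustrative, not prescriptive).
index_settings = {
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 1,
    }
}

# This body would be sent as: PUT /products (Content-Type: application/json)
body = json.dumps(index_settings)
```

With one replica per primary, the cluster can lose a node holding a primary shard and still serve all data from the replicas.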
Advantages
• Elasticsearch is developed in Java, which makes it compatible with almost every platform.
• Elasticsearch is near real time: an added document typically becomes searchable within about one second.
• Elasticsearch is distributed, which makes it easy to scale and integrate in any big organization.
• Creating full backups is easy using the gateway concept present in Elasticsearch.
• Handling multi-tenancy is very easy in Elasticsearch when compared to Apache Solr.
• Elasticsearch uses JSON objects as responses, which makes it possible to invoke the Elasticsearch server with a large number of different programming languages.
• Elasticsearch supports almost every document type except those that do not support text rendering.

Elasticsearch is a great solution employed by companies like Netflix, GitHub, and now VTS.

Where Elasticsearch Shines

Queries
This is what you think of when you type into a search bar. It matches the best results based on scores. For every query, Elasticsearch will return a collection of results, each with a _score that indicates how well the result matches the query parameters.
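A query like that can be sketched as the JSON body below; the field name and search text are hypothetical, but the match clause is the standard full-text query form:

```python
import json

# A minimal full-text query (field name and text are hypothetical).
# This body would be sent as: GET /properties/_search
query = {
    "query": {
        "match": {"description": "waterfront office"}
    },
    "size": 10,  # ask for the top 10 hits (also the default)
}

body = json.dumps(query)
```

Each hit in the response carries a _score; results are returned in descending score order.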


Filters
Filters are much faster than queries because there’s no ambiguity around scoring. There’s a binary yes/no decision on whether a particular document has the term.
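A filter is expressed in the filter context of a bool query, where no score is computed. A sketch with hypothetical field names:

```python
# A filter skips relevance scoring entirely: a document either matches
# or it does not. Field names here are hypothetical.
filtered_query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"region": "US"}},             # exact match
                {"range": {"price": {"lte": 500000}}},  # numeric bound
            ]
        }
    }
}
```

Because filter results are binary, Elasticsearch can cache them and reuse them across requests.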

What Elasticsearch is not

Elasticsearch is not a primary data store. Although using it as one is technically possible, there is no guarantee that your data will be correct. Each document has a version number that increases monotonically. When two calls write to the same document concurrently, both writes may succeed, but only one ends up as the latest version. Out of the box, Elasticsearch does not support ACID transactions.
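The version-number behavior amounts to a compare-and-set: a write that carries a stale expected version is rejected. A tiny in-memory sketch of the idea (illustrative only; this is not Elasticsearch's actual implementation):

```python
class VersionConflict(Exception):
    pass

class TinyStore:
    """In-memory sketch of per-document version checking; Elasticsearch's
    real mechanism is richer, this only illustrates the idea."""

    def __init__(self):
        self._docs = {}  # doc_id -> (version, body)

    def index(self, doc_id, body, expected_version=None):
        current = self._docs.get(doc_id, (0, None))[0]
        if expected_version is not None and expected_version != current:
            raise VersionConflict(f"expected {expected_version}, found {current}")
        self._docs[doc_id] = (current + 1, body)
        return current + 1

store = TinyStore()
v1 = store.index("p1", {"name": "old"})                       # version 1
v2 = store.index("p1", {"name": "new"}, expected_version=v1)  # version 2
```

A third write that still carries `expected_version=v1` would raise `VersionConflict`, which is how the losing writer learns its data was overwritten.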

Parallel Concepts Between Elasticsearch and Databases
An index is like a database as it lets users search across many different types of documents; it can help you silo off information or organize it. For instance, if you have US data and UK data, indices make it really easy to limit your searches to one region. When you want to explicitly search across multiple regions, there’s syntax that makes that query equally simple.
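The multi-region search described above uses Elasticsearch's comma-separated multi-index syntax in the request path. A small helper, with hypothetical index names:

```python
def search_path(*indices):
    """Build a _search URL path; Elasticsearch accepts a comma-separated
    list of index names to search several indices in one request."""
    return "/" + ",".join(indices) + "/_search"

# Hypothetical region-specific indices:
us_only = search_path("us-data")           # limits the search to one region
both = search_path("us-data", "uk-data")   # searches both regions at once
```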
Documents are the JSON objects that Elasticsearch searches for and returns as results. We’ll go more in depth later.


Clusters and Nodes

Clusters are a collection of nodes that communicate with each other to read from and write to an index. A cluster needs a unique name to prevent unrelated nodes from joining.
A node is a single instance of Elasticsearch; usually one node runs per machine. Nodes communicate with each other via network calls to share the responsibility of reading and writing data. A master node organizes the entire cluster.


Horizontal Scaling
Because an Elasticsearch cluster is not limited to a single machine, you can scale the system out across additional nodes to handle higher traffic and larger data sets.

Shards and Indices
Shards are individual instances of a Lucene index. Lucene is the underlying technology that Elasticsearch uses for extremely fast data retrieval. Elasticsearch is an abstraction that lets users leverage the power of a Lucene index in a distributed system.

Each index is composed of shards spread across one or many nodes. In this case, the Elasticsearch cluster has two nodes, two indices (properties and deals), and five shards in each node.

Let’s take a closer look at the properties index. As you can see, there are three primary shards and three replica shards. Primary shards are where the first write happens. A primary shard can have zero or more replica shards that simply duplicate its data.
The primary shards are not limited to a single node, which is a testament to the distributed nature of the system. In case one node fails, replica shards on a functioning node can be promoted to primary shards automatically. Data must be written to a primary shard before it is duplicated to its replica shards. Data can be read from both primary and replica shards.

JSON REST API
Now that you know about the building blocks of Elasticsearch, you can interact with the Elasticsearch API and know what information is being returned. There is a collection of _cat commands that tell you about the current status of your cluster.
When you ask the cluster about its nodes, the output will show two nodes running. The * indicates the master node, while “m” indicates that the second node is master-eligible. If the first node fails, the second node will be promoted to master and all of its shards will become primary shards. Elasticsearch handles all of these promotions out of the box.
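The _cat endpoints return whitespace-aligned plain text rather than JSON, which makes them easy to read and to parse. Below, a hypothetical two-node sample of `GET _cat/nodes?v` output (exact columns vary by version) and a small parser sketch:

```python
# Hypothetical sample of `GET _cat/nodes?v` output (exact columns
# vary by Elasticsearch version).
SAMPLE = """\
ip       node.role master name
10.0.0.1 dim       *      node-1
10.0.0.2 dim       m      node-2
"""

def parse_cat(text):
    """Parse whitespace-separated _cat output that includes a header row."""
    lines = text.strip().splitlines()
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

nodes = parse_cat(SAMPLE)
current_master = [n["name"] for n in nodes if n["master"] == "*"]
```

Here `node-1` is the elected master (`*`) and `node-2` is master-eligible (`m`).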

In our example, the properties index shares nodes with the deals index. They’re part of the same cluster, so both show up when asking the cluster for information about the indices. The deals index has far more documents and consequently takes up far more disk space.
“Green” is an indication of the health of the index. It means that all primary shards are available and they each have at least one replica. “Yellow” would mean that all primary shards are available, but they don’t all have a replica. “Red” means not all primary shards are available.
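Those green/yellow/red rules can be summarized as a small decision function (a simplified sketch of the rules above, not Elasticsearch's actual implementation):

```python
def index_health(total_primaries, active_primaries, unassigned_replicas):
    """Simplified sketch of the green/yellow/red rules described above
    (not Elasticsearch's actual implementation)."""
    if active_primaries < total_primaries:
        return "red"     # not all primary shards are available
    if unassigned_replicas > 0:
        return "yellow"  # primaries fine, but some replicas are missing
    return "green"       # every primary and every replica is assigned
```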

If we take a look specifically at the shards on the properties index, we’ll see that there are three shards, each with both a primary and a replica. Elasticsearch will evenly distribute new documents amongst all the primary shards.

Documents
Ultimately, all of this architecture supports the retrieval of documents. Documents are JSON objects that are stored in Elasticsearch. They can have a nested structure to accommodate more complex data and queries.
The keys prefixed with an underscore represent metadata that Elasticsearch uses to keep track of information. You can see this particular property document is in the properties index and has a type of property. It has a _version of 1, which means it has not been updated since it was first indexed.
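A hypothetical property document, as it might come back from a GET on the properties index, shows the metadata/content split (field values are invented for illustration; `_type` reflects the pre-7.x type concept discussed here):

```python
# Hypothetical document as returned by a GET on the properties index
# (underscore-prefixed keys are Elasticsearch metadata; the record's
# own fields live under _source).
doc = {
    "_index": "properties",
    "_type": "property",
    "_id": "42",
    "_version": 1,  # incremented on every update to this _id
    "_source": {
        "name": "Example Tower",
        "address": {"city": "New York", "zip": "10001"},  # nested structure
    },
}
```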

Search request from start to finish
Now that you know about clusters, nodes, indices, shards, and documents, let’s go over what happens when you make a search request to Elasticsearch.
When you send a request to the cluster, it first passes through a coordinating node. Every node in the cluster should know the cluster state, which contains information about which nodes hold which indices and shards.
Since this is a search request, it doesn’t matter whether we read from a primary shard or a replica shard; replica shards are chosen to balance load. The search request must be routed to every distinct shard in the index. Each shard returns its top results (10 by default) to the coordinating node, which merges them into the global top results and returns those to the user.
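The coordinator's merge step can be sketched as combining each shard's already-sorted top hits into a global top-k (a simplified model; the per-shard hits below are hypothetical `(score, doc_id)` pairs):

```python
import heapq

def merge_shard_results(shard_hits, size=10):
    """Coordinator-style merge sketch: each shard contributes its own
    (score, doc_id) hits; keep the global top `size` by score."""
    ordered = [sorted(hits, reverse=True) for hits in shard_hits]
    merged = heapq.merge(*ordered, reverse=True)
    return list(merged)[:size]

# Hypothetical per-shard top results:
shard_a = [(0.9, "d1"), (0.4, "d3")]
shard_b = [(0.7, "d2")]
top = merge_shard_results([shard_a, shard_b], size=2)
```

Since each shard only ever sends its own top `size` hits, the coordinator merges small lists regardless of how large the index is.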

Using Elasticsearch to more effectively target dynamic content

From Static to Dynamic

The diagram below depicts the high-level setup before the introduction of the content service.

A content fragment is simply a piece of content. It might be a promotion of some sort, or it could be some text that gets used as part of a banner. The challenge with this setup is that content fragments are static files that live on the file system. If you want to show a different fragment based on something you know about the user, you have to generate every permutation you might want ahead of time, publish them all, and then use logic in the application to decide which one to use.
One obvious way to address this is to publish content fragments in a relational database and then code the front-end app to query for the right content. That wasn’t appropriate here for a few reasons:
1. The front-end is being migrated to a collection of Single Page Applications (SPAs) written in JavaScript. It’s easier for those pages to call a RESTful API to get JSON back. Yes, you could still do that with a relational database and a service tier, but the client was looking for something a little more JSON-native.
2. The structure of the content changes over time. We wanted to be able to accept any kind of content fragment the Marketing Team or SPA developers could think of and not have to worry about migrating database schemas.
3. The anticipated style of queries needed to find appropriate content fragments was more like what you’d expect from a search engine and less like what you might put in a SQL query. We needed to be able to say, “Here is some context; now return the most appropriate set of content fragments for the situation,” and use relevancy scoring to help determine what comes back.
So relational databases were ruled out in favor of document-oriented NoSQL repositories. Ultimately, Elasticsearch was selected because of its ease of clustering, high performance, unified REST API, availability of commercial support, and add-ons such as Shield, Marvel, and Watcher that make it easier to integrate with the rest of the enterprise.
Introduction of a Content Delivery Service
The Content Delivery Service sits between Elasticsearch and the front-end applications. Its purpose is to abstract away Elasticsearch specifics and to protect the cluster by providing a simple, read-only REST API. It also enforces some light business logic such as making sure that only content that is currently effective according to its publication and expiration date is returned.
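The "currently effective" rule the service enforces maps naturally onto a filter over publication and expiration dates. A sketch of such a query body, with hypothetical field names:

```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat()

# Hypothetical field names for the effectiveness window: a fragment is
# returned only if it has been published and has not yet expired.
effective_only = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"publicationDate": {"lte": now}}},
                {"range": {"expirationDate": {"gt": now}}},
            ]
        }
    }
}
```

Because this sits in a filter context, the date check adds no scoring overhead and can be cached.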
The diagram below shows the content infrastructure augmented with Elasticsearch and the content delivery service.

Content Delivery Service

 

As seen in the diagram, Interwoven is still the source of record and the primary way Marketing manages their content. But now, content fragments and system data are published to Elasticsearch. The front-end Single Page Apps ask the Content Delivery Service for content based on some context. The content is returned as a collection of JSON objects. The SPAs then take those objects and format them as needed.
Content Objects are Pure Content

A key concept worth emphasizing is that a content object is pure content. It contains no markup. It might have some properties that describe how it is expected to be used, but it is completely lacking in implementation. This has several benefits:
1. Content objects returned by the Content Delivery Service can be used across any and all channels (such as mobile) rather than being specific to a single channel (such as web).
2. Within a given channel the same object can have many different presentations.
3. Responsibilities are cleanly separated: The content service provides content. The front-end applications style and present the content for consumption.
This was a bit of a departure from how things used to be done. In the bad old days, presentation was always getting mixed up with content, which severely limited reuse.
Micro-services Provide Administrative Features

The other role the Content Management Service plays is JSON validation. When new types of content objects are developed we use JSON Schema to codify the structure. When a person or system posts a content object to the Content Management Service, the service validates the object against its JSON Schema before storing it in Elasticsearch.
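The validation step can be illustrated with a minimal stand-in for a JSON Schema check: verify required fields and their types before accepting a content object (a real deployment would use an actual JSON Schema validator; the field names here are hypothetical):

```python
# Minimal stand-in for the JSON Schema check described above: verify
# required fields and their types before accepting a content object.
# (A real deployment would use a JSON Schema validator; these field
# names are hypothetical.)
CONTENT_SCHEMA = {"title": str, "body": str, "publicationDate": str}

def validate(obj, schema=CONTENT_SCHEMA):
    """Return a list of validation errors; an empty list means the object passes."""
    errors = []
    for field, expected_type in schema.items():
        if field not in obj:
            errors.append(f"missing field: {field}")
        elif not isinstance(obj[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

ok = validate({"title": "Spring Sale", "body": "20% off", "publicationDate": "2020-03-01"})
bad = validate({"title": "Spring Sale"})
```

Rejecting malformed objects at the service boundary keeps the Elasticsearch index free of documents the front-end apps can't render.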
In addition to the Content Management Service we also implemented a Scheduled Job Service. As the name suggests, it is used to perform administrative tasks on a schedule. For instance, maybe content needs to be reindexed from one cluster to another in a lower environment. Or maybe content needs to be fetched from a third-party and written to the cluster. The Job Service is able to talk to either the Content Management Service or Elasticsearch directly, depending on the task it needs to execute.
All of the administrative services are independently deployed web applications that sit behind an API Gateway. The Gateway leverages the Netflix Zuul Proxy. It is responsible for authenticating against LDAP and creating a shared session in Redis. It gives the content admin team a single URL to hit and isolates authentication logic in a single place.
The diagram below shows the fully-realized picture.

A few key components aren’t on the diagram. We use Shield to protect the Elasticsearch cluster. Shield also makes it easy to configure SSL for node-to-node communication and provides out-of-the-box LDAP integration. With Shield we can map LDAP groups to roles and then grant roles various privileges on our Elasticsearch cluster and its indices.
We use Watcher to monitor cluster health and job failures that may happen in the Scheduled Job Service. The client has their own enterprise alerting and monitoring solution, but Watcher gives the content management team a flexible, powerful tool for keeping track of things at a level that is probably more granular than what the enterprise ops team cares about.
Ready for the Future
With Elasticsearch and a few comparatively small services on top of it, this travel giant now has what it needs to provide its customers with a more customized online experience. Content can be targeted to the users it is most suitable for, using any kind of context the Marketing team can come up with. As the front-end commerce app evolves, new types of content objects can be added easily and served to the front-end with no schema or service changes required. And it’s all built on commercially supported open source software.

Elasticsearch architecture

Elasticsearch is a real-time distributed search and analytics engine with high availability. It is used for full-text search, structured search, analytics, or all three in combination. It is built on top of the Apache Lucene library. It is a schema-free, document-oriented data store. However, unless you fully understand your use case, the general recommendation is not to use it as the primary data store. One of the advantages is that the RESTful API uses JSON over HTTP, which allows you to integrate, manage, and query index data in a variety of ways.
An Elasticsearch cluster is a group of one or more Elasticsearch nodes that are connected together. Let’s first outline how it is laid out, as shown in the following diagram:

Although each node has its own purpose and responsibility, each node can forward client requests to the appropriate nodes. The following are the nodes used in an Elasticsearch cluster:

• Master-eligible node: The master node’s tasks are primarily lightweight cluster-wide operations, including creating or deleting an index, tracking the cluster nodes, and determining the location of allocated shards. By default, the master-eligible role is enabled. A master-eligible node can be elected to become the master node (the node with the asterisk) by the master-election process. You can disable this role for a node by setting node.master to false in the elasticsearch.yml file.
• Data node: A data node holds the indexed documents. It handles related operations such as CRUD, search, and aggregation. By default, the data node role is enabled; you can disable it for a node by setting node.data to false in the elasticsearch.yml file.
• Ingest node: An ingest node preprocesses a document through a pipeline before indexing it. By default, the ingest node role is enabled; you can disable it for a node by setting node.ingest to false in the elasticsearch.yml file.
• Coordinating-only node: If all three roles (master-eligible, data, and ingest) are disabled, the node acts only as a coordinating node: it routes requests, handles the search reduction phase, and distributes work via bulk indexing. When you launch an instance of Elasticsearch, you actually launch an Elasticsearch node. In our installation, we are running a single node of Elasticsearch, so we have a cluster with one node.
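The role flags referenced above live in the elasticsearch.yml configuration file. For example, a coordinating-only node (using the pre-7.x style flags this section describes) disables all three roles:

```yaml
# elasticsearch.yml -- coordinating-only node (pre-7.x style role flags)
node.master: false
node.data: false
node.ingest: false
```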

Elasticsearch Use Cases & Applications

Elasticsearch is highly scalable and offers near real-time search capabilities. This adds up to a solution that can do more than a search engine and supports a multitude of growing critical business needs and operational use cases. Generally, thanks to its powerful search capabilities, Elasticsearch is used as the underlying technology that powers applications with complex search features and requirements. Elasticsearch supports all data types: numbers, text, geo, structured, and unstructured.

Elasticsearch is popular due to its versatile nature in handling data and being paired with other tools. Companies like Wikipedia, GitHub, The NY Times, and Facebook all use Elasticsearch for various use cases: from easy search across 164 years of published articles, to instantaneous live chat, to a seamless e-commerce experience, any business that needs to serve information quickly can put Elasticsearch to good use.
With pretty much endless and versatile capabilities that continue to grow and change depending on business goals, here’s how businesses have used Elasticsearch for different use cases:
Instantaneous E-commerce Search Across Retail Product Catalogues
Retailers are using Elasticsearch to index their product catalogs and inventory, alongside all the product attributes, so when clients search for a specific product attribute, their store can display the right products instantly. A near-instant search bar can boost revenue by delivering a better product catalog search experience and making search the primary form of navigation.
Walgreens and Kroger are some of the biggest retail companies streamlining their online grocery shopping experience with Elasticsearch.
Operational Logging Analytics
Using Elasticsearch to process billions of events every day to analyze logs, ensure consistent system performance, and detect anomalies has helped companies like GoDaddy improve the customer experience.
Site Content and Media Search
Using Elasticsearch for site content search is not limited to publishers – Shopify and Asana also use it to make their documentation and support content easily findable to clients. Search is also not limited to articles. One of the biggest video hosting companies, Vimeo, powers the search of millions of videos every day through Elasticsearch.
Instantaneous Live Chat
LiveChat is one company that improved the customer experience for 6,000 customers conducting millions of queries daily, all by using Elasticsearch to maintain an archive of 460 million documents and deliver instantaneous query response times.
Fraud Monitoring and Early Detection
SoftBank and Xoom are preventing and protecting against fraud and security threats by monitoring their system with Elasticsearch.

Application Search
One of the biggest companies using Elasticsearch for application search is eBay, searching across 800 million listings in under a second and maintaining a world-class end-user experience for millions of people every day.
Business Analytics
Walmart is using Elasticsearch to gain insights into customer purchasing patterns and store performance metrics, in order to enhance the in-store and online retail customer shopping experience and boost their commercial success.
Enterprise Search
Facebook uses Elasticsearch and has gone from a simple enterprise search to over 40 tools across multiple clusters with 60+ million queries a day and growing.
Metrics Analytics
Sprint is using Elasticsearch to analyze over 200 dashboards, representing 3 billion events per day from logs, databases, emails, syslogs, text messages, and internal and vendor application APIs, in order to search for better retail operations insights.
Security Analytics
Slack is building a defensive security program to monitor malicious activity by using Elasticsearch. Cisco is also using Elasticsearch to leverage data to detect and defeat hackers and fight cyber threats.
Scraping and Analyzing Public Data
Public data like social media discussions can be mined using Elasticsearch for real-time analysis, resulting in social sentiment analysis that helps you understand your customers. Still, these applications only scratch the surface of how companies can use Elasticsearch to solve a variety of growing challenges. You can also check out this fun Elasticsearch use case where we put together our own Internet of Things setup for measuring air pollution using a couple of IoT devices, Node.js, Elasticsearch, and MQTT.


Elasticsearch is a powerful open source search and analytics engine that makes data easy to explore. Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. Elasticsearch can be used to search all kinds of documents. It provides scalable search, has near real-time search and supports multitenancy.

Elasticsearch is distributed, which means that indices can be divided into shards and each shard can have zero or more replicas. Each node hosts one or more shards and acts as a coordinator to delegate operations to the correct shard(s). Rebalancing and routing are done automatically.

Elasticsearch uses Lucene and tries to make all its features available through the JSON and Java APIs. It supports faceting and percolating, which can be useful for notification when new documents match registered queries.

Another feature, called the “gateway,” handles the long-term persistence of the index; for example, an index can be recovered from the gateway in the event of a server crash. Elasticsearch supports real-time GET requests, which makes it suitable as a NoSQL datastore, but it lacks distributed transactions.

Elasticsearch is a well-known open source project provided by https://www.elastic.co/

Elasticsearch on Cloud runs on Amazon Web Services (AWS) and Azure and is built to provide a distributed, multitenant-capable full-text search engine with an HTTP web interface.

Elasticsearch is owned by Elastic (www.elastic.co/), which owns all related trademarks and IP rights for this software.


Cognosys provides hardened images of Elasticsearch on major public clouds, i.e., AWS Marketplace and Azure.

This image is built specifically for customers looking to deploy a self-managed Community edition on a hardened kernel instead of a vanilla install.

Secured Elasticsearch on Centos

Elasticsearch on Cloud for Azure

Features

Features of Elasticsearch:

• Elasticsearch is being developed with a focus on not only search but also big data analytics. Traditional SQL database management systems are not designed for full-text searches against large volumes of data. Because it’s built on top of Lucene, Elasticsearch offers one of the most powerful full-text search capabilities and lets you perform and combine many types of searches, from structured, unstructured, geo, to metric.
• Elasticsearch is also a component of the ELK stack (Elasticsearch, Logstash, Kibana), which is increasingly being used for big data log analytics use cases, such as IT security, e-commerce shopper behavior analytics, market intelligence, risk management, and compliance. The analytical use case is the most popular Elasticsearch use case, even more popular than full-text search. Specifically, Elasticsearch is often used for log analytics and for slicing and dicing numerical data such as application and infrastructure performance metrics. Although Apache Solr provided faceting before Elasticsearch was even born, Elasticsearch took faceting to another level, enabling its users to aggregate data on the fly using Elasticsearch’s aggregation queries. These aggregation queries are what power pretty much all the data visualizations you see in tools like Kibana, Grafana, and others.
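An aggregation query of the kind described above can be sketched as the JSON body below: bucket documents by one field, then compute a metric per bucket (field names are assumptions for illustration):

```python
# Hypothetical aggregation body: average latency per service, the kind
# of query that backs a Kibana or Grafana chart. Field names are assumed.
aggs_query = {
    "size": 0,  # return only aggregates, no individual hits
    "aggs": {
        "per_service": {
            "terms": {"field": "service"},  # bucket documents by service
            "aggs": {
                "avg_latency": {"avg": {"field": "latency_ms"}}
            },
        }
    },
}
```

Nesting a metric aggregation inside a bucket aggregation like this is what produces the per-category series a dashboard chart plots.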
• As an open source solution, Elasticsearch requires no up-front licensing costs while offering the flexibility for complex customization. But it also requires experienced internal developers or third-party partners for sophisticated, custom functionality.
• From a technical perspective, Elasticsearch makes it relatively easy to create and implement enterprise-scale search and analytics systems. With many commercial solutions, such as Splunk, developers need to be power users of proprietary technology to really get the most out of the solution. Get an insider’s look at ELK vs. Splunk.
• Elasticsearch is schema-free and document-oriented. For many business applications, these are important technical innovations compared to legacy enterprise search engines.
• Elasticsearch works with a wide range of data connectors that are readily available or custom-built, enabling you to search across multiple repositories efficiently.
• Elastic – the commercial company of Elasticsearch – offers support packages with complementary technologies – Marvel, Shield, Watcher, and Found for added security, monitoring, hosting, and alerting capabilities, all of which are critical in today’s business IT systems.
• Elasticsearch currently supports multilingual search in 33 languages. It has client libraries for many programming languages, such as Java, JavaScript, PHP, C#, Ruby, Python, Go, and many more. The availability of these client libraries makes it quite easy for developers to integrate with Elasticsearch.

Major Features Of Elasticsearch

  • It supports faceting and percolating, which can be useful for notification when new documents match registered queries.
  • Another feature is called “gateway” and handles the long-term persistence of the index; for example, an index can be recovered from the gateway in the event of a server crash.
  • Elasticsearch supports real-time GET requests, which makes it suitable as a NoSQL datastore, but it lacks distributed transactions.

AWS

Installation Instructions For Ubuntu

Note: How to find PublicDNS in AWS

Step 1) SSH Connection: To connect to the deployed instance, Please follow Instructions to Connect to Ubuntu instance on AWS Cloud

1) Download Putty.

2) Connect to virtual machine using following SSH credentials :

  • Hostname: PublicDNS  / IP of machine
  • Port : 22

Username: To connect to the operating system, use SSH; the username is ubuntu.
Password: Please click here to learn how to get the password.

Step 2) Database Login Details :

  • MYSQL Username : root
  • MYSQL Password : Passw@rd123

Note :-Please change password immediately after first login.

Step 3) Application URL: Access the application via a browser at http://publicDNS:9200 

Step 4) Other Information:
1. Default installation path: your web root folder, “/var/www/html/Elasticsearch”
2. Default ports:

  • Linux Machines:  SSH Port – 22 or 2222
  • Http: 80 or 8080
  • Https: 443
  • Sql or Mysql ports: By default these are not open on Public Endpoints. Internally Sql server: 1433. Mysql :3306

Configure custom inbound and outbound rules using this link

AWS Step by Step Screenshots

Product Overview

(Screenshots: Elasticsearch on Ubuntu, steps 1–3)

Azure

Installation Instructions For Ubuntu

Note: How to find PublicDNS in Azure

Step 1) SSH Connection: To connect to the deployed instance, Please follow Instructions to Connect to Ubuntu instance on Azure Cloud

1) Download Putty.

2) Connect to virtual machine using following SSH credentials :

  • Host name: PublicDNS  / IP of machine
  • Port : 22

Username: The username you chose when you created the machine (for example: Azureuser)

Password: The password you chose when you created the machine (how to reset the password if you do not remember it)

Step 2) Database Login Details :

MYSQL Username : root
MYSQL Password : Passw@rd123

Note : Please change password immediately after first login.

Step 3) Application URL: Access the application via a browser at http://PublicDNS

Note: Open port 9200 on server Firewall.

Step 4) Other Information:
1. Default installation path: “/etc/elasticsearch” (the Elasticsearch configuration directory)
2. Default ports:

  • Linux Machines:  SSH Port – 22 or 2222
  • Http: 80 or 8080
  • Https: 443
  • MySQL ports: By default these are not open on Public Endpoints. MySQL :3306

Installation Instructions For CentOS

Note : How to find PublicDNS in Azure

Step 1) SSH Connection: To connect to the deployed instance, Please follow Instructions to Connect to Centos instance on Azure Cloud

1) Download Putty.

2) Connect to virtual machine using following SSH credentials :

  • Host name: PublicDNS  / IP of machine
  • Port : 22

Username: The username you chose when you created the machine (for example: Azureuser)

Password: The password you chose when you created the machine (how to reset the password if you do not remember it)

Step 2) Database Login Details :

  • MYSQL Username : root
  • MYSQL Password : Passw@rd123

Note : Please change password immediately after first login.

Step 3) Application URL: Access the application via a browser at http://PublicDNS

Step 4) Other Information:
1. Default installation path: “/etc/elasticsearch” (the Elasticsearch configuration directory)
2. Default ports:

  • Linux Machines:  SSH Port – 22 or 2222
  • Http: 80 or 8080
  • Https: 443
  • MySQL ports: By default these are not open on Public Endpoints. MySQL :3306

Configure custom inbound and outbound rules using this link

Azure Step by Step Screenshots for Ubuntu 14.04 LTS

Google

Installation Instructions For Ubuntu

Step 1) VM Creation:

  1. Click the Launch on Compute Engine button to choose the hardware and network settings.
  2. You can see at this page, an overview of Cognosys Image as well as estimated cost of running the instance.
  3. In the settings page, you can choose the number of CPUs and amount of RAM, the disk size and type etc.

Step 2) SSH Connection: To initialize the DB server, connect to the deployed instance. Please follow Instructions to Connect to Ubuntu instance on Google Cloud

1) Download Putty.

2) Connect to the virtual machine using SSH key

  • Hostname: PublicDNS  / IP of machine
  • Port : 22

Step 3) Database Login Details:

The below screen appears after successful deployment of the image.


For local MySQL root password, please use the temporary password generated automatically during image creation as shown above.

i) Please connect to the instance as given in step 2 to ensure the stack is properly configured and the DB is initialized.
ii) You can use the MySQL server instance as localhost, with username root and the password shown above.

If you have closed the deployment page, you can also get the MySQL root password from the VM Details “Custom metadata” section.

Step 4) Application URL: Access the application via a browser at http://PublicDNS

Note: Open port 9200 on server Firewall.

Step 5) Other Information:
1. Default installation path: “/etc/elasticsearch” (the Elasticsearch configuration directory)
2. Default ports:

  • Linux Machines:  SSH Port – 22 or 2222
  • Http: 80 or 8080
  • Https: 443
  • MySQL ports: By default these are not open on Public Endpoints. MySQL :3306

Videos

Secured Elasticsearch on Centos

Elasticsearch on Cloud

Related Posts