Elasticsearch (Search Service)

Elasticsearch is a distributed RESTful search engine built for the cloud.

See the Elasticsearch documentation for more information.

Supported versions

  • 5.2
  • 5.4
  • 6.5

Deprecated versions

The following versions are available but are not receiving security updates from upstream, so their use is not recommended. They will be removed at some point in the future.

  • 0.90
  • 1.4
  • 1.7
  • 2.4


Relationship

The format exposed in the $PLATFORM_RELATIONSHIPS environment variable:

    "username": null,
    "scheme": "http",
    "service": "elasticsearch",
    "fragment": null,
    "ip": "",
    "hostname": "j2dkzht3gs2yr66fb2brhoj4zu.elasticsearch.service._.eu-3.platformsh.site",
    "public": false,
    "cluster": "rjify4yjcwxaa-master-7rqtwti",
    "host": "elasticsearch.internal",
    "rel": "elasticsearch",
    "query": [],
    "path": null,
    "password": null,
    "type": "elasticsearch:5.4",
    "port": 9200

Usage example

In your .platform/services.yaml:

    mysearch:
        type: elasticsearch:6.5
        disk: 1024

In your .platform.app.yaml:

    relationships:
        elasticsearch: "mysearch:elasticsearch"

You can then use the service in a configuration file of your application with something like:

In PHP:

    use Elasticsearch\ClientBuilder;
    use Platformsh\ConfigReader\Config;

    // Create a new config object to ease reading the Platform.sh environment variables.
    // You can alternatively use getenv() yourself.
    $config = new Config();

    // Get the credentials to connect to the Elasticsearch service.
    $credentials = $config->credentials('elasticsearch');

    try {
        // The Elasticsearch library lets you connect to multiple hosts.
        // On Platform.sh Standard there is only a single host so just
        // register that.
        $hosts = [
            [
                'scheme' => $credentials['scheme'],
                'host' => $credentials['host'],
                'port' => $credentials['port'],
            ],
        ];

        // Create an Elasticsearch client object.
        $builder = ClientBuilder::create();
        $builder->setHosts($hosts);
        $client = $builder->build();

        $index = 'my_index';
        $type = 'People';

        // Index a few documents.
        $params = [
            'index' => $index,
            'type' => $type,
            'body' => ['name' => ''],
        ];

        $names = ['Ada Lovelace', 'Alonzo Church', 'Barbara Liskov'];

        foreach ($names as $name) {
            $params['body']['name'] = $name;
            $client->index($params);
        }

        // Force just-added items to be indexed.
        $client->indices()->refresh(array('index' => $index));

        // Search for documents.
        $result = $client->search([
            'index' => $index,
            'type' => $type,
            'body' => [
                'query' => [
                    'match' => [
                        'name' => 'Barbara Liskov',
                    ],
                ],
            ],
        ]);

        if (isset($result['hits']['hits'])) {
            print <<<TABLE
    <table>
    <thead>
    <tr><th>ID</th><th>Name</th></tr>
    </thead>
    <tbody>
    TABLE;
            foreach ($result['hits']['hits'] as $record) {
                printf("<tr><td>%s</td><td>%s</td></tr>\n", $record['_id'], $record['_source']['name']);
            }
            print "</tbody>\n</table>\n";
        }

        // Delete documents.
        $params = [
            'index' => $index,
            'type' => $type,
        ];

        $ids = array_map(function($row) {
            return $row['_id'];
        }, $result['hits']['hits']);

        foreach ($ids as $id) {
            $params['id'] = $id;
            $client->delete($params);
        }

    } catch (Exception $e) {
        print $e->getMessage();
    }
In Node.js:

    const elasticsearch = require('elasticsearch');
    const config = require("platformsh-config").config();

    exports.usageExample = async function() {
        // Get the credentials to connect to the Elasticsearch service.
        const credentials = config.credentials('elasticsearch');

        const client = new elasticsearch.Client({
            host: `${credentials.host}:${credentials.port}`,
        });

        const index = 'my_index';
        const type = 'People';

        // Index a few documents.
        const names = ['Ada Lovelace', 'Alonzo Church', 'Barbara Liskov'];

        const message = {
            refresh: "wait_for",
            body: []
        };
        names.forEach((name) => {
            message.body.push({index: {_index: index, _type: type}});
            message.body.push({name: name});
        });
        await client.bulk(message);

        // Search for documents.
        const response = await client.search({
            index: index,
            q: 'name:Barbara Liskov'
        });

        let output = '';

        if (response.hits.total > 0) {
            output += `<table>\n<thead>\n<tr><th>ID</th><th>Name</th></tr>\n</thead>\n<tbody>\n`;
            response.hits.hits.forEach((record) => {
                output += `<tr><td>${record._id}</td><td>${record._source.name}</td></tr>\n`;
            });
            output += "</tbody>\n</table>\n";
        }

        // Clean up after ourselves.
        response.hits.hits.forEach((record) => {
            client.delete({
                index: index,
                type: type,
                id: record._id,
            });
        });

        return output;
    };
In Python:

    import elasticsearch
    from platformshconfig import Config


    def usage_example():
        # Create a new Config object to ease reading the Platform.sh environment variables.
        # You can alternatively use os.environ yourself.
        config = Config()

        # Get the credentials to connect to the Elasticsearch service.
        credentials = config.credentials('elasticsearch')

        try:
            # The Elasticsearch library lets you connect to multiple hosts.
            # On Platform.sh Standard there is only a single host so just register that.
            hosts = {
                "scheme": credentials['scheme'],
                "host": credentials['host'],
                "port": credentials['port']
            }

            # Create an Elasticsearch client object.
            client = elasticsearch.Elasticsearch([hosts])

            # Index a few documents.
            es_index = 'my_index'
            es_type = 'People'

            params = {
                "index": es_index,
                "type": es_type,
                "body": {"name": ''}
            }

            names = ['Ada Lovelace', 'Alonzo Church', 'Barbara Liskov']

            ids = {}

            for name in names:
                params['body']['name'] = name
                ids[name] = client.index(index=params["index"], doc_type=params["type"], body=params['body'])

            # Force just-added items to be indexed.
            client.indices.refresh(index=es_index)

            # Search for documents.
            result = client.search(index=es_index, body={
                'query': {
                    'match': {
                        'name': 'Barbara Liskov'
                    }
                }
            })

            table = '<table>\n<thead>\n<tr><th>ID</th><th>Name</th></tr>\n</thead>\n<tbody>\n'

            if result['hits']['hits']:
                for record in result['hits']['hits']:
                    table += '<tr><td>{0}</td><td>{1}</td></tr>\n'.format(record['_id'], record['_source']['name'])
                table += '</tbody>\n</table>\n'

            # Delete documents.
            params = {
                "index": es_index,
                "type": es_type
            }

            for name in names:
                client.delete(index=params['index'], doc_type=params['type'], id=ids[name]['_id'])

            return table

        except Exception as e:
            return e


When you create an index on Elasticsearch, you should not specify number_of_shards and number_of_replicas settings in your Elasticsearch API call. These values will be set automatically based on available resources.
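For instance, with the Python client an index-creation body would carry mappings only; the mapping below is purely illustrative, and the point is the absence of a settings block:

```python
def index_definition():
    """Return an index body containing mappings only.

    Deliberately no "settings" key: number_of_shards and
    number_of_replicas are chosen by the platform, not by the caller.
    """
    return {
        "mappings": {
            "People": {
                "properties": {
                    "name": {"type": "text"},
                },
            },
        },
    }
```

You would then pass this to client.indices.create(index='my_index', body=index_definition()).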


Plugins

Elasticsearch 2.4 and later services offer a number of plugins. To enable them, list them under the configuration.plugins key in your services.yaml file, like so:

    type: "elasticsearch:6.5"
    disk: 1024
            - analysis-icu
            - lang-python

In this example you'd have the ICU analysis plugin and the Python script support plugin enabled.

If there is a publicly available plugin you need that is not listed here, please contact our support team.

Available plugins

This is the complete list of official Elasticsearch plugins that can be enabled:

| Plugin              | Description                                                                       | 2.4 | 5.2 | 5.4 |
|---------------------|-----------------------------------------------------------------------------------|-----|-----|-----|
| analysis-icu        | Support ICU Unicode text analysis                                                 | *   | *   | *   |
| analysis-kuromoji   | Japanese language support                                                         | *   | *   | *   |
| analysis-smartcn    | Smart Chinese Analysis Plugins                                                    | *   | *   | *   |
| analysis-stempel    | Stempel Polish Analysis Plugin                                                    | *   | *   | *   |
| analysis-phonetic   | Phonetic analysis                                                                 | *   | *   | *   |
| analysis-ukrainian  | Ukrainian language support                                                        |     | *   | *   |
| cloud-aws           | AWS Cloud plugin, allows storing indices on AWS S3                                | *   |     |     |
| delete-by-query     | Support for deleting documents matching a given query                             | *   |     |     |
| discovery-multicast | Ability to form a cluster using TCP/IP multicast messages                         | *   |     |     |
| ingest-attachment   | Extract file attachments in common formats (such as PPT, XLS, and PDF)            |     | *   | *   |
| ingest-user-agent   | Extracts details from the user agent string a browser sends with its web requests |     | *   | *   |
| lang-javascript     | Javascript language plugin, allows the use of Javascript in Elasticsearch scripts |     | *   | *   |
| lang-python         | Python language plugin, allows the use of Python in Elasticsearch scripts         | *   | *   | *   |
| mapper-attachments  | Mapper attachments plugin for indexing common file types                          | *   | *   | *   |
| mapper-murmur3      | Murmur3 mapper plugin for computing hashes at index-time                          | *   | *   | *   |
| mapper-size         | Size mapper plugin, enables the _size meta field                                  | *   | *   | *   |
| repository-s3       | Support for using S3 as a repository for Snapshot/Restore                         |     | *   | *   |


Upgrading

The Elasticsearch data format sometimes changes between versions in incompatible ways. Elasticsearch does not include a data upgrade mechanism, as it is expected that all indexes can be regenerated from stable data if needed. To upgrade (or downgrade) Elasticsearch you will need to use a new service from scratch.

There are two ways of doing that.


Destructive

In your services.yaml file, change the version of your Elasticsearch service and its name. Then update the name in the .platform.app.yaml relationships block.

When you push that to Platform.sh, the old service will be deleted and a new one created with the new name, containing no data. You can then have your application reindex data as appropriate.

This approach is simple but has the downside of temporarily having an empty Elasticsearch instance, which your application may or may not handle gracefully, and needing to rebuild your index afterward. Depending on the size of your data that could take a while.
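As a sketch, if the old definition was a service named mysearch running elasticsearch:5.4, the replacement could look like this (the service and relationship names here are illustrative, not required):

```yaml
# .platform/services.yaml
mysearch2:
    type: elasticsearch:6.5
    disk: 1024

# .platform.app.yaml
relationships:
    elasticsearch: "mysearch2:elasticsearch"
```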


Transitional

For a transitional approach you will temporarily have two Elasticsearch services. Add a second Elasticsearch service with the new version and a new name, and give it a new relationship in .platform.app.yaml. You can optionally run in that configuration for a while to allow your application to populate indexes in the new service as well.

Once you're ready to cut over, remove the old Elasticsearch service and relationship. You may optionally have the new Elasticsearch service use the old relationship name if that's easier for your application to handle. Your application is now using the new Elasticsearch service.

This approach has the benefit of never being without a working Elasticsearch instance. On the downside, it requires two running Elasticsearch servers temporarily, each of which will consume resources and need adequate disk space. Depending on the size of your data that may be a lot of disk space.
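A sketch of the intermediate configuration, with both services running side by side (names are illustrative):

```yaml
# .platform/services.yaml - both services exist during the transition
mysearch:
    type: elasticsearch:5.4
    disk: 1024

mysearch2:
    type: elasticsearch:6.5
    disk: 1024

# .platform.app.yaml
relationships:
    elasticsearch: "mysearch:elasticsearch"
    elasticsearchnew: "mysearch2:elasticsearch"
```

Once you cut over, delete the mysearch block and the old relationship, optionally renaming elasticsearchnew back to elasticsearch.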