Solr (Search Service)

Apache Solr is a scalable and fault-tolerant search index.

Solr search is provided with generic schemas, and custom schemas are also supported.

See the Solr documentation for more information.

Supported versions

  • 3.6
  • 4.10
  • 6.3
  • 6.6
  • 7.6


The format exposed in the $PLATFORM_RELATIONSHIPS environment variable:

    "service": "solr",
    "ip": "",
    "hostname": "",
    "cluster": "rjify4yjcwxaa-master-7rqtwti",
    "host": "solr.internal",
    "rel": "solr",
    "path": "solr\/collection1",
    "scheme": "solr",
    "type": "solr:7.6",
    "port": 8080

Usage example

In your .platform/services.yaml:

    mysearch:
        type: solr:7.6
        disk: 1024

In your .platform.app.yaml:

        solr: "mysearch:solr"

You can then use the service in your application with something like the following. In PHP, using the Solarium library:


use Platformsh\ConfigReader\Config;
use Solarium\Client;

// Create a new config object to ease reading the environment variables.
// You can alternatively use getenv() yourself.
$config = new Config();

// Get the credentials to connect to the Solr service.
$credentials = $config->credentials('solr');

try {

    $config = [
        'endpoint' => [
            'localhost' => [
                'host' => $credentials['host'],
                'port' => $credentials['port'],
                'path' => "/" . $credentials['path'],
            ],
        ],
    ];

    $client = new Client($config);

    // Add a document
    $update = $client->createUpdate();

    $doc1 = $update->createDocument();
    $doc1->id = 123;
    $doc1->name = 'Valentina Tereshkova';

    $update->addDocuments([$doc1]);
    $update->addCommit();

    $result = $client->update($update);
    print "Adding one document. Status (0 is success): " .$result->getStatus(). "<br />\n";

    // Select one document
    $query = $client->createQuery($client::QUERY_SELECT);
    $resultset = $client->execute($query);
    print "Selecting documents (1 expected): " .$resultset->getNumFound() . "<br />\n";

    // Delete one document
    $update = $client->createUpdate();
    $update->addDeleteById(123);
    $update->addCommit();

    $result = $client->update($update);
    print "Deleting one document. Status (0 is success): " .$result->getStatus(). "<br />\n";

} catch (Exception $e) {
    print $e->getMessage();
}

The same example in Node.js, using the solr-node library:

const solr = require('solr-node');
const config = require("platformsh-config").config();

exports.usageExample = async function() {

    let client = new solr(config.formattedCredentials('solr', 'solr-node'));

    let output = '';

    // Add a document.
    let result = await client.update({
        id: 123,
        name: 'Valentina Tereshkova',
    });

    output += "Adding one document. Status (0 is success): " + result.responseHeader.status +  "<br />\n";

    // Flush writes so that we can query against them.
    await client.softCommit();

    // Select one document:
    let strQuery = client.query().q();
    result = await;
    output += "Selecting documents (1 expected): " + result.response.numFound + "<br />\n";

    // Delete one document.
    result = await client.delete({id: 123});
    output += "Deleting one document. Status (0 is success): " + result.responseHeader.status + "<br />\n";

    return output;
};

The same example in Python, using the pysolr library:

import pysolr
from xml.etree import ElementTree as et
from platformshconfig import Config

def usage_example():

    # Create a new Config object to ease reading the environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # Get the credentials to connect to the Solr service.
    credentials = config.credentials('solr')

        # Get the pysolr-formatted connection string.
        formatted_url = config.formatted_credentials('solr', 'pysolr')

        # Create a new Solr Client using config variables
        client = pysolr.Solr(formatted_url)

        # Add a document
        message = ''
        doc_1 = {
            "id": 123,
            "name": "Valentina Tereshkova"

        result0 = client.add([doc_1])
        message += 'Adding one document. Status (0 is success): {0} <br />'.format(et.fromstring(result0)[0][0].text)

        # Select one document
        query ='*:*')
        message += '\nSelecting documents (1 expected): {0} <br />'.format(str(query.hits))

        # Delete one document
        result1 = client.delete(doc_1['id'])
        message += '\nDeleting one document. Status (0 is success): {0}'.format(et.fromstring(result1)[0][0].text)

        return message

    except Exception as e:
        return e


Solr 4

Solr 4 supports only a single core per server, named collection1.

If you want to provide your own Solr configuration, you can add a core_config key in your .platform/services.yaml:

    mysearch:
        type: solr:4.10
        disk: 1024
            core_config: !include
                type: archive
                path: "<directory>"

The directory parameter points to a directory in the Git repository, in or below the .platform/ folder. This directory needs to contain everything that Solr needs to start a core; at a minimum, solrconfig.xml and schema.xml. For example, place them in .platform/solr/conf/ such that the schema.xml file is located at .platform/solr/conf/schema.xml. You can then reference that path like this:

    mysearch:
        type: solr:4.10
        disk: 1024
            core_config: !include
                type: archive
                path: "solr/conf/"

Solr 6 and later

Solr 6 and later support multiple cores via different endpoints. Cores and endpoints are defined separately, with endpoints referencing cores. Each core may have its own configuration or share a configuration. It is best illustrated with an example.

    solrsearch:
        type: solr:7.6
        disk: 1024
                mainindex:
                    core_config: !include
                        type: archive
                        path: "core1-conf"
                extraindex:
                    core_config: !include
                        type: archive
                        path: "core2-conf"
                    core: mainindex
                    core: extraindex

The above configuration defines a single Solr 7.6 server. That server has two cores defined: mainindex, whose configuration is in the .platform/core1-conf directory, and extraindex, whose configuration is in the .platform/core2-conf directory.

It then defines two endpoints: main is connected to the mainindex core while extra is connected to the extraindex core. Two endpoints may be connected to the same core but at this time there would be no reason to do so. Additional options may be defined in the future.

Each endpoint is then available in the relationships definition in .platform.app.yaml. For example, to allow an application to talk to both of the cores defined above, its .platform.app.yaml file should contain the following:

        solr1: 'solrsearch:main'
        solr2: 'solrsearch:extra'

That is, the application's environment would include a solr1 relationship that connects to the main endpoint, which is the mainindex core, and a solr2 relationship that connects to the extra endpoint, which is the extraindex core.
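
As a sketch of what this looks like from application code, using the pysolr and platformshconfig libraries from the usage example above (the relationship names solr1 and solr2 are the ones defined in .platform.app.yaml; the document contents are illustrative):

    import pysolr
    from platformshconfig import Config

    config = Config()

    # One client per relationship: each relationship maps to one endpoint,
    # and each endpoint to one core.
    main_index = pysolr.Solr(config.formatted_credentials('solr1', 'pysolr'))
    extra_index = pysolr.Solr(config.formatted_credentials('solr2', 'pysolr'))

    # Documents added through one client end up in that client's core only.
    main_index.add([{"id": 1, "name": "stored in mainindex"}])
    extra_index.add([{"id": 2, "name": "stored in extraindex"}])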

The relationships array would then look something like the following:

    "solr1": [
            "path": "solr/mainindex",
            "host": "",
            "scheme": "solr",
            "port": 8080
    "solr2": [
            "path": "solr/extraindex",
            "host": "",
            "scheme": "solr",
            "port": 8080


For even more customizability, it's also possible to define Solr configsets. For example, the following snippet would define one configset, which would be used by all cores. Specific details can then be overridden by individual cores using core_properties, which is equivalent to the Solr file.

    mysearch:
        type: solr:7.6
        disk: 1024
                mainconfig: !include
                    type: archive
                    path: "configsets/solr6"
                english_index:
                    core_properties: |
                        configSet = mainconfig
                        schema = english/schema.xml
                arabic_index:
                    core_properties: |
                        configSet = mainconfig
                        schema = arabic/schema.xml
                    core: english_index
                    core: arabic_index

In this example, the directory .platform/configsets/solr6 contains the configuration definition for multiple cores. There are then two cores created: english_index uses the defined configset, but specifically the .platform/configsets/solr6/english/schema.xml file, while arabic_index is identical except for using the .platform/configsets/solr6/arabic/schema.xml file. Each of those cores is then exposed as its own endpoint.

Note that not all features make sense to specify in the core_properties. Some keys, such as name and dataDir, are not supported, and may result in a solrconfig that fails to work as intended, or at all.

Default configuration

If no configuration is specified, the default configuration is equivalent to:

    mysearch:
        type: solr:7.6
                collection1:
                    conf_dir: {}  # This will pick up the default Drupal 8 configuration
                    core: collection1

The Solr 6.x Drupal 8 configuration files are reasonably generic and should work in many other circumstances, but explicitly defining a core, configuration, and endpoint is generally recommended.


The recommended maximum size for configuration directories (zipped) is 2MB, and they need to be monitored to ensure they don't grow beyond that. If the zipped configuration directories grow beyond this, performance will decline and deploys will take longer. The directory archives are compressed and string-encoded. To get an idea of the archive size, run the following inside the configuration directory:

    echo $(($(tar czf - . | base64 | wc -c )/(1024*1024))) Megabytes

The configuration directory is a collection of configuration data, like a data dictionary, e.g. small collections of key/value sets. The best way to keep the size small is to restrict the directory contents to plain configuration files. Including binary data like plugin .jar files will inflate the archive size and is not recommended.

Accessing the Solr server administrative interface

Because Solr uses HTTP for both its API and its admin interface, it's possible to access the admin interface over an SSH tunnel:

    platform tunnel:open

That will open an SSH tunnel to all services on the current environment, and give an output similar to:

    SSH tunnel opened on port 30000 to relationship: solr
    SSH tunnel opened on port 30001 to relationship: database
    Logs are written to: /home/myuser/.platformsh/tunnels.log

    List tunnels with: platform tunnels
    View tunnel details with: platform tunnel:info
    Close tunnels with: platform tunnel:close

In this example, you can now open http://localhost:30000/solr/ in a browser to access the Solr admin interface. Note that you cannot create indexes or users this way, but you can browse the existing indexes and manipulate the stored data.
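
For example, with the tunnel above open on port 30000 and the default collection1 core, the same client libraries work against the local address. A minimal sketch in Python (the port and core path depend on your tunnel output and configuration):

    import pysolr

    # The tunnel forwards localhost:30000 to the Solr service, so queries
    # behave the same as they do from the application container.
    client = pysolr.Solr('http://localhost:30000/solr/collection1')

    for doc in'*:*', rows=10):
        print(doc)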

Note: Enterprise users can use ssh -L 8888:localhost:8983 <user>@<cluster-name> to open a tunnel instead, after which the Solr server administrative interface will be available at http://localhost:8888/solr/.



The Solr data format sometimes changes between versions in incompatible ways. Solr does not include a data upgrade mechanism, as it is expected that all indexes can be regenerated from stable data if needed. To upgrade (or downgrade) Solr, you need to set up a new service from scratch.

There are two ways of doing that.


In your services.yaml file, change the version of your Solr service and its name. Then update the name in the relationships block of your .platform.app.yaml.

When you push that change, the old service is deleted and a new one is created under the new name, with no data. You can then have your application reindex data as appropriate.

This approach is simple but has the downside of temporarily having an empty Solr instance, which your application may or may not handle gracefully, and needing to rebuild your index afterward. Depending on the size of your data that could take a while.


For a transitional approach, you temporarily run two Solr services. Add a second Solr service with the new version, a new name, and a new relationship in your .platform.app.yaml. You can optionally run in that configuration for a while to allow your application to populate indexes in the new service as well.
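
For instance, assuming the old service is still exposed to the application as solr and the new one as solr_next (hypothetical relationship names), the application could index into both during the transition:

    import pysolr
    from platformshconfig import Config

    config = Config()

    # Write to the old and the new Solr service while both exist, so the new
    # index is populated before the cutover. The relationship names are
    # illustrative and must match your .platform.app.yaml.
    old_index = pysolr.Solr(config.formatted_credentials('solr', 'pysolr'))
    new_index = pysolr.Solr(config.formatted_credentials('solr_next', 'pysolr'))

    doc = {"id": 123, "name": "Valentina Tereshkova"}
    for client in (old_index, new_index):
        client.add([doc])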

Once you're ready to cut over, remove the old Solr service and relationship. You may optionally have the new Solr service use the old relationship name if that's easier for your application to handle. Your application is now using the new Solr service.

This approach has the benefit of never being without a working Solr instance. On the downside, it requires two running Solr servers temporarily, each of which will consume resources and need adequate disk space. Depending on the size of your data that may be a lot of disk space.