
Solr (Search service)


Apache Solr is a scalable and fault-tolerant search index.

Platform.sh provides Solr with generic schemas, and custom schemas are also supported. See the Solr documentation for more information.

Use a framework

If you use one of the following frameworks, follow its guide:

Supported versions

You can select the major and minor version.

Patch versions are applied periodically for bug fixes and the like. When you deploy your app, you always get the latest available patches.

Grid:
  • 9.6
  • 9.4
  • 9.2
  • 9.1
  • 8.11

Dedicated Gen 3:
  • 9.6
  • 9.4
  • 9.2
  • 9.1
  • 8.11

Dedicated Gen 2:
  • 8.11

Deprecated versions

The following versions are deprecated. They’re available, but they aren’t receiving security updates from upstream and aren’t guaranteed to work. They’ll be removed in the future, so migrate to one of the supported versions.

Grid:
  • 8.6
  • 8.4
  • 8.0
  • 7.7
  • 7.6
  • 6.6
  • 6.3
  • 4.10
  • 3.6

Dedicated Gen 2:
  • 8.6
  • 8.0
  • 7.7
  • 6.6
  • 6.3
  • 4.10

Relationship reference

The following example shows the information available through the PLATFORM_RELATIONSHIPS environment variable or by running platform relationships.

Note that the relationship information can change when an app is redeployed or restarted, or when the relationship itself is changed. So your apps should rely on the PLATFORM_RELATIONSHIPS environment variable directly rather than hard coding any values.

{
  "username": null,
  "scheme": "solr",
  "service": "solr",
  "fragment": null,
  "ip": "123.456.78.90",
  "hostname": "azertyuiopqsdfghjklm.solr.service._.eu-1.platformsh.site",
  "port": 8080,
  "cluster": "azertyuiopqsdf-main-afdwftq",
  "host": "solr.internal",
  "rel": "solr",
  "path": "solr\/collection1",
  "query": [],
  "password": null,
  "type": "solr:9.6",
  "public": false,
  "host_mapped": false
}
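
For example, a minimal Python sketch that reads this data at runtime instead of hard coding it (the relationship name solr matches the examples below; on Grid environments the variable holds base64-encoded JSON):

import base64
import json
import os

# PLATFORM_RELATIONSHIPS is a base64-encoded JSON object keyed by relationship name.
relationships = json.loads(base64.b64decode(os.environ["PLATFORM_RELATIONSHIPS"]))

# Each relationship is a list of endpoints; take the first one.
solr = relationships["solr"][0]
solr_url = f"http://{solr['host']}:{solr['port']}/{solr['path']}"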

Usage example

1. Configure the service

To define the service, use the solr type:

.platform/services.yaml
# The name of the service container. Must be unique within a project.
<SERVICE_NAME>:
  type: solr:<VERSION>
  disk: 256

Note that changing the name of the service replaces it with a brand new service and all existing data is lost. Back up your data before changing the service.
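
If you do need to change the service name, you can take a backup first, for example with the CLI:

platform backup:create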

2. Define the relationship

To define the relationship, use the following configuration:

.platform.app.yaml
# Relationships enable access from this app to a given service.
# The example below shows simplified configuration leveraging a default service
# (identified from the relationship name) and a default endpoint.
# See the Application reference for all options for defining relationships and endpoints.
relationships:
  <SERVICE_NAME>:

You can define <SERVICE_NAME> as you like, as long as it's unique among all defined services and matches in both the application and services configuration.

The example above leverages default endpoint configuration for relationships. That is, it uses default endpoints behind the scenes, providing a relationship (the network address a service is accessible from) that is identical to the name of that service.

Depending on your needs, instead of default endpoint configuration, you can use explicit endpoint configuration.

With the above definition, the application container now has access to the service via the relationship <SERVICE_NAME> and its corresponding PLATFORM_RELATIONSHIPS environment variable.

.platform.app.yaml
# Relationships enable access from this app to a given service.
# See the Application reference for all options for defining relationships and endpoints.
# Note that legacy definition of the relationship is still supported.
# More information: https://docs.platform.sh/create-apps/app-reference/single-runtime-image.html#relationships
relationships:
  <RELATIONSHIP_NAME>:
    service: <SERVICE_NAME>
    endpoint: solr

You can define <SERVICE_NAME> and <RELATIONSHIP_NAME> as you like, as long as they're unique among all defined services and relationships and match in both the application and services configuration.

The example above leverages explicit endpoint configuration for relationships.

Depending on your needs, instead of explicit endpoint configuration, you can use default endpoint configuration.

With the above definition, the application container now has access to the service via the relationship <RELATIONSHIP_NAME> and its corresponding PLATFORM_RELATIONSHIPS environment variable.

Example configuration

Service definition

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: solr:9.6
  disk: 256

App configuration

.platform.app.yaml
# Relationships enable access from this app to a given service.
# The example below shows simplified configuration leveraging a default service
# (identified from the relationship name) and a default endpoint.
# See the Application reference for all options for defining relationships and endpoints.
relationships:
  solr:
.platform.app.yaml
# Relationships enable access from this app to a given service.
# See the Application reference for all options for defining relationships and endpoints.
# Note that legacy definition of the relationship is still supported.
# More information: https://docs.platform.sh/create-apps/app-reference/single-runtime-image.html#relationships
relationships:
  solr:
    service: solr
    endpoint: solr

Use in app

To use the configured service in your app, add code similar to the following examples to your project.

package examples

import (
	"fmt"

	psh "github.com/platformsh/config-reader-go/v2"
	gosolr "github.com/platformsh/config-reader-go/v2/gosolr"
	solr "github.com/rtt/Go-Solr"
)
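
// checkErr is a small helper assumed to be defined elsewhere in this package;
// it stops execution (for example by panicking) if err is non-nil.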

func UsageExampleSolr() string {

	// Create a NewRuntimeConfig object to ease reading the Platform.sh environment variables.
	// You can alternatively use os.Getenv() yourself.
	config, err := psh.NewRuntimeConfig()
	checkErr(err)

	// Get the credentials to connect to the Solr service.
	credentials, err := config.Credentials("solr")
	checkErr(err)

	// Retrieve Solr formatted credentials.
	formatted, err := gosolr.FormattedCredentials(credentials)
	checkErr(err)

	// Connect to Solr using the formatted credentials.
	connection := &solr.Connection{URL: formatted}

	// Add a document and commit the operation.
	docAdd := map[string]interface{}{
		"add": []interface{}{
			map[string]interface{}{"id": 123, "name": "Valentina Tereshkova"},
		},
	}

	respAdd, err := connection.Update(docAdd, true)
	checkErr(err)

	// Select the document.
	q := &solr.Query{
		Params: solr.URLParamMap{
			"q": []string{"id:123"},
		},
	}

	resSelect, err := connection.CustomSelect(q, "query")
	checkErr(err)

	// Delete the document and commit the operation.
	docDelete := map[string]interface{}{
		"delete": map[string]interface{}{
			"id": 123,
		},
	}

	resDel, err := connection.Update(docDelete, true)
	checkErr(err)

	message := fmt.Sprintf("Adding one document - %s<br>"+
		"Selecting document (1 expected): %d<br>"+
		"Deleting document - %s<br>",
		respAdd, resSelect.Results.NumFound, resDel)

	return message
}
package sh.platform.languages.sample;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrDocumentList;
import org.apache.solr.common.SolrInputDocument;
import sh.platform.config.Config;
import sh.platform.config.Solr;

import java.io.IOException;
import java.util.function.Supplier;

public class SolrSample implements Supplier<String> {

    @Override
    public String get() {

        StringBuilder logger = new StringBuilder();

        // Create a new config object to ease reading the Platform.sh environment variables.
        // You can alternatively use getenv() yourself.
        Config config = new Config();

        Solr solr = config.getCredential("solr", Solr::new);

        try {

            final HttpSolrClient solrClient = solr.get();
            solrClient.setParser(new XMLResponseParser());

            // Add a document
            SolrInputDocument document = new SolrInputDocument();
            final String id = "123456";
            document.addField("id", id);
            document.addField("name", "Ada Lovelace");
            document.addField("city", "London");
            solrClient.add(document);
            final UpdateResponse response = solrClient.commit();
            logger.append("Adding one document. Status (0 is success): ")
                    .append(response.getStatus()).append('\n');

            SolrQuery query = new SolrQuery();
            query.set("q", "city:London");
            QueryResponse queryResponse = solrClient.query(query);

            logger.append("<p>");
            SolrDocumentList results = queryResponse.getResults();
            logger.append(String.format("Selecting documents (1 expected):  %d \n", results.getNumFound()));
            logger.append("</p>");

            // Delete one document
            solrClient.deleteById(id);

            logger.append("<p>");
            logger.append(String.format("Deleting one document. Status (0 is success):  %s \n",
                    solrClient.commit().getStatus()));
            logger.append("</p>");
        } catch (SolrServerException | IOException exp) {
            throw new RuntimeException("An error occurred when executing Solr", exp);
        }

        return logger.toString();
    }
}
const solr = require("solr-node");
const config = require("platformsh-config").config();

exports.usageExample = async function () {
    const client = new solr(config.formattedCredentials("solr", "solr-node"));

    // Add a document.
    const addResult = await client.update({
        id: 123,
        name: "Valentina Tereshkova",
    });

    // Flush writes so that we can query against them.
    await client.softCommit();

    // Select one document:
    const strQuery = client.query().q();
    const writeResult = await client.search(strQuery);

    // Delete one document.
    const deleteResult = await client.delete({ id: 123 });

    return `
    Adding one document. Status (0 is success): ${addResult.responseHeader.status}<br />
    Selecting documents (1 expected): ${writeResult.response.numFound}<br />
    Deleting one document. Status (0 is success): ${deleteResult.responseHeader.status}<br />
    `;
};
<?php
declare(strict_types=1);

use Platformsh\ConfigReader\Config;
use Solarium\Client;

// Create a new config object to ease reading the Platform.sh environment variables.
// You can alternatively use getenv() yourself.
$config = new Config();

// Get the credentials to connect to the Solr service.
$credentials = $config->credentials('solr');

try {

    $config = [
        'endpoint' => [
            'localhost' => [
                'host' => $credentials['host'],
                'port' => $credentials['port'],
                'path' => "/" . $credentials['path'],
            ]
        ]
    ];

    $client = new Client($config);

    // Add a document
    $update = $client->createUpdate();

    $doc1 = $update->createDocument();
    $doc1->id = 123;
    $doc1->name = 'Valentina Tereshkova';

    $update->addDocuments(array($doc1));
    $update->addCommit();

    $result = $client->update($update);
    print "Adding one document. Status (0 is success): " .$result->getStatus(). "<br />\n";

    // Select one document
    $query = $client->createQuery($client::QUERY_SELECT);
    $resultset = $client->execute($query);
    print  "Selecting documents (1 expected): " .$resultset->getNumFound() . "<br />\n";

    // Delete one document
    $update = $client->createUpdate();

    $update->addDeleteById(123);
    $update->addCommit();
    $result = $client->update($update);
    print "Deleting one document. Status (0 is success): " .$result->getStatus(). "<br />\n";

} catch (Exception $e) {
    print $e->getMessage();
}

import pysolr
from xml.etree import ElementTree as et
import json
from platformshconfig import Config


def usage_example():

    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    try:
        # Get the pysolr-formatted connection string.
        formatted_url = config.formatted_credentials('solr', 'pysolr')

        # Create a new Solr Client using config variables
        client = pysolr.Solr(formatted_url)

        # Add a document
        message = ''
        doc_1 = {
            "id": 123,
            "name": "Valentina Tereshkova"
        }

        result0 = client.add([doc_1], commit=True)
        client.commit()
        message += 'Adding one document. Status (0 is success): {} <br />'.format(json.loads(result0)['responseHeader']['status'])

        # Select one document
        query = client.search('*:*')
        message += '\nSelecting documents (1 expected): {} <br />'.format(str(query.hits))

        # Delete one document
        result1 = client.delete(doc_1['id'])
        client.commit()
        message += '\nDeleting one document. Status (0 is success): {}'.format(et.fromstring(result1)[0][0].text)

        return message

    except Exception as e:
        return e

Solr 4

For Solr 4, Platform.sh supports only a single core per server called collection1.

You must provide your own Solr configuration via a core_config key in your .platform/services.yaml:

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: "solr:4.10"
  disk: 1024
  configuration:
    core_config: !archive "DIRECTORY"

DIRECTORY points to a directory in the Git repository, in or below the .platform/ folder. This directory needs to contain everything that Solr needs to start a core: at a minimum, solrconfig.xml and schema.xml.

For example, place them in .platform/solr/conf/ such that the schema.xml file is located at .platform/solr/conf/schema.xml. You can then reference that path like this:

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: "solr:4.10"
  disk: 1024
  configuration:
    core_config: !archive "solr/conf/"

Solr 6 and later

For Solr 6 and later, Platform.sh supports multiple cores via different endpoints. Cores and endpoints are defined separately, with endpoints referencing cores. Each core may have its own configuration or share a configuration. This is best illustrated with an example.

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: solr:9.6
  disk: 1024
  configuration:
    cores:
      mainindex:
        conf_dir: !archive "core1-conf"
      extraindex:
        conf_dir: !archive "core2-conf"
    endpoints:
      main:
        core: mainindex
      extra:
        core: extraindex

The definition above creates a single Solr 9.6 server. That server has two cores defined:

  • mainindex, the configuration for which is in the .platform/core1-conf directory
  • extraindex, the configuration for which is in the .platform/core2-conf directory

It then defines two endpoints: main is connected to the mainindex core while extra is connected to the extraindex core. Two endpoints may be connected to the same core but at this time there would be no reason to do so. Additional options may be defined in the future.

Each endpoint is then available in the relationships definition in .platform.app.yaml. For example, to allow an application to talk to both of the cores defined above, its configuration should contain the following:

.platform.app.yaml
name: myapp

type: "php:8.4"

[...]

relationships:
  solrsearch1: "solr:main"
  solrsearch2: "solr:extra"

That is, the application’s environment would include a solrsearch1 relationship that connects to the main endpoint, which is the mainindex core, and a solrsearch2 relationship that connects to the extra endpoint, which is the extraindex core.

The relationships array would then look something like the following:

{
  "solrsearch1": [
    {
      "path": "solr/mainindex",
      "host": "248.0.65.197",
      "scheme": "solr",
      "port": 8080
    }
  ],
  "solrsearch2": [
    {
      "path": "solr/extraindex",
      "host": "248.0.65.197",
      "scheme": "solr",
      "port": 8080
    }
  ]
}
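
In application code you can then read each relationship by name; for example, a minimal Python sketch using the platformshconfig library (relationship names as defined above):

from platformshconfig import Config

config = Config()

# Credentials for each endpoint, keyed by relationship name.
main = config.credentials('solrsearch1')    # the mainindex core
extra = config.credentials('solrsearch2')   # the extraindex core

main_url = f"http://{main['host']}:{main['port']}/{main['path']}"
extra_url = f"http://{extra['host']}:{extra['port']}/{extra['path']}"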

Configsets

For even more customizability, it’s also possible to define Solr configsets. For example, the following snippet would define one configset, which would be used by all cores. Specific details can then be overridden by individual cores using core_properties, which is equivalent to the Solr core.properties file.

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: solr:8.4
  disk: 1024
  configuration:
    configsets:
      mainconfig: !archive "configsets/solr8"
    cores:
      english_index:
        core_properties: |
          configSet=mainconfig
          schema=english/schema.xml          
      arabic_index:
        core_properties: |
          configSet=mainconfig
          schema=arabic/schema.xml          
    endpoints:
      english:
        core: english_index
      arabic:
        core: arabic_index

In this example, .platform/configsets/solr8 contains the configuration definition for multiple cores. There are then two cores created:

  • english_index uses the defined configset, but specifically the .platform/configsets/solr8/english/schema.xml file
  • arabic_index is identical except for using the .platform/configsets/solr8/arabic/schema.xml file.

Each of those cores is then exposed as its own endpoint.

Note that not all core.properties features make sense to specify in core_properties. Some keys, such as name and dataDir, aren't supported and may result in a Solr configuration that fails to work as intended, or at all.

Default configuration

Default for version 9+

If you don’t specify any configuration, the following default is used:

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: solr:9.6
  configuration:
    cores:
      collection1:
        conf_dir: !archive "example"
    endpoints:
      solr:
        core: collection1

The example configuration directory is equivalent to the Solr example configuration set. This default configuration is designed only for testing. It's strongly recommended that you define your own configuration with a custom core and endpoint.

Default for versions below 9

If you don’t specify any configuration, the following default is used:

.platform/services.yaml
# The name of the service container. Must be unique within a project.
solr:
  type: solr:8.11
  configuration:
    cores:
      collection1: {}
    endpoints:
      solr:
        core: collection1

The default configuration is based on an older version of the Drupal 8 Search API Solr module that is no longer in use. It's strongly recommended that you define your own configuration with a custom core and endpoint.

Limitations

The recommended maximum size for a configuration directory (zipped) is 2 MB. Monitor your configuration directories to make sure they stay below that limit: if a zipped directory grows beyond it, performance declines and deploys take longer. The directory archives are compressed and string encoded. To get an idea of the archive size, run the following inside the directory:

echo $(($(tar czf - . | base64 | wc -c )/(1024*1024))) Megabytes

The configuration directory is meant to hold configuration data only: small collections of key/value settings, much like a data dictionary. The best way to keep the size down is to restrict the directory contents to plain configuration files. Including binary data such as plugin .jar files inflates the archive size and isn't recommended.

Accessing the Solr server administrative interface

Because Solr uses HTTP for both its API and its admin interface, you can access the admin interface over an SSH tunnel:

platform tunnel:single --relationship RELATIONSHIP_NAME

By default, this opens a tunnel at 127.0.0.1:30000.

You can now open http://localhost:30000/solr/ in a browser to access the Solr admin interface. Note that you can’t create indexes or users this way, but you can browse the existing indexes and manipulate the stored data.
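
For example, once the tunnel is open you could check core status through Solr's CoreAdmin API (the port matches the default tunnel above; adjust it if yours differs):

curl 'http://localhost:30000/solr/admin/cores?action=STATUS'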

For Dedicated Gen 2, use ssh -L 8888:localhost:8983 USER@CLUSTER_NAME.ent.platform.sh to open a tunnel instead, after which the Solr server administrative interface is available at http://localhost:8888/solr/.

Available plugins

This is the complete list of plugins that are available and loaded by default in Solr 8.11 and 9.x:

  • JTS: Library for creating and manipulating vector geometry.
  • ICU4J: Library providing Unicode and globalization support.

Upgrading

The Solr data format sometimes changes between versions in incompatible ways. Solr doesn't include a data upgrade mechanism, as it's expected that all indexes can be regenerated from stable data if needed. To upgrade (or downgrade) Solr, you need to create a new service from scratch.

There are two ways of doing that.

Destructive

In your .platform/services.yaml file, change the version of your Solr service and its name. Be sure to also update the reference to the changed service name in the corresponding application's relationships block.
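
For example, a hypothetical rename might look like the following (the service name solrsearch and the versions are illustrative):

.platform/services.yaml
# Hypothetical example: the renamed service with the new version.
solrsearch:
  type: solr:9.6
  disk: 1024

.platform.app.yaml
# Point the existing relationship at the renamed service.
relationships:
  solr:
    service: solrsearch
    endpoint: solr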

When you push that to Platform.sh, the old service is deleted and a new one with the new name is created, with no data. You can then have your application reindex data as appropriate.

This approach has the downside of temporarily having an empty Solr instance, which your application may or may not handle gracefully, and needing to rebuild your index afterward. Depending on the size of your data, that could take a while.

Transitional

For a transitional approach, you temporarily run two Solr services. Add a second Solr service with the new version and a new name, and give it a new relationship in .platform.app.yaml, as in the sketch below. You can optionally run in that configuration for a while to let your application populate indexes in the new service as well.
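
A minimal sketch of the intermediate state might look like this (the names solr9 and newsearch, and the versions, are illustrative):

.platform/services.yaml
# Old and new services running side by side during the transition.
solr:
  type: solr:8.11
  disk: 1024
solr9:
  type: solr:9.6
  disk: 1024

.platform.app.yaml
relationships:
  solrsearch:
    service: solr
    endpoint: solr
  newsearch:
    service: solr9
    endpoint: solr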

Once you’re ready to cut over, remove the old Solr service and relationship. You may optionally have the new Solr service use the old relationship name if that’s easier for your application to handle. Your application is now using the new Solr service.

This approach has the benefit of never being without a working Solr instance. On the downside, it requires temporarily running two Solr servers, each of which consumes resources and needs adequate disk space. Depending on the size of your data, that may be a lot of disk space.
