PostgreSQL (Database service)

PostgreSQL is a high-performance, standards-compliant relational SQL database.

See the PostgreSQL documentation for more information.

Supported versions

  • 9.6
  • 10
  • 11

Deprecated versions

The following versions are available but are not receiving security updates from upstream, so their use is not recommended. They will be removed at some point in the future.

  • 9.3


Relationship

The format of the credentials exposed in the $PLATFORM_RELATIONSHIPS environment variable:

    {
        "username": "main",
        "scheme": "pgsql",
        "service": "postgresql",
        "ip": "",
        "hostname": "",
        "cluster": "rjify4yjcwxaa-master-7rqtwti",
        "host": "postgresql.internal",
        "rel": "postgresql",
        "path": "main",
        "query": {
            "is_master": true
        },
        "password": "main",
        "type": "postgresql:11",
        "port": 5432
    }

Usage example

In your .platform/services.yaml add:

    mydatabase:
        type: postgresql:11
        disk: 1024

Add a relationship to the service in your .platform.app.yaml:

    relationships:
        database: "mydatabase:postgresql"

For PHP, in your .platform.app.yaml add:

    runtime:
        extensions:
            - pdo_pgsql

You can then use the service in a configuration file of your application with something like:



In PHP:

use Platformsh\ConfigReader\Config;

// Create a new config object to ease reading the environment variables.
// You can alternatively use getenv() yourself.
$config = new Config();

// The 'database' relationship is generally the name of the primary SQL database of an application.
// It could be anything, though, as in the case here where it's called "postgresql".
$credentials = $config->credentials('postgresql');

try {
    // Connect to the database using PDO.  If using some other abstraction layer you would
    // inject the values from $credentials into whatever your abstraction layer asks for.
    $dsn = sprintf('pgsql:host=%s;port=%d;dbname=%s', $credentials['host'], $credentials['port'], $credentials['path']);
    $conn = new \PDO($dsn, $credentials['username'], $credentials['password'], [
        // Always use Exception error mode with PDO, as it's more reliable,
        // and we don't have to mess around with cursors and unbuffered queries by default.
        \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION,
    ]);

    // Creating a table.
    $sql = "CREATE TABLE People (
      id SERIAL PRIMARY KEY,
      name VARCHAR(30) NOT NULL,
      city VARCHAR(30) NOT NULL
    )";
    $conn->query($sql);

    // Insert data.
    $sql = "INSERT INTO People (name, city) VALUES
        ('Neil Armstrong', 'Moon'),
        ('Buzz Aldrin', 'Glen Ridge'),
        ('Sally Ride', 'La Jolla');";
    $conn->query($sql);

    // Show table.
    $sql = "SELECT * FROM People";
    $result = $conn->query($sql);
    $result->setFetchMode(\PDO::FETCH_OBJ);

    if ($result) {
        print "<table>\n<thead>\n<tr><th>Name</th><th>City</th></tr>\n</thead>\n<tbody>\n";
        foreach ($result as $record) {
            printf("<tr><td>%s</td><td>%s</td></tr>\n", $record->name, $record->city);
        }
        print "</tbody>\n</table>\n";
    }

    // Drop table.
    $sql = "DROP TABLE People";
    $conn->query($sql);

} catch (\Exception $e) {
    print $e->getMessage();
}

In Node.js:

const pg = require('pg');
const config = require("platformsh-config").config();

exports.usageExample = async function() {

    const credentials = config.credentials('postgresql');

    const client = new pg.Client({
        host: credentials.host,
        port: credentials.port,
        user: credentials.username,
        password: credentials.password,
        database: credentials.path,
    });
    await client.connect();

    let sql = '';

    // Creating a table.
    sql = `CREATE TABLE IF NOT EXISTS People (
      id SERIAL PRIMARY KEY,
      name VARCHAR(30) NOT NULL,
      city VARCHAR(30) NOT NULL
    )`;
    await client.query(sql);

    // Insert data.
    sql = `INSERT INTO People (name, city) VALUES
        ('Neil Armstrong', 'Moon'),
        ('Buzz Aldrin', 'Glen Ridge'),
        ('Sally Ride', 'La Jolla');`;
    await client.query(sql);

    // Show table.
    sql = `SELECT * FROM People`;
    let result = await client.query(sql);

    let output = '';

    if (result.rows.length > 0) {
        output += `<table>\n<thead>\n<tr><th>Name</th><th>City</th></tr>\n</thead>\n<tbody>\n`;

        result.rows.forEach((row) => {
            output += `<tr><td>${row.name}</td><td>${row.city}</td></tr>\n`;
        });

        output += `</tbody>\n</table>\n`;
    }

    // Drop table.
    sql = `DROP TABLE People`;
    await client.query(sql);

    await client.end();

    return output;
};

In Python:

import psycopg2
from platformshconfig import Config

def usage_example():
    # Create a new Config object to ease reading the environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # The 'database' relationship is generally the name of the primary SQL database of an application.
    # That's not required, but much of our default automation code assumes it.
    database = config.credentials('postgresql')

    try:
        # Connect to the database.
        conn_params = {
            'host': database['host'],
            'port': database['port'],
            'dbname': database['path'],
            'user': database['username'],
            'password': database['password']
        }

        conn = psycopg2.connect(**conn_params)

        # Open a cursor to perform database operations.
        cur = conn.cursor()

        # Creating a table.
        sql = '''
                CREATE TABLE People (
                id SERIAL PRIMARY KEY,
                name VARCHAR(30) NOT NULL,
                city VARCHAR(30) NOT NULL
                )
                '''
        cur.execute(sql)

        # Insert data.
        sql = '''
                INSERT INTO People (name, city) VALUES
                ('Neil Armstrong', 'Moon'),
                ('Buzz Aldrin', 'Glen Ridge'),
                ('Sally Ride', 'La Jolla');
                '''
        cur.execute(sql)

        # Show table.
        sql = '''SELECT * FROM People'''
        cur.execute(sql)
        result = cur.fetchall()

        table = '<table>\n<thead>\n<tr><th>Name</th><th>City</th></tr>\n</thead>\n<tbody>\n'

        if result:
            for record in result:
                table += '''<tr><td>{0}</td><td>{1}</td></tr>\n'''.format(record[1], record[2])
            table += '''</tbody>\n</table>\n'''

        # Drop table.
        sql = "DROP TABLE People"
        cur.execute(sql)

        # Close communication with the database.
        cur.close()
        conn.close()

        return table

    except Exception as e:
        return e

In Java:

package sh.platform.languages.sample;

import sh.platform.config.Config;
import sh.platform.config.PostgreSQL;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.function.Supplier;

public class PostgreSQLSample implements Supplier<String> {

    public String get() {
        StringBuilder logger = new StringBuilder();

        // Create a new config object to ease reading the environment variables.
        // You can alternatively use getenv() yourself.
        Config config = new Config();

        // The 'database' relationship is generally the name of the primary SQL database of an application.
        // It could be anything, though, as in the case here where it's called "postgresql".
        PostgreSQL database = config.getCredential("postgresql", PostgreSQL::new);
        DataSource dataSource = database.get();

        // Connect to the database.
        try (Connection connection = dataSource.getConnection()) {

            // Creating a table.
            String sql = "CREATE TABLE JAVA_FRAMEWORKS (" +
                    " id SERIAL PRIMARY KEY," +
                    " name VARCHAR(30) NOT NULL)";

            final Statement statement = connection.createStatement();
            statement.execute(sql);

            // Insert data.
            sql = "INSERT INTO JAVA_FRAMEWORKS (name) VALUES" +
                    "('Spring')," +
                    "('Jakarta EE')," +
                    "('Eclipse JNoSQL')";
            statement.execute(sql);

            // Show table.
            sql = "SELECT * FROM JAVA_FRAMEWORKS";
            final ResultSet resultSet = statement.executeQuery(sql);
            while (resultSet.next()) {
                int id = resultSet.getInt("id");
                String name = resultSet.getString("name");
                logger.append(String.format("the JAVA_FRAMEWORKS id %d the name %s ", id, name));
            }

            // Drop table.
            statement.execute("DROP TABLE JAVA_FRAMEWORKS");
            return logger.toString();
        } catch (SQLException exp) {
            throw new RuntimeException("An error occurred when executing PostgreSQL queries: " + exp.getMessage());
        }
    }
}

Exporting data

The easiest way to download all data in a PostgreSQL instance is with the Platform CLI. If you have a single SQL database, the following command will export all data using the pg_dump command to a local file:

platform db:dump

If you have multiple SQL databases it will prompt you which one to export. You can also specify one by relationship name explicitly:

platform db:dump --relationship database

By default the file will be uncompressed. If you want to compress it, use the --gzip (-z) option:

platform db:dump --gzip

You can use the --stdout option to pipe the result to another command. For example, if you want to create a bzip2-compressed file, you can run:

platform db:dump --stdout | bzip2 > dump.sql.bz2

Importing data

The easiest way to load data into a database is to pipe an SQL dump through the platform sql command, like so:

platform sql < my_database_snapshot.sql

That will run the database snapshot against the SQL database on the current environment. That will work for any SQL file, so the usual caveats about importing an SQL dump apply (e.g., it's best to run against an empty database). As with exporting, you can also specify a specific environment and a specific database relationship to use, if there are multiple.

platform sql --relationship database -e master < my_database_snapshot.sql

Note: Importing a database snapshot is a destructive operation. It will overwrite data already in your database. Taking a snapshot or a database export before doing so is strongly recommended.

Extensions

Platform.sh supports a number of PostgreSQL extensions. To enable them, list them under the configuration.extensions key in your services.yaml file, like so:

    mydatabase:
        type: "postgresql:11"
        disk: 1025
        configuration:
            extensions:
                - pg_trgm
                - hstore

In this case you will have pg_trgm installed, which provides functions for determining the similarity of text based on trigram matching, and hstore, which provides a key-value store.
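To illustrate what pg_trgm computes, here is a rough Python sketch of trigram similarity: the ratio of three-character sequences shared by two strings to all distinct sequences in either. This is a simplified approximation; pg_trgm also lowercases input and has its own word-splitting and padding rules.

```python
def trigrams(text):
    # Pad with two leading spaces and one trailing space, roughly as pg_trgm does,
    # then collect every distinct three-character window.
    padded = "  " + text.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    # Jaccard-style ratio: shared trigrams over all distinct trigrams.
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)
```

In PostgreSQL the equivalent would be `SELECT similarity('word', 'words');` once the extension is enabled.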

Available extensions

The following is the complete list of supported extensions. Note that you cannot currently add custom extensions not listed here.

  • address_standardizer - Used to parse an address into constituent elements. Generally used to support geocoding address normalization step.
  • address_standardizer_data_us - Address Standardizer US dataset example
  • adminpack - administrative functions for PostgreSQL
  • autoinc - functions for autoincrementing fields
  • bloom - bloom access method - signature file based index (requires 9.6 or higher)
  • btree_gin - support for indexing common datatypes in GIN
  • btree_gist - support for indexing common datatypes in GiST
  • chkpass - data type for auto-encrypted passwords
  • citext - data type for case-insensitive character strings
  • cube - data type for multidimensional cubes
  • dblink - connect to other PostgreSQL databases from within a database
  • dict_int - text search dictionary template for integers
  • dict_xsyn - text search dictionary template for extended synonym processing
  • earthdistance - calculate great-circle distances on the surface of the Earth
  • file_fdw - foreign-data wrapper for flat file access
  • fuzzystrmatch - determine similarities and distance between strings
  • hstore - data type for storing sets of (key, value) pairs
  • insert_username - functions for tracking who changed a table
  • intagg - integer aggregator and enumerator (obsolete)
  • intarray - functions, operators, and index support for 1-D arrays of integers
  • isn - data types for international product numbering standards
  • lo - Large Object maintenance
  • ltree - data type for hierarchical tree-like structures
  • moddatetime - functions for tracking last modification time
  • pageinspect - inspect the contents of database pages at a low level
  • pg_buffercache - examine the shared buffer cache
  • pg_freespacemap - examine the free space map (FSM)
  • pg_prewarm - prewarm relation data (requires 9.6 or higher)
  • pg_stat_statements - track execution statistics of all SQL statements executed
  • pg_trgm - text similarity measurement and index searching based on trigrams
  • pg_visibility - examine the visibility map (VM) and page-level visibility info (requires 9.6 or higher)
  • pgcrypto - cryptographic functions
  • pgrouting - pgRouting Extension (requires 9.6 or higher)
  • pgrowlocks - show row-level locking information
  • pgstattuple - show tuple-level statistics
  • plpgsql - PL/pgSQL procedural language
  • postgis - PostGIS geometry, geography, and raster spatial types and functions
  • postgis_sfcgal - PostGIS SFCGAL functions
  • postgis_tiger_geocoder - PostGIS tiger geocoder and reverse geocoder
  • postgis_topology - PostGIS topology spatial types and functions
  • postgres_fdw - foreign-data wrapper for remote PostgreSQL servers
  • refint - functions for implementing referential integrity (obsolete)
  • seg - data type for representing line segments or floating-point intervals
  • sslinfo - information about SSL certificates
  • tablefunc - functions that manipulate whole tables, including crosstab
  • tcn - Triggered change notifications
  • timetravel - functions for implementing time travel
  • tsearch2 - compatibility package for pre-8.3 text search functions (obsolete, only available for 9.6 and 9.3)
  • tsm_system_rows - TABLESAMPLE method which accepts number of rows as a limit (requires 9.6 or higher)
  • tsm_system_time - TABLESAMPLE method which accepts time in milliseconds as a limit (requires 9.6 or higher)
  • unaccent - text search dictionary that removes accents
  • uuid-ossp - generate universally unique identifiers (UUIDs)
  • xml2 - XPath querying and XSLT


Could not find driver

If you see this error: Fatal error: Uncaught exception 'PDOException' with message 'could not find driver', it means you are missing the pdo_pgsql PHP extension. You simply need to enable it in your .platform.app.yaml (see above).


Upgrading

PostgreSQL 10 and later include an upgrade utility that can convert databases from previous versions to version 10 or 11. If you upgrade your service from a previous version of PostgreSQL to version 10 or above (by modifying the services.yaml file), the upgrader will run automatically.

The upgrader cannot target PostgreSQL 9 versions, so upgrades from PostgreSQL 9.3 to 9.6 are not supported. Upgrade straight to version 11 instead.

Warning: Make sure you first test your migration on a separate branch, and be sure to take a snapshot of your master environment before you merge this change.

Downgrading is not supported. If, for whatever reason, you want to downgrade, you should dump to SQL, remove the service, recreate the service, and import your dump.