Transferring data to and from a Dedicated cluster

Backups of the staging and production instances on a Dedicated cluster are created automatically every six hours. However, those backups are only useful for a full restore of the environment, which can only be performed by the team. At times, you’ll want to make a manual backup yourself.

To create a manual ad-hoc backup of all files on the staging or production environment, use the standard rsync command:

rsync -avzP <USERNAME>@<CLUSTER_NAME>:pub/static/ pub/static/

That will copy all files from the pub/static directory on the production instance to a local pub/static directory, relative to the directory where you run the command.
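For repeated ad-hoc backups, it can help to land each copy in a dated directory so earlier backups aren’t overwritten. A minimal sketch; the rsync line is commented out because it needs your real <USERNAME> and <CLUSTER_NAME> values:

```shell
# Create a dated directory for today's backup
backup_dir="static-backup-$(date +%Y-%m-%d)"
mkdir -p "$backup_dir"

# Then point rsync at it (same placeholders as the command above):
# rsync -avzP <USERNAME>@<CLUSTER_NAME>:pub/static/ "$backup_dir/pub/static/"
```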

Backing up the staging and production database 

To back up your database to your local system, you first need the database credentials.

First, log in to the cluster and run the following command:

echo $PLATFORM_RELATIONSHIPS | base64 --decode | json_pp

Which should give a JSON output containing something like this:

"database" : [
   {
      "path" : "main",
      "service" : "mysqldb",
      "rel" : "mysql",
      "host" : "database.internal",
      "ip" : "",
      "scheme" : "mysql",
      "cluster" : "jyu7wavyy6n6q-main-7rqtwti",
      "username" : "user",
      "password" : "",
      "query" : {
         "is_master" : true
      },
      "port" : 3306
   }
]

The parts you want are the username, the password, and the “path”, which is the database name. Ignore the rest.
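The decode step in that pipeline can be tried locally with made-up data to see what it does. The JSON below is a stand-in, not real credentials:

```shell
# Stand-in for the real environment variable (sample data only)
PLATFORM_RELATIONSHIPS=$(printf '{"database":[{"username":"user","path":"main","port":3306}]}' | base64)

# Same decode step as on the cluster; json_pp only pretty-prints the result
decoded=$(echo "$PLATFORM_RELATIONSHIPS" | base64 --decode)
echo "$decoded"
```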

Now, run the following command on your local computer:

ssh <USERNAME>@<CLUSTER_NAME> 'mysqldump --single-transaction -u <user> -p<pass> -h localhost <dbname> | gzip' > database.gz

That runs a mysqldump command on the server, compresses the output using gzip, and streams it to a file named database.gz on your local computer.

(If you’d prefer, bzip2 and xz are also available.)
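The dump-and-compress pattern can be sanity-checked locally with a stand-in for mysqldump; here printf plays the role of the remote dump, and the commented line sketches how a restore would look (placeholders are the same as above):

```shell
# printf stands in for the remote mysqldump; gzip compresses the stream to a file
printf 'CREATE TABLE t (id INT);\n' | gzip > database.gz

# To inspect or restore later, decompress the stream:
gunzip -c database.gz
# gunzip -c database.gz | mysql -u <user> -p<pass> <dbname>   # restore direction
```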

Synchronizing files from development to staging/production 

To transfer data into either the staging or production environments, you can either download it from your development environment to your local system first, or transfer it directly between environments using SSH-based tools such as scp or rsync.

First, set up SSH agent forwarding so that the environment you connect to can in turn reach the cluster.

Then, with the production branch checked out, run platform ssh to connect to the default development environment. Files are the easiest data to transfer, and can be copied with rsync:

rsync -avzP pub/static/ <USERNAME>@<CLUSTER_NAME>:pub/static/

Replace pub/static with the path to the files on your system, such as web/sites/default/files/. Note that rsync is very picky about trailing / characters; consult the rsync documentation for more options.

Synchronizing the database from development to staging/production 

The database can be copied directly from the development environment to staging or production, but doing so requires first noting the appropriate credentials on both systems.

First, log in to the production environment over SSH:

ssh <USERNAME>@<CLUSTER_NAME>
Once there, you can look up database credentials by running:

echo $PLATFORM_RELATIONSHIPS | base64 --decode | json_pp

Which should give a JSON output containing something like this:

   "database" : [
      {
         "password" : "abc123",
         "username" : "projectname",
         "path" : "projectname",
         "port" : "3306",
         "scheme" : "mysql",
         "host" : "",
         "query" : {
            "is_master" : true,
            "compression" : true
         }
      }
   ]

The parts we want are the host, username, password, and the “path”, which is the database name. Ignore the rest.

Now, in a separate terminal, log in to the development instance using platform ssh. Run the same echo command as above to get the credentials for the database on the development instance. (The JSON will be slightly different, but again we’re only interested in the username, password, host, and “path”/database name.)
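If the full JSON output is noisy, the relevant fields can be filtered out with grep. The values below are samples mirroring the output above, not real credentials:

```shell
# Sample decoded JSON (stand-in for the real $PLATFORM_RELATIONSHIPS output)
json='{"database":[{"password":"abc123","username":"projectname","path":"projectname","host":"database.internal"}]}'

# Split on commas and keep only the fields needed for the dump command
echo "$json" | tr ',' '\n' | grep -E '"(username|password|path|host)"'
```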

With the credentials from both databases, we can construct a command that will export data from the development server and write it directly to the Dedicated cluster’s server.

mysqldump -u <dev_user> -p<dev_password> -h <dev_host> <dev_dbname> --single-transaction | ssh -C <USERNAME>@<CLUSTER_NAME> 'mysql -u <prod_user> -p<prod_password> -h <prod_host> <prod_dbname>'

That dumps all data from the development database as a stream of queries that are run directly against the production database, without ever creating an intermediary file. The -C flag tells SSH to compress the connection, which saves transfer time.