There are times when you might want to trigger a redeployment of your application.
You can redeploy each environment in the management console by clicking the “Redeploy” button on the top right corner.
Redeploys are also available from the Platform.sh CLI via the `platform redeploy` command (an alias of `environment:redeploy`).
Please note that the redeploy will happen after any scheduled builds in either “Running” or “Pending” state.
Despite the name, triggering a redeploy will not cause the build and deploy hooks to rerun for your application. Both hooks are tied to individual commits and are reused until another commit is pushed to the environment. Triggering a redeploy can be useful for granting environment access to a new developer or adding custom TLS certificates, but in those cases only the `post_deploy` hook runs from the beginning. If you want to rerun the build and deploy hooks, you will need to commit and push a small change to your application.
In rare circumstances the build cache, used to speed up the build process, may become corrupted. That may happen if, for example, code is downloaded from a third-party language service like Packagist or npm while that service is experiencing issues. To flush the build cache entirely, run `platform project:clear-build-cache`. That wipes the build cache for the current project; naturally, the next build for each environment will likely take longer as the cache is rebuilt.
These errors indicate your application (or application runner, like PHP-FPM) is crashing or unavailable. Typical causes include:
- Your `.platform.app.yaml` configuration has an error and the process is not starting, or requests are not able to be forwarded to it correctly. Check your `web.commands.start` entry or that your `passthru` configuration is correct.
- The amount of traffic coming to your site exceeds the processing power of your application.
- Certain code path(s) in your application are too slow and timing out.
- A PHP process is crashing because of a segmentation fault (see below).
- A PHP process is killed by the kernel out-of-memory killer (see below).
When trying to upload a large JSON file to your API you might see a `400 Bad Request` response code. Platform.sh enforces a 10MB limit on requests sent with the `application/json` Content-Type header. If you want to send larger files, send them with the `multipart/form-data` Content-Type instead:
$ curl -X POST 'https://example.com/graphql' -F 'file=@large_file.json'
This domain is already claimed by another project. If this is incorrect or you are trying to add a subdomain, please open a ticket with support.
This error is related to Platform.sh’s subdomain hijacking prevention assumptions, and it likely occurred during an attempt to assign subdomains across multiple projects. Consult the documentation linked above for instructions on how to modify your DNS records to bypass some of those assumptions regarding project domain ownership.
One reason Let’s Encrypt certificates may fail to provision on your environments has to do with the 64 character limit Let’s Encrypt places on URLs. If the names of your branches are too long, the Platform.sh generated environment URL will go over this limit, and the certificate will be rejected.
See Let’s Encrypt limits and branch names for a more detailed breakdown of this issue.
One of the billable parameters in your project’s settings is Storage. This global storage pool is allocated among the various services and application containers in your project via the `disk` parameter. The sum of all `disk` parameters in your project’s YAML config files must be less than or equal to the global project storage number.
Error: Resources exceeding plan limit; disk: 8192.00MB > 5120.00MB; try removing a service, or add more storage to your plan
This means that you have allocated, for example, `disk: 4096` for a MySQL service in `services.yaml` and also `disk: 4096` in the `.platform.app.yaml` for your application, while only having the minimum default of 5GB storage for your project as a whole. The solution is either to lower the `disk` parameters to within the limits of 5GB of storage, or to raise the global storage parameter in your project’s settings to at least 10GB.
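As an illustration, an allocation that fits within a 10GB plan could look like the following sketch (the service name and sizes are hypothetical; adjust them to your own project):

```yaml
# .platform/services.yaml -- hypothetical service name and sizes
db:
    type: 'mariadb:10.4'
    disk: 4096   # 4 GB for the database

# .platform.app.yaml -- the application container
# disk: 4096   # 4 GB for the app's writable mounts
#
# Total allocated: 8192 MB, which fits within 10240 MB of plan storage.
```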
Because storage is a billable component of your project, only the project’s owner can make this change.
When you receive a low-disk space notification for your application container:
Use `platform ssh` within your project folder to log in to the container’s shell. Then use the `df` command to check the available writable space for your application:
df -h -x tmpfs -x squashfs | grep -v /run/shared
This command will show the writable mounts on the system, similar to:
Filesystem                                                       Size  Used Avail Use% Mounted on
/dev/mapper/platform-syd7waxqy4n5q--master--7rqtwti----app       2.0G   37M  1.9G   2% /mnt
/dev/mapper/platform-tmp--syd7waxqy4n5q--master--7rqtwti----app  3.9G   42M  3.8G   2% /tmp
The first line shows the storage device that is shared by all of your persistent disk mounts; all defined mounts use a common storage pool. In this example, the application container has been allocated 2 GB of the total disk space, of which 2% (37 MB) is used by all defined mounts.
The second line is the operating system’s temporary directory, which is always the same size.
While you can write to the `/tmp` directory, files there are not guaranteed to persist and may be deleted on deploy.
The sum of all `disk` keys defined in your project’s `.platform/services.yaml` and `.platform.app.yaml` files must be less than or equal to the available storage in your plan.
- Buy extra storage for your project
Each project comes with 5GB of disk storage available to each environment. To increase the disk space available for your project, click “Edit Plan” to increase your storage in increments of 5GB. See Extra Storage for more information.
- Increase your application and services disk space
Once you have enough storage available, you can increase the disk space allocated for your application and services using the `disk` keys in your `.platform.app.yaml` and `.platform/services.yaml` files.
Check the following resources for more details:
For a MariaDB database, the command `platform db:size` will give approximate disk usage as reported by MariaDB. However, due to the way MySQL/MariaDB store and pack data, this number is not always accurate and may be off by as much as 10 percentage points.
+--------------+--------+
| Property     | Value  |
+--------------+--------+
| max          | 2048MB |
| used         | 189MB  |
| percent_used | 9%     |
+--------------+--------+
For the most reliable disk usage warnings, we strongly recommend all customers enable Health notifications on all projects. That will provide you with a push-notification through your choice of channel when the available disk space on any service drops too low.
During the build hook, you may run into the following error depending on the size of your application:
W: [Errno 28] No space left on device: ...
The cause of this issue has to do with the amount of disk provided to the build container before it is deployed. Application images are restricted to 4 GB during build, no matter how much writable disk has been set aside for the deployed application.
Some build tools (e.g., yarn, npm) store caches for different versions of their modules, which can cause the build cache to grow over time beyond the 4GB maximum. Try clearing the build cache and redeploying; in most cases this will resolve the issue.
If for some reason your application requires more than 4 GB during build, you can open a support ticket to have this limit increased. The most disk space available during build still caps off at 8 GB in these cases.
If you receive MySQL error messages like this:
SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded;
This means a process running in your application acquired a lock from MySQL and held it for a long period of time. That is typically caused by one of the following:
- There are multiple places acquiring locks in different order. For example, code path 1 first locks record A and then locks record B. Code path 2, in contrast, first locks record B and then locks record A.
- There is a long running background process executed by your application that holds the lock until it ends.
If you’re using MariaDB 10+, you can use the SQL query `SHOW FULL PROCESSLIST \G` to list DB queries waiting for locks. Find output like the following, and start debugging:
< skipped >
Command: Query
   Time: ...
  State: Waiting for table metadata lock
   Info: SELECT ...
< skipped >
To find active background processes, run `ps aufx` on your application container.
Also, make sure that locks are acquired in a predefined order and released as soon as possible.
There is a single MySQL user, so you cannot use the “DEFINER” access control mechanism for stored programs and views.
When creating a `VIEW`, you may need to explicitly set the `SECURITY` parameter to `INVOKER`:
CREATE OR REPLACE SQL SECURITY INVOKER VIEW `view_name` AS SELECT
Errors such as “PDO Exception ‘MySQL server has gone away’” are usually the result of exhausting your available disk space. Be sure you have sufficient space allocated to the service in `.platform/services.yaml`.
The current disk usage can be checked using the CLI command `platform db:size`. Because of issues with the way InnoDB reports its size, this can be off by up to 20%. As table space can grow rapidly, it is usually advisable to make your database mount size twice the size reported by the `db:size` command.
You are encouraged to add a low-disk warning notification to proactively warn of low disk space before it becomes an issue.
Another possible cause of “MySQL server has gone away” errors is a server timeout. MySQL has a built-in timeout for idle connections, which defaults to 10 minutes. Most typical web connections end long before that is ever approached, but it’s possible that a long-running worker may idle and not need the database for longer than the timeout. In that case the same “server has gone away” message may appear.
If that’s the case, the best way to handle it is to wrap your connection logic in code that detects a “server has gone away” exception and tries to re-establish the connection.
Alternatively, if your worker is idle for too long it can self-terminate. Platform.sh will automatically restart the worker process, and the new process can establish its own new database connection.
Another cause of “MySQL server has gone away” errors can be the size of the database packets. If that is the case, the logs may show warnings like “Error while sending QUERY packet” before the error. One way to resolve the issue is to use the `max_allowed_packet` parameter described above.
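For MariaDB services this parameter can be set under `configuration.properties` in `.platform/services.yaml`. A sketch, assuming a service named `db` (verify the supported range and units against the MySQL/MariaDB service documentation):

```yaml
db:
    type: 'mariadb:10.4'
    disk: 2048
    configuration:
        properties:
            max_allowed_packet: 64   # hypothetical value, in MB
```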
The provided user does not have permission to create databases.
The database is created for you and can be found in the `path` field of the `$PLATFORM_RELATIONSHIPS` environment variable.
Everything will be read-only, except the writable mounts you declare. Writable mounts are there for your data: for file uploads, logs and temporary files. Not for your code. In order to change code on Platform.sh you have to go through Git.
This is what gives you all of the benefits of having repeatable deployments, consistent backups, traceability, and the magically fast creation of new staging/dev environments.
In Platform.sh, you cannot just “hack production”. It is a constraint, but it is a good constraint.
During the build phase of your application, the main filesystem is writable. So you can do whatever you want (e.g. compile code or generate anything you need). But during and after the deploy phase, the main filesystem will be read-only.
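Writable locations are declared as mounts in `.platform.app.yaml`. A minimal sketch, with a hypothetical upload path:

```yaml
mounts:
    'web/uploads':           # hypothetical path, relative to the app root
        source: local        # backed by the container's persistent disk
        source_path: uploads
```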
If you have been granted access to a Platform.sh project but are unable to clone the repository via the management console or with the CLI command `platform get <projectID>`, it is likely because (1) the project has an external repository integration configured with GitHub, GitLab, or Bitbucket, and (2) you have not been granted access to that integrated repository.
It will be necessary to update your access settings on the integrated repository before you can clone and commit to the project. See User access and integrations for more information.
If you check out a project directly via Git rather than with the `platform get` command, the CLI may be unable to determine what project it’s in. If you run a CLI command from within the project directory you’ve checked out but get an error like this:
[RootNotFoundException] Project root not found. This can only be run from inside a project directory.
Then the CLI hasn’t been able to determine the project to use. To fix that, run:
platform project:set-remote <project_id>
`<project_id>` is the random-character ID of the project. It can be found by running `platform projects` from the command line to list all accessible projects. Alternatively, it appears in the management console after the `platform get` command shown there, as well as in the URL of the management console or project domain.
If you see a bare “File not found” error when accessing your Drupal site with a browser, it means you’ve pushed your code as a vanilla project but no `index.php` has been found.
You may see a line like the following in `/var/log/app.log`:
WARNING: [pool web] server reached max_children setting (2), consider raising it
That indicates that the server is receiving more concurrent requests than it has PHP processes allocated, which means some requests will have to wait until another finishes. In this example there are 2 PHP processes that can run concurrently.
Platform.sh sets the number of workers based on the available memory of your container and the estimated average memory size of each process. There are two ways to increase the number of workers:
- Adjust the worker sizing hints for your project.
- Upgrade your subscription on Platform.sh to get more computing resources. To do so, log into your account and edit the project.
If your PHP application is not able to handle the amount of traffic, or it is slow, you should see log lines in `/var/log/app.log` like any of the below:
WARNING: [pool web] child 120, script '/app/public/index.php' (request: "GET /index.php") execution timed out (358.009855 sec), terminating
That means your PHP process is running longer than allowed. You can adjust the `max_execution_time` value in `php.ini`, but there is still a 5-minute hard cap on any web request that cannot be adjusted.
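Rather than shipping a custom `php.ini`, PHP settings can also be set under `variables.php` in `.platform.app.yaml`. A sketch with a hypothetical value (the 5-minute platform cap still applies):

```yaml
variables:
    php:
        max_execution_time: 240   # seconds; requests remain hard-capped at 5 minutes
```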
The most common cause of a timeout is either an infinite loop (which is a bug that you should fix) or the work itself requires a long time to complete. For the latter case, you should consider putting the task into a background job.
The following command will identify the 20 slowest requests in the last hour, which can provide an indication of what code paths to investigate.
grep $(date +%Y-%m-%dT%H --date='-1 hours') /var/log/php.access.log | sort -k 4 -r -n | head -20
If you see that the processing time of certain requests is slow (e.g. taking more than 1000ms), you may wish to consider using a profiler like Blackfire to debug the performance issue.
Otherwise, you may check if the following options are applicable:
- Find the most visited pages and see if they can be cached and/or put behind a CDN. You may refer to how caching works.
- Upgrade your subscription on Platform.sh to get more computing resources. To do so, log into your account and edit the project subscription.
If your PHP process crashed with a segmentation fault, you should see log lines in `/var/log/app.log` like below:
WARNING: [pool web] child 112 exited on signal 11 (SIGSEGV) after 7.405936 seconds from start
This is complicated: either a PHP extension is hitting a segmentation fault, or your PHP application code is crashing. You should review recent changes in your application and try to find the cause, possibly with the help of XDebug.
If your PHP process is killed by the kernel, you should see log lines in `/var/log/app.log` like this:
WARNING: [pool web] child 429 exited on signal 9 (SIGKILL) after 50.938617 seconds from start
That means the memory usage of your container exceeds the limit allowed on your plan so the kernel kills the offending process. You should try the following:
- Check if the memory usage of your application is expected and try to optimize it.
- Use sizing hints to reduce the number of PHP workers, which reduces the memory footprint.
- Upgrade your subscription on Platform.sh to get more computing resources. To do so, log into your account and edit the project.
If you see a build or deployment running longer than expected, that may be one of the following cases:
- The build is blocked by a process in your build hook.
- The deployment is blocked by a long running process in your deploy hook.
- The deployment is blocked by a long running cron job in the environment.
- The deployment is blocked by a long running cron job in the parent environment.
To determine whether your environment is stuck in the build or the deployment, look at the build log available in the management console. If you see a line similar to the following:
Re-deploying environment w6ikvtghgyuty-drupal8-b3dsina.
It means the build has completed successfully and the system is trying to deploy. If that line never appears, the build is stuck.
For a blocked build (when you don’t find the `Re-deploying environment ...` line), create a support ticket to have the build killed. In most regions the build will self-terminate after one hour. In older regions (US and EU) the build will need to be killed by our support team.
When a deployment is blocked, you should try the following:
- Use SSH to connect to your environment. Find any long-running cron jobs or deploy hooks on the environment by running `ps afx`. Once you have identified the long-running process, kill it with `kill <PID>`, where `<PID>` is the process ID shown by `ps afx`.
- If you’re performing a “Sync” or “Activate” operation on an environment and the process is stuck, use SSH to connect to the parent environment and identify any long-running cron jobs with `ps afx`. Kill the job(s) if you see any.
Builds that take a long time or fail are a common problem. Most of the time they’re related to an application issue, and they can be hard to troubleshoot without guidance.
Here are a few tips that can help you solve the issues you are experiencing.
Invisible errors during the build and deploy phases can cause increased wait times, failed builds, and other problems. Investigating each log and fixing errors is essential.
Related documentation: Accessing logs
If you encounter the message `connect() to unix:/run/app.sock failed (11: Resource temporarily unavailable)` in `/var/log/error.log`, it is caused by all of the PHP workers being busy.
This can be because too many requests are coming in at once, or the requests are taking too long to be processed (such as with calls to external third party servers without timeouts).
To address the issue, you can:
- Lower the memory consumption of each request so that the number of PHP workers is automatically raised. This can be customized with the `runtime.sizing_hints.request_memory` key in your `.platform.app.yaml` file; consult the PHP-FPM sizing documentation for more details.
- Add a CDN.
- Set up caching.
- Follow the global performance tuning recommendations.
- Remove stale plugins and extensions when using a CMS.
- Upgrade the container size to get more resources.
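A sketch of the sizing-hint adjustment in `.platform.app.yaml`, with a hypothetical per-request memory estimate (check the PHP-FPM sizing documentation for the default and allowed range):

```yaml
runtime:
    sizing_hints:
        request_memory: 45   # MB per request; a lower estimate yields more PHP workers
```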
Hooks are frequently the cause of long build times. If they run into problems, they can cause the build to fail or hang indefinitely.
The build hook can be tested in your local environment. Because the deployed environment on Platform.sh is read-only, the build hook cannot be rerun there.
Deploy hooks can be tested either locally or by logging into the application over SSH and running them there. They should execute safely but be aware that depending on what your scripts are doing they may have an adverse impact on the running application (e.g., flushing all caches).
Furthermore, you can test your hooks with these Linux commands to help figure out any problems:
time $cmd       # Print execution time
strace -T $cmd  # Print a system call report
Related documentation: Build and deploy hooks
Containers cannot be shut down while long-running tasks are active. That means a long-running cron job will block a container from being shut down to make way for a new deploy.
For that reason, make sure your custom cron jobs execution times are low and that they are running properly. Be aware that cron jobs may invoke other services in unexpected ways, which can increase execution time.
`drush core-cron` runs installed modules’ cron tasks. Those can include, for example, evicting invalid cache entries, updating database records, and regenerating assets. Be sure to benchmark the `drush core-cron` command frequently in all your environments, as it is a common source of performance issues.
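For reference, a Drupal cron job in `.platform.app.yaml` might be declared as in this sketch (the job name, schedule, and path are hypothetical):

```yaml
crons:
    drupal:
        spec: '*/20 * * * *'             # every 20 minutes; keep runs short
        cmd: 'cd web ; drush core-cron'
```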
Related documentation: Cron and scheduled tasks
For existing Elite and Enterprise customers receiving this error message:
[ApiFeatureMissingException] This project does not support source operations.
it is because the feature is not currently supported on pull request environments.