Python
Platform.sh supports deploying Python applications. Your application can use a WSGI-based application server (Gunicorn or uWSGI), Tornado, Twisted, or a Python 3.5+ asyncio-based server.
Supported
| Grid | Dedicated |
|------|-----------|
| 2.7, 3.5, 3.6, 3.7, 3.8 | None available |
Support libraries
While it is possible to read the environment directly from your application, it is generally easier and more robust to use the `platformshconfig` pip package, which handles decoding of service credential information for you.
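For example, here is a minimal sketch of reading service credentials with `platformshconfig` (the relationship name `database` is only an illustrative assumption; use whatever relationship names your configuration defines):

```python
from platformshconfig import Config

# Config decodes Platform.sh environment variables for you, including
# the base64-encoded PLATFORM_RELATIONSHIPS variable.
config = Config()

# Guard so the code also runs outside a Platform.sh environment.
if config.is_valid_platform():
    # 'database' is a hypothetical relationship name for this sketch.
    credentials = config.credentials('database')
    host = credentials['host']
    port = credentials['port']
```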
WSGI-based configuration
In this example, we use Gunicorn to run our WSGI application. Configure the `.platform.app.yaml` file with a few key settings, as listed below; a complete example is included at the end of this section.
- Specify the language of your application (available versions are listed above):

  ```yaml
  type: 'python:3.8'
  ```
- Build your application with the build hook, assuming your pip dependencies are stored in `requirements.txt` and a `setup.py` sits at the root of your application folder to execute build steps:

  ```yaml
  hooks:
    build: |
      pip install -r requirements.txt
      pip install -e .
      pip install gunicorn
  ```

  These are installed as global dependencies in your environment.
- Configure the command you use to start serving your application (this must be a foreground-running process) under the `web` section, e.g.:

  ```yaml
  web:
    commands:
      start: "gunicorn -b 0.0.0.0:$PORT project.wsgi:application"
  ```

  This assumes the WSGI file is `project/wsgi.py` and the WSGI application object is named `application` in that file (see the minimal sketch after this list).

- Define the web locations your application is using:

  ```yaml
  web:
    locations:
      "/":
        root: ""
        passthru: true
        allow: false
      "/static":
        root: "static/"
        allow: true
  ```

  This configuration asks the web server to serve HTTP requests at "/static" from the static files stored in the `/app/static/` folder, while everything else is forwarded to your application server.

- Create any read/write mounts. The root file system is read-only, so you must explicitly declare writable mounts:

  ```yaml
  mounts:
    tmp:
      source: local
      source_path: tmp
    logs:
      source: local
      source_path: logs
  ```

  This setting allows your application to write files to `/app/tmp` and store logs in `/app/logs`.
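For reference, here is a minimal sketch of the assumed `project/wsgi.py` (in a real project, a framework such as Django would generate this module and its `application` object for you):

```python
# project/wsgi.py -- matches the "project.wsgi:application" target
# used in the start command above.

def application(environ, start_response):
    # A bare WSGI callable returning a fixed response; purely illustrative.
    body = b'Hello from Platform.sh!'
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ])
    return [body]
```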
Then, set up the routes to your application in `.platform/routes.yaml`:

```yaml
"https://{default}/":
  type: upstream
  upstream: "app:http"
```
Here is the complete `.platform.app.yaml` file:

```yaml
name: app
type: python:3.8
web:
  commands:
    start: "gunicorn -b 0.0.0.0:$PORT project.wsgi:application"
  locations:
    "/":
      root: ""
      passthru: true
      allow: false
    "/static":
      root: "static/"
      allow: true
hooks:
  build: |
    pip install -r requirements.txt
    pip install -e .
    pip install gunicorn
mounts:
  tmp:
    source: local
    source_path: tmp
  logs:
    source: local
    source_path: logs
disk: 512
```
Using the asyncio module
The Gunicorn-based WSGI example above can be modified to use the Python 3.5+ asyncio module.
- Change the `type` to `python:3.6`.

- Change the start command to use asyncio:

  ```yaml
  web:
    commands:
      start: "gunicorn -b 0.0.0.0:$PORT -k gaiohttp project.wsgi:application"
  ```

- Add `aiohttp` as a pip dependency in your build hook:

  ```yaml
  hooks:
    build: |
      pip install -r requirements.txt
      pip install -e .
      pip install gunicorn aiohttp
  ```
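Taken together, the asyncio variant of the configuration would look roughly like this (a sketch assembled from the steps above; `locations`, `mounts`, and `disk` are unchanged from the complete WSGI example):

```yaml
name: app
type: python:3.6
web:
  commands:
    start: "gunicorn -b 0.0.0.0:$PORT -k gaiohttp project.wsgi:application"
hooks:
  build: |
    pip install -r requirements.txt
    pip install -e .
    pip install gunicorn aiohttp
# locations, mounts, and disk as in the complete WSGI example above.
```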
Accessing services
To access various services with Python, see the following examples. The individual service pages have more information on configuring each service.
Elasticsearch

```python
import elasticsearch
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # Get the credentials to connect to the Elasticsearch service.
    credentials = config.credentials('elasticsearch')

    try:
        # The Elasticsearch library lets you connect to multiple hosts.
        # On Platform.sh Standard there is only a single host so just register that.
        hosts = {
            "scheme": credentials['scheme'],
            "host": credentials['host'],
            "port": credentials['port']
        }

        # Create an Elasticsearch client object.
        client = elasticsearch.Elasticsearch([hosts])

        # Index a few documents.
        es_index = 'my_index'
        es_type = 'People'

        params = {
            "index": es_index,
            "type": es_type,
            "body": {"name": ''}
        }

        names = ['Ada Lovelace', 'Alonzo Church', 'Barbara Liskov']

        ids = {}
        for name in names:
            params['body']['name'] = name
            ids[name] = client.index(index=params["index"], doc_type=params["type"], body=params['body'])

        # Force just-added items to be indexed.
        client.indices.refresh(index=es_index)

        # Search for documents.
        result = client.search(index=es_index, body={
            'query': {
                'match': {
                    'name': 'Barbara Liskov'
                }
            }
        })

        table = '''<table>
<thead>
<tr><th>ID</th><th>Name</th></tr>
</thead>
<tbody>'''

        if result['hits']['hits']:
            for record in result['hits']['hits']:
                table += '''<tr><td>{0}</td><td>{1}</td></tr>\n'''.format(record['_id'], record['_source']['name'])
            table += '''</tbody>\n</table>\n'''

        # Delete documents.
        params = {
            "index": es_index,
            "type": es_type,
        }

        for name in names:
            client.delete(index=params['index'], doc_type=params['type'], id=ids[name]['_id'])

        return table

    except Exception as e:
        return e
```
Kafka

```python
from json import dumps
from json import loads

from kafka import KafkaConsumer, KafkaProducer
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # Get the credentials to connect to the Kafka service.
    credentials = config.credentials('kafka')

    try:
        kafka_server = '{}:{}'.format(credentials['host'], credentials['port'])

        # Producer
        producer = KafkaProducer(
            bootstrap_servers=[kafka_server],
            value_serializer=lambda x: dumps(x).encode('utf-8')
        )
        for e in range(10):
            data = {'number': e}
            producer.send('numtest', value=data)

        # Consumer
        consumer = KafkaConsumer(
            bootstrap_servers=[kafka_server],
            auto_offset_reset='earliest'
        )
        consumer.subscribe(['numtest'])

        output = ''
        # For demonstration purposes so it doesn't block.
        for e in range(10):
            message = next(consumer)
            output += str(loads(message.value.decode('UTF-8'))["number"]) + ', '

        # What a real implementation would do instead.
        # for message in consumer:
        #     output += loads(message.value.decode('UTF-8'))["number"]

        return output

    except Exception as e:
        return e
```
Memcached

```python
import pymemcache
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # Get the credentials to connect to the Memcached service.
    credentials = config.credentials('memcached')

    try:
        # Try connecting to the Memcached server.
        memcached = pymemcache.Client((credentials['host'], credentials['port']))

        key = "Deploy_day"
        value = "Friday"

        # Set a value.
        memcached.set(key, value)

        # Read it back.
        test = memcached.get(key)

        return 'Found value <strong>{0}</strong> for key <strong>{1}</strong>.'.format(test.decode("utf-8"), key)

    except Exception as e:
        return e
```
MongoDB

```python
from pymongo import MongoClient
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # The 'database' relationship is generally the name of the primary SQL database of an application.
    # It could be anything, though, as in this case where it's called "mongodb".
    credentials = config.credentials('mongodb')

    try:
        formatted = config.formatted_credentials('mongodb', 'pymongo')

        server = '{0}://{1}:{2}@{3}'.format(
            credentials['scheme'],
            credentials['username'],
            credentials['password'],
            formatted
        )

        client = MongoClient(server)

        collection = client.main.starwars

        post = {
            "name": "Rey",
            "occupation": "Jedi"
        }

        post_id = collection.insert_one(post).inserted_id

        document = collection.find_one(
            {"_id": post_id}
        )

        # Clean up after ourselves.
        collection.drop()

        return 'Found {0} ({1})<br />'.format(document['name'], document['occupation'])

    except Exception as e:
        return e
```
MySQL

```python
import pymysql
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # The 'database' relationship is generally the name of the primary SQL database of an application.
    # That's not required, but much of our default automation code assumes it.
    credentials = config.credentials('database')

    try:
        # Connect to the database using pymysql. If using some other abstraction layer you
        # would inject the values from `database` into whatever your abstraction layer asks for.
        conn = pymysql.connect(host=credentials['host'],
                               port=credentials['port'],
                               database=credentials['path'],
                               user=credentials['username'],
                               password=credentials['password'])

        sql = '''
            CREATE TABLE People (
                id SERIAL PRIMARY KEY,
                name VARCHAR(30) NOT NULL,
                city VARCHAR(30) NOT NULL
            )
            '''

        cur = conn.cursor()
        cur.execute(sql)

        sql = '''
            INSERT INTO People (name, city) VALUES
                ('Neil Armstrong', 'Moon'),
                ('Buzz Aldrin', 'Glen Ridge'),
                ('Sally Ride', 'La Jolla');
            '''

        cur.execute(sql)

        # Show table.
        sql = '''SELECT * FROM People'''
        cur.execute(sql)
        result = cur.fetchall()

        table = '''<table>
<thead>
<tr><th>Name</th><th>City</th></tr>
</thead>
<tbody>'''

        if result:
            for record in result:
                table += '''<tr><td>{0}</td><td>{1}</td></tr>\n'''.format(record[1], record[2])
            table += '''</tbody>\n</table>\n'''

        # Drop table.
        sql = '''DROP TABLE People'''
        cur.execute(sql)

        # Close communication with the database.
        cur.close()
        conn.close()

        return table

    except Exception as e:
        return e
```
PostgreSQL

```python
import psycopg2
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # The 'database' relationship is generally the name of the primary SQL database of an application.
    # That's not required, but much of our default automation code assumes it.
    database = config.credentials('postgresql')

    try:
        # Connect to the database.
        conn_params = {
            'host': database['host'],
            'port': database['port'],
            'dbname': database['path'],
            'user': database['username'],
            'password': database['password']
        }

        conn = psycopg2.connect(**conn_params)

        # Open a cursor to perform database operations.
        cur = conn.cursor()

        cur.execute("DROP TABLE IF EXISTS People")

        # Create a table.
        sql = '''
            CREATE TABLE IF NOT EXISTS People (
                id SERIAL PRIMARY KEY,
                name VARCHAR(30) NOT NULL,
                city VARCHAR(30) NOT NULL
            )
            '''

        cur.execute(sql)

        # Insert data.
        sql = '''
            INSERT INTO People (name, city) VALUES
                ('Neil Armstrong', 'Moon'),
                ('Buzz Aldrin', 'Glen Ridge'),
                ('Sally Ride', 'La Jolla');
            '''

        cur.execute(sql)

        # Show table.
        sql = '''SELECT * FROM People'''
        cur.execute(sql)
        result = cur.fetchall()

        table = '''<table>
<thead>
<tr><th>Name</th><th>City</th></tr>
</thead>
<tbody>'''

        if result:
            for record in result:
                table += '''<tr><td>{0}</td><td>{1}</td></tr>\n'''.format(record[1], record[2])
            table += '''</tbody>\n</table>\n'''

        # Drop table.
        sql = "DROP TABLE People"
        cur.execute(sql)

        # Close communication with the database.
        cur.close()
        conn.close()

        return table

    except Exception as e:
        return e
```
RabbitMQ

```python
import pika
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # Get the credentials to connect to the RabbitMQ service.
    credentials = config.credentials('rabbitmq')

    try:
        # Connect to the RabbitMQ server.
        creds = pika.PlainCredentials(credentials['username'], credentials['password'])
        parameters = pika.ConnectionParameters(credentials['host'], credentials['port'], credentials=creds)
        connection = pika.BlockingConnection(parameters)
        channel = connection.channel()

        # Check to make sure that the recipient queue exists.
        channel.queue_declare(queue='deploy_days')

        # Try sending a message over the channel.
        channel.basic_publish(exchange='',
                              routing_key='deploy_days',
                              body='Friday!')

        # Receive the message.
        def callback(ch, method, properties, body):
            print(" [x] Received {}".format(body))

        # Tell RabbitMQ that this particular function should receive messages from our 'deploy_days' queue.
        channel.basic_consume('deploy_days',
                              callback,
                              auto_ack=False)

        # This blocks on waiting for an item from the queue, so comment it out in this demo script.
        # print(' [*] Waiting for messages. To exit press CTRL+C')
        # channel.start_consuming()

        connection.close()

        return " [x] Sent 'Friday!'<br/>"

    except Exception as e:
        return e
```
Redis

```python
from redis import Redis
from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    # Get the credentials to connect to the Redis service.
    credentials = config.credentials('redis')

    try:
        redis = Redis(credentials['host'], credentials['port'])

        key = "Deploy day"
        value = "Friday"

        # Set a value.
        redis.set(key, value)

        # Read it back.
        test = redis.get(key)

        return 'Found value <strong>{0}</strong> for key <strong>{1}</strong>.'.format(test.decode("utf-8"), key)

    except Exception as e:
        return e
```
Solr

```python
import pysolr
from xml.etree import ElementTree as et
import json

from platformshconfig import Config


def usage_example():
    # Create a new Config object to ease reading the Platform.sh environment variables.
    # You can alternatively use os.environ yourself.
    config = Config()

    try:
        # Get the pysolr-formatted connection string.
        formatted_url = config.formatted_credentials('solr', 'pysolr')

        # Create a new Solr client using config variables.
        client = pysolr.Solr(formatted_url)

        # Add a document.
        message = ''
        doc_1 = {
            "id": 123,
            "name": "Valentina Tereshkova"
        }

        result0 = client.add([doc_1], commit=True)
        client.commit()
        message += 'Adding one document. Status (0 is success): {} <br />'.format(json.loads(result0)['responseHeader']['status'])

        # Select one document.
        query = client.search('*:*')
        message += '\nSelecting documents (1 expected): {} <br />'.format(str(query.hits))

        # Delete one document.
        result1 = client.delete(doc_1['id'])
        client.commit()
        message += '\nDeleting one document. Status (0 is success): {}'.format(et.fromstring(result1)[0][0].text)

        return message

    except Exception as e:
        return e
```
Project templates
A number of project templates for Python applications are available on GitHub. Not all of them are proactively maintained, but all can be used as a starting point or reference for building your own website or web application.
Basic Python 3
This template provides the most basic configuration for running a custom Python 3.7 project. It includes the `platformshconfig` package and demonstrates using it to connect to MariaDB and Redis. It can be used to build a very rudimentary application but is intended primarily as a documentation reference. The application starts as a bare Python process with no separate runner.
Python is a general purpose scripting language often used in web development.
Features:
- Python 3.8
- MariaDB 10.4
- Redis 5.0
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Django 2
This template deploys the Django 2 application framework on Platform.sh, using the gunicorn application runner. It also includes a pre-configured PostgreSQL database connection.
New projects should be built using Django 3, but this template serves as a reference for existing sites that are migrating; version 2 is in legacy support.
Features:
- Python 3.8
- PostgreSQL 12
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Django 3
This template deploys the Django 3 application framework on Platform.sh, using the gunicorn application runner. It also includes a pre-configured PostgreSQL database connection.
Django is a Python-based web application framework with a built-in ORM.
Features:
- Python 3.8
- PostgreSQL 12
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Flask
This template demonstrates building the Flask framework for Platform.sh. It includes a minimalist application skeleton that demonstrates how to connect to a MariaDB server for data storage and Redis for caching. The application starts as a bare Python process with no separate runner. It is intended for you to use as a starting point and modify for your own needs.
Flask is a lightweight web microframework for Python.
Features:
- Python 3.8
- MariaDB 10.4
- Redis 5.0
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Pelican
This template provides a basic Pelican skeleton. Only content files need to be committed, as Pelican itself is downloaded at build time via the Pipfile. All files are generated at build time, so at runtime only static files need to be served.
Pelican is a static site generator written in Python and using Jinja for templating.
Features:
- Python 3.8
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Pyramid
This template builds Pyramid on Platform.sh. It includes a minimalist application skeleton that demonstrates how to connect to a MariaDB server for data storage and Redis for caching. It is intended for you to use as a starting point and modify for your own needs.
Pyramid is a web framework written in Python.
Features:
- Python 3.8
- MariaDB 10.4
- Redis 5.0
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Python 3 running UWSGI
This template provides the most basic configuration for running a custom Python 3.7 project. It includes the `platformshconfig` package and demonstrates using it to connect to MariaDB and Redis. It can be used to build a very rudimentary application but is intended primarily as a documentation reference. The application runs through the uWSGI runner.
Python is a general purpose scripting language often used in web development.
Features:
- Python 3.8
- MariaDB 10.4
- Redis 5.0
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.
Wagtail
This template builds the Wagtail CMS on Platform.sh, using the gunicorn application runner. It includes a PostgreSQL database that is configured automatically, and a basic demonstration app that shows how to use it. It is intended for you to use as a starting point and modify for your own needs. You will need to run the command line installation process by logging into the project over SSH after the first deploy.
Wagtail is a web CMS built using the Django framework for Python.
Features:
- Python 3.8
- PostgreSQL 12
- Automatic TLS certificates
- Pipfile-based build
View the repository on GitHub.