Tutorial: Ship a djust app to production
Going from make dev to a real production deploy is where djust
apps trip people up. The dev server (uvicorn --reload) runs a
single process with in-memory state — perfect for local hacking,
fatal in production, where a single process can't serve all your
traffic and multiple processes can't share in-memory state. The
fix is a small set of swap-outs: Redis for state,
multiple uvicorn workers, an Nginx in front, sticky sessions (or
not — see below), HTTPS for the WebSocket upgrade.
By the end of this tutorial you'll have:
- A production-ready ASGI app behind multiple uvicorn workers that share Redis-backed state.
- An Nginx config that proxies HTTP and upgrades WebSocket connections.
- Sticky sessions turned ON (the simple, default-correct choice) — plus the explanation of when to turn them OFF.
- Healthcheck endpoints that load balancers can probe.
- The production checks every team adds in week 2 and wishes they'd added on day one: graceful shutdown, error monitoring, structured logging, and the security headers checklist.
| You'll learn | Documented in |
|---|---|
| `DJUST_CONFIG['STATE_BACKEND'] = 'redis'` | Production Deployment |
| Uvicorn worker count + reload behavior | Deployment |
| Nginx WebSocket proxy directives | Deployment |
| When sticky sessions matter | This tutorial |
| Production-readiness checklist | This tutorial |
Prerequisites: A working djust app you've been running with
make dev (any of the prior tutorials will do). Linux server access, basic systemd/nginx familiarity, a domain name with DNS pointing at the server, and a Redis instance reachable from the server.
Step 1 — The four-line settings change
The development settings.py needs four production swap-outs:
# settings.py
import os
DEBUG = os.environ.get("DEBUG", "False").lower() == "true"
DJUST_CONFIG = {
"STATE_BACKEND": "redis",
"REDIS_URL": os.environ["REDIS_URL"], # required, not optional
"SESSION_TTL": 7200, # 2h — match your auth session
}
ALLOWED_HOSTS = ["yourapp.com", "www.yourapp.com"]
# Trust the load balancer's X-Forwarded-Proto so request.is_secure()
# returns True for HTTPS-fronted requests. Without this, every
# WebSocket upgrade tries to negotiate over plain HTTP and breaks.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
The four production swap-outs:
| In dev | In prod | Why |
|---|---|---|
| `DEBUG = True` | `DEBUG = False` | Stack traces are info disclosure; templates cache; auto-reload off |
| `STATE_BACKEND='memory'` | `STATE_BACKEND='redis'` | Multi-process workers share state |
| `ALLOWED_HOSTS=['*']` | Real domain list | Defense against host-header attacks |
| (no `SECURE_PROXY_SSL_HEADER`) | Set to `('HTTP_X_FORWARDED_PROTO', 'https')` | WebSocket upgrade respects HTTPS |
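One optional belt-and-braces step: fail fast if a production process somehow starts with dev-only settings. This is a sketch of a plain assertion at the bottom of settings.py, not a djust feature; adjust the names to whatever your config actually uses.
# settings.py (bottom): optional guard, illustrative only
if not DEBUG:
    # Refuse to boot a production process on the in-memory state backend
    assert DJUST_CONFIG["STATE_BACKEND"] == "redis", (
        "Production requires STATE_BACKEND='redis'"
    )
    assert ALLOWED_HOSTS != ["*"], "Set ALLOWED_HOSTS to your real domains"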
Step 2 — Uvicorn under systemd
# /etc/systemd/system/myapp.service
[Unit]
Description=djust app (myapp)
After=network.target redis.service postgresql.service
Wants=redis.service
[Service]
# uvicorn doesn't implement systemd's sd_notify protocol, so plain exec
Type=exec
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
Environment="DJANGO_SETTINGS_MODULE=myproject.settings"
Environment="REDIS_URL=redis://localhost:6379/0"
Environment="DATABASE_URL=postgres://myapp:..."
Environment="DEBUG=False"
ExecStart=/opt/myapp/.venv/bin/uvicorn \
myproject.asgi:application \
--host 127.0.0.1 \
--port 8000 \
--workers 4 \
--proxy-headers \
--forwarded-allow-ips="127.0.0.1" \
--timeout-graceful-shutdown 30
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Three production-relevant flags:
- `--workers 4` — one process per CPU core is the standard starting point. Each worker has its own memory; they share Redis-backed state. More workers = more concurrent connections but also more memory.
- `--proxy-headers --forwarded-allow-ips="127.0.0.1"` — trust `X-Forwarded-For` / `X-Real-IP` headers from the local Nginx, so `request.META["REMOTE_ADDR"]` is the real client IP, not `127.0.0.1`. (A quick way to verify this end to end is sketched below.)
- `--timeout-graceful-shutdown 30` — when systemd sends SIGTERM (deploy, reboot), uvicorn stops accepting new connections but lets in-flight ones finish for up to 30 s. Without this, a deploy mid-WebSocket disconnects every user.
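To confirm the proxy-header plumbing (Step 1's SECURE_PROXY_SSL_HEADER plus the flags above), a throwaway view that echoes what Django sees is enough. This is a sketch; wire it at a temporary URL and remove it before launch.
# myapp/views.py: temporary debug view, illustrative only
from django.http import JsonResponse

def whoami(request):
    return JsonResponse({
        # real client IP if --proxy-headers + X-Forwarded-For are working
        "remote_addr": request.META.get("REMOTE_ADDR"),
        # True if X-Forwarded-Proto + SECURE_PROXY_SSL_HEADER are working
        "is_secure": request.is_secure(),
    })
Hit it once through Nginx over HTTPS; if remote_addr comes back as 127.0.0.1 or is_secure is false, one of the headers isn't making it through.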
Step 3 — Nginx in front
# /etc/nginx/sites-enabled/myapp.conf
upstream myapp {
server 127.0.0.1:8000;
keepalive 32;
}
server {
listen 80;
server_name yourapp.com www.yourapp.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name yourapp.com www.yourapp.com;
ssl_certificate /etc/letsencrypt/live/yourapp.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourapp.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
# Static files served by Nginx, not Django (faster, no Python in path)
location /static/ {
alias /opt/myapp/staticfiles/;
expires 1y;
add_header Cache-Control "public, immutable";
}
# Healthcheck (no Django; load-balancer probes hit this)
location = /nginx-health {
access_log off;
return 200 "ok\n";
add_header Content-Type text/plain;
}
# Everything else — HTTP and WebSocket — goes to uvicorn
location / {
proxy_pass http://myapp;
proxy_http_version 1.1;
# WebSocket upgrade headers — the line everyone forgets
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Real client IP + scheme (paired with SECURE_PROXY_SSL_HEADER)
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Long-lived WebSocket connections need a long read timeout
proxy_read_timeout 3600;
proxy_send_timeout 3600;
}
}
Three lines that get forgotten the most:
- `proxy_set_header Upgrade $http_upgrade;` + `proxy_set_header Connection "upgrade";` — required for WebSocket. Without them, the upgrade negotiation fails and every reactive feature falls back to HTTP polling (or doesn't work at all). (A quick smoke test is sketched after this list.)
- `proxy_set_header X-Forwarded-Proto $scheme;` — paired with Django's `SECURE_PROXY_SSL_HEADER`. Without this, the Django app thinks every request is HTTP even though Nginx is serving HTTPS, and `request.is_secure()` returns False.
- `proxy_read_timeout 3600;` — Nginx defaults to 60 s. A WebSocket sitting idle for 65 s gets killed. Bump it to 1 hour (or whatever matches your session timeout).
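After a config reload, it's worth smoke-testing the upgrade path from outside the server rather than trusting nginx -t alone. A sketch using the third-party websockets package; the /ws/ path here is a placeholder, substitute whatever endpoint your app actually exposes.
# smoke_ws.py: connect through Nginx and confirm the upgrade handshake
import asyncio
import websockets  # pip install websockets

async def main():
    # Fails with a handshake error if the Upgrade/Connection headers
    # are missing from the Nginx location block above.
    async with websockets.connect("wss://yourapp.com/ws/"):
        print("WebSocket handshake OK")

asyncio.run(main())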
Step 4 — Sticky sessions: when, when not
WebSocket connections are sticky by definition — once a client opens a WS to worker N, all subsequent frames go to worker N. That's not the issue. The issue is the next page load in the same browser session.
If sticky sessions are OFF (default for most load balancers), the page-load that comes 30 seconds after the WebSocket disconnect might land on a different worker. The new worker's in-memory WebSocket-state cache is cold. With a Redis-backed state backend, this is fine — the new worker reads the user's state from Redis on demand. With memory-backed state (which you don't run in prod), the user effectively starts fresh.
So: with Redis state backend, sticky sessions are optional. Turn them on if you want to avoid the Redis round-trip on the re-mount; turn them off if you want easier load distribution.
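To see why the cold-cache case is harmless, remember that every worker talks to the same Redis. The sketch below is illustrative only; the key name and value layout are made up, since djust manages its own keys.
# illustrative only: the key layout here is hypothetical
import redis

r = redis.Redis.from_url("redis://localhost:6379/0")
r.set("session:abc123", '{"counter": 4}')  # worker A writes state

# worker B (a different process) reads the same state on the next page load
print(r.get("session:abc123"))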
For Nginx specifically (note that `ip_hash` pins clients to upstream entries, so this layout assumes one uvicorn instance per port rather than a single `--workers 4` process behind one port):
upstream myapp {
ip_hash; # ← sticky by client IP
server 127.0.0.1:8000;
server 127.0.0.1:8001;
server 127.0.0.1:8002;
server 127.0.0.1:8003;
keepalive 32;
}
Or for Cloudflare / AWS ALB, configure session affinity via the load balancer dashboard.
Step 5 — The four checks every team eventually adds
Healthcheck endpoint that touches the DB and Redis
# myapp/views.py
from django.conf import settings
from django.db import connection
from django.http import JsonResponse
import redis
def healthcheck(request):
"""Probed by the load balancer + uptime monitor."""
checks = {"db": False, "redis": False}
try:
with connection.cursor() as cur:
cur.execute("SELECT 1")
checks["db"] = True
except Exception:
pass
try:
r = redis.Redis.from_url(settings.DJUST_CONFIG["REDIS_URL"])
r.ping()
checks["redis"] = True
except Exception:
pass
status = 200 if all(checks.values()) else 503
return JsonResponse(checks, status=status)
Wire at /health/. Configure the load balancer to mark a worker
unhealthy on 5 consecutive 503s — DB connection storms or Redis
flakes get the bad worker out of rotation before it tanks user
requests.
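The URL wiring is one line; a sketch, assuming the view above lives in myapp/views.py:
# myproject/urls.py
from django.urls import path
from myapp.views import healthcheck

urlpatterns = [
    # ... your existing routes ...
    path("health/", healthcheck),
]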
Sentry (or equivalent error monitor)
# settings.py
import sentry_sdk
if not DEBUG and (dsn := os.environ.get("SENTRY_DSN")):
sentry_sdk.init(
dsn=dsn,
traces_sample_rate=0.1,
environment=os.environ.get("SENTRY_ENVIRONMENT", "production"),
send_default_pii=False, # don't auto-send user data
)
Catches the 500s your error pages render and the unhandled
exceptions in event handlers. The `send_default_pii=False` line is
Sentry-specific: with it enabled, Sentry attaches user data
(username, email, IP address) to events. Keep it off unless your
privacy policy explicitly allows it.
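Sentry only sees exceptions that propagate. If an event handler swallows errors, report them explicitly before handling them; a sketch using sentry_sdk's standard capture_exception:
# anywhere in app code: report a handled exception explicitly
import sentry_sdk

def process_event(payload: dict) -> None:
    try:
        payload["value"] / payload["divisor"]  # hypothetical work that may raise
    except Exception as exc:
        sentry_sdk.capture_exception(exc)  # shows up in Sentry with full context
        raise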
Structured logging
# settings.py
LOGGING = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"json": {
"()": "pythonjsonlogger.jsonlogger.JsonFormatter",
"format": "%(asctime)s %(levelname)s %(name)s %(message)s",
},
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "json",
},
},
"root": {"handlers": ["console"], "level": "INFO"},
}
Pipe stdout into the host's log aggregator (CloudWatch, Datadog, Loki, etc.). JSON log lines are queryable; multi-line text tracebacks are not. Worth the 5 minutes of setup (the formatter above needs the python-json-logger package installed).
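With the JSON formatter in place, anything you pass via extra becomes a top-level key you can filter on in the aggregator. A small sketch:
# anywhere in app code: extra fields become JSON keys
import logging

logger = logging.getLogger(__name__)

def log_checkout(order_id: str, total_cents: int) -> None:
    logger.info(
        "checkout complete",
        extra={"event": "checkout", "order_id": order_id, "total_cents": total_cents},
    )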
The security headers minimum
# settings.py
SECURE_HSTS_SECONDS = 31536000 # 1 year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
SECURE_SSL_REDIRECT = True # enforce HTTPS at app level too
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
X_FRAME_OPTIONS = "DENY"
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SAMESITE = "Lax"
Run python manage.py check --deploy after applying — Django's
deploy checks catch any remaining gaps and tell you the exact
setting to add.
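A cheap way to keep the headers from regressing is a test against Django's test client; a sketch, assuming the /health/ route from Step 5 and pytest-django (or any test runner):
# tests/test_security_headers.py: illustrative regression test
from django.test import Client

def test_security_headers():
    resp = Client().get("/health/", secure=True)  # secure=True simulates HTTPS
    assert resp.headers["X-Content-Type-Options"] == "nosniff"
    assert resp.headers["X-Frame-Options"] == "DENY"
    assert "Strict-Transport-Security" in resp.headers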
What just happened, end to end
Client Nginx :443 Uvicorn :8000 Redis
│ │ │ │
│ GET /dashboard/ │ │ │
│ ──────────────────────► proxy_pass ───────────────────► HTTP handler │
│ │ │ mount() reads │
│ │ │ ─── state ───────► (cache miss)
│ │ │ render template │
│ ◄ HTTP response ─────────│ ◄────────────────────────────│ │
│ │ │ │
│ WebSocket upgrade /ws/ │ │ │
│ ───── Connection: ──────► proxy_pass + Upgrade ───────► consumer │
│ Upgrade │ headers preserved │ accepts WS │
│ │ │ writes session ──► state stored
│ │ │ to Redis │
│ │ │ │
│ click event │ │ │
│ ───────────────────────► proxy_pass (existing WS) ────► event_handler │
│ │ │ self.x = ... │
│ │ │ writes state ────► state updated
│ │ │ sends diff │
│ ◄ patch ─────────────────│ ◄────────────────────────────│ │
Single client, multiple workers under the upstream — the WS is sticky to whichever worker accepted the upgrade, but the next HTTP page-load from the same client can land on any worker because Redis has the state.
Where to go next
- Zero-downtime deploys: add `Restart=on-failure` plus systemd's `KillMode=mixed` so SIGTERM goes to all workers; pair with `--timeout-graceful-shutdown 30` so in-flight WebSockets drain. Then `systemctl restart myapp` becomes safe mid-traffic.
- Containerized deploy: the same ASGI command works inside a Docker container. djust ships `djust-deploy` for one-command deploys to common targets (see djust-deploy CLI).
- Multi-region: if you scale to >1 region, you'll need a region-local Redis per region. Cross-region WebSocket session resumption isn't supported out of the box — design your URL scheme so each region gets its own subdomain (`us.example.com`, `eu.example.com`).
- Read replicas: Django supports a read replica via a second entry in `DATABASES` (e.g. `DATABASES['replica']`) plus a database router. Routing reads from LiveView `mount()` to the replica is a useful optimization once the primary DB is hot.
- CDN for static: Whitenoise is fine for one server. At scale, point a CDN (CloudFront, Cloudflare, etc.) at `/static/` and let it origin-pull from the same Nginx `location /static/` block above.
The four production checks (healthcheck, Sentry, structured logs, security headers) are not optional — they're what separates "the app stayed up" from "the app went down at 3 AM and nobody knew until customers tweeted." Add them before the launch, not after the first incident.