Optimal SQLite settings for Django

Requires: Django >= 5.1a1 (currently in alpha)
Last updated: 2024-06-12


There’s plenty of information out there on how to scale Django to handle numerous requests per second, but most of it doesn’t really apply to the kind of software I work on.

For the past few years, I’ve been developing custom software for smaller organizations. All my deployments have fewer than 100 users and operate on a single AWS instance. Initially, I chose PostgreSQL because I know it well and Django’s documentation suggests that SQLite isn’t suitable for production.

SQLite provides an excellent development alternative for applications that are predominantly read-only or require a smaller installation footprint.

Django Documentation

However, my previous experience with SQLite has taught me that it’s much more capable than what the Django docs suggest, offering better performance and simpler administration. So, I decided to migrate. To achieve the best possible performance, I had to configure Django and SQLite correctly.

After extensive research, testing, and trial and error, here are my recommended settings for Django web apps using SQLite as the primary database. I anticipate these settings will evolve over time, and I will update them accordingly. My goal is to have these settings ready to copy and paste into every new Django project, ensuring they are the most current and optimal for most web apps.

If you’re looking for help with custom web apps or need cybersecurity consulting, feel free to reach out. I’m available for consulting work.

The rest of this post contains information about each setting, my suggestion for a simple backup strategy, and a lot of reference links.
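For copy-paste convenience, here is one way all of these settings can be combined in settings.py. This is a sketch relying on the "init_command" and "transaction_mode" options that Django 5.1 added to the sqlite3 backend; the `BASE_DIR` here is a stand-in for your project's own base path.

```python
from pathlib import Path

BASE_DIR = Path(".")  # stand-in for your project's BASE_DIR

# Sketch of a DATABASES entry applying every pragma below on connect.
# Requires Django >= 5.1 for "transaction_mode" and "init_command".
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
        "OPTIONS": {
            "transaction_mode": "IMMEDIATE",
            "init_command": (
                "PRAGMA foreign_keys = ON;"
                " PRAGMA journal_mode = WAL;"
                " PRAGMA synchronous = NORMAL;"
                " PRAGMA busy_timeout = 5000;"
                " PRAGMA temp_store = MEMORY;"
                " PRAGMA mmap_size = 134217728;"
                " PRAGMA journal_size_limit = 67108864;"
                " PRAGMA cache_size = 2000;"
            ),
        },
    }
}
```

Because init_command runs on every new connection, this also covers the pragmas that are not persistent, such as foreign_keys and busy_timeout.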


PRAGMA foreign_keys = ON

By default, SQLite does not enforce foreign key constraints. Enforcement must be enabled explicitly, and the pragma applies per connection, so it needs to be set every time a new connection is opened.

Read more

PRAGMA journal_mode = WAL

The WAL journaling mode uses a write-ahead log instead of a rollback journal to implement transactions. The WAL journaling mode is persistent; after being set it stays in effect across multiple database connections and after closing and reopening the database.

Read more
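The persistence is simple to verify with Python's built-in sqlite3 module: set WAL on one connection, then reopen the file and read the mode back.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

con = sqlite3.connect(path)
# journal_mode is unusual: the pragma returns the mode now in effect.
mode = con.execute("PRAGMA journal_mode = WAL").fetchone()[0]
con.close()

# The setting is persistent: a brand-new connection still sees WAL.
con = sqlite3.connect(path)
mode_after_reopen = con.execute("PRAGMA journal_mode").fetchone()[0]
con.close()
print(mode, mode_after_reopen)  # wal wal
```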

PRAGMA synchronous = NORMAL

When synchronous is NORMAL, the SQLite database engine will still sync at the most critical moments, but less often than in FULL mode. Combined with WAL mode, NORMAL keeps the database safe from corruption, though the most recent transactions may be lost if the machine loses power.

Read more

PRAGMA busy_timeout = 5000

The busy_timeout setting in SQLite configures the database to wait for a specified amount of time when a database file is locked, rather than returning an error immediately. This is particularly useful in scenarios like web apps where multiple requests are trying to access the database simultaneously.

Read more
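As a quick sanity check, the pragma can be set and read back from Python's built-in sqlite3 module; note the value is in milliseconds.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 5000 ms = 5 seconds of waiting before giving up on a locked database.
con.execute("PRAGMA busy_timeout = 5000")
timeout_ms = con.execute("PRAGMA busy_timeout").fetchone()[0]
print(timeout_ms)  # 5000
```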

PRAGMA temp_store = MEMORY

When temp_store is MEMORY, temporary tables and indices are kept in memory, as if they were part of a pure in-memory database.

Read more

PRAGMA mmap_size = 134217728

Stolen from the Rails SQLite settings.

This is the maximum number of bytes of the database file that will be accessed using memory-mapped I/O. The value 134217728 is 128 MiB.

Read more
Turn on mmap support for your SQLite connections

PRAGMA journal_size_limit = 67108864

Stolen from the Rails SQLite settings. The value 67108864 is 64 MiB.

… in WAL mode, the write-ahead log file is not truncated following a checkpoint. Instead, SQLite reuses the existing file for subsequent WAL entries since overwriting is faster than appending.

Read more

PRAGMA cache_size = 2000

Stolen from the Rails SQLite settings.

… maximum number of database disk pages that SQLite will hold in memory at once per open database file.

Note that a positive value is a number of pages, while a negative value -N sets the cache to N KiB.

Read more

"transaction_mode": "IMMEDIATE"

By default, SQLite starts transactions in DEFERRED mode: they are considered read-only. A transaction is upgraded in-flight to a write transaction, which requires a database lock, when a query containing an insert/update/delete statement is issued.

The problem is that when a transaction is upgraded after it has started and the database is already locked by another connection, SQLite immediately returns a SQLITE_BUSY error without respecting the busy_timeout mentioned earlier.

This is why you should start your transactions with BEGIN IMMEDIATE instead of only BEGIN. If the database is locked when the transaction starts, SQLite will respect busy_timeout.

Source


Read more
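The locking behavior is easy to reproduce with Python's built-in sqlite3 module: while one connection holds an immediate transaction, a second immediate transaction fails with "database is locked" once its busy_timeout runs out (set to 0 here so the demo fails fast instead of waiting).

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")

# isolation_level=None leaves transaction control to explicit SQL.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

other = sqlite3.connect(path, isolation_level=None)
other.execute("PRAGMA busy_timeout = 0")  # fail fast for the demo

# The writer grabs the write lock up front.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (1)")

# A second immediate transaction can't get the lock and errors out;
# with a non-zero busy_timeout it would wait for the lock instead.
locked_error = None
try:
    other.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as exc:
    locked_error = exc
print(locked_error)  # database is locked

writer.execute("COMMIT")
```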

Backups

This is the simplest solution for backups: a little bash script that can be scheduled to run every hour using a cron job. It will create a backup safely even if your database is receiving a high volume of writes, because VACUUM INTO captures a snapshot from a single transaction. Once the backup file is created, it's uploaded to an S3 bucket.

#!/usr/bin/env bash
#
# Run using cron to backup db every hour
#
set -euf -o pipefail

# VACUUM INTO fails if the target file already exists
rm -f /path/to/backup

sqlite3 /path/to/db "VACUUM INTO '/path/to/backup'"
tar --zstd -cf "/path/to/backup.tar.zst" "/path/to/backup"

# 1-day, rolling hourly backup
aws s3 cp /path/to/backup.tar.zst "s3://mybucket/backup-$(date +%H).tar.zst"

# 1-month, rolling daily backup
aws s3 cp /path/to/backup.tar.zst "s3://mybucket/backup-$(date +%d).tar.zst"

# 1-month, rolling hourly backup
aws s3 cp /path/to/backup.tar.zst "s3://mybucket/backup-$(date +%d%H).tar.zst"

Source

If you need something more robust than that, you should look into the great Litestream, which will continuously stream changes to an object store like S3 or any of its clones.

Other references and related links

Do you have any suggestions on how to improve this configuration? Send me an email.