By the end of this recipe, you will have a fully automated backup pipeline that captures both your Odoo PostgreSQL database and the filestore (the directory holding all uploaded attachments and generated PDFs), pushes them to S3 with versioning and lifecycle rules, and is verified daily by an automated restore test. Skill required: Linux administrator with PostgreSQL fundamentals. Time required: 2 hours setup, 30 minutes verification. ECOSIRE has run this for years and the recipe below is what we ship to every managed-hosting client.
The reason most "Odoo backups" fail in disaster recovery: people back up the database but forget the filestore, or back up the filestore but forget that attachment metadata lives in PG. When you restore, the database references files that no longer exist, every PDF is broken, and the recovery is technically a half-restore. The recipe below backs up both, in lockstep, with a checksum to verify integrity.
What you will need
- Linux host running Odoo (Ubuntu 22.04+ or Debian 12+).
- PostgreSQL 14 or newer (we recommend 17 for the parallel pg_dump improvements).
- AWS S3 bucket with versioning enabled, or any S3-compatible store (Backblaze B2, Wasabi, MinIO, DigitalOcean Spaces). Cost: under $5/month for typical Odoo instances.
- AWS CLI installed and configured with an IAM user that has s3:PutObject, s3:GetObject, and s3:ListBucket on the bucket.
- Time: 2 hours setup including testing the restore.
- Disk space: temporary working space of at least 1.2x the database size.
Step-by-step
1. Enable PG point-in-time recovery (PITR)
WAL archiving means you can recover to any second, not just the daily snapshot (note: WAL replays against a physical base backup such as pg_basebackup, not against a pg_dump restore; see the FAQ). Edit /etc/postgresql/17/main/postgresql.conf:
wal_level = replica
archive_mode = on
archive_command = 'aws s3 cp %p s3://your-bucket/wal-archive/%f --quiet'
archive_timeout = 300 # force a WAL switch every 5 minutes max
max_wal_senders = 3
Restart PostgreSQL: sudo systemctl restart postgresql. Note that archive_command runs as the postgres system user, so that user needs AWS credentials (an EC2 instance profile, or a credentials file in the postgres home directory). Verification: sudo -u postgres psql -c "SHOW archive_mode" returns "on" and S3 starts accumulating WAL files within minutes. List them: aws s3 ls s3://your-bucket/wal-archive/ | tail.
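For reference, actually using this archive for point-in-time recovery is a two-part operation: lay down a physical base backup (WAL cannot be replayed into a pg_dump restore; see the pg_basebackup FAQ), then point PostgreSQL at the archive. A sketch of the recovery settings, assuming PostgreSQL 12+ and an example target time:

```ini
# postgresql.conf (or postgresql.auto.conf) on the recovery host,
# after extracting the base backup into the data directory:
restore_command = 'aws s3 cp s3://your-bucket/wal-archive/%f %p --quiet'
recovery_target_time = '2026-05-04 02:10:00+00'
recovery_target_action = 'promote'
```

Create an empty recovery.signal file in the data directory and start PostgreSQL; it pulls WAL from S3, replays up to the target time, and promotes.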
2. Write the daily snapshot script
Create /usr/local/bin/odoo-backup.sh:
#!/bin/bash
set -euo pipefail
DATE=$(date -u +%FT%H%M%S)
DB=production
BUCKET=s3://your-bucket/odoo-backups
TMPDIR=/var/tmp/odoo-backup
install -d -o postgres "$TMPDIR"   # pg_dump runs as postgres and must be able to write here
# 1. Dump the database. pg_dump takes a consistent MVCC snapshot,
#    so Odoo keeps running with zero downtime and no lock is needed.
#    (Parallel dump with -j requires the directory format -Fd; the
#    custom format -Fc is single-threaded but gives one portable file.)
DUMPFILE="$TMPDIR/${DB}-${DATE}.dump"
sudo -u postgres pg_dump -Fc -f "$DUMPFILE" "$DB"
# 2. Snapshot the filestore. Odoo may write new attachments while tar
#    runs; tolerate "file changed as we read it" (tar exit code 1).
FILESTORE_TAR="$TMPDIR/filestore-${DB}-${DATE}.tar.gz"
tar --warning=no-file-changed -czf "$FILESTORE_TAR" \
    -C /var/lib/odoo/filestore "$DB" || [ "$?" -eq 1 ]
# 3. Checksums, recorded with bare filenames so `sha256sum -c`
#    also works from the restore script's working directory
( cd "$TMPDIR" && sha256sum "$(basename "$DUMPFILE")" > "$DUMPFILE.sha256" )
( cd "$TMPDIR" && sha256sum "$(basename "$FILESTORE_TAR")" > "$FILESTORE_TAR.sha256" )
# 4. Upload to S3 under the same prefix so the pair stays together
aws s3 cp "$DUMPFILE" "$BUCKET/$DATE/" --quiet
aws s3 cp "$FILESTORE_TAR" "$BUCKET/$DATE/" --quiet
aws s3 cp "$DUMPFILE.sha256" "$BUCKET/$DATE/" --quiet
aws s3 cp "$FILESTORE_TAR.sha256" "$BUCKET/$DATE/" --quiet
# 5. Clean up local working files
rm -f "$DUMPFILE" "$FILESTORE_TAR" "$DUMPFILE.sha256" "$FILESTORE_TAR.sha256"
echo "Backup $DATE completed."
chmod +x /usr/local/bin/odoo-backup.sh. Add to root crontab: 15 2 * * * /usr/local/bin/odoo-backup.sh >> /var/log/odoo-backup.log 2>&1.
Verification: run the script manually. After completion, aws s3 ls s3://your-bucket/odoo-backups/ shows a timestamped folder with 4 files.
3. Configure S3 lifecycle rules
In the AWS Console, on your bucket > Management > Lifecycle Rules, add:
- Daily backups in odoo-backups/: keep the last 30 days in Standard, transition to Glacier Instant Retrieval at day 30, expire at day 365.
- WAL archive in wal-archive/: keep the last 7 days in Standard, expire at day 35 (no point keeping WAL longer than the oldest base backup).
Cost for a 10 GB Odoo with 5 GB filestore: about $4 per month total.
Verification: lifecycle policy is "Enabled" in the console and the next-evaluation timestamp is within 24 hours.
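If you prefer codified infrastructure over console clicks, the same two rules can be expressed as a lifecycle policy document (bucket name and rule IDs are examples) and applied with aws s3api put-bucket-lifecycle-configuration --bucket your-bucket --lifecycle-configuration file://lifecycle.json:

```json
{
  "Rules": [
    {
      "ID": "odoo-daily-backups",
      "Filter": { "Prefix": "odoo-backups/" },
      "Status": "Enabled",
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER_IR" } ],
      "Expiration": { "Days": 365 }
    },
    {
      "ID": "wal-archive",
      "Filter": { "Prefix": "wal-archive/" },
      "Status": "Enabled",
      "Expiration": { "Days": 35 }
    }
  ]
}
```

GLACIER_IR is the API name for Glacier Instant Retrieval. Keeping the JSON in version control makes the retention policy reviewable like any other config.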
4. Write the restore script
Equally important to the backup is a restore script you've actually tested. Create /usr/local/bin/odoo-restore.sh:
#!/bin/bash
# Usage: ./odoo-restore.sh 2026-05-04T020015 testdb
set -euo pipefail
DATE="$1"
TARGET_DB="$2"
BUCKET=s3://your-bucket/odoo-backups
TMPDIR=/var/tmp/odoo-restore
mkdir -p "$TMPDIR"
# 1. Pull from S3
aws s3 sync "$BUCKET/$DATE/" "$TMPDIR/"
# 2. Verify checksums
cd "$TMPDIR"
sha256sum -c production-${DATE}.dump.sha256
sha256sum -c filestore-production-${DATE}.tar.gz.sha256
# 3. Drop existing target db (DANGER - confirm by hand)
sudo -u postgres dropdb --if-exists "$TARGET_DB"
sudo -u postgres createdb -O odoo "$TARGET_DB"
# 4. Restore database (parallel pg_restore works with the custom format)
sudo -u postgres pg_restore -j 4 -d "$TARGET_DB" "production-${DATE}.dump"
# 5. Restore filestore: the tar contains a top-level "production/" directory,
#    so strip that component and extract straight into the target's filestore
mkdir -p "/var/lib/odoo/filestore/$TARGET_DB"
tar -xzf "filestore-production-${DATE}.tar.gz" \
    -C "/var/lib/odoo/filestore/$TARGET_DB" --strip-components=1
# 6. Disable mail servers and crons (so the restored copy doesn't email customers).
#    fetchmail_server exists only if the Fetchmail module is installed; drop that
#    UPDATE if the table is missing, since an error aborts the whole psql -c batch.
sudo -u postgres psql -d "$TARGET_DB" -c "
UPDATE ir_mail_server SET active = false;
UPDATE fetchmail_server SET active = false;
UPDATE ir_cron SET active = false;
UPDATE ir_config_parameter SET value = 'http://localhost:8069' WHERE key = 'web.base.url';
"
echo "Restored to database $TARGET_DB. Mail/cron disabled. Test it before going live."
Verification: run the script with a recent date and a target DB name like restore_test_2026_05_04. Open Odoo against the restored database and click through 5 random records. Confirm attachments load (proving filestore restored correctly).
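Clicking through records scales poorly, and the attachment check can be scripted: Odoo records each attachment's SHA1 in ir_attachment.checksum and its filestore-relative path in store_fname. A sketch (the helper name is ours; psql flags assume the default setup):

```shell
# verify_attachments ROOT reads "store_fname checksum" pairs on stdin and
# reports any file under ROOT whose SHA1 does not match the database value.
verify_attachments() {
  local root="$1" fname expected actual bad=0
  while read -r fname expected; do
    actual=$(sha1sum "$root/$fname" 2>/dev/null | cut -d' ' -f1)
    if [ "$actual" != "$expected" ]; then
      echo "MISMATCH: $fname"
      bad=1
    fi
  done
  return $bad
}

# Intended use against a restored copy (-At = bare tuples, space-separated):
#   sudo -u postgres psql -d restore_test -At -F' ' -c \
#     "SELECT store_fname, checksum FROM ir_attachment
#      WHERE store_fname IS NOT NULL LIMIT 20" |
#   verify_attachments /var/lib/odoo/filestore/restore_test
```

A nonzero exit status means the database and filestore are out of sync, which is exactly the half-restore failure mode this recipe exists to prevent.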
5. Schedule a daily restore test
Add another cron that restores the most recent snapshot into a throwaway database. The backup's exact timestamp (down to the second) is not predictable, so resolve the newest S3 prefix at run time instead of reconstructing it with date (whose % characters crontab would mangle anyway): 30 3 * * * /usr/local/bin/odoo-restore.sh "$(aws s3 ls s3://your-bucket/odoo-backups/ | awk '{print $2}' | sort | tail -n 1 | tr -d /)" restore_test. This runs a restore every morning, shortly after the backup. If it ever fails, you find out the same morning, not on the day of disaster.
# Smoke test: sanity-count key tables in the restored copy
sudo -u postgres psql -d restore_test -c "
SELECT COUNT(*) FROM ir_attachment;
SELECT COUNT(*) FROM res_partner;
SELECT COUNT(*) FROM sale_order;
" > /tmp/smoke-counts.txt
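The 0.1 percent tolerance can be checked mechanically rather than by eyeball; a sketch with a hypothetical helper and made-up counts:

```shell
# within_tolerance A B PCT succeeds when |A - B| is at most PCT percent of A.
within_tolerance() {
  awk -v a="$1" -v b="$2" -v pct="$3" 'BEGIN {
    d = a - b; if (d < 0) d = -d;
    base = (a > 0 ? a : 1);            # avoid dividing by zero on empty tables
    exit (d * 100 <= pct * base) ? 0 : 1
  }'
}

# Example: compare a production count to the restore-test count
prod_partners=14203        # hypothetical COUNT(*) from production res_partner
restored_partners=14201    # hypothetical count from the restored copy
if within_tolerance "$prod_partners" "$restored_partners" 0.1; then
  echo "res_partner drift within 0.1%"
else
  echo "ALERT: res_partner drift exceeds 0.1%"
fi
```

Wire the else branch to your alerting (mail, Slack webhook) so drift pages a human.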
Verification: smoke counts match production within 0.1 percent (cron-driven small writes during the backup window account for the tiny gap).
6. Document the RTO/RPO
Write down the recovery objectives so the business knows what to expect:
- RPO (Recovery Point Objective): with WAL archiving (archive_timeout = 300) replayed against a physical base backup, you can recover to within about 5 minutes of failure; the daily snapshot alone gives an RPO of up to 24 hours.
- RTO (Recovery Time Objective): on a t3.large with a 10 GB database, full restore to operational is 15 to 25 minutes.
If RPO of 5 minutes is too lossy, add streaming replication to a hot standby. For most ECOSIRE clients, daily + WAL is enough.
7. Test a full disaster scenario quarterly
Once a quarter, simulate "the production server caught fire". Spin up a fresh EC2, install Odoo, and restore the most recent backup from S3. Time how long it takes. Document any friction. Verification: full DR rehearsal completes in under 60 minutes from "press the button" to "users can log in".
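A minimal timing harness for the drill (phase names and stand-in commands are examples) keeps the quarterly report honest about where the minutes go:

```shell
# Log elapsed seconds per rehearsal phase to a scratch file.
DRILL_LOG=$(mktemp)
phase() {  # usage: phase "name" command...
  local name="$1"; shift
  local start
  start=$(date +%s)
  "$@"
  echo "$name: $(( $(date +%s) - start ))s" >> "$DRILL_LOG"
}

phase "provision-host" sleep 0   # stand-in for: launch EC2, install Odoo
phase "pull-backup"    sleep 0   # stand-in for: aws s3 sync of the latest prefix
phase "restore-db"     sleep 0   # stand-in for: odoo-restore.sh
cat "$DRILL_LOG"
```

Replace each stand-in with the real step; the per-phase numbers show whether provisioning or the database restore dominates your RTO.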
8. Encrypt at rest
For client-PII-heavy databases, set the S3 bucket's default encryption to AWS KMS with a customer-managed key. Add --sse aws:kms --sse-kms-key-id alias/odoo-backup to the aws s3 cp commands. Verification: aws s3api head-object on a backup file shows ServerSideEncryption: aws:kms.
Common mistakes
- Backing up the database without the filestore. PDFs, contract attachments, and product images all become broken links after restore.
- Using pg_dumpall. It dumps role config plus every database, which is overkill and slow. Stick to pg_dump -Fc per database.
- No WAL archiving. You can only restore to the last full snapshot, losing up to 24 hours of work.
- Never testing restores. We have seen "backups" that were corrupted for 3 months until needed. Daily restore-test catches this.
- Forgetting to disable mail servers on the restored copy. The restored DB starts emailing customers about events that already happened, mass-confusing them.
- Storing backups on the same EBS volume as the database. EBS volume failure takes both. Always push to S3 (different fault domain).
Going further
Cross-region replication: enable S3 replication to a second region for disaster scenarios that take out an entire AWS region.
Logical replication slots: instead of WAL archiving, use logical replication to a hot standby running Odoo in read-only mode. Failover takes seconds.
Encrypted backups with client-side keys: encrypt the dump locally with gpg before pushing to S3 so even AWS root cannot read it. Adds 30 seconds to backup time on a 10 GB database.
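A round-trip sketch of that gpg stage (symmetric passphrase shown for simplicity; a --recipient public key is stronger, since the passphrase then never sits on the backup host):

```shell
WORK=$(mktemp -d)
echo "correct horse battery staple" > "$WORK/passphrase"   # production: root-only file, mode 600
printf 'pretend this is a pg_dump stream\n' > "$WORK/db.dump"

# Encrypt: the stage you would insert between pg_dump and `aws s3 cp`
gpg --batch --yes --pinentry-mode loopback \
    --passphrase-file "$WORK/passphrase" \
    --symmetric --cipher-algo AES256 \
    -o "$WORK/db.dump.gpg" "$WORK/db.dump"

# Decrypt: what the restore script does after pulling the object back down
gpg --batch --yes --pinentry-mode loopback \
    --passphrase-file "$WORK/passphrase" \
    -o "$WORK/db.dump.out" --decrypt "$WORK/db.dump.gpg"

cmp "$WORK/db.dump" "$WORK/db.dump.out" && echo "round-trip OK"
rm -rf "$WORK"
```

Test the decrypt path as religiously as the restore path: an encrypted backup whose passphrase is lost is no backup at all.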
Backup verification with structured queries: in the daily restore-test, run a battery of sanity queries (record counts, hash of recent records) and Slack-alert if any drift more than 1 percent from production.
For fully managed Odoo backups including off-site, encrypted, geo-redundant storage with 24/7 monitoring, ECOSIRE managed hosting handles the entire pipeline. Or read how to deploy Odoo on AWS EC2 with PostgreSQL 17 to set up the underlying infrastructure first.
Frequently Asked Questions
Can I do pg_basebackup instead of pg_dump?
Yes, and it's faster for very large databases. pg_basebackup is a physical (binary) backup and is the kind of base backup that WAL archiving replays against for PITR. Trade-off: you cannot restore selected tables, only the entire cluster. A typical invocation is sudo -u postgres pg_basebackup -D /var/tmp/base -Ft -z -Xs -P, followed by uploading the resulting tar files to S3. For Odoo, most clients use pg_dump for portability and reserve pg_basebackup for databases over 500 GB.
How do I restore just a single table?
pg_restore --table=res_partner -d target_db production.dump. Two caveats: by default pg_restore also tries to create the table, so add --clean when it already exists (or use --data-only against a truncated table), and restoring a single table while keeping the surrounding data risks foreign-key inconsistencies.
What about Odoo's built-in /web/database/manager backup?
It works for small databases (under a few GB) but downloads via the browser, has no encryption, and doesn't capture WAL. Use it for ad-hoc dev backups, not production DR.
How long should I keep backups?
Industry standard for SMB Odoo: 30 days hot (S3 Standard), 365 days cold (Glacier), and the most recent month-end snapshot kept indefinitely. For regulated industries (healthcare, finance), retention can extend to 7 to 10 years — check your jurisdiction.
For full DR setup including failover automation and quarterly DR drills, ECOSIRE Odoo support builds custom playbooks. Or read how to migrate Odoo Community to Enterprise — having reliable backups makes that migration much less stressful.
Written by
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.