After an Odoo upgrade or a worker count change, users start seeing this:
ERROR: connection to server failed: FATAL: sorry, too many clients already
Or the Python-side variant:
psycopg2.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections
PostgreSQL's connection limit is a hard wall. Odoo's worker pool, the longpolling worker, the cron threads, and any external tools all draw from the same fixed pool of server slots. When the math is wrong, you hit the wall. This guide tunes Odoo 17.0/18.0/19.0 to fit within a sensible PostgreSQL configuration.
Quick Fix
Compute the actual connection demand:
worker_connections = workers × (1 + max_cron_threads) × 2 + longpolling_workers × 2 + 10
For a typical Odoo 18.0 production with 8 workers and 2 cron threads:
8 × (1 + 2) × 2 + 1 × 2 + 10 = 60 connections from Odoo alone
Set PostgreSQL's max_connections comfortably above that figure (200 gives real headroom), restart, and monitor:
# postgresql.conf
max_connections = 200
sudo systemctl restart postgresql
If you cannot raise max_connections (RDS limits, RAM constraints), add PgBouncer as a connection pooler.
Why This Happens
Odoo's connection demand has many dimensions:
- Per-worker connections. Each Odoo HTTP worker holds at minimum one connection. Long-running requests can hold multiple (one for the main cursor, another for cron-like sub-flows).
- Cron threads. --max-cron-threads=N adds N cron workers, each holding its own long-lived connections.
- Longpolling worker. Holds one persistent connection per active longpoll, plus one shared.
- Test runners and shells. An odoo-bin shell session adds one more connection.
- External tools. pgbench, pg_dump, monitoring queries, and replication all consume connection slots.
- Connection leaks. A module that opens raw connections without closing them quietly drains the pool (a sketch follows this list).
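For that last item, a minimal sketch of the leak pattern (the helper and DSN are hypothetical, for illustration only):

# leak_example.py — hypothetical anti-pattern, not from any real module
import psycopg2

_DSN = "dbname=odoo user=odoo host=127.0.0.1"  # hypothetical DSN

def fetch_kpi():
    conn = psycopg2.connect(_DSN)   # takes one server slot on every call
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM sale_order")
    return cur.fetchone()[0]        # conn is never closed; the slot leaks

Every call consumes a slot until the worker process exits. Step 5 of the Permanent Fix shows the corrected pattern.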
After a major version upgrade, two common things change: worker count is often increased to handle new modules, and connection-keeping behaviour can shift slightly between PostgreSQL client library versions.
Step-by-Step Diagnosis
1. See exactly who is connected.
SELECT pid, usename, datname, application_name, state, query_start, state_change
FROM pg_stat_activity
ORDER BY query_start NULLS LAST;
Group by application_name to see how many connections each tool holds.
2. Count by category.
SELECT application_name, count(*) FROM pg_stat_activity
GROUP BY application_name ORDER BY 2 DESC;
If odoo shows more connections than your worker math predicts, you have a leak. If psql or pg_dump shows surprisingly many, an external tool is contributing.
3. Check Odoo's configured worker count.
grep -E "^(workers|max_cron_threads|longpolling|gevent)" /etc/odoo/odoo.conf
Compute expected demand. If it exceeds max_connections, you have a configuration mismatch.
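A quick sanity check, assuming the formula from the Quick Fix, standard key = value formatting in odoo.conf, and that both keys are present:

# Sketch: compute expected demand from odoo.conf (one longpolling worker assumed)
workers=$(awk -F'= *' '/^workers/ {print $2}' /etc/odoo/odoo.conf)
cron=$(awk -F'= *' '/^max_cron_threads/ {print $2}' /etc/odoo/odoo.conf)
echo "expected demand: $(( workers * (1 + cron) * 2 + 1 * 2 + 10 ))"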
4. Check PostgreSQL's limit.
SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;
If the count is at or near the limit and growing, you are connection-limited.
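One query gives both numbers and the headroom at once:

SELECT count(*) AS in_use,
       current_setting('max_connections')::int AS max_allowed,
       round(100.0 * count(*) / current_setting('max_connections')::int, 1) AS pct_used
FROM pg_stat_activity;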
5. Look for idle-in-transaction.
SELECT count(*) FROM pg_stat_activity WHERE state = 'idle in transaction';
Non-zero values for more than a few seconds indicate a transaction that hung mid-flight, holding a connection forever. These are the silent killers.
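To see who they are, and, as a last resort mid-incident, to free the slot:

SELECT pid, usename, application_name,
       now() - state_change AS stuck_for, left(query, 60) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY stuck_for DESC;

-- Last resort: terminate one hung session (rolls back its transaction)
SELECT pg_terminate_backend(12345);  -- replace 12345 with the offending pid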
Permanent Fix
Step 1 — Right-size max_connections. PostgreSQL's max_connections is bounded by RAM (each connection costs roughly 10 MB of memory for catalog cache and work buffers). For a 16 GB server, 200 connections is comfortable; for a 64 GB server, 500 is reasonable. Do not blindly raise it to 1000 — the kernel and PostgreSQL itself become overhead-bound.
# postgresql.conf
max_connections = 200
work_mem = 16MB
shared_buffers = 4GB
Step 2 — Right-size Odoo workers. A common rule: 2 workers per CPU core, 2 cron threads per worker, 1 longpolling worker.
# odoo.conf — for a 4-core server
workers = 8
max_cron_threads = 2
gevent_port = 8072    # named longpolling_port before Odoo 16
Avoid the temptation to set workers=20 on a 4-core box. Each worker is a Python process; oversubscription causes context-switch thrash, not throughput.
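Worker recycling also bounds slow leaks: a worker that restarts periodically gives its connections back. A sketch using Odoo's per-worker limits (the values are illustrative, not tuned for any specific hardware):

# odoo.conf — recycle workers before leaks or bloat accumulate
limit_request = 8192              # restart a worker after this many requests
limit_memory_soft = 2147483648    # 2 GiB: worker recycled after current request
limit_time_real = 120             # wall-clock cap per request, in seconds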
Step 3 — Add PgBouncer for shared installations. For multi-database installations or many workers, a connection pooler dramatically reduces connection count to PostgreSQL itself:
# pgbouncer.ini
[databases]
* = host=127.0.0.1 port=5432
[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 500
default_pool_size = 25
Odoo connects to PgBouncer (port 6432) which pools to PostgreSQL (port 5432). Hundreds of Odoo workers share a few dozen real PostgreSQL connections.
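On the Odoo side, only the database host and port change; a sketch assuming PgBouncer runs on the same host:

# odoo.conf — point Odoo at PgBouncer instead of PostgreSQL
db_host = 127.0.0.1
db_port = 6432
db_maxconn = 20    # per-worker cap; PgBouncer's max_client_conn absorbs the total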
PgBouncer in transaction mode requires care with prepared statements and session-level state. Test thoroughly; some Odoo flows assume session pooling. ECOSIRE's Odoo support team tunes PgBouncer for high-traffic Odoo deployments routinely.
Step 4 — Add idle-transaction reaping.
# postgresql.conf
idle_in_transaction_session_timeout = 300000 # 5 minutes
statement_timeout = 600000 # 10 minutes
These caps prevent any single hung query from locking a connection forever.
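Both settings are reloadable, so as an alternative to editing postgresql.conf you can apply them without a restart:

-- psql: set the timeouts and reload, no restart needed
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
ALTER SYSTEM SET statement_timeout = '10min';
SELECT pg_reload_conf();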
Step 5 — Fix Odoo connection leaks. If any custom module opens raw psycopg2.connect() without closing, find and fix it:
grep -rn "psycopg2.connect\|psycopg2.pool" /opt/odoo/custom/
Use self.env.cr whenever possible. Raw connections are almost always wrong in Odoo modules.
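When a raw connection is genuinely unavoidable (for example a maintenance script running outside the Odoo registry), close it deterministically. A sketch with a hypothetical DSN:

# Sketch: deterministic cleanup for an unavoidable raw connection
import contextlib
import psycopg2

DSN = "dbname=odoo user=odoo host=127.0.0.1"  # hypothetical DSN

with contextlib.closing(psycopg2.connect(DSN)) as conn:
    with conn, conn.cursor() as cur:  # 'with conn' commits or rolls back
        cur.execute("SELECT count(*) FROM res_users WHERE active")
        print(cur.fetchone()[0])
# the connection is closed here, returning its slot to PostgreSQL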
How to Prevent It
- Capacity-plan before the upgrade. Compute expected connection demand for the new worker count and verify max_connections accommodates it with headroom (at least a 30 percent buffer). Do this before the deploy, not when users complain.
- Monitor pg_stat_activity continuously. Prometheus's postgres_exporter ships a pg_stat_activity_count metric. Alert at 80 percent of max_connections.
- Cap Odoo's connection use per process. db_maxconn in odoo.conf limits the per-worker connection count; set it to a sensible value like 20 so a runaway leak from one bad request cannot take down the pool.
- Use PgBouncer from day one. Even small Odoo deployments benefit. The "I'll add it later" trap is real: adding PgBouncer to a stressed system mid-incident is the worst time to learn it.
- Test connection limits with a load test. k6 or Locust against your Odoo, ramped to peak QPS, reveals connection saturation in staging. Far cheaper than debugging it in production.
- Respect statement_timeout. Long queries should die. A multi-hour query holding a connection is a bug, not a feature.
Related Errors
- Upgrade aborted, database corrupted — what happens when an upgrade saturates connections mid-flight.
- PostgreSQL deadlock during stock move — different DB-level pain, often shows up alongside this.
- Database locked during import — connection exhaustion via long writes.
- Slow list view on > 1M rows — list views can pile up connections quickly.
Frequently Asked Questions
Why does the connection count keep growing after a worker restart?
If your worker actually restarts, PostgreSQL closes its old connections as soon as the process exits; after an unclean death, the server reaps them within the tcp_keepalives_idle window. If counts grow without bound, the workers are not actually restarting, or some external tool is opening connections and not closing them. Check pg_stat_activity.application_name and state_change to see who is holding stale connections.
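A grouped view makes the holder obvious:

SELECT application_name, state,
       count(*) AS conns, min(state_change) AS oldest_change
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY oldest_change NULLS LAST;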
Should I use connection pooling inside Odoo or in PgBouncer?
Both. Odoo has its own per-worker connection pool (db_maxconn). PgBouncer pools across workers. They complement each other — Odoo's pool prevents one request from spawning many DB connections, PgBouncer's pool prevents many workers from each opening a separate DB connection.
What is the difference between pool_mode = session and pool_mode = transaction?
session keeps a PostgreSQL connection bound to one client for the whole session — minimal pooling benefit, full session-state semantics. transaction releases the PG connection back to the pool at every commit — high pooling benefit, but advisory locks and prepared statements are scoped to a single transaction. Odoo works with transaction mode if you avoid cr.execute("PREPARE") patterns.
My RDS PostgreSQL has a hard max_connections cap I cannot raise. What now?
PgBouncer is essentially mandatory in that situation. Run it on the application host (or on a small EC2 instance shared by all app hosts), point Odoo at it, and your effective connection count to RDS drops to whatever PgBouncer's default_pool_size is. ECOSIRE has shipped this pattern for several customers running Odoo on RDS-Postgres.
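A sketch of the app-host layout (the RDS endpoint is a placeholder):

# pgbouncer.ini on the application host
[databases]
odoo = host=mydb.example.us-east-1.rds.amazonaws.com port=5432
[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20    # your effective connection count to RDS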
Need help with a tricky Odoo error? ECOSIRE's Odoo experts have shipped 215+ modules — get expert help.
Written by
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.