Your Odoo list view on a large table — account.move.line, mail.message, stock.move, anything past one million rows — takes 30 seconds to load. The user clicks "Next page" and waits. The pivot view times out. The dashboard cards never finish loading. The PostgreSQL log shows:
LOG: duration: 28342.567 ms statement: SELECT ... FROM account_move_line ...
This is one of the highest-impact Odoo performance issues on 17.0/18.0/19.0 because every user feels it every day. The fixes are mostly database-level, not Odoo-level.
Quick Fix
Identify the slow query and add the missing index. The most common offender is the order column on a list view that has no index:
-- Find slow queries currently running
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state = 'active' AND now() - query_start > interval '5 seconds'
ORDER BY duration DESC;
-- Add missing indexes for common Odoo filters
CREATE INDEX CONCURRENTLY IF NOT EXISTS aml_partner_id_idx
ON account_move_line (partner_id);
CREATE INDEX CONCURRENTLY IF NOT EXISTS aml_date_idx
ON account_move_line (date);
CREATE INDEX CONCURRENTLY IF NOT EXISTS aml_account_id_date_idx
ON account_move_line (account_id, date);
CREATE INDEX CONCURRENTLY builds the index without locking the table — safe for production. Note that it cannot run inside a transaction block, and if interrupted it leaves behind an INVALID index that you must drop before retrying.
Why This Happens
Odoo list views generate three queries per page load:
- The SELECT for the visible rows.
- The SELECT count(*) for the total row count (used by pagination).
- Aggregation queries for column footers.
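As an illustration, the three statements behind one partner-filtered page can be sketched with a hypothetical helper (Odoo builds these through its ORM with proper parameterization, never by string formatting):

```python
def page_queries(table, where, order, limit=80, offset=0):
    """Shape of the three statements behind one list-view page.
    Illustrative only: real Odoo generates these via the ORM."""
    rows = (f"SELECT id FROM {table} WHERE {where} "
            f"ORDER BY {order} LIMIT {limit} OFFSET {offset}")
    total = f"SELECT count(*) FROM {table} WHERE {where}"
    footers = f"SELECT sum(balance) FROM {table} WHERE {where}"  # one per summed column
    return rows, total, footers

for q in page_queries("account_move_line", "partner_id = 7", "date DESC"):
    print(q)
```

Each of the three hits the same WHERE clause, which is why one missing index slows all of them at once.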
Each fights for performance independently. The five common bottlenecks:
- No index on the filtered column. WHERE partner_id = 7 on a 5M-row table without an index = sequential scan.
- count(*) over a large table. Even with indexes, count is expensive on big tables. Odoo 17.0 added approximate count fallbacks, but they need configuration.
- Order by a non-indexed column. ORDER BY priority DESC without priority indexed forces a sort of the entire result set.
- Computed fields fetched per row. Non-stored computes execute Python per row; 80 rows × 50 ms = 4 seconds wasted.
- One2many fetches. A list view that fetches order_line for each sale.order row triggers N+1 queries: 80 rows × 1 query each = 80 queries.
Step-by-Step Diagnosis
1. Enable PostgreSQL slow query log.
ALTER SYSTEM SET log_min_duration_statement = 1000;
SELECT pg_reload_conf();
Now any query over 1 second logs. Tail /var/log/postgresql/*.log and reproduce the slow list view.
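If tailing the log by hand gets tedious, a few lines of Python can pull out the slow statements. This is a minimal sketch that assumes the default `duration: ... ms statement: ...` line format shown above; adjust the regex if you use a custom log_line_prefix:

```python
import re

# Matches the lines produced by log_min_duration_statement
SLOW_RE = re.compile(r"duration: ([\d.]+) ms\s+statement: (.*)")

def slow_statements(log_lines, threshold_ms=1000.0):
    """Yield (duration_ms, statement) for each logged query above the threshold."""
    for line in log_lines:
        m = SLOW_RE.search(line)
        if m and float(m.group(1)) >= threshold_ms:
            yield float(m.group(1)), m.group(2).strip()

log = [
    "LOG: duration: 28342.567 ms statement: SELECT ... FROM account_move_line ...",
    "LOG: duration: 12.003 ms statement: SELECT 1",
]
for ms, stmt in slow_statements(log):
    print(f"{ms:.0f} ms  {stmt[:60]}")
```

Pipe the log through this while reproducing the slow list view and the offending statement surfaces immediately.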
2. Get the query and EXPLAIN it.
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT id FROM account_move_line
WHERE partner_id = 7 AND date >= '2026-01-01'
ORDER BY date DESC
LIMIT 80;
Seq Scan in the output = no usable index. Index Scan or Bitmap Index Scan = good.
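That check can be scripted when you are triaging many plans at once. This is a quick heuristic, not a full plan parser:

```python
def plan_uses_index(plan_text):
    """Heuristic on EXPLAIN output: a healthy plan contains some Scan
    node but never falls back to a Seq Scan on the filtered table."""
    return "Scan" in plan_text and "Seq Scan" not in plan_text

plan = """Limit  (cost=0.56..97.12 rows=80)
  ->  Index Scan Backward using aml_date_idx on account_move_line"""
print(plan_uses_index(plan))  # True: the plan is index-backed
```

A Seq Scan is not always wrong (tiny tables, unselective filters), so treat a False result as a prompt to look closer, not an automatic failure.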
3. Find missing indexes for common Odoo patterns.
-- Top tables by size and row count
SELECT relname AS table, n_live_tup AS rows, pg_size_pretty(pg_total_relation_size(relid)) AS size
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC LIMIT 20;
Tables with millions of rows get indexes for every commonly-filtered column.
4. Check if count(*) is the bottleneck.
EXPLAIN ANALYZE SELECT count(*) FROM account_move_line WHERE partner_id = 7;
If count alone takes seconds, the table is too large for exact counts.
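One way to decide when to stop paying for exact counts is a size threshold on the planner's own estimate from pg_class.reltuples. The 100,000-row cutoff here is an illustrative assumption, not an Odoo or PostgreSQL default:

```python
APPROX_THRESHOLD = 100_000  # hypothetical cutoff; tune per deployment

def choose_count_strategy(reltuples_estimate, threshold=APPROX_THRESHOLD):
    """Return 'exact' for small tables and 'approximate' past the threshold.

    reltuples_estimate is what pg_class.reltuples reports; autovacuum
    keeps it fresh enough to pick a strategy cheaply."""
    return "approximate" if reltuples_estimate >= threshold else "exact"

print(choose_count_strategy(5_000_000))  # big table: skip the expensive count(*)
print(choose_count_strategy(2_000))      # small table: exact count is cheap
```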
5. Check for N+1 queries. Tail the log while loading the list. If you see hundreds of similar queries with different id values, you have an N+1 — usually from a non-stored computed field.
Permanent Fix
Add indexes to common filter columns:
# In your custom module's model
class AccountMoveLine(models.Model):
    _inherit = 'account.move.line'

    partner_id = fields.Many2one('res.partner', index=True)  # Odoo creates the index
    date = fields.Date(index=True)
Or directly via SQL for indexes Odoo does not auto-create:
CREATE INDEX CONCURRENTLY aml_partner_account_date_idx
ON account_move_line (partner_id, account_id, date)
WHERE parent_state = 'posted';
Partial indexes (the WHERE clause) are huge wins for filtered queries — Odoo's accounting list views always filter parent_state = 'posted', so a posted-only partial index halves the scanned rows.
For slow count(*), switch to approximate count for large tables:
class AccountMoveLine(models.Model):
    _inherit = 'account.move.line'

    @api.model
    def search_count(self, domain, limit=None):
        # For an unfiltered count on this massive table, return the
        # planner's estimate from pg_class instead of a full count(*)
        if not domain:
            self.env.cr.execute(
                "SELECT reltuples::bigint FROM pg_class"
                " WHERE relname = 'account_move_line'"
            )
            return self.env.cr.fetchone()[0]
        return super().search_count(domain, limit=limit)
Better yet, install OCA's web_optional_count module, which makes the count optional in list views: Odoo skips the count query and shows ">1000" instead.
Convert non-stored computes to stored:
amount_signed = fields.Monetary(
    compute='_compute_amount_signed',
    store=True,  # critical
)

@api.depends('debit', 'credit', 'move_id.move_type')
def _compute_amount_signed(self):
    for line in self:
        line.amount_signed = line.debit - line.credit
Stored computes write at create/update time, then list views read them as columns — no Python per row.
For N+1 fetches, prefetch:
# WRONG — N+1
for order in self.env['sale.order'].search([], limit=100):
    print(order.partner_id.name)  # one query per partner_id

# RIGHT — prefetch
orders = self.env['sale.order'].search([], limit=100)
orders.read(['partner_id'])  # batch-fetches partner_id
for order in orders:
    print(order.partner_id.name)  # already in cache
Odoo's ORM auto-prefetches in many cases, but custom code that breaks the pattern (loops with conditional reads) defeats it.
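The difference batching makes is easy to model with a toy cache. This is not Odoo's actual implementation, just a sketch of the idea: on a cache miss, a batched fetch loads the whole recordset in one round trip, while a naive fetch loads one record at a time:

```python
class ToyRecords:
    """Toy model of batched prefetch; counts simulated SQL round trips."""
    def __init__(self, ids, batch=True):
        self.ids = list(ids)
        self.batch = batch
        self.cache = {}
        self.queries = 0

    def fetch(self, rec_id):
        if rec_id not in self.cache:
            self.queries += 1  # one simulated round trip
            # Batched mode mirrors the ORM: a miss loads every sibling record
            ids = self.ids if self.batch else [rec_id]
            self.cache.update({i: f"partner-{i}" for i in ids})
        return self.cache[rec_id]

naive = ToyRecords(range(100), batch=False)
for i in naive.ids:
    naive.fetch(i)
prefetched = ToyRecords(range(100), batch=True)
for i in prefetched.ids:
    prefetched.fetch(i)
print(naive.queries, prefetched.queries)  # 100 vs 1 round trips
```

Same 100 rows, two orders of magnitude fewer round trips; that gap is exactly what an N+1 bug costs you.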
Tune PostgreSQL configuration for an Odoo workload:
# postgresql.conf for a 16GB server
shared_buffers = 4GB
work_mem = 16MB
maintenance_work_mem = 1GB
random_page_cost = 1.1 # SSDs
effective_cache_size = 12GB
random_page_cost = 1.1 tells the planner that random I/O is cheap (true on SSDs); without it, the planner assumes spinning-disk costs, over-penalizes index scans, and falls back to sequential scans even on tables that are fully cached.
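The memory values above follow common rules of thumb: roughly 25% of RAM for shared_buffers and 75% for effective_cache_size. A small helper makes the arithmetic explicit; the percentages are conventional starting points, not PostgreSQL requirements:

```python
def pg_memory_settings(ram_gb):
    """Rule-of-thumb memory settings for a dedicated PostgreSQL host.
    25% / 75% splits are common starting points; validate under load."""
    return {
        "shared_buffers": f"{ram_gb // 4}GB",
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",
        "maintenance_work_mem": f"{min(max(ram_gb // 16, 1), 2)}GB",  # capped at 2GB
    }

print(pg_memory_settings(16))
```

For a 16GB server this reproduces the config block above: 4GB of shared_buffers, 12GB of effective_cache_size, 1GB of maintenance_work_mem.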
How to Prevent It
- Index every commonly-filtered column. Make this a code-review rule: any new field used in a search domain or list filter gets index=True or a partial index.
- Stored computes by default. Reserve non-stored computes for fields that depend on context or are rarely accessed. Default to store=True for anything that appears in list views.
- Monitor slow queries. Prometheus's postgres_exporter ships pg_stat_statements metrics. Alert on queries above a 95th-percentile threshold.
- Capacity-plan tables. A table that grows to 10M rows needs different indexing than one at 100K. Review every quarter.
- Vacuum and analyze regularly. Set autovacuum_naptime = 30s and tune autovacuum per-table for high-write tables. Without ANALYZE, the planner uses stale statistics.
- Partition the largest tables. PostgreSQL has supported declarative partitioning since version 10. account_move_line partitioned by year is a 10x performance win on multi-year reports. Setup is non-trivial; ECOSIRE's Odoo support team handles this for our customers.
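As a sketch of what yearly range partitioning involves, the generator below emits illustrative DDL only; migrating a live account_move_line into partitions also requires a full data-copy plan and a maintenance window:

```python
def yearly_partition_ddl(table, column, first_year, last_year):
    """Generate illustrative DDL for yearly range partitions of a table."""
    stmts = [
        # A partitioned copy of the table; data must be migrated into it
        f"CREATE TABLE {table}_part (LIKE {table}) PARTITION BY RANGE ({column});"
    ]
    for year in range(first_year, last_year + 1):
        stmts.append(
            f"CREATE TABLE {table}_y{year} PARTITION OF {table}_part "
            f"FOR VALUES FROM ('{year}-01-01') TO ('{year + 1}-01-01');"
        )
    return stmts

for ddl in yearly_partition_ddl("account_move_line", "date", 2024, 2026):
    print(ddl)
```

Queries that filter on the partition key (date) then only scan the matching year's partition, which is where the multi-year reporting win comes from.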
Related Errors
- Cron job stuck running — adjacent performance issue.
- Database locked during import — a different way large tables hurt.
- PostgreSQL deadlock during stock move — concurrency problem on large tables.
- Too many PostgreSQL connections — when slow queries pile up and exhaust the connection pool.
Frequently Asked Questions
How big does a table need to be before performance degrades?
Depends on workload, hardware, and indexes. With proper indexing, well-tuned PostgreSQL on SSDs handles 100 M rows for filtered queries with sub-100 ms latency. Without indexes, a 1 M row table starts to drag at ~500 ms list loads. Below ~100 K rows, indexing matters less.
Should I use BRIN or B-tree indexes?
B-tree for range queries (date >= ...), exact matches, and ordered lookups. BRIN for very large append-mostly tables where data is naturally clustered (date, sequential id). For Odoo, B-tree on (date) and (partner_id, date) covers most cases. BRIN can help on account_move_line.date if you have hundreds of millions of rows.
Can I disable count to speed up large list views?
Yes. Install OCA's web_optional_count. Or override search_count to return a capped estimate instead of an exact count. You lose the exact total and gain seconds back per page load.
Why is my query fast in psql but slow via Odoo?
Three usual causes. First, Odoo wraps queries in additional joins (security record rules, multi-company filters) that change the plan. Second, Odoo's prefetch may grab more columns than psql does. Third, the parsed-prepared-statement cache differs. EXPLAIN the query Odoo actually runs (use the slow query log) — not the simplified version you tried in psql.
Need help with a tricky Odoo error? ECOSIRE's Odoo experts have shipped 215+ modules — get expert help.
Author
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.