A user starts an Odoo data import — 50,000 sales orders, 200,000 products, anything large. Other users start seeing pages hang for 30+ seconds. Some get errors:
ERROR: could not obtain lock on row in relation "sale_order"
ERROR: deadlock detected
PostgreSQL is doing exactly what it should: serializing access to rows the import is touching. The fix is to make the import smaller and shorter, not to "unlock" the database. This applies to Odoo 17.0/18.0/19.0.
Quick Fix
Stop the running import (find its session in pg_stat_activity and kill the backend process), then re-run with smaller batch size:
-- Find the import session
SELECT pid, usename, query_start, state, query
FROM pg_stat_activity
WHERE state = 'active' AND query LIKE '%INSERT INTO sale_order%';
-- Cancel it (gentle)
SELECT pg_cancel_backend(<pid>);
-- Or terminate (forced)
SELECT pg_terminate_backend(<pid>);
For the re-import, split the file into chunks of 1,000 to 5,000 rows and import each chunk separately. Odoo's UI import form accepts the chunked files one at a time; alternatively, use a script with batched commits.
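If you want to script the splitting step, here is a minimal sketch; the file naming and the 2,000-row chunk size are illustrative choices, not Odoo conventions:
import csv

def split_csv(path, chunk_size=2000):
    # Split a large CSV into header-preserving chunks for separate imports.
    with open(path, newline='') as src:
        reader = csv.reader(src)
        header = next(reader)
        chunk, part = [], 0
        for row in reader:
            chunk.append(row)
            if len(chunk) >= chunk_size:
                part += 1
                _write_chunk(path, part, header, chunk)
                chunk = []
        if chunk:
            _write_chunk(path, part + 1, header, chunk)

def _write_chunk(path, part, header, rows):
    with open(f"{path}.part{part:03d}.csv", 'w', newline='') as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)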
Why This Happens
Odoo imports run as a single PostgreSQL transaction by default. A 50K-row import:
- Acquires row-level locks on every row touched.
- Holds those locks until commit.
- Blocks any other transaction that wants the same rows.
- Generates a large volume of WAL.
- Updates indexes row by row: each insert triggers synchronous index updates.
The five common breaks:
- Single-transaction imports. Default Odoo behaviour. Holds locks for the whole import duration.
- No batching. A loop that calls record.create(...) 50K times in one transaction (see the sketch after this list).
- Compute fields firing on every row. Each insert recomputes dependent computed fields, multiplying the work.
- Indexes updated per row. Odoo's auto-created indexes update synchronously. Bulk imports work better with indexes temporarily dropped.
- onchange chains running per row. Server-side imports skip onchange, but the Web UI import does not; UI imports call onchange for every row.
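To make the no-batching item concrete, a minimal sketch run from an Odoo shell (where env is available; all_rows is an illustrative list of value dicts). Odoo's create() accepts a list, so one call can insert a whole batch:
# Anti-pattern: one create() per row, 50K times, all in one transaction.
for vals in all_rows:
    env['sale.order'].create(vals)

# Better: one create() call per batch of value dicts.
env['sale.order'].create(all_rows[:1000])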
Step-by-Step Diagnosis
1. Identify what is blocking what.
SELECT blocking.pid AS blocker_pid,
blocked.pid AS blocked_pid,
blocking.query AS blocker_query,
blocked.query AS blocked_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking ON blocking.pid = ANY(pg_blocking_pids(blocked.pid));
Each row shows one transaction blocking another. The blocker is the import.
2. Check the import's transaction size.
SELECT pid, xact_start, now() - xact_start AS xact_duration, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_duration DESC LIMIT 10;
Long-running transactions (over 5 minutes for an import) hold locks for that whole window.
3. Check WAL pile-up.
SELECT pg_walfile_name(pg_current_wal_lsn());
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0') AS bytes;
Compare this against the last archived position (pg_stat_archiver.last_archived_wal); a growing gap means WAL is piling up. Imports flood WAL.
4. Check disk I/O.
iostat -x 5 5
If %util on the database disk sits at 100%, the disk is the bottleneck. The import is throttled by storage, not lock contention.
5. Identify which table. Most imports lock one table. Check pg_locks:
SELECT relation::regclass AS table, mode, count(*)
FROM pg_locks
WHERE locktype = 'relation' AND mode LIKE '%Lock'
GROUP BY relation::regclass, mode
ORDER BY count DESC;
Permanent Fix
Batch every import. For UI imports, split the source CSV into chunks before uploading. For programmatic imports, commit per batch:
import logging

_logger = logging.getLogger(__name__)

def import_orders(self, rows, batch_size=500):
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        with self.env.cr.savepoint():  # a failed batch rolls back on its own
            self.env['sale.order'].create([self._prepare(r) for r in batch])
        self.env.cr.commit()  # release locks between batches
        _logger.info("Imported %d/%d", i + len(batch), len(rows))
cr.commit() between batches releases all locks, lets other users proceed, then the next batch acquires fresh locks.
Disable computes during bulk import:
def import_orders(self, rows):
    self.env['sale.order'].with_context(
        tracking_disable=True,        # skip mail.thread chatter
        mail_create_nolog=True,
        mail_create_nosubscribe=True,
        mail_notrack=True,
        no_compute=True,              # if your model supports it
    ).create([...])
tracking_disable=True skips mail.thread chatter creation — a huge win for any model that inherits mail.thread (which is most of Odoo).
For very large imports (millions of rows), use COPY directly:
def bulk_load(self, csv_path):
    with open(csv_path) as f:
        self.env.cr.copy_expert(
            "COPY temp_import (col1, col2) FROM STDIN WITH CSV HEADER",
            f,
        )
    # then INSERT INTO sale_order SELECT ... FROM temp_import in batches
PostgreSQL's COPY is 10x to 100x faster than INSERT for bulk load. Skip Odoo's ORM for the load phase, then run a separate "post-process" pass for compute fields.
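A hedged sketch of that post-load pass, assuming the staging table has its own id column; the target column list and the 5,000-row slice size are illustrative, not a complete sale_order column set:
def post_load(self):
    cr = self.env.cr
    while True:
        # Move one slice from staging into the real table, then commit
        # so locks are released between slices.
        cr.execute("""
            WITH moved AS (
                DELETE FROM temp_import
                WHERE id IN (SELECT id FROM temp_import LIMIT 5000)
                RETURNING col1, col2
            )
            INSERT INTO sale_order (col1, col2)
            SELECT col1, col2 FROM moved
        """)
        if cr.rowcount == 0:
            break
        cr.commit()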
Use queue_job for async imports:
from odoo import models

class ImportRunner(models.Model):
    _name = 'import.runner'
    _description = 'Batched import runner'

    def import_batch(self, csv_chunk):
        for row in csv_chunk:
            self.env['sale.order'].create(self._prepare(row))

    def enqueue(self, chunks):
        # queue_job (OCA) on modern Odoo: delay any method with
        # with_delay(); the old @job decorator is no longer needed.
        for chunk in chunks:
            self.with_delay().import_batch(chunk)
The queue worker runs imports outside the user's session — the user's web request completes immediately, the actual import runs in the background, locks are held for milliseconds at a time.
How to Prevent It
- Default to batched imports. Make 1000-row batches the team norm. Anything larger is a code smell.
- Use tracking_disable=True on every bulk import. Nothing slows imports more than mail.thread side effects.
- Run imports off-hours. Even with batching, a million-row import still runs long; schedule it for low-usage windows.
- Use queue_job for user-triggered imports. Web users trigger an async job; the import runs without blocking their session or other users.
- Monitor lock contention. pg_stat_activity queries flagged via Prometheus alerts catch slow imports before users complain (a cron-based sketch follows this list).
- Test imports at scale. Test with a 100K-row sample, not 100 rows. Issues only surface at scale.
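For the monitoring item, a sketch of a scheduled check you could wire into alerting; the 30-second threshold and the model housing the cron are assumptions:
import logging

_logger = logging.getLogger(__name__)

def _cron_check_lock_waits(self):
    # Count backends that have been waiting on a lock for over 30 seconds.
    self.env.cr.execute("""
        SELECT count(*)
        FROM pg_stat_activity
        WHERE wait_event_type = 'Lock'
          AND now() - query_start > interval '30 seconds'
    """)
    waiters = self.env.cr.fetchone()[0]
    if waiters:
        _logger.warning("%d backend(s) waiting on locks for over 30s", waiters)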
Related Errors
- Slow list view on > 1M rows — adjacent table-size performance issue.
- Cron job stuck running — what async imports look like when they go wrong.
- PostgreSQL deadlock during stock move — sibling concurrency issue.
- Too many PostgreSQL connections — long imports can saturate the pool.
Frequently Asked Questions
Can I just raise PostgreSQL's lock timeout?
That makes the symptom less visible, not the problem smaller. Other transactions wait longer instead of erroring. Better to fix the import to hold locks for less time.
Why does Odoo's UI import lock everything when CSV import via shell does not?
UI imports run inside a single Odoo request transaction and call onchange for every row. Shell imports via cr.copy_expert or batched scripts skip onchange and commit per batch. UI imports are user-friendly but slow at scale.
Should I use LOCK TABLE to fail fast?
Sometimes, yes. If you want the import to either run cleanly or fail immediately rather than queue behind users, LOCK TABLE sale_order IN SHARE ROW EXCLUSIVE MODE NOWAIT at the start of the import gives you exactly that behaviour. Fail-fast imports are easier to coordinate than queued ones.
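In an Odoo import method, that looks roughly like the sketch below; psycopg2 raises LockNotAvailable (SQLSTATE 55P03) when NOWAIT cannot get the lock, and the error message is illustrative:
from psycopg2 import errors
from odoo.exceptions import UserError

def import_orders_exclusive(self, rows):
    try:
        # Fail immediately if anyone holds a conflicting lock.
        self.env.cr.execute(
            "LOCK TABLE sale_order IN SHARE ROW EXCLUSIVE MODE NOWAIT")
    except errors.LockNotAvailable:
        raise UserError("sale_order is busy; retry the import later.")
    # ... proceed with the batched import as above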
How do I import 10 million rows reliably?
Multi-phase: COPY raw data into a staging table, run validations on the staging table, INSERT in batches into Odoo's actual model with tracking_disable=True and no_compute=True, then a final pass that computes any cached fields. ECOSIRE's Odoo support team handles this kind of bulk migration regularly with tested scripts.
Can I disable indexes during import and rebuild after?
Yes, though PostgreSQL has no ALTER INDEX ... DISABLE; the practical route is to drop the index and recreate it afterwards. On tables with many indexes this can be 5-10x faster:
DROP INDEX idx_x;
-- run import
CREATE INDEX idx_x ON sale_order (col1);  -- recreate with the original definition
Costs you the index during import (queries that needed it slow down) but speeds up writes dramatically. Worth it for one-time imports of millions of rows; not worth it for routine imports.
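If you do drop indexes, capture their definitions first so the rebuild is mechanical; pg_indexes.indexdef stores the full CREATE INDEX statement. A small sketch (the default table name is illustrative):
def snapshot_indexes(cr, table='sale_order'):
    # Collect CREATE INDEX statements to replay after the import.
    cr.execute(
        "SELECT indexdef FROM pg_indexes WHERE tablename = %s", (table,))
    return [row[0] for row in cr.fetchall()]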
Why does my import slow down as it progresses?
Two causes. First, growing indexes — each insert updates more index pages. Second, ORM cache bloat — without invalidate_all() between batches, memory pressure makes everything slower. Both are addressed by smaller batches with explicit invalidation.
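Concretely, the batched loop shown earlier gains one line per batch; env.invalidate_all() is the cache-clearing call on recent Odoo versions (rows and batch_size carry over from that sketch):
for i in range(0, len(rows), batch_size):
    batch = rows[i:i + batch_size]
    self.env['sale.order'].create([self._prepare(r) for r in batch])
    self.env.cr.commit()       # release locks
    self.env.invalidate_all()  # drop the ORM cache so memory stays flat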
Need help with a tricky Odoo error? ECOSIRE's Odoo experts have shipped 215+ modules — get expert help.
Author
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.