Your Odoo cron worker starts at 200 MB RSS, climbs to 2 GB by the next day, and to 6 GB before getting OOM-killed. The pattern repeats every cycle. Your scheduled jobs run fine for the first few hours after a restart, then everything slows down. The log shows:
WARNING: Worker (12345) consumed 5872MB, exceeding limit_memory_soft (4096MB), restarting
Memory growth in long-running cron processes is a real bug class on Odoo 17.0/18.0/19.0 — almost always caused by code that keeps building up in-memory state without releasing it.
Quick Fix
Set a hard memory limit so workers self-recycle before becoming unhealthy:
# odoo.conf
# 2 GB: worker finishes its current job, then recycles gracefully
limit_memory_soft = 2147483648
# 3 GB: hard ceiling, worker is killed
limit_memory_hard = 3221225472
Restart Odoo. Workers above the soft limit finish their current job and exit; the worker pool spawns a fresh replacement automatically. This buys you time while you find the leak.
Why This Happens
Long-running Python processes accumulate memory if they:
- Hold ORM cache for the entire process lifetime. Odoo's ORM cache grows with each record accessed. Without periodic invalidation, a cron processing millions of records caches all of them.
- Build large lists in memory. A cron that does all_records = self.env['x'].search([]) on a 5M-row table allocates 5M Python objects before it can iterate.
- Leak mail.message chatter. Every record.create() creates chatter messages; even with tracking_disable=True, cached references can linger.
- Hold open file handles, sockets, or external API responses. A cron calling an external API and not closing connections leaks resources.
- Custom code with global state. A module that does _cache = {} at module level and keeps adding to it never releases memory.
Step-by-Step Diagnosis
1. Confirm the leak.
ps -eo pid,rss,cmd --sort=-rss | grep odoo-bin | head
Run hourly. If RSS grows monotonically, you have a leak. If it plateaus and bounces, you do not — the worker is just sized for its workload.
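If you want the hourly samples taken automatically, here is a minimal Python sketch, assuming Linux and a process command line containing odoo-bin (adjust the match for your setup):
import pathlib
import time

def rss_mb(pid):
    # VmRSS in /proc/<pid>/status is reported in kB
    for line in pathlib.Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1]) // 1024
    return 0

def odoo_pids():
    pids = []
    for entry in pathlib.Path("/proc").iterdir():
        if not entry.name.isdigit():
            continue
        try:
            cmdline = (entry / "cmdline").read_bytes().decode(errors="replace")
        except OSError:
            continue  # process exited between listing and reading
        if "odoo-bin" in cmdline:
            pids.append(int(entry.name))
    return pids

while True:
    for pid in odoo_pids():
        print(time.strftime("%Y-%m-%d %H:%M"), pid, rss_mb(pid), "MB")
    time.sleep(3600)  # sample hourly, matching the ps check above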
2. Identify which cron is leaking. Disable crons one by one, or in groups, and watch the RSS growth pattern. The cron whose disable stops the growth is the culprit.
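From odoo shell, a minimal sketch for bisecting (SUSPECT_ID is a placeholder; pick it from the printed list):
# list active crons with their schedules
for cron in env['ir.cron'].search([('active', '=', True)]):
    print(cron.id, cron.name, cron.nextcall)

# disable one suspect, commit, then watch RSS for a few cycles
env['ir.cron'].browse(SUSPECT_ID).write({'active': False})
env.cr.commit()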
3. Profile with tracemalloc. Add to a test version of the cron:
import gc
import logging
import tracemalloc

_logger = logging.getLogger(__name__)

def cron_with_leak_check(self):
    tracemalloc.start()
    snap_before = tracemalloc.take_snapshot()
    self._do_actual_work()
    gc.collect()
    snap_after = tracemalloc.take_snapshot()
    diffs = snap_after.compare_to(snap_before, 'lineno')
    for stat in diffs[:10]:  # ten biggest allocation growths
        _logger.info(stat)
Top entries point at the lines allocating without releasing.
4. Check ORM cache size.
@api.model
def _log_cache_size(self):
    cache = self.env.cache
    # cache._data is a private attribute: {field: {record_id: value}}
    sizes = {field: len(values) for field, values in cache._data.items()}
    _logger.info("Cache: %s", sorted(sizes.items(), key=lambda x: -x[1])[:10])
A growing cache without bounds is your culprit.
5. Check for unclosed connections in your cron's external API code:
ls /proc/<pid>/fd | wc -l
If file descriptor count grows monotonically, you are leaking sockets or files.
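The same check from Python, if you are already scripting the hourly samples (a Linux-only sketch):
import os

def fd_count(pid):
    # one entry per open file descriptor under /proc/<pid>/fd
    return len(os.listdir(f"/proc/{pid}/fd"))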
Permanent Fix
Invalidate ORM cache periodically:
def cron_process_records(self):
    batch_size = 1000
    while True:
        # assuming _process() marks records done: processed rows drop
        # out of the domain, so always take the first unprocessed batch;
        # advancing an offset here would skip records
        records = self.search([('done', '=', False)], limit=batch_size)
        if not records:
            break
        self._process(records)
        self.env.cr.commit()
        self.env.invalidate_all()  # critical: drop the cache
invalidate_all() clears the per-cursor cache. Combined with periodic commits, this caps memory growth.
Process in batches instead of building full lists (a generator variant follows the example):
# WRONG: loads all records into memory
all_records = self.search([])
for rec in all_records:
    self._process(rec)

# RIGHT: process in batches
batch_size = 1000
offset = 0
while True:
    batch = self.search([], limit=batch_size, offset=offset)
    if not batch:
        break
    self._process(batch)
    self.env.invalidate_all()
    offset += batch_size
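If you want an actual generator, here is a hedged keyset-pagination sketch (the helper name is illustrative); it also avoids the page drift that offset paging suffers when rows change mid-run:
def _iter_batches(self, domain, batch_size=1000):
    # keyset pagination: resume after the last seen id instead of
    # re-counting rows with an offset
    last_id = 0
    while True:
        batch = self.search(
            domain + [('id', '>', last_id)],
            limit=batch_size, order='id',
        )
        if not batch:
            return
        last_id = batch[-1].id  # record ids survive cache invalidation
        yield batch

def cron_process_all(self):
    for batch in self._iter_batches([]):
        self._process(batch)
        self.env.cr.commit()
        self.env.invalidate_all()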
Disable mail.thread tracking in bulk crons:
def cron_bulk_update(self):
    self.with_context(
        tracking_disable=True,
        mail_create_nolog=True,
        mail_notrack=True,
    ).write({'state': 'done'})
mail.thread chatter is the single biggest source of accidental memory retention in Odoo crons.
Close external connections explicitly:
import requests

def cron_call_api(self):
    with requests.Session() as session:
        for record in self.search([]):
            response = session.get(f"https://api.example.com/{record.ref}")
            try:
                record.api_response = response.json()
            finally:
                response.close()
with blocks and explicit .close() calls prevent socket leaks.
Avoid module-level mutable state. Replace any _cache = {} at module scope with a model-level cache that gets cleared on commit:
from odoo import api, models, tools

class MyModel(models.Model):
    _name = 'my.model'

    @api.model
    @tools.ormcache('key')
    def _expensive_lookup(self, key):
        return self._do_lookup(key)
Odoo's ormcache is a bounded, registry-level cache that the framework can clear (for example via registry cache invalidation). Module-level dicts are unbounded and never cleared.
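For contrast, the anti-pattern the review rule should catch (a hypothetical helper; names are illustrative):
# WRONG: this dict lives as long as the worker process and only grows
_partner_ids = {}

def lookup_partner(env, ref):
    if ref not in _partner_ids:
        _partner_ids[ref] = env['res.partner'].search(
            [('ref', '=', ref)], limit=1).id
    return _partner_ids[ref]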
Reduce limit_memory_soft and limit_memory_hard. Even after fixing leaks, set conservative limits so any lingering issue is contained. A worker that recycles every 4 hours after processing 1000 jobs is a healthy worker.
How to Prevent It
- Periodic invalidate_all() in long crons. Make this part of every cron's structure: process batch, commit, invalidate, next.
- tracking_disable=True on all bulk operations. Make it a code-review rule. Bulk writes/creates without it default to leaking mail chatter cache.
- Test crons under realistic data. A cron tested on 1000 records may pass; the same cron on 1M records may leak. Use real-scale data in staging.
- Memory monitoring. Track Prometheus's process_resident_memory_bytes per Odoo worker. Alert on growth above N MB/hour.
- Limit memory aggressively. Set limit_memory_hard lower than your gut suggests. The worker pool recycles automatically; recycle frequency is cheap.
- No module-level mutable state. Code-review rule. Module-level state is per-process and cumulative.
Related Errors
- Cron job stuck running — adjacent issue, often related to the same long-running jobs.
- Database locked during import — sibling resource problem.
- Too many PostgreSQL connections — what happens when leaking workers also leak connections.
- Slow list view on > 1M rows — same large-data root, different symptom.
Frequently Asked Questions
What is a normal Odoo worker memory footprint?
Healthy production: 300 to 800 MB per worker. Above 1.5 GB consistently, you have either a leak or a workload that warrants its own dedicated worker. Web workers and cron workers can have different profiles; tune separately.
Does gc.collect() help?
Sometimes. Python's cyclic garbage collector frees reference cycles that reference counting alone cannot. Forcing a gc.collect() between batches releases that memory. But the real fix is the underlying issue: your cron holds references.
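A minimal sketch of where the call belongs, reusing the batch loop from the Permanent Fix section:
import gc

def cron_process_records(self):
    while True:
        records = self.search([('done', '=', False)], limit=1000)
        if not records:
            break
        self._process(records)
        self.env.cr.commit()
        self.env.invalidate_all()
        gc.collect()  # frees cycles that reference counting alone cannot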
Can I run crons with their own Python process settings?
Yes. Run a separate Odoo instance with only --max-cron-threads=4 --workers=0 and stricter memory limits. The web workers stay tuned for low-latency requests; the cron instance is tuned for batch work.
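A sketch of the cron instance's config, assuming the web instance keeps its own file (values are starting points, not prescriptions):
# odoo-cron.conf
workers = 0
max_cron_threads = 4
# stricter than the web workers: 1 GB soft, 1.5 GB hard
limit_memory_soft = 1073741824
limit_memory_hard = 1610612736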
My memory grows even after fixing the obvious leaks. What now?
Try tracemalloc (in step 3 of diagnosis) for line-level allocation tracking. Also enable Python's -X tracemalloc=10 for deeper traces. The OCA queue_job worker is a battle-tested process model — switching long-running work there often eliminates the issue by structure, not by hunting individual leaks.
Should limit_memory_soft be lower than the OOM killer threshold?
Yes, by a wide margin. OS OOM-kill is a hard SIGKILL — workers cannot clean up. Odoo's limit_memory_soft triggers a graceful recycle (worker finishes current request then exits). Set Odoo's hard limit at 60 percent of available RAM divided by worker count, and the OS OOM threshold at 85 percent. Plenty of headroom either way.
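The arithmetic, worked for a hypothetical 16 GB host with 8 workers:
# 60% of RAM divided across workers gives the per-worker hard limit
ram_bytes = 16 * 1024**3            # 17179869184
workers = 8
limit_memory_hard = int(0.60 * ram_bytes / workers)
print(limit_memory_hard)            # 1288490188, about 1.2 GB per worker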
How do I know if tracking_disable=True is enough?
Profile before and after. The single biggest contributor to mail.thread cache memory is automatic chatter creation on writes. Setting tracking_disable=True skips that. If you still see cache growth after this flag, look at custom inheritances of _message_post or _track_subtype.
Need help with a tricky Odoo error? ECOSIRE's Odoo experts have shipped 215+ modules — get expert help.