odoo-addon-queue-job


Nameodoo-addon-queue-job JSON
Version 18.0.1.1.1 PyPI version JSON
download
home_pagehttps://github.com/OCA/queue
SummaryJob Queue
upload_time2024-12-21 16:20:39
maintainerNone
docs_urlNone
authorCamptocamp,ACSONE SA/NV,Odoo Community Association (OCA)
requires_python>=3.10
licenseLGPL-3
keywords
VCS
bugtrack_url
requirements requests
Travis-CI No Travis.
coveralls test coverage No coveralls.
            =========
Job Queue
=========

.. 
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! This file is generated by oca-gen-addon-readme !!
   !! changes will be overwritten.                   !!
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
   !! source digest: sha256:a70b92466f87890c5806c5ddded30d2290f2492e47073493ec557672ee6b67b6
   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

.. |badge1| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png
    :target: https://odoo-community.org/page/development-status
    :alt: Mature
.. |badge2| image:: https://img.shields.io/badge/licence-LGPL--3-blue.png
    :target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
    :alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fqueue-lightgray.png?logo=github
    :target: https://github.com/OCA/queue/tree/18.0/queue_job
    :alt: OCA/queue
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
    :target: https://translation.odoo-community.org/projects/queue-18-0/queue-18-0-queue_job
    :alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
    :target: https://runboat.odoo-community.org/builds?repo=OCA/queue&target_branch=18.0
    :alt: Try me on Runboat

|badge1| |badge2| |badge3| |badge4| |badge5|

This addon adds an integrated Job Queue to Odoo.

It allows to postpone method calls executed asynchronously.

Jobs are executed in the background by a ``Jobrunner``, in their own
transaction.

Example:

.. code:: python

   from odoo import models, fields, api

   class MyModel(models.Model):
      _name = 'my.model'

      def my_method(self, a, k=None):
          _logger.info('executed with a: %s and k: %s', a, k)


   class MyOtherModel(models.Model):
       _name = 'my.other.model'

       def button_do_stuff(self):
           self.env['my.model'].with_delay().my_method('a', k=2)

In the snippet of code above, when we call ``button_do_stuff``, a job
**capturing the method and arguments** will be postponed. It will be
executed as soon as the Jobrunner has a free bucket, which can be
instantaneous if no other job is running.

Features:

- Views for jobs, jobs are stored in PostgreSQL
- Jobrunner: execute the jobs, highly efficient thanks to PostgreSQL's
  NOTIFY
- Channels: give a capacity for the root channel and its sub-channels
  and segregate jobs in them. Allow for instance to restrict heavy jobs
  to be executed one at a time while little ones are executed 4 at a
  times.
- Retries: Ability to retry jobs by raising a type of exception
- Retry Pattern: the 3 first tries, retry after 10 seconds, the 5 next
  tries, retry after 1 minutes, ...
- Job properties: priorities, estimated time of arrival (ETA), custom
  description, number of retries
- Related Actions: link an action on the job view, such as open the
  record concerned by the job

**Table of contents**

.. contents::
   :local:

Installation
============

Be sure to have the ``requests`` library.

Configuration
=============

- Using environment variables and command line:

  - Adjust environment variables (optional):

    - ``ODOO_QUEUE_JOB_CHANNELS=root:4`` or any other channels
      configuration. The default is ``root:1``
    - if ``xmlrpc_port`` is not set: ``ODOO_QUEUE_JOB_PORT=8069``

  - Start Odoo with ``--load=web,queue_job`` and ``--workers`` greater
    than 1. [1]_

- Using the Odoo configuration file:

.. code:: ini

   [options]
   (...)
   workers = 6
   server_wide_modules = web,queue_job

   (...)
   [queue_job]
   channels = root:2

- Confirm the runner is starting correctly by checking the odoo log
  file:

::

   ...INFO...queue_job.jobrunner.runner: starting
   ...INFO...queue_job.jobrunner.runner: initializing database connections
   ...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>
   ...INFO...queue_job.jobrunner.runner: database connections ready

- Create jobs (eg using ``base_import_async``) and observe they start
  immediately and in parallel.
- Tip: to enable debug logging for the queue job, use
  ``--log-handler=odoo.addons.queue_job:DEBUG``

.. [1]
   It works with the threaded Odoo server too, although this way of
   running Odoo is obviously not for production purposes.

Usage
=====

To use this module, you need to:

1. Go to ``Job Queue`` menu

Developers
----------

Delaying jobs
~~~~~~~~~~~~~

The fast way to enqueue a job for a method is to use ``with_delay()`` on
a record or model:

.. code:: python

   def button_done(self):
       self.with_delay().print_confirmation_document(self.state)
       self.write({"state": "done"})
       return True

Here, the method ``print_confirmation_document()`` will be executed
asynchronously as a job. ``with_delay()`` can take several parameters to
define more precisely how the job is executed (priority, ...).

All the arguments passed to the method being delayed are stored in the
job and passed to the method when it is executed asynchronously,
including ``self``, so the current record is maintained during the job
execution (warning: the context is not kept).

Dependencies can be expressed between jobs. To start a graph of jobs,
use ``delayable()`` on a record or model. The following is the
equivalent of ``with_delay()`` but using the long form:

.. code:: python

   def button_done(self):
       delayable = self.delayable()
       delayable.print_confirmation_document(self.state)
       delayable.delay()
       self.write({"state": "done"})
       return True

Methods of Delayable objects return itself, so it can be used as a
builder pattern, which in some cases allow to build the jobs
dynamically:

.. code:: python

   def button_generate_simple_with_delayable(self):
       self.ensure_one()
       # Introduction of a delayable object, using a builder pattern
       # allowing to chain jobs or set properties. The delay() method
       # on the delayable object actually stores the delayable objects
       # in the queue_job table
       (
           self.delayable()
           .generate_thumbnail((50, 50))
           .set(priority=30)
           .set(description=_("generate xxx"))
           .delay()
       )

The simplest way to define a dependency is to use ``.on_done(job)`` on a
Delayable:

.. code:: python

   def button_chain_done(self):
       self.ensure_one()
       job1 = self.browse(1).delayable().generate_thumbnail((50, 50))
       job2 = self.browse(1).delayable().generate_thumbnail((50, 50))
       job3 = self.browse(1).delayable().generate_thumbnail((50, 50))
       # job 3 is executed when job 2 is done which is executed when job 1 is done
       job1.on_done(job2.on_done(job3)).delay()

Delayables can be chained to form more complex graphs using the
``chain()`` and ``group()`` primitives. A chain represents a sequence of
jobs to execute in order, a group represents jobs which can be executed
in parallel. Using ``chain()`` has the same effect as using several
nested ``on_done()`` but is more readable. Both can be combined to form
a graph, for instance we can group [A] of jobs, which blocks another
group [B] of jobs. When and only when all the jobs of the group [A] are
executed, the jobs of the group [B] are executed. The code would look
like:

.. code:: python

   from odoo.addons.queue_job.delay import group, chain

   def button_done(self):
       group_a = group(self.delayable().method_foo(), self.delayable().method_bar())
       group_b = group(self.delayable().method_baz(1), self.delayable().method_baz(2))
       chain(group_a, group_b).delay()
       self.write({"state": "done"})
       return True

When a failure happens in a graph of jobs, the execution of the jobs
that depend on the failed job stops. They remain in a state
``wait_dependencies`` until their "parent" job is successful. This can
happen in two ways: either the parent job retries and is successful on a
second try, either the parent job is manually "set to done" by a user.
In these two cases, the dependency is resolved and the graph will
continue to be processed. Alternatively, the failed job and all its
dependent jobs can be canceled by a user. The other jobs of the graph
that do not depend on the failed job continue their execution in any
case.

Note: ``delay()`` must be called on the delayable, chain, or group which
is at the top of the graph. In the example above, if it was called on
``group_a``, then ``group_b`` would never be delayed (but a warning
would be shown).

It is also possible to split a job into several jobs, each one
processing a part of the work. This can be useful to avoid very long
jobs, parallelize some task and get more specific errors. Usage is as
follows:

.. code:: python

   def button_split_delayable(self):
       (
           self  # Can be a big recordset, let's say 1000 records
           .delayable()
           .generate_thumbnail((50, 50))
           .set(priority=30)
           .set(description=_("generate xxx"))
           .split(50)  # Split the job in 20 jobs of 50 records each
           .delay()
       )

The ``split()`` method takes a ``chain`` boolean keyword argument. If
set to True, the jobs will be chained, meaning that the next job will
only start when the previous one is done:

.. code:: python

   def button_increment_var(self):
       (
           self
           .delayable()
           .increment_counter()
           .split(1, chain=True) # Will exceute the jobs one after the other
           .delay()
       )

Enqueing Job Options
~~~~~~~~~~~~~~~~~~~~

- priority: default is 10, the closest it is to 0, the faster it will be
  executed
- eta: Estimated Time of Arrival of the job. It will not be executed
  before this date/time
- max_retries: default is 5, maximum number of retries before giving up
  and set the job state to 'failed'. A value of 0 means infinite
  retries.
- description: human description of the job. If not set, description is
  computed from the function doc or method name
- channel: the complete name of the channel to use to process the
  function. If specified it overrides the one defined on the function
- identity_key: key uniquely identifying the job, if specified and a job
  with the same key has not yet been run, the new job will not be
  created

Configure default options for jobs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In earlier versions, jobs could be configured using the ``@job``
decorator. This is now obsolete, they can be configured using optional
``queue.job.function`` and ``queue.job.channel`` XML records.

Example of channel:

.. code:: XML

   <record id="channel_sale" model="queue.job.channel">
       <field name="name">sale</field>
       <field name="parent_id" ref="queue_job.channel_root" />
   </record>

Example of job function:

.. code:: XML

   <record id="job_function_sale_order_action_done" model="queue.job.function">
       <field name="model_id" ref="sale.model_sale_order" />
       <field name="method">action_done</field>
       <field name="channel_id" ref="channel_sale" />
       <field name="related_action" eval='{"func_name": "custom_related_action"}' />
       <field name="retry_pattern" eval="{1: 60, 2: 180, 3: 10, 5: 300}" />
   </record>

The general form for the ``name`` is: ``<model.name>.method``.

The channel, related action and retry pattern options are optional, they
are documented below.

When writing modules, if 2+ modules add a job function or channel with
the same name (and parent for channels), they'll be merged in the same
record, even if they have different xmlids. On uninstall, the merged
record is deleted when all the modules using it are uninstalled.

**Job function: model**

If the function is defined in an abstract model, you can not write
``<field name="model_id" ref="xml_id_of_the_abstract_model"</field>``
but you have to define a function for each model that inherits from the
abstract model.

**Job function: channel**

The channel where the job will be delayed. The default channel is
``root``.

**Job function: related action**

The *Related Action* appears as a button on the Job's view. The button
will execute the defined action.

The default one is to open the view of the record related to the job
(form view when there is a single record, list view for several
records). In many cases, the default related action is enough and
doesn't need customization, but it can be customized by providing a
dictionary on the job function:

.. code:: python

   {
       "enable": False,
       "func_name": "related_action_partner",
       "kwargs": {"name": "Partner"},
   }

- ``enable``: when ``False``, the button has no effect (default:
  ``True``)
- ``func_name``: name of the method on ``queue.job`` that returns an
  action
- ``kwargs``: extra arguments to pass to the related action method

Example of related action code:

.. code:: python

   class QueueJob(models.Model):
       _inherit = 'queue.job'

       def related_action_partner(self, name):
           self.ensure_one()
           model = self.model_name
           partner = self.records
           action = {
               'name': name,
               'type': 'ir.actions.act_window',
               'res_model': model,
               'view_type': 'form',
               'view_mode': 'form',
               'res_id': partner.id,
           }
           return action

**Job function: retry pattern**

When a job fails with a retryable error type, it is automatically
retried later. By default, the retry is always 10 minutes later.

A retry pattern can be configured on the job function. What a pattern
represents is "from X tries, postpone to Y seconds". It is expressed as
a dictionary where keys are tries and values are seconds to postpone as
integers:

.. code:: python

   {
       1: 10,
       5: 20,
       10: 30,
       15: 300,
   }

Based on this configuration, we can tell that:

- 5 first retries are postponed 10 seconds later
- retries 5 to 10 postponed 20 seconds later
- retries 10 to 15 postponed 30 seconds later
- all subsequent retries postponed 5 minutes later

**Job Context**

The context of the recordset of the job, or any recordset passed in
arguments of a job, is transferred to the job according to an
allow-list.

The default allow-list is ("tz", "lang", "allowed_company_ids",
"force_company", "active_test"). It can be customized in
``Base._job_prepare_context_before_enqueue_keys``. **Bypass jobs on
running Odoo**

When you are developing (ie: connector modules) you might want to bypass
the queue job and run your code immediately.

To do so you can set QUEUE_JOB\__NO_DELAY=1 in your enviroment.

**Bypass jobs in tests**

When writing tests on job-related methods is always tricky to deal with
delayed recordsets. To make your testing life easier you can set
queue_job\__no_delay=True in the context.

Tip: you can do this at test case level like this

.. code:: python

   @classmethod
   def setUpClass(cls):
       super().setUpClass()
       cls.env = cls.env(context=dict(
           cls.env.context,
           queue_job__no_delay=True,  # no jobs thanks
       ))

Then all your tests execute the job methods synchronously without
delaying any jobs.

Testing
~~~~~~~

**Asserting enqueued jobs**

The recommended way to test jobs, rather than running them directly and
synchronously is to split the tests in two parts:

   - one test where the job is mocked (trap jobs with ``trap_jobs()``
     and the test only verifies that the job has been delayed with the
     expected arguments
   - one test that only calls the method of the job synchronously, to
     validate the proper behavior of this method only

Proceeding this way means that you can prove that jobs will be enqueued
properly at runtime, and it ensures your code does not have a different
behavior in tests and in production (because running your jobs
synchronously may have a different behavior as they are in the same
transaction / in the middle of the method). Additionally, it gives more
control on the arguments you want to pass when calling the job's method
(synchronously, this time, in the second type of tests), and it makes
tests smaller.

The best way to run such assertions on the enqueued jobs is to use
``odoo.addons.queue_job.tests.common.trap_jobs()``.

A very small example (more details in ``tests/common.py``):

.. code:: python

   # code
   def my_job_method(self, name, count):
       self.write({"name": " ".join([name] * count)

   def method_to_test(self):
       count = self.env["other.model"].search_count([])
       self.with_delay(priority=15).my_job_method("Hi!", count=count)
       return count

   # tests
   from odoo.addons.queue_job.tests.common import trap_jobs

   # first test only check the expected behavior of the method and the proper
   # enqueuing of jobs
   def test_method_to_test(self):
       with trap_jobs() as trap:
           result = self.env["model"].method_to_test()
           expected_count = 12

           trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
           trap.assert_enqueued_job(
               self.env["model"].my_job_method,
               args=("Hi!",),
               kwargs=dict(count=expected_count),
               properties=dict(priority=15)
           )
           self.assertEqual(result, expected_count)


    # second test to validate the behavior of the job unitarily
    def test_my_job_method(self):
        record = self.env["model"].browse(1)
        record.my_job_method("Hi!", count=12)
        self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")

If you prefer, you can still test the whole thing in a single test, by
calling ``jobs_tester.perform_enqueued_jobs()`` in your test.

.. code:: python

   def test_method_to_test(self):
       with trap_jobs() as trap:
           result = self.env["model"].method_to_test()
           expected_count = 12

           trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
           trap.assert_enqueued_job(
               self.env["model"].my_job_method,
               args=("Hi!",),
               kwargs=dict(count=expected_count),
               properties=dict(priority=15)
           )
           self.assertEqual(result, expected_count)

           trap.perform_enqueued_jobs()

           record = self.env["model"].browse(1)
           record.my_job_method("Hi!", count=12)
           self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")

**Execute jobs synchronously when running Odoo**

When you are developing (ie: connector modules) you might want to bypass
the queue job and run your code immediately.

To do so you can set ``QUEUE_JOB__NO_DELAY=1`` in your environment.

Warning

Do not do this in production

**Execute jobs synchronously in tests**

You should use ``trap_jobs``, really, but if for any reason you could
not use it, and still need to have job methods executed synchronously in
your tests, you can do so by setting ``queue_job__no_delay=True`` in the
context.

Tip: you can do this at test case level like this

.. code:: python

   @classmethod
   def setUpClass(cls):
       super().setUpClass()
       cls.env = cls.env(context=dict(
           cls.env.context,
           queue_job__no_delay=True,  # no jobs thanks
       ))

Then all your tests execute the job methods synchronously without
delaying any jobs.

In tests you'll have to mute the logger like:

   @mute_logger('odoo.addons.queue_job.models.base')

Note

in graphs of jobs, the ``queue_job__no_delay`` context key must be in at
least one job's env of the graph for the whole graph to be executed
synchronously

Tips and tricks
~~~~~~~~~~~~~~~

- **Idempotency**
  (https://www.restapitutorial.com/lessons/idempotency.html): The
  queue_job should be idempotent so they can be retried several times
  without impact on the data.
- **The job should test at the very beginning its relevance**: the
  moment the job will be executed is unknown by design. So the first
  task of a job should be to check if the related work is still relevant
  at the moment of the execution.

Patterns
~~~~~~~~

Through the time, two main patterns emerged:

1. For data exposed to users, a model should store the data and the
   model should be the creator of the job. The job is kept hidden from
   the users
2. For technical data, that are not exposed to the users, it is
   generally alright to create directly jobs with data passed as
   arguments to the job, without intermediary models.

Known issues / Roadmap
======================

- After creating a new database or installing ``queue_job`` on an
  existing database, Odoo must be restarted for the runner to detect it.
- When Odoo shuts down normally, it waits for running jobs to finish.
  However, when the Odoo server crashes or is otherwise force-stopped,
  running jobs are interrupted while the runner has no chance to know
  they have been aborted. In such situations, jobs may remain in
  ``started`` or ``enqueued`` state after the Odoo server is halted.
  Since the runner has no way to know if they are actually running or
  not, and does not know for sure if it is safe to restart the jobs, it
  does not attempt to restart them automatically. Such stale jobs
  therefore fill the running queue and prevent other jobs to start. You
  must therefore requeue them manually, either from the Jobs view, or by
  running the following SQL statement *before starting Odoo*:

.. code:: sql

   update queue_job set state='pending' where state in ('started', 'enqueued')

Changelog
=========

Next
----

- [ADD] Run jobrunner as a worker process instead of a thread in the
  main process (when running with --workers > 0)
- [REF] ``@job`` and ``@related_action`` deprecated, any method can be
  delayed, and configured using ``queue.job.function`` records
- [MIGRATION] from 13.0 branched at rev. e24ff4b

Bug Tracker
===========

Bugs are tracked on `GitHub Issues <https://github.com/OCA/queue/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/queue/issues/new?body=module:%20queue_job%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.

Do not contact contributors directly about support or help with technical issues.

Credits
=======

Authors
-------

* Camptocamp
* ACSONE SA/NV

Contributors
------------

- Guewen Baconnier <guewen.baconnier@camptocamp.com>
- Stéphane Bidoul <stephane.bidoul@acsone.eu>
- Matthieu Dietrich <matthieu.dietrich@camptocamp.com>
- Jos De Graeve <Jos.DeGraeve@apertoso.be>
- David Lefever <dl@taktik.be>
- Laurent Mignon <laurent.mignon@acsone.eu>
- Laetitia Gangloff <laetitia.gangloff@acsone.eu>
- Cédric Pigeon <cedric.pigeon@acsone.eu>
- Tatiana Deribina <tatiana.deribina@avoin.systems>
- Souheil Bejaoui <souheil.bejaoui@acsone.eu>
- Eric Antones <eantones@nuobit.com>
- Simone Orsi <simone.orsi@camptocamp.com>
- Nguyen Minh Chien <chien@trobz.com>
- Tran Quoc Duong <duongtq@trobz.com>
- Vo Hong Thien <thienvh@trobz.com>

Other credits
-------------

The migration of this module from 17.0 to 18.0 was financially supported
by Camptocamp.

Maintainers
-----------

This module is maintained by the OCA.

.. image:: https://odoo-community.org/logo.png
   :alt: Odoo Community Association
   :target: https://odoo-community.org

OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.

.. |maintainer-guewen| image:: https://github.com/guewen.png?size=40px
    :target: https://github.com/guewen
    :alt: guewen

Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:

|maintainer-guewen| 

This module is part of the `OCA/queue <https://github.com/OCA/queue/tree/18.0/queue_job>`_ project on GitHub.

You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/OCA/queue",
    "name": "odoo-addon-queue-job",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": "Camptocamp,ACSONE SA/NV,Odoo Community Association (OCA)",
    "author_email": "support@odoo-community.org",
    "download_url": null,
    "platform": null,
    "description": "=========\nJob Queue\n=========\n\n.. \n   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n   !! This file is generated by oca-gen-addon-readme !!\n   !! changes will be overwritten.                   !!\n   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n   !! source digest: sha256:a70b92466f87890c5806c5ddded30d2290f2492e47073493ec557672ee6b67b6\n   !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\n.. |badge1| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png\n    :target: https://odoo-community.org/page/development-status\n    :alt: Mature\n.. |badge2| image:: https://img.shields.io/badge/licence-LGPL--3-blue.png\n    :target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html\n    :alt: License: LGPL-3\n.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fqueue-lightgray.png?logo=github\n    :target: https://github.com/OCA/queue/tree/18.0/queue_job\n    :alt: OCA/queue\n.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png\n    :target: https://translation.odoo-community.org/projects/queue-18-0/queue-18-0-queue_job\n    :alt: Translate me on Weblate\n.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png\n    :target: https://runboat.odoo-community.org/builds?repo=OCA/queue&target_branch=18.0\n    :alt: Try me on Runboat\n\n|badge1| |badge2| |badge3| |badge4| |badge5|\n\nThis addon adds an integrated Job Queue to Odoo.\n\nIt allows to postpone method calls executed asynchronously.\n\nJobs are executed in the background by a ``Jobrunner``, in their own\ntransaction.\n\nExample:\n\n.. code:: python\n\n   from odoo import models, fields, api\n\n   class MyModel(models.Model):\n      _name = 'my.model'\n\n      def my_method(self, a, k=None):\n          _logger.info('executed with a: %s and k: %s', a, k)\n\n\n   class MyOtherModel(models.Model):\n       _name = 'my.other.model'\n\n       def button_do_stuff(self):\n           self.env['my.model'].with_delay().my_method('a', k=2)\n\nIn the snippet of code above, when we call ``button_do_stuff``, a job\n**capturing the method and arguments** will be postponed. It will be\nexecuted as soon as the Jobrunner has a free bucket, which can be\ninstantaneous if no other job is running.\n\nFeatures:\n\n- Views for jobs, jobs are stored in PostgreSQL\n- Jobrunner: execute the jobs, highly efficient thanks to PostgreSQL's\n  NOTIFY\n- Channels: give a capacity for the root channel and its sub-channels\n  and segregate jobs in them. Allow for instance to restrict heavy jobs\n  to be executed one at a time while little ones are executed 4 at a\n  times.\n- Retries: Ability to retry jobs by raising a type of exception\n- Retry Pattern: the 3 first tries, retry after 10 seconds, the 5 next\n  tries, retry after 1 minutes, ...\n- Job properties: priorities, estimated time of arrival (ETA), custom\n  description, number of retries\n- Related Actions: link an action on the job view, such as open the\n  record concerned by the job\n\n**Table of contents**\n\n.. contents::\n   :local:\n\nInstallation\n============\n\nBe sure to have the ``requests`` library.\n\nConfiguration\n=============\n\n- Using environment variables and command line:\n\n  - Adjust environment variables (optional):\n\n    - ``ODOO_QUEUE_JOB_CHANNELS=root:4`` or any other channels\n      configuration. The default is ``root:1``\n    - if ``xmlrpc_port`` is not set: ``ODOO_QUEUE_JOB_PORT=8069``\n\n  - Start Odoo with ``--load=web,queue_job`` and ``--workers`` greater\n    than 1. [1]_\n\n- Using the Odoo configuration file:\n\n.. code:: ini\n\n   [options]\n   (...)\n   workers = 6\n   server_wide_modules = web,queue_job\n\n   (...)\n   [queue_job]\n   channels = root:2\n\n- Confirm the runner is starting correctly by checking the odoo log\n  file:\n\n::\n\n   ...INFO...queue_job.jobrunner.runner: starting\n   ...INFO...queue_job.jobrunner.runner: initializing database connections\n   ...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>\n   ...INFO...queue_job.jobrunner.runner: database connections ready\n\n- Create jobs (eg using ``base_import_async``) and observe they start\n  immediately and in parallel.\n- Tip: to enable debug logging for the queue job, use\n  ``--log-handler=odoo.addons.queue_job:DEBUG``\n\n.. [1]\n   It works with the threaded Odoo server too, although this way of\n   running Odoo is obviously not for production purposes.\n\nUsage\n=====\n\nTo use this module, you need to:\n\n1. Go to ``Job Queue`` menu\n\nDevelopers\n----------\n\nDelaying jobs\n~~~~~~~~~~~~~\n\nThe fast way to enqueue a job for a method is to use ``with_delay()`` on\na record or model:\n\n.. code:: python\n\n   def button_done(self):\n       self.with_delay().print_confirmation_document(self.state)\n       self.write({\"state\": \"done\"})\n       return True\n\nHere, the method ``print_confirmation_document()`` will be executed\nasynchronously as a job. ``with_delay()`` can take several parameters to\ndefine more precisely how the job is executed (priority, ...).\n\nAll the arguments passed to the method being delayed are stored in the\njob and passed to the method when it is executed asynchronously,\nincluding ``self``, so the current record is maintained during the job\nexecution (warning: the context is not kept).\n\nDependencies can be expressed between jobs. To start a graph of jobs,\nuse ``delayable()`` on a record or model. The following is the\nequivalent of ``with_delay()`` but using the long form:\n\n.. code:: python\n\n   def button_done(self):\n       delayable = self.delayable()\n       delayable.print_confirmation_document(self.state)\n       delayable.delay()\n       self.write({\"state\": \"done\"})\n       return True\n\nMethods of Delayable objects return itself, so it can be used as a\nbuilder pattern, which in some cases allow to build the jobs\ndynamically:\n\n.. code:: python\n\n   def button_generate_simple_with_delayable(self):\n       self.ensure_one()\n       # Introduction of a delayable object, using a builder pattern\n       # allowing to chain jobs or set properties. The delay() method\n       # on the delayable object actually stores the delayable objects\n       # in the queue_job table\n       (\n           self.delayable()\n           .generate_thumbnail((50, 50))\n           .set(priority=30)\n           .set(description=_(\"generate xxx\"))\n           .delay()\n       )\n\nThe simplest way to define a dependency is to use ``.on_done(job)`` on a\nDelayable:\n\n.. code:: python\n\n   def button_chain_done(self):\n       self.ensure_one()\n       job1 = self.browse(1).delayable().generate_thumbnail((50, 50))\n       job2 = self.browse(1).delayable().generate_thumbnail((50, 50))\n       job3 = self.browse(1).delayable().generate_thumbnail((50, 50))\n       # job 3 is executed when job 2 is done which is executed when job 1 is done\n       job1.on_done(job2.on_done(job3)).delay()\n\nDelayables can be chained to form more complex graphs using the\n``chain()`` and ``group()`` primitives. A chain represents a sequence of\njobs to execute in order, a group represents jobs which can be executed\nin parallel. Using ``chain()`` has the same effect as using several\nnested ``on_done()`` but is more readable. Both can be combined to form\na graph, for instance we can group [A] of jobs, which blocks another\ngroup [B] of jobs. When and only when all the jobs of the group [A] are\nexecuted, the jobs of the group [B] are executed. The code would look\nlike:\n\n.. code:: python\n\n   from odoo.addons.queue_job.delay import group, chain\n\n   def button_done(self):\n       group_a = group(self.delayable().method_foo(), self.delayable().method_bar())\n       group_b = group(self.delayable().method_baz(1), self.delayable().method_baz(2))\n       chain(group_a, group_b).delay()\n       self.write({\"state\": \"done\"})\n       return True\n\nWhen a failure happens in a graph of jobs, the execution of the jobs\nthat depend on the failed job stops. They remain in a state\n``wait_dependencies`` until their \"parent\" job is successful. This can\nhappen in two ways: either the parent job retries and is successful on a\nsecond try, either the parent job is manually \"set to done\" by a user.\nIn these two cases, the dependency is resolved and the graph will\ncontinue to be processed. Alternatively, the failed job and all its\ndependent jobs can be canceled by a user. The other jobs of the graph\nthat do not depend on the failed job continue their execution in any\ncase.\n\nNote: ``delay()`` must be called on the delayable, chain, or group which\nis at the top of the graph. In the example above, if it was called on\n``group_a``, then ``group_b`` would never be delayed (but a warning\nwould be shown).\n\nIt is also possible to split a job into several jobs, each one\nprocessing a part of the work. This can be useful to avoid very long\njobs, parallelize some task and get more specific errors. Usage is as\nfollows:\n\n.. code:: python\n\n   def button_split_delayable(self):\n       (\n           self  # Can be a big recordset, let's say 1000 records\n           .delayable()\n           .generate_thumbnail((50, 50))\n           .set(priority=30)\n           .set(description=_(\"generate xxx\"))\n           .split(50)  # Split the job in 20 jobs of 50 records each\n           .delay()\n       )\n\nThe ``split()`` method takes a ``chain`` boolean keyword argument. If\nset to True, the jobs will be chained, meaning that the next job will\nonly start when the previous one is done:\n\n.. code:: python\n\n   def button_increment_var(self):\n       (\n           self\n           .delayable()\n           .increment_counter()\n           .split(1, chain=True) # Will exceute the jobs one after the other\n           .delay()\n       )\n\nEnqueing Job Options\n~~~~~~~~~~~~~~~~~~~~\n\n- priority: default is 10, the closest it is to 0, the faster it will be\n  executed\n- eta: Estimated Time of Arrival of the job. It will not be executed\n  before this date/time\n- max_retries: default is 5, maximum number of retries before giving up\n  and set the job state to 'failed'. A value of 0 means infinite\n  retries.\n- description: human description of the job. If not set, description is\n  computed from the function doc or method name\n- channel: the complete name of the channel to use to process the\n  function. If specified it overrides the one defined on the function\n- identity_key: key uniquely identifying the job, if specified and a job\n  with the same key has not yet been run, the new job will not be\n  created\n\nConfigure default options for jobs\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn earlier versions, jobs could be configured using the ``@job``\ndecorator. This is now obsolete, they can be configured using optional\n``queue.job.function`` and ``queue.job.channel`` XML records.\n\nExample of channel:\n\n.. code:: XML\n\n   <record id=\"channel_sale\" model=\"queue.job.channel\">\n       <field name=\"name\">sale</field>\n       <field name=\"parent_id\" ref=\"queue_job.channel_root\" />\n   </record>\n\nExample of job function:\n\n.. code:: XML\n\n   <record id=\"job_function_sale_order_action_done\" model=\"queue.job.function\">\n       <field name=\"model_id\" ref=\"sale.model_sale_order\" />\n       <field name=\"method\">action_done</field>\n       <field name=\"channel_id\" ref=\"channel_sale\" />\n       <field name=\"related_action\" eval='{\"func_name\": \"custom_related_action\"}' />\n       <field name=\"retry_pattern\" eval=\"{1: 60, 2: 180, 3: 10, 5: 300}\" />\n   </record>\n\nThe general form for the ``name`` is: ``<model.name>.method``.\n\nThe channel, related action and retry pattern options are optional, they\nare documented below.\n\nWhen writing modules, if 2+ modules add a job function or channel with\nthe same name (and parent for channels), they'll be merged in the same\nrecord, even if they have different xmlids. On uninstall, the merged\nrecord is deleted when all the modules using it are uninstalled.\n\n**Job function: model**\n\nIf the function is defined in an abstract model, you can not write\n``<field name=\"model_id\" ref=\"xml_id_of_the_abstract_model\"</field>``\nbut you have to define a function for each model that inherits from the\nabstract model.\n\n**Job function: channel**\n\nThe channel where the job will be delayed. The default channel is\n``root``.\n\n**Job function: related action**\n\nThe *Related Action* appears as a button on the Job's view. The button\nwill execute the defined action.\n\nThe default one is to open the view of the record related to the job\n(form view when there is a single record, list view for several\nrecords). In many cases, the default related action is enough and\ndoesn't need customization, but it can be customized by providing a\ndictionary on the job function:\n\n.. code:: python\n\n   {\n       \"enable\": False,\n       \"func_name\": \"related_action_partner\",\n       \"kwargs\": {\"name\": \"Partner\"},\n   }\n\n- ``enable``: when ``False``, the button has no effect (default:\n  ``True``)\n- ``func_name``: name of the method on ``queue.job`` that returns an\n  action\n- ``kwargs``: extra arguments to pass to the related action method\n\nExample of related action code:\n\n.. code:: python\n\n   class QueueJob(models.Model):\n       _inherit = 'queue.job'\n\n       def related_action_partner(self, name):\n           self.ensure_one()\n           model = self.model_name\n           partner = self.records\n           action = {\n               'name': name,\n               'type': 'ir.actions.act_window',\n               'res_model': model,\n               'view_type': 'form',\n               'view_mode': 'form',\n               'res_id': partner.id,\n           }\n           return action\n\n**Job function: retry pattern**\n\nWhen a job fails with a retryable error type, it is automatically\nretried later. By default, the retry is always 10 minutes later.\n\nA retry pattern can be configured on the job function. What a pattern\nrepresents is \"from X tries, postpone to Y seconds\". It is expressed as\na dictionary where keys are tries and values are seconds to postpone as\nintegers:\n\n.. code:: python\n\n   {\n       1: 10,\n       5: 20,\n       10: 30,\n       15: 300,\n   }\n\nBased on this configuration, we can tell that:\n\n- 5 first retries are postponed 10 seconds later\n- retries 5 to 10 postponed 20 seconds later\n- retries 10 to 15 postponed 30 seconds later\n- all subsequent retries postponed 5 minutes later\n\n**Job Context**\n\nThe context of the recordset of the job, or any recordset passed in\narguments of a job, is transferred to the job according to an\nallow-list.\n\nThe default allow-list is (\"tz\", \"lang\", \"allowed_company_ids\",\n\"force_company\", \"active_test\"). It can be customized in\n``Base._job_prepare_context_before_enqueue_keys``. **Bypass jobs on\nrunning Odoo**\n\nWhen you are developing (ie: connector modules) you might want to bypass\nthe queue job and run your code immediately.\n\nTo do so you can set QUEUE_JOB\\__NO_DELAY=1 in your enviroment.\n\n**Bypass jobs in tests**\n\nWhen writing tests on job-related methods is always tricky to deal with\ndelayed recordsets. To make your testing life easier you can set\nqueue_job\\__no_delay=True in the context.\n\nTip: you can do this at test case level like this\n\n.. code:: python\n\n   @classmethod\n   def setUpClass(cls):\n       super().setUpClass()\n       cls.env = cls.env(context=dict(\n           cls.env.context,\n           queue_job__no_delay=True,  # no jobs thanks\n       ))\n\nThen all your tests execute the job methods synchronously without\ndelaying any jobs.\n\nTesting\n~~~~~~~\n\n**Asserting enqueued jobs**\n\nThe recommended way to test jobs, rather than running them directly and\nsynchronously is to split the tests in two parts:\n\n   - one test where the job is mocked (trap jobs with ``trap_jobs()``\n     and the test only verifies that the job has been delayed with the\n     expected arguments\n   - one test that only calls the method of the job synchronously, to\n     validate the proper behavior of this method only\n\nProceeding this way means that you can prove that jobs will be enqueued\nproperly at runtime, and it ensures your code does not have a different\nbehavior in tests and in production (because running your jobs\nsynchronously may have a different behavior as they are in the same\ntransaction / in the middle of the method). Additionally, it gives more\ncontrol on the arguments you want to pass when calling the job's method\n(synchronously, this time, in the second type of tests), and it makes\ntests smaller.\n\nThe best way to run such assertions on the enqueued jobs is to use\n``odoo.addons.queue_job.tests.common.trap_jobs()``.\n\nA very small example (more details in ``tests/common.py``):\n\n.. code:: python\n\n   # code\n   def my_job_method(self, name, count):\n       self.write({\"name\": \" \".join([name] * count)\n\n   def method_to_test(self):\n       count = self.env[\"other.model\"].search_count([])\n       self.with_delay(priority=15).my_job_method(\"Hi!\", count=count)\n       return count\n\n   # tests\n   from odoo.addons.queue_job.tests.common import trap_jobs\n\n   # first test only check the expected behavior of the method and the proper\n   # enqueuing of jobs\n   def test_method_to_test(self):\n       with trap_jobs() as trap:\n           result = self.env[\"model\"].method_to_test()\n           expected_count = 12\n\n           trap.assert_jobs_count(1, only=self.env[\"model\"].my_job_method)\n           trap.assert_enqueued_job(\n               self.env[\"model\"].my_job_method,\n               args=(\"Hi!\",),\n               kwargs=dict(count=expected_count),\n               properties=dict(priority=15)\n           )\n           self.assertEqual(result, expected_count)\n\n\n    # second test to validate the behavior of the job unitarily\n    def test_my_job_method(self):\n        record = self.env[\"model\"].browse(1)\n        record.my_job_method(\"Hi!\", count=12)\n        self.assertEqual(record.name, \"Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!\")\n\nIf you prefer, you can still test the whole thing in a single test, by\ncalling ``jobs_tester.perform_enqueued_jobs()`` in your test.\n\n.. code:: python\n\n   def test_method_to_test(self):\n       with trap_jobs() as trap:\n           result = self.env[\"model\"].method_to_test()\n           expected_count = 12\n\n           trap.assert_jobs_count(1, only=self.env[\"model\"].my_job_method)\n           trap.assert_enqueued_job(\n               self.env[\"model\"].my_job_method,\n               args=(\"Hi!\",),\n               kwargs=dict(count=expected_count),\n               properties=dict(priority=15)\n           )\n           self.assertEqual(result, expected_count)\n\n           trap.perform_enqueued_jobs()\n\n           record = self.env[\"model\"].browse(1)\n           record.my_job_method(\"Hi!\", count=12)\n           self.assertEqual(record.name, \"Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!\")\n\n**Execute jobs synchronously when running Odoo**\n\nWhen you are developing (ie: connector modules) you might want to bypass\nthe queue job and run your code immediately.\n\nTo do so you can set ``QUEUE_JOB__NO_DELAY=1`` in your environment.\n\nWarning\n\nDo not do this in production\n\n**Execute jobs synchronously in tests**\n\nYou should use ``trap_jobs``, really, but if for any reason you could\nnot use it, and still need to have job methods executed synchronously in\nyour tests, you can do so by setting ``queue_job__no_delay=True`` in the\ncontext.\n\nTip: you can do this at test case level like this\n\n.. code:: python\n\n   @classmethod\n   def setUpClass(cls):\n       super().setUpClass()\n       cls.env = cls.env(context=dict(\n           cls.env.context,\n           queue_job__no_delay=True,  # no jobs thanks\n       ))\n\nThen all your tests execute the job methods synchronously without\ndelaying any jobs.\n\nIn tests you'll have to mute the logger like:\n\n   @mute_logger('odoo.addons.queue_job.models.base')\n\nNote\n\nin graphs of jobs, the ``queue_job__no_delay`` context key must be in at\nleast one job's env of the graph for the whole graph to be executed\nsynchronously\n\nTips and tricks\n~~~~~~~~~~~~~~~\n\n- **Idempotency**\n  (https://www.restapitutorial.com/lessons/idempotency.html): The\n  queue_job should be idempotent so they can be retried several times\n  without impact on the data.\n- **The job should test at the very beginning its relevance**: the\n  moment the job will be executed is unknown by design. So the first\n  task of a job should be to check if the related work is still relevant\n  at the moment of the execution.\n\nPatterns\n~~~~~~~~\n\nThrough the time, two main patterns emerged:\n\n1. For data exposed to users, a model should store the data and the\n   model should be the creator of the job. The job is kept hidden from\n   the users\n2. For technical data, that are not exposed to the users, it is\n   generally alright to create directly jobs with data passed as\n   arguments to the job, without intermediary models.\n\nKnown issues / Roadmap\n======================\n\n- After creating a new database or installing ``queue_job`` on an\n  existing database, Odoo must be restarted for the runner to detect it.\n- When Odoo shuts down normally, it waits for running jobs to finish.\n  However, when the Odoo server crashes or is otherwise force-stopped,\n  running jobs are interrupted while the runner has no chance to know\n  they have been aborted. In such situations, jobs may remain in\n  ``started`` or ``enqueued`` state after the Odoo server is halted.\n  Since the runner has no way to know if they are actually running or\n  not, and does not know for sure if it is safe to restart the jobs, it\n  does not attempt to restart them automatically. Such stale jobs\n  therefore fill the running queue and prevent other jobs to start. You\n  must therefore requeue them manually, either from the Jobs view, or by\n  running the following SQL statement *before starting Odoo*:\n\n.. code:: sql\n\n   update queue_job set state='pending' where state in ('started', 'enqueued')\n\nChangelog\n=========\n\nNext\n----\n\n- [ADD] Run jobrunner as a worker process instead of a thread in the\n  main process (when running with --workers > 0)\n- [REF] ``@job`` and ``@related_action`` deprecated, any method can be\n  delayed, and configured using ``queue.job.function`` records\n- [MIGRATION] from 13.0 branched at rev. e24ff4b\n\nBug Tracker\n===========\n\nBugs are tracked on `GitHub Issues <https://github.com/OCA/queue/issues>`_.\nIn case of trouble, please check there if your issue has already been reported.\nIf you spotted it first, help us to smash it by providing a detailed and welcomed\n`feedback <https://github.com/OCA/queue/issues/new?body=module:%20queue_job%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.\n\nDo not contact contributors directly about support or help with technical issues.\n\nCredits\n=======\n\nAuthors\n-------\n\n* Camptocamp\n* ACSONE SA/NV\n\nContributors\n------------\n\n- Guewen Baconnier <guewen.baconnier@camptocamp.com>\n- St\u00e9phane Bidoul <stephane.bidoul@acsone.eu>\n- Matthieu Dietrich <matthieu.dietrich@camptocamp.com>\n- Jos De Graeve <Jos.DeGraeve@apertoso.be>\n- David Lefever <dl@taktik.be>\n- Laurent Mignon <laurent.mignon@acsone.eu>\n- Laetitia Gangloff <laetitia.gangloff@acsone.eu>\n- C\u00e9dric Pigeon <cedric.pigeon@acsone.eu>\n- Tatiana Deribina <tatiana.deribina@avoin.systems>\n- Souheil Bejaoui <souheil.bejaoui@acsone.eu>\n- Eric Antones <eantones@nuobit.com>\n- Simone Orsi <simone.orsi@camptocamp.com>\n- Nguyen Minh Chien <chien@trobz.com>\n- Tran Quoc Duong <duongtq@trobz.com>\n- Vo Hong Thien <thienvh@trobz.com>\n\nOther credits\n-------------\n\nThe migration of this module from 17.0 to 18.0 was financially supported\nby Camptocamp.\n\nMaintainers\n-----------\n\nThis module is maintained by the OCA.\n\n.. image:: https://odoo-community.org/logo.png\n   :alt: Odoo Community Association\n   :target: https://odoo-community.org\n\nOCA, or the Odoo Community Association, is a nonprofit organization whose\nmission is to support the collaborative development of Odoo features and\npromote its widespread use.\n\n.. |maintainer-guewen| image:: https://github.com/guewen.png?size=40px\n    :target: https://github.com/guewen\n    :alt: guewen\n\nCurrent `maintainer <https://odoo-community.org/page/maintainer-role>`__:\n\n|maintainer-guewen| \n\nThis module is part of the `OCA/queue <https://github.com/OCA/queue/tree/18.0/queue_job>`_ project on GitHub.\n\nYou are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.\n",
    "bugtrack_url": null,
    "license": "LGPL-3",
    "summary": "Job Queue",
    "version": "18.0.1.1.1",
    "project_urls": {
        "Homepage": "https://github.com/OCA/queue"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3d99f88f59dfcde68327d9384c0272290414afff1f2b7670bb2baaf83957c2ce",
                "md5": "19eb44719cf6eb83e37cebcc6edeca69",
                "sha256": "b2c8d225a62a98920db01f97bd1ca348bb212ee28ca31e17919c7f6c1ba12a42"
            },
            "downloads": -1,
            "filename": "odoo_addon_queue_job-18.0.1.1.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "19eb44719cf6eb83e37cebcc6edeca69",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 304045,
            "upload_time": "2024-12-21T16:20:39",
            "upload_time_iso_8601": "2024-12-21T16:20:39.365559Z",
            "url": "https://files.pythonhosted.org/packages/3d/99/f88f59dfcde68327d9384c0272290414afff1f2b7670bb2baaf83957c2ce/odoo_addon_queue_job-18.0.1.1.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-21 16:20:39",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "OCA",
    "github_project": "queue",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [
        {
            "name": "requests",
            "specs": []
        }
    ],
    "lcname": "odoo-addon-queue-job"
}
        
Elapsed time: 0.40759s