collective.auditlog


Name: collective.auditlog
Version: 2.0.0
Home page: http://svn.plone.org/svn/collective/
Summary: Provides extra conditions and triggers for all content actions
Upload time: 2023-05-31 07:12:32
Author: rain2o
License: GPL
Keywords: plone, audit, log
Introduction
============

The package allows you to audit actions being done on your site.
It accomplishes this by using configurable content rules.

When you activate this package,
it creates all the content rules needed for auditing,
but only the Page content type is audited out of the box.
If you want to audit other content types,
you will need to adjust those content rules accordingly.

The audits are stored in a relational database.
The first time an entry is stored,
the add-on creates a table called "audit" if it does not already exist,
so there is no need to create the table manually.

AuditLog attempts to use plone.app.async or collective.celery to
perform the store actions; if neither is available, it finishes the task
directly. The advantage of async processing is that a dedicated 'worker'
client can handle all of these requests,
so heavy activity does not back the site up:
the job is queued and processed as capacity allows
while the user's request finishes and moves on,
avoiding any sacrifice in performance.
Refer to the plone.app.async or collective.celery PyPI page
for instructions on setting it up if you use one of them.
Async processing is NOT required for AuditLog to work,
however it is advised, especially for high-traffic sites.


Installation
============

Download the package from GitHub and extract it into your src directory.
Add 'collective.auditlog' to the eggs and zcml slugs in your buildout.
Include the location (src/collective.auditlog) in the development slugs too.
Run buildout.
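
For reference, a minimal buildout fragment could look like the following
sketch (the ``[instance]`` part name is an assumption; use whatever your
buildout calls its Zope instance part)::

    [instance]
    eggs +=
        collective.auditlog
    zcml +=
        collective.auditlog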

In Site Setup -> Add-ons, activate Audit Log.
Once it is installed you will see "AuditLog" under Add-on Configuration.
This is where you configure the relational database connection.
The configuration string needs to be a valid SQLAlchemy connection string.
The control panel also allows you to enable/disable
tracking of actions performed on working copies.
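
The connection string follows standard SQLAlchemy URL syntax; for example
(both values below are purely illustrative, and the backend driver, e.g.
psycopg2 for PostgreSQL, must be available in the buildout)::

    sqlite:////var/plone/auditlog.db
    postgresql://user:password@localhost/auditlog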

All that is left is to configure the new Content Rules
to track the content types and actions you desire.

Upgrading
=========

If you are upgrading from an older version, you will need to run the
upgrade steps from the add-ons control panel.

Usage
=====
For now, collective.auditlog uses SQLAlchemy for storing data. To use
PostgreSQL, it is necessary to add the 'psycopg2' egg to the buildout. Once
the product is installed, add the correct connection URL to the product
setup. Example::

    postgresql://enfold:enfold@localhost/auditlog

By default, collective.auditlog uses content rules to define which events
to capture. An additional mechanism has been added that allows the site to
automatically log the various events supported by collective.auditlog.
Simply choose from the picklist in the config for this to work. If no
events are selected, no logging will occur.

It is possible to log custom events from application code by using the
AuditableActionPerformedEvent, like this::

    from zope.event import notify
    from collective.auditlog.interfaces import AuditableActionPerformedEvent
    notify(AuditableActionPerformedEvent(obj, request, "action", "note"))

'obj' refers to the affected content object; 'request' is the current Zope
request; 'action' and 'note' correspond to the logged action and its
description, respectively. All parameters are required, but everything
except 'obj' can be set to None if no value is available.
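
As a minimal sketch, assuming a custom "exported" action (the action and
note strings and the helper function are illustrative, not part of the
package)::

    from collective.auditlog.interfaces import AuditableActionPerformedEvent
    from zope.event import notify
    from zope.globalrequest import getRequest


    def log_export(obj):
        # getRequest() may return None outside a browser request;
        # that is acceptable, since only 'obj' must not be None.
        notify(AuditableActionPerformedEvent(
            obj, getRequest(), "exported", "Exported via a custom view"))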

In addition to control panel configuration, connection parameters can be
set using the zope-conf-additional directive in the buildout. Note that
this will take precedence over any control panel configuration. Example::

    zope-conf-additional =
        <product-config collective.auditlog>
            audit-connection-string postgresql://enfold:enfold@localhost/auditlog
            audit-connection-params {"pool_recycle": 3600, "echo": true}
        </product-config>
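
On the Zope side, product-config values like these can be inspected with the
standard configuration API, for example from a debug prompt (this is generic
Zope machinery, not an auditlog-specific helper)::

    from App.config import getConfiguration

    cfg = getattr(getConfiguration(), 'product_config', {})
    auditlog_cfg = cfg.get('collective.auditlog', {})
    print(auditlog_cfg.get('audit-connection-string'))
    print(auditlog_cfg.get('audit-connection-params'))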

There is now a view for the audit log entries, located at @@auditlog-view.
There is no link to it from the control panel at the moment. The view uses
infinite scrolling rather than pagination for looking at the logs.

Searching the audit log
=======================

Audit logs are stored in a SQL table, so you can use any SQL database tool
to analyse the audit log. However, sometimes you may need to query the logs
in the context of the Plone objects that generated them, for which the
information stored in SQL is not enough. For this purpose,
collective.auditlog offers a catalog-based mechanism which can be used to
query the logs using any Plone-based indexes available at the time of
logging. This can be used, for example, to develop a portlet that shows the
latest documents that have been modified.

To enable catalog-based logging, choose sql+zodb storage in the audit log
control panel. Information will still be stored in the SQL database, but
a special catalog will be enabled to store Plone indexing information as
well.

Once this storage is enabled, you can search the logs using a catalog-like
query::

    from datetime import datetime
    from collective.auditlog.catalog import searchAudited

    from_date = datetime(2018, 6, 1)
    to_date = datetime(2018, 12, 31)
    query = {'portal_type': 'Document',
             'review_state': 'published'}
    audited = searchAudited(from_date=from_date,
                            to_date=to_date,
                            actions=['added', 'modified'],
                            **query)

All of the parameters are optional, but an empty query will return all
indexed objects, so use with care.

Note that the query returns catalog records, and any document that had
multiple actions performed on it in the selected date range will appear only
once in the list. There are also catalog records for deleted items, so a
query can be made to find those even after they are gone from Plone.
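
As a sketch of the "latest modified documents" portlet idea mentioned
earlier (whether the returned records expose ZCatalog-brain methods such as
``getPath()`` is an assumption; adapt the loop to whatever your catalog
actually provides)::

    from collective.auditlog.catalog import searchAudited

    records = searchAudited(actions=['modified'],
                            portal_type='Document',
                            review_state='published')
    for record in records:
        # Assuming brain-like records; getPath() returns the indexed path.
        print(record.getPath())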


Celery Integration
==================
The collective.celery package requires adding the celery and
collective.celery eggs to the main buildout eggs section. Example::

    eggs =
        celery
        Plone
        collective.celery

For clarity, the broker settings live in a separate celery-broker part.
The celery part is also required, but is simple::

    [celery-broker]
    host = 127.0.0.1
    port = 6379

    [celery]
    recipe = zc.recipe.egg
    environment-vars = ${buildout:environment-vars}
    eggs =
        ${buildout:eggs}
        flower
    scripts = pcelery flower

The celery part depends on having some variables added to the main
environment-vars section::

    environment-vars =
        CELERY_BROKER_URL redis://${celery-broker:host}:${celery-broker:port}
        CELERY_RESULT_BACKEND redis://${celery-broker:host}:${celery-broker:port}
        CELERY_TASKS collective.es.index.tasks

Additional Zope configuration
-----------------------------

There is a hook in collective.celery for carrying out additional Zope
configuration before running the tasks. If the tasks module contains an
'extra_config' function, it is passed the Zope startup object at worker
initialization time. This is used by collective.es.index to run the
Elasticsearch configuration method.
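
A minimal sketch of such a hook in a hypothetical tasks module (only the
name ``extra_config`` and the startup-object argument come from the
description above; the body is illustrative)::

    # mypackage/tasks.py -- hypothetical module pointed to by CELERY_TASKS

    def extra_config(startup):
        # Called once by the collective.celery worker at initialization,
        # receiving the Zope startup object before any task runs.
        # Perform any extra wiring your tasks depend on here.
        pass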

Monitoring celery tasks
-----------------------

Celery needs to be started as an independent process. It is recommended to
use supervisord for this. To try it out from the command line, you can run
"bin/pcelery worker" from the buildout directory. Note that the script is
now named 'pcelery' and it needs a path to the Zope configuration. Example::

    $ bin/pcelery worker parts/client1/etc/zope.conf

Flower is included in this setup. Run "bin/flower" from the buildout
directory and consult the dashboard at http://localhost:5555 using a
browser. Note that the broker is now a required parameter::

    $ bin/flower --broker redis://127.0.0.1:6379

Dependencies
============

All dependencies are installed automatically
when installing collective.auditlog.
Here is just a list of those for reference:

- setuptools
- sqlalchemy
- five.globalrequest
- plone.app.async [OPTIONAL]
- collective.celery [OPTIONAL]

Authors
=======

- Joel Rainwater, initial author
- Nathan van Gheem, Async integration, bug fixes, optimization.
- Alessandro Pisa, bug fixing, testing
- Enfold Systems, celery integration and audit view


Changelog
=========

2.0.0 (2023-05-31)
------------------

- Re-release 2.0.0a2 as 2.0.0.
  This will be the last release supporting Python 2.7.
  [ale-rt]


2.0.0a2 (2020-03-19)
--------------------

- Fix request not having an environment attribute in instance scripts
  [ale-rt]


2.0.0a1 (2020-03-11)
--------------------

- Remove inconsistent passing of ``request`` parameter and use zope.globalrequest instead.
  [thet]

- Remove deprecations.
  action.canExecute is renamed to ``can_execute``, takes no parameters and is a property.
  [thet]

- Python 3 compatibility.
  [thet]

- Remove support for plone.app.async.
  Due to ``async`` being a reserved word, this cannot be made Python 3 compatible.
  Use collective.celery instead.
  [thet]

- Drop support for Archetypes.
  [thet]

- Plone 5.2 compatibility.
  Drop support for Plone 5.0 and 4.3 (both are missing zope.interface.interfaces.IObjectEvent).
  [thet]

- Make Archetypes a soft dependency.
  [ale-rt]

- Align with Plone code style: black, isort.
  [thet]

- Fix soft dependency on formlib (#22)
  [ale-rt]

- Speed up rule retrieval
  [ale-rt]

- Added some memoized properties and methods to the `AuditActionExecutor` class
  for easier customization
  [ale-rt]

- collective.celery integration
  [enfold]

- @@auditlog-view allows viewing/sorting/searching audit log entries
  [enfold]

- add login & logout audits
  [enfold]

- ability to specify the sqlalchemy DSN in config file
  [enfold]

- Notify an event before storing audit log entry.
  [enfold]

- Use custom permission for viewing audit log.
  [enfold]

- Fix tests.
  [enfold]

- Fix db connection leak.
  [enfold]

- Use valid json in info field.
  [enfold]


1.3.3 (2018-07-12)
------------------

- Factored out getObjectInfo and addLogEntry.
  [reinhardt]


1.3.2 (2018-07-11)
------------------

- Skip retrieving rule when audit log is disabled completely.
  Improves performance.
  [reinhardt]


1.3.1 (2017-04-13)
------------------

- Fix upgrade step title.
  [ale-rt]


1.3.0 (2017-04-13)
------------------

- The engine parameters (like pool_recycle, echo, ...)
  can be specified through a registry record
  [ale-rt]


1.2.2 (2016-06-06)
------------------

- Make action more robust on IActionSucceededEvent
  [ale-rt]


1.2.1 (2016-05-10)
------------------

- Fix unicode issues
- Tests are working again
  [ale-rt]


1.2.0 (2016-05-03)
------------------

- First public release

            
