wagtail-storages

:Name: wagtail-storages
:Version: 2.0
:Homepage: https://github.com/torchbox/wagtail-storages
:Summary: Use AWS S3 with private documents in Wagtail.
:Upload time: 2024-09-18 11:02:16
:Author: Torchbox
:Maintainer: Torchbox
:Requires Python: >=3.10
:License: BSD 3-Clause License
:Keywords: wagtail, s3, django, storages, storage
            .. image:: https://github.com/torchbox/wagtail-storages/raw/main/logo.png

wagtail-storages
================

.. image:: https://img.shields.io/pypi/v/wagtail-storages.svg
   :target: https://pypi.org/project/wagtail-storages/
.. image:: https://img.shields.io/pypi/dm/wagtail-storages.svg
   :target: https://pypi.org/project/wagtail-storages/
.. image:: https://travis-ci.org/torchbox/wagtail-storages.svg?branch=master
   :target: https://travis-ci.org/torchbox/wagtail-storages

This package fills a gap in using AWS S3 together with Wagtail. It will be
useful if you want to:

- Use an AWS S3 bucket for hosting Wagtail documents.
- Put the bucket behind a CDN so that the bucket is not hit directly on every
  request.
- Allow editors to use privacy controls on documents, whilst using a CDN.
- Avoid time-outs caused by downloads being proxied through Wagtail views.

  *Note: you cannot use the document* `redirect view`__ *if you want your documents to be truly private.*

.. _WagtailRedirectView: https://docs.wagtail.io/en/stable/advanced_topics/settings.html#wagtaildocs-serve-method
__ WagtailRedirectView_


What does it do?
----------------

The package is a collection of signal handlers and Wagtail hooks.

- Sets per-object ACLs on S3 whenever privacy settings change on a Wagtail
  document.
- Replaces the default document view with a redirect: either to a signed S3
  bucket URL for private documents, or to a public custom-domain URL for
  public ones.
- Purges CDN for documents that have changed.
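
To illustrate the ACL handling, here is a simplified sketch (not the package's
actual implementation; the bucket and key names are placeholders):

.. code:: python

   # Simplified sketch: map a document's privacy state to the S3 canned ACL,
   # which would then be applied with boto3's put_object_acl.
   def acl_for_document(is_public: bool) -> str:
       """Return the canned ACL matching a Wagtail document's privacy state."""
       return "public-read" if is_public else "private"


   # Applying it would look roughly like this (requires AWS credentials):
   #
   #     import boto3
   #     boto3.client("s3").put_object_acl(
   #         Bucket="media.llamasavers.com",
   #         Key="documents/report.pdf",
   #         ACL=acl_for_document(is_public=False),
   #     )

   print(acl_for_document(True))   # public-read
   print(acl_for_document(False))  # private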

Requirements
------------

- ``django-storages`` with the ``S3Boto3Storage`` storage backend configured in
  a Wagtail project.
- A CDN supported by Wagtail's front-end cache invalidator.

Management commands
-------------------

``django-admin fix_document_acls``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The package provides a management command that sets all documents' ACLs
according to their collection permissions. Run it if the bucket contained
documents before the package was installed, to make sure the ACLs in the
bucket are correct.

Settings
--------

WAGTAIL_STORAGES_DOCUMENTS_FRONTENDCACHE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Uses the same format as Wagtail's ``WAGTAILFRONTENDCACHE`` setting, but is
used only by wagtail-storages to purge documents. If not set, no purge will
happen. `Read more on how to format it in the Wagtail docs
<https://docs.wagtail.io/en/stable/reference/contrib/frontendcache.html>`_,
e.g.


.. code:: python

   WAGTAIL_STORAGES_DOCUMENTS_FRONTENDCACHE = {
       'cloudfront': {
           'BACKEND': 'wagtail.contrib.frontend_cache.backends.CloudfrontBackend',
           'DISTRIBUTION_ID': 'your-distribution-id',
       },
   }

WAGTAIL_STORAGES_DOCUMENT_HOOK_ORDER
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set a custom order for the document serve hook. It defaults to 100. It must
run after any of your own hooks, since it returns a response, e.g.

.. code:: python

   WAGTAIL_STORAGES_DOCUMENT_HOOK_ORDER = 900


Recommended S3 setup with Wagtail
---------------------------------

The following guide explains the recommended setup for using S3 with Wagtail.
This guide assumes that:

* You serve your main website at ``llamasavers.com`` (replace
  ``llamasavers.com`` with your actual domain name).
* Your S3 bucket is called ``media.llamasavers.com`` and you host it from that
  domain name.
* You use a CDN on that domain; this guide assumes Cloudflare.

Set up S3 bucket
~~~~~~~~~~~~~~~~

First, set up your S3 bucket. It must be configured to:

- Have a name that matches the domain name, e.g. ``media.llamasavers.com``.
- Allow the user to perform the following actions on the bucket:
   - ``s3:ListBucket``
   - ``s3:GetBucketLocation``
   - ``s3:ListBucketMultipartUploads``
   - ``s3:ListBucketVersions``
- Allow the user to perform all the actions (``s3:*``) on the objects within the
  bucket.
- Allow internet traffic to access Wagtail image renditions (``images/*``).
- Allow the use of public ACLs by disabling:
   - "Block public access to buckets and objects granted through new access control lists (ACLs)"
   - "Block public access to buckets and objects granted through any access control lists (ACLs)"

The user permissions can be set in IAM or via a bucket policy. The example
bucket policy below covers all of those points.

.. code:: json

   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "PublicGetObject",
               "Effect": "Allow",
               "Principal": "*",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::[BUCKET NAME]/images/*"
           },
           {
               "Sid": "AllowUserManageBucket",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::[USER ARN]"
               },
               "Action": [
                   "s3:ListBucket",
                   "s3:GetBucketLocation",
                   "s3:ListBucketMultipartUploads",
                   "s3:ListBucketVersions"
               ],
               "Resource": "arn:aws:s3:::[BUCKET NAME]"
           },
           {
               "Sid": "AllowUserManageBucketObjects",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::[USER ARN]"
               },
               "Action": "s3:*",
               "Resource": "arn:aws:s3:::[BUCKET NAME]/*"
           }
       ]
   }


After the S3 bucket is set up on AWS, you can configure the Wagtail project to
use it.

Set up django-storages
~~~~~~~~~~~~~~~~~~~~~~

Install ``django-storages`` and ``boto3``.

.. code:: sh

   pip install django-storages[boto3]

Set up your S3 bucket with ``django-storages``. The following code allows
configuration via environment variables.

.. code:: python

    # settings.py
    import os


    if "AWS_STORAGE_BUCKET_NAME" in os.environ:
        # Add django-storages to the installed apps
        INSTALLED_APPS = INSTALLED_APPS + ["storages"]

        # https://docs.djangoproject.com/en/stable/ref/settings/#default-file-storage
        DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

        AWS_STORAGE_BUCKET_NAME = os.environ["AWS_STORAGE_BUCKET_NAME"]

        # Disable signing of the S3 objects' URLs. When set to True, an
        # authorization querystring is appended to each URL.
        AWS_QUERYSTRING_AUTH = False

        # Do not allow overwriting files on S3 as per Wagtail docs recommendation:
        # https://docs.wagtail.io/en/stable/advanced_topics/deploying.html#cloud-storage
        # Not having this setting may have consequences such as losing files.
        AWS_S3_FILE_OVERWRITE = False

        # Default ACL for new files should be "private" - not accessible to the
        # public. Images should be made available to the public via the bucket
        # policy, whereas documents should use wagtail-storages.
        AWS_DEFAULT_ACL = "private"

        # We generally use this setting in production to put the S3 bucket
        # behind a CDN using a custom domain, e.g. media.llamasavers.com.
        # https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#cloudfront
        if "AWS_S3_CUSTOM_DOMAIN" in os.environ:
            AWS_S3_CUSTOM_DOMAIN = os.environ["AWS_S3_CUSTOM_DOMAIN"]

        # When signing URLs is enabled, the region must be set.
        # The global S3 endpoint does not seem to support signed URLs.
        # Set this only if you will be using signed URLs.
        if "AWS_S3_REGION_NAME" in os.environ:
            AWS_S3_REGION_NAME = os.environ["AWS_S3_REGION_NAME"]

        # This setting lets you force the http or https protocol when generating
        # the URLs to the files. https is the default here.
        # https://github.com/jschneier/django-storages/blob/10d1929de5e0318dbd63d715db4bebc9a42257b5/storages/backends/s3boto3.py#L217
        AWS_S3_URL_PROTOCOL = os.environ.get("AWS_S3_URL_PROTOCOL", "https:")


If you use the above snippet, you can set the following environment variables:

* ``AWS_STORAGE_BUCKET_NAME`` - set to ``media.llamasavers.com``.
* ``AWS_S3_CUSTOM_DOMAIN`` - set to ``media.llamasavers.com``.
* ``AWS_S3_REGION_NAME`` - set to your AWS region name, e.g. ``eu-west-2``.
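
With ``AWS_QUERYSTRING_AUTH = False`` and a custom domain set, public file
URLs are composed from the domain and the object key. A rough sketch of the
resulting URL shape (illustrative only; django-storages does this internally,
and the helper name is made up):

.. code:: python

   def public_media_url(custom_domain: str, key: str, protocol: str = "https:") -> str:
       # Mirrors the unsigned URL shape produced when AWS_S3_CUSTOM_DOMAIN is
       # set and AWS_QUERYSTRING_AUTH is False (sketch only).
       return f"{protocol}//{custom_domain}/{key}"


   print(public_media_url("media.llamasavers.com", "documents/report.pdf"))
   # https://media.llamasavers.com/documents/report.pdf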

You can use any of the supported methods to provide `boto3 with
credentials`__. We suggest you stick with environment variables. To do that,
set the following variables:

* ``AWS_ACCESS_KEY_ID``
* ``AWS_SECRET_ACCESS_KEY``

.. _Boto3Credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html

__ Boto3Credentials_

Now the storage should be configured and working. Editors should be able to
upload images and documents in Wagtail admin.

Set up ``wagtail-storages``
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install ``wagtail-storages`` itself.

.. code:: sh

   pip install wagtail-storages


Add ``wagtail_storages`` to your ``INSTALLED_APPS`` in your settings file.

.. code:: python

   # settings.py

   INSTALLED_APPS = [
       # ... Other apps
       "wagtail_storages.apps.WagtailStoragesConfig",
       # ... Other apps
   ]

With that, ACLs should be updated if documents are moved to
private collections.

If you already have files in your S3 bucket, run ``django-admin
fix_document_acls`` to make sure all documents have the right ACLs set up.

Set up front-end cache invalidation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If an edge cache is set up on the custom domain (``media.llamasavers.com``),
you should set up CDN purging to avoid leaving outdated or private documents
available to users via the CDN endpoint. For example, for Cloudflare you would
use a configuration similar to the one below:

.. code:: python

   # settings.py
   import os


   if "S3_CACHE_CLOUDFLARE_TOKEN" in os.environ:
       WAGTAIL_STORAGES_DOCUMENTS_FRONTENDCACHE = {
           "default": {
               "BACKEND": "wagtail.contrib.frontend_cache.backends.CloudflareBackend",
               "EMAIL": os.environ["S3_CACHE_CLOUDFLARE_EMAIL"],
               "TOKEN": os.environ["S3_CACHE_CLOUDFLARE_TOKEN"],
               "ZONEID": os.environ["S3_CACHE_CLOUDFLARE_ZONEID"],
           },
       }

Then set the following environment variables:

* ``S3_CACHE_CLOUDFLARE_EMAIL``
* ``S3_CACHE_CLOUDFLARE_TOKEN``
* ``S3_CACHE_CLOUDFLARE_ZONEID``

Once set up, documents will be purged from the cache when they are modified or
their privacy settings change.
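
As a rough illustration of what gets purged: the URL handed to the front-end
cache backend is the document's URL on the custom domain (a sketch; the helper
name and file path are made up):

.. code:: python

   def document_purge_url(custom_domain: str, file_name: str) -> str:
       # Illustrative only: the custom-domain URL of a changed document, which
       # the configured front-end cache backend is asked to purge.
       return f"https://{custom_domain}/{file_name}"


   print(document_purge_url("media.llamasavers.com", "documents/report.pdf"))
   # https://media.llamasavers.com/documents/report.pdf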

The setting follows the configuration format of Wagtail's front-end cache
invalidator. See the details `here`__. The only difference is the setting
name, which for wagtail-storages is
``WAGTAIL_STORAGES_DOCUMENTS_FRONTENDCACHE``.

.. _WagtailFrontEndCache: https://docs.wagtail.io/en/stable/reference/contrib/frontendcache.html

__ WagtailFrontEndCache_

All done!

            
