scrapy-time-machine

- Name: scrapy-time-machine
- Version: 1.1.1
- Summary: A downloader middleware that stores the current request chain to be crawled at another time.
- Home page: https://github.com/zytedata/scrapy-time-machine
- Author: Luiz Francisco Rodrigues da Silva
- License: MIT
- Keywords: scrapy, cache, middleware
- Upload time: 2024-02-01 14:25:37
- Requirements: none recorded
# scrapy-time-machine

![PyPI](https://img.shields.io/pypi/v/scrapy-time-machine)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/scrapy-time-machine)
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/zytedata/scrapy-time-machine/Unit%20tests)

Run your spider with a previously crawled request chain.

## Install

    pip install scrapy-time-machine

## Why?

Let's say your spider crawls some page every day, and after a while you notice that an important piece of information was added to the page and you want to start saving it.

You may modify your spider to extract this information from now on, but what if you want the historical values of this data, going back to when it was first introduced on the site?

With this extension you can save a snapshot of the site on every run and replay it in the future (as long as you don't change the request chain).

## Enabling

To enable this middleware, add the following to your project's `settings.py`:

    DOWNLOADER_MIDDLEWARES = {
        "scrapy_time_machine.timemachine.TimeMachineMiddleware": 901
    }

    TIME_MACHINE_ENABLED = True
    TIME_MACHINE_STORAGE = "scrapy_time_machine.storages.DbmTimeMachineStorage"

## Using

### Store a snapshot of the current state of the site

`scrapy crawl sample -s TIME_MACHINE_SNAPSHOT=true -s TIME_MACHINE_URI="/tmp/%(name)s-%(time)s.db"`

This will save a snapshot at `/tmp/sample-YYYY-MM-DDThh-mm-ss.db`.
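The `%(name)s` and `%(time)s` placeholders in `TIME_MACHINE_URI` are filled in with the spider name and the time of the run. A minimal sketch of that expansion, using plain `%`-style string interpolation (the exact timestamp format is an assumption based on the example path above):

```python
from datetime import datetime

# Hypothetical expansion of the URI template; the timestamp format is
# inferred from the example snapshot path in this README.
uri_template = "/tmp/%(name)s-%(time)s.db"
params = {
    "name": "sample",
    "time": datetime(2024, 2, 1, 14, 25, 36).strftime("%Y-%m-%dT%H-%M-%S"),
}
print(uri_template % params)  # → /tmp/sample-2024-02-01T14-25-36.db
```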


### Retrieve a snapshot from a previously saved state of the site

`scrapy crawl sample -s TIME_MACHINE_RETRIEVE=true -s TIME_MACHINE_URI=/tmp/sample-YYYY-MM-DDThh-mm-ss.db`

If no change was made to the spider between the current version and the version that produced the snapshot, the extracted items should be the same.
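One way to verify this is to export the items from both runs and compare them. A minimal sketch, assuming each run's items were exported as JSON (the file names, and the export step itself, e.g. `scrapy crawl sample -O live.json`, are illustrative):

```python
import json

def items_match(live_path: str, replay_path: str) -> bool:
    """Return True if two exported item files contain the same items."""
    with open(live_path) as f:
        live = json.load(f)
    with open(replay_path) as f:
        replay = json.load(f)
    return live == replay
```

Note that this is an exact comparison; if the site itself changed between the live run and the snapshot, the items will legitimately differ.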


## Sample project

There is a sample Scrapy project available in the [examples](examples/project/) directory.

            
