python-dataservice 0.11.4

Summary: Lightweight async data gathering for Python
Author: NomadMonad
License: MIT
Requires-Python: >=3.11, <4.0
Keywords: async, data gathering, scraping, web scraping, web crawling, crawling, data extraction, data scraping, API, data
Upload time: 2024-11-12 22:04:21

.. image:: https://img.shields.io/pypi/pyversions/python-dataservice.svg
   :alt: Python Versions

DataService
===========

Lightweight async data gathering for Python.
__________________________________________________
DataService is a lightweight web scraping and general-purpose data gathering library for Python.

Designed for simplicity, it's built upon common web scraping and data gathering patterns.

No complex API to learn, just standard Python idioms.

Dual synchronous and asynchronous support.

Installation
------------
Please note that DataService requires Python 3.11 or higher.

You can install DataService via pip:

.. code-block:: bash

    pip install python-dataservice


You can also install the optional ``playwright`` extra to use the ``PlaywrightClient`` (the quotes keep shells such as ``zsh`` from expanding the square brackets):

.. code-block:: bash

    pip install "python-dataservice[playwright]"

To download the Playwright browser binaries, run:

.. code-block:: bash

    python -m playwright install

or simply:

.. code-block:: bash

    playwright install

How to use DataService
----------------------

To start, create a ``DataService`` instance with an ``Iterable`` of ``Request`` objects. This gives you an ``Iterator`` of data objects that you can iterate over or convert to a ``list``, a ``tuple``, a ``pd.DataFrame``, or any other data structure of your choice.

.. code-block:: python

    from dataservice import DataService, HttpXClient, Request  # top-level module name assumed

    start_requests = [Request(url="https://books.toscrape.com/index.html", callback=parse_books_page, client=HttpXClient())]
    data_service = DataService(start_requests)
    data = tuple(data_service)
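
Since each record returned in this example is a plain ``dict``, the collected results can go straight into a ``pandas`` DataFrame (a minimal sketch; ``pandas`` is not a DataService dependency):

.. code-block:: python

    import pandas as pd

    # "data" is the tuple of dicts gathered above; an iterable of dicts
    # is a valid DataFrame constructor argument.
    df = pd.DataFrame(data)
    print(df.head())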

A ``Request`` is a ``Pydantic`` model that includes the URL to fetch, a reference to the ``client`` callable, and a ``callback`` function for parsing the ``Response`` object.

The client can be any async Python callable that accepts a ``Request`` object and returns a ``Response`` object.
``DataService`` provides an ``HttpXClient`` class by default, which is based on the ``httpx`` library, but you are free to use your own custom async client.
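
For example, a custom client could look like the following hypothetical sketch. The import path and the ``Response`` constructor arguments are assumptions for illustration; check the API reference for the actual signature:

.. code-block:: python

    import httpx

    from dataservice import Request, Response  # import path assumed

    async def my_custom_client(request: Request) -> Response:
        # Fetch the page ourselves and wrap the body in a Response.
        # The Response(...) keyword arguments below are assumed.
        async with httpx.AsyncClient() as client:
            http_response = await client.get(str(request.url))
        return Response(request=request, text=http_response.text)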

The callback function processes a ``Response`` object and returns either ``data`` or additional ``Request`` objects.

In this trivial example, we request the `Books to Scrape <https://books.toscrape.com/index.html>`_ homepage and parse the number of books on the page.

Example ``parse_books_page`` function:

.. code-block:: python

    from dataservice import Response  # top-level module name assumed

    def parse_books_page(response: Response):
        # Each book on the page is rendered as <article class="product_pod">.
        articles = response.html.find_all("article", {"class": "product_pod"})
        return {
            "url": response.url,
            "title": response.html.title.get_text(strip=True),
            "articles": len(articles),
        }

This function takes a ``Response`` object, which has an ``html`` attribute (a ``BeautifulSoup`` object of the HTML content). The function parses the HTML and returns the extracted data.

The callback function can ``return`` or ``yield`` either ``data`` (``dict`` or ``pydantic.BaseModel``) or more ``Request`` objects.
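
For instance, a paginating callback might yield a record for the current page and then a follow-up ``Request`` for the next one. This is a hypothetical sketch; ``parse_listing_page`` and the CSS selector are illustrative, not part of the library:

.. code-block:: python

    from urllib.parse import urljoin

    def parse_listing_page(response: Response):
        # Yield one record for this page...
        yield {"url": response.url, "title": response.html.title.get_text(strip=True)}
        # ...then follow the "next" pagination link, if any.
        next_link = response.html.select_one("li.next > a")
        if next_link is not None:
            yield Request(
                url=urljoin(str(response.url), next_link["href"]),
                callback=parse_listing_page,
                client=HttpXClient(),
            )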

If you have used ``Scrapy`` before, you will find this pattern familiar.

For more examples and advanced usage, check out the `examples <https://dataservice.readthedocs.io/en/latest/examples.html>`_ section.

For a detailed API reference, check out the `API <https://dataservice.readthedocs.io/en/latest/modules.html>`_ section.

            
