:Name: text-scrubber
:Version: 0.5.0
:Summary: Python package that offers text scrubbing functionality, providing building blocks for string cleaning as well as normalizing geographical text (countries/states/cities)
:Home page: https://github.com/sybrenjansen/text-scrubber
:Author: Sybren Jansen
:License: MIT
:Upload time: 2024-08-26 12:56:31

text-scrubber
=============

|Build status| |Docs status|

.. |Build status| image:: https://github.com/sybrenjansen/text-scrubber/workflows/Build/badge.svg?branch=master
.. |Docs status| image:: https://github.com/sybrenjansen/text-scrubber/workflows/Docs/badge.svg?branch=master

``text-scrubber`` is a Python package that offers text scrubbing functionality, providing building blocks for string
cleaning as well as normalizing geographical text (countries/states/cities).

Full documentation is available at https://sybrenjansen.github.io/text-scrubber/.


TextScrubber
------------

The ``TextScrubber`` class cleans a single string or a collection of strings. It can be easily constructed and
configured with building blocks:


.. code-block:: python

    from text_scrubber import TextScrubber

    ts = (TextScrubber().to_ascii()
                        .lowercase()
                        .tokenize()
                        .remove_stop_words()
                        .join())

which can then be used as:

.. code-block:: python

    ts.transform('héLlô there, WòrlD')  # outputs 'hello world'

or with an iterable of inputs:

.. code-block:: python

    ts.transform(['héLlô there, WòrlD', 'slímm̀er ÀI'])  # outputs ['hello world', 'slimmer AI']

For a complete list of building blocks please refer to the ``TextScrubber`` API reference.
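
Conceptually, the chained building blocks compose simple string operations. The following stdlib-only sketch shows what the chain above does (the stop-word set here is a tiny stand-in, not the package's actual list, and the real building blocks are more configurable):

```python
import unicodedata

STOP_WORDS = {"there", "the", "a"}  # tiny stand-in for a real stop-word list

def scrub(text: str) -> str:
    # to_ascii: decompose accented characters and drop the combining marks
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    # lowercase + tokenize
    tokens = text.lower().split()
    # strip punctuation, remove stop words, and join back together
    tokens = [t.strip(",.") for t in tokens if t.strip(",.") not in STOP_WORDS]
    return " ".join(tokens)

scrub("héLlô there, WòrlD")  # 'hello world'
```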

Geo
---

The ``text_scrubber.geo`` module contains functions to normalize geographical data that deal with spelling errors,
country name variations, etc.:

.. code-block:: python

    from text_scrubber.geo import normalize_country, normalize_region, normalize_city

    """
    Countries
    """

    normalize_country('Peoples rep. of China')
    # [Location(canonical_name='China', matched_name='Peoples Republic of China', country=None,
    #           score=1.0)]

    normalize_country('Deutschland')
    # [Location(canonical_name='Germany', matched_name='Deutschland', country=None, score=1.0)]

    normalize_country('st Nevis and Kitties')
    # [Location(canonical_name='Saint Kitts and Nevis', matched_name='Saint Kitts and Nevis',
    #           country=None, score=0.75)]

    normalize_country('ira')
    # [Location(canonical_name='Iran', matched_name='Iran', country=None, score=0.857...),
    #  Location(canonical_name='Iraq', matched_name='Iraq', country=None, score=0.857...)]

    """
    Cities
    """

    normalize_city('Leibnitz', ['Austria'])
    # [Location(canonical_name='Leibnitz', matched_name='Leibnitz', country='Austria', score=1.0)]

    normalize_city('heidelberg')
    # [Location(canonical_name='Heidelberg', matched_name='Heidelberg', country='Germany',
    #           score=1.0),
    #  Location(canonical_name='Heidelberg', matched_name='Heidelberg', country='South Africa',
    #           score=1.0),
    #  Location(canonical_name='Heidelberg', matched_name='Heidelberg', country='United States',
    #           score=1.0)]

    normalize_city('ohioo', ['US'])
    # [Location(canonical_name='Ohio', matched_name='Ohio', country='United States',
    #           score=0.888...)]

    normalize_city('Madri', ['Spain', 'US', 'Brazil'])
    # [Location(canonical_name='Madrid', matched_name='Madrid', country='Spain',
    #           score=0.909...),
    #  Location(canonical_name='Madrid', matched_name='Madrid', country='United States',
    #           score=0.909...),
    #  Location(canonical_name='Mari', matched_name='Mari', country='Brazil',
    #           score=0.888...)]

    """
    Regions
    """

    normalize_region('triangle park', ['US'])
    # [Location(canonical_name='The Triangle Park', matched_name='The Triangle Park',
    #           country='United States', score=1.0)]

    normalize_region('Fur', ['Denmark'])
    # [Location(canonical_name='Fur', matched_name='Fur', country='Denmark', score=1.0)]

    normalize_region('texel', ['NL'])
    # [Location(canonical_name='Texel', matched_name='Texel', country='Netherlands', score=1.0)]


Each of the above normalization functions returns the canonical name, the matched name, and the match score; when
normalizing cities or regions the result also contains the corresponding country. The difference between canonical and
matched name stems from the fact that some countries, cities, or regions have alternative names. E.g., ``NYC`` maps to
``New York City``. For the query ``NYCC`` the canonical name will be ``New York City``, but the matched name
``NYC``. Match scores are always between 0.0 and 1.0, where 1.0 is a perfect match. If a known mapping exists, like
``Deutschland`` to ``Germany``, the match score will be 1.0.
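
The fuzzy-match scores in the examples above are consistent with a standard normalized similarity ratio
(``2 * matching_chars / total_length``), such as the one in Python's stdlib ``difflib``. The snippet below is purely an
illustration of how such scores arise, not the package's actual scoring function:

```python
from difflib import SequenceMatcher

def match_score(query: str, candidate: str) -> float:
    # ratio() = 2 * number_of_matching_chars / (len(a) + len(b))
    return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

match_score("ira", "Iran")      # 0.857...
match_score("ohioo", "Ohio")    # 0.888...
match_score("Madri", "Madrid")  # 0.909...
```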

.. note::

    When normalizing a country or finding countries in a string, the ``country`` attribute of a ``Location`` object
    is always ``None``. The normalized name can be found using the ``canonical_name`` attribute.

The ``text_scrubber.geo`` module also contains functions to find the names of places (countries, regions, and cities)
in text, dealing with spelling errors, country name variations, etc.:

.. code-block:: python

    from text_scrubber.geo import (find_city_in_string, find_country_in_string,
                                   find_region_in_string)

    """
    Countries
    """

    find_country_in_string("Institute of German study, Accra, Ghana")
    # [ExtractedLocation(location=Location(canonical_name='Ghana', matched_name='Ghana',
    #                                      country=None, score=1.0),
    #                    substring='Ghana', substring_range=Range(start=34, end=39)),
    #  ExtractedLocation(location=Location(canonical_name='Germany', matched_name='Germany',
    #                                      country=None, score=0.923...),
    #                    substring='German', substring_range=Range(start=13, end=19))]

    find_country_in_string("Peking University, 5 Yiheyuan Rd, "
                           "Haidian District, Beijing, CH, 100871")
    # This was a trick question though, as CH=Switzerland. China is CN
    # [ExtractedLocation(location=Location(canonical_name='Switzerland', matched_name='CH',
    #                                      country=None, score=1.0),
    #                    substring='CH', substring_range=Range(start=61, end=63))]

    """
    Cities
    """

    find_city_in_string("Météorage Pau France", {"France"})
    # [ExtractedLocation(location=Location(canonical_name='Pau', matched_name='Pau',
    #                                      country='France', score=1.0),
    #                    substring='Pau', substring_range=Range(start=10, end=13)),
    #  ExtractedLocation(location=Location(canonical_name='La Frasnée', matched_name='Фране',
    #                                      country='France', score=0.909...),
    #                    substring='France', substring_range=Range(start=14, end=20))]

    find_city_in_string("Bavarian Environment Agency, Hans Högn Straße 12, "
                        "95030 Hof Saale, Bavaria, Germany", {"Germany"})
    # [ExtractedLocation(location=Location(canonical_name='Hof', matched_name='Hof',
    #                                      country='Germany', score=1.0),
    #                    substring='Hof', substring_range=Range(start=56, end=59)),
    #  ExtractedLocation(location=Location(canonical_name='Saal', matched_name='Saal',
    #                                      country='Germany', score=0.888...),
    #                    substring='Saale', substring_range=Range(start=60, end=65)),
    #  ExtractedLocation(location=Location(canonical_name='Trassem', matched_name='Trassem',
    #                                      country='Germany', score=0.857...),
    #                    substring='Straße', substring_range=Range(start=39, end=45))]

    """
    Regions
    """

    find_region_in_string("Fur Museum, 7884 Fur, Denmark.", {"Denmark"})
    # [ExtractedLocation(location=Location(canonical_name='Fur', matched_name='Fur',
    #                                      country='Denmark', score=1.0),
    #                    substring='Fur', substring_range=Range(start=0, end=3)),
    #  ExtractedLocation(location=Location(canonical_name='Fur', matched_name='Fur',
    #                                      country='Denmark', score=1.0),
    #                    substring='Fur', substring_range=Range(start=17, end=20)),
    #  ExtractedLocation(location=Location(canonical_name='Kingdom of Denmark',
    #                                      matched_name='Denmark', country='Denmark', score=1.0),
    #                    substring='Denmark', substring_range=Range(start=22, end=29))]

    find_region_in_string("Department of Biological Oceanography, Royal Netherlands Institute "
                          "for Sea Research (NIOZ), Texel, The Netherlands", {"Netherlands"})
    # [ExtractedLocation(location=Location(canonical_name='Kingdom of the Netherlands',
    #                                      matched_name='Netherlands', country='Netherlands',
    #                                      score=1.0),
    #                    substring='Netherlands', substring_range=Range(start=45, end=56)),
    #  ExtractedLocation(location=Location(canonical_name='Texel', matched_name='Texel',
    #                                      country='Netherlands', score=1.0),
    #                    substring='Texel', substring_range=Range(start=92, end=97)),
    #  ExtractedLocation(location=Location(canonical_name='Kingdom of the Netherlands',
    #                                      matched_name='Netherlands', country='Netherlands',
    #                                      score=1.0),
    #                    substring='Netherlands', substring_range=Range(start=103, end=114))]
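
The ``substring_range`` values are plain half-open Python string indices into the input, so slicing the input with them
yields the reported substring. For the first country example above:

```python
s = "Institute of German study, Accra, Ghana"

# Range(start=34, end=39) and Range(start=13, end=19) are half-open slices
s[34:39]  # 'Ghana'
s[13:19]  # 'German'
```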

.. note::

    Whenever a country is considered part of another country ``normalize_country`` will return the latter.
    E.g., ``Puerto Rico`` is mapped to ``United States`` and ``Greenland`` to ``Denmark``.


Resource loading
~~~~~~~~~~~~~~~~

Resources for cities and regions aren't all loaded when you import ``text_scrubber.geo``; they're loaded on the fly,
per country. This means the first query involving a given country can take a while. Subsequent queries involving the
same country (or countries) will be much faster. You can load resources per country in advance by using:

.. code-block:: python

    from text_scrubber.geo import (add_city_resources, add_region_resources,
                                   normalize_country_to_country_codes)

    country_codes = normalize_country_to_country_codes(['Netherlands', 'China', 'USA'])
    add_city_resources(country_codes)
    add_region_resources(country_codes, progress_bar=True)

.. note::

    Whenever a country is considered part of another country ``normalize_country_to_country_codes`` returns both.

Cleaning
~~~~~~~~

There are clean functions available for countries/regions/cities, which all follow the same cleaning pipeline:

.. code-block:: python

    from text_scrubber.geo import clean_country, clean_region, clean_city

    clean_country('cent afr rep.')     # 'central african republic'
    clean_region('Hyōgo')              # 'hyogo'
    clean_city('płońsk')               # 'plonsk'
    clean_city('neustadt/westerwald')  # 'neustadt westerwald'
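
Note that this accent folding goes beyond plain Unicode decomposition: NFKD normalization handles combining marks (as
in ``Hyōgo``), but letters like ``ł`` have no decomposition and would simply be dropped, so ``clean_city`` evidently
uses a richer transliteration. A stdlib snippet illustrating the distinction:

```python
import unicodedata

def nfkd_fold(text: str) -> str:
    # Decompose, then drop anything that doesn't encode to ASCII
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()

nfkd_fold("Hyōgo")   # 'Hyogo': the macron is a combining mark and folds away
nfkd_fold("płońsk")  # 'ponsk': 'ł' has no decomposition and is lost entirely
```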


Documentation
-------------

If you want to build the documentation, please install the documentation dependencies by executing:

.. code-block:: bash

    pip install .[docs]

Documentation can be built by executing:

.. code-block:: bash

    python setup.py build_docs

Documentation can also be built from the ``docs`` folder directly. In that case ``text-scrubber`` should be installed
and available in your current working environment. Execute:

.. code-block:: bash

    make html

in the ``docs`` folder.

            
