Name | parsel |
Version | 1.9.1 |
home_page | https://github.com/scrapy/parsel |
Summary | Parsel is a library to extract data from HTML and XML using XPath and CSS selectors |
upload_time | 2024-04-08 08:12:24 |
maintainer | None |
docs_url | None |
author | Scrapy project |
requires_python | >=3.8 |
license | BSD |
keywords | parsel |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | |
======
Parsel
======
.. image:: https://github.com/scrapy/parsel/actions/workflows/tests.yml/badge.svg
:target: https://github.com/scrapy/parsel/actions/workflows/tests.yml
:alt: Tests
.. image:: https://img.shields.io/pypi/pyversions/parsel.svg
:target: https://github.com/scrapy/parsel/actions/workflows/tests.yml
:alt: Supported Python versions
.. image:: https://img.shields.io/pypi/v/parsel.svg
:target: https://pypi.python.org/pypi/parsel
:alt: PyPI Version
.. image:: https://img.shields.io/codecov/c/github/scrapy/parsel/master.svg
:target: https://codecov.io/github/scrapy/parsel?branch=master
:alt: Coverage report
Parsel is a BSD-licensed Python_ library to extract data from HTML_, JSON_, and
XML_ documents.
It supports:
- CSS_ and XPath_ expressions for HTML and XML documents
- JMESPath_ expressions for JSON documents
- `Regular expressions`_
Find the Parsel online documentation at https://parsel.readthedocs.org.
Example (`open online demo`_):
.. code-block:: python

    >>> from parsel import Selector
    >>> text = """
            <html>
                <body>
                    <h1>Hello, Parsel!</h1>
                    <ul>
                        <li><a href="http://example.com">Link 1</a></li>
                        <li><a href="http://scrapy.org">Link 2</a></li>
                    </ul>
                    <script type="application/json">{"a": ["b", "c"]}</script>
                </body>
            </html>"""
    >>> selector = Selector(text=text)
    >>> selector.css('h1::text').get()
    'Hello, Parsel!'
    >>> selector.xpath('//h1/text()').re(r'\w+')
    ['Hello', 'Parsel']
    >>> for li in selector.css('ul > li'):
    ...     print(li.xpath('.//@href').get())
    http://example.com
    http://scrapy.org
    >>> selector.css('script::text').jmespath("a").get()
    'b'
    >>> selector.css('script::text').jmespath("a").getall()
    ['b', 'c']
.. _CSS: https://en.wikipedia.org/wiki/Cascading_Style_Sheets
.. _HTML: https://en.wikipedia.org/wiki/HTML
.. _JMESPath: https://jmespath.org/
.. _JSON: https://en.wikipedia.org/wiki/JSON
.. _open online demo: https://colab.research.google.com/drive/149VFa6Px3wg7S3SEnUqk--TyBrKplxCN#forceEdit=true&sandboxMode=true
.. _Python: https://www.python.org/
.. _regular expressions: https://docs.python.org/library/re.html
.. _XML: https://en.wikipedia.org/wiki/XML
.. _XPath: https://en.wikipedia.org/wiki/XPath
History
-------
1.9.1 (2024-04-08)
~~~~~~~~~~~~~~~~~~
* Removed the dependency on ``pytest-runner``.
* Removed the obsolete ``Makefile``.
1.9.0 (2024-03-14)
~~~~~~~~~~~~~~~~~~
* Now requires ``cssselect >= 1.2.0`` (this minimum version was required since
1.8.0 but that wasn't properly recorded)
* Removed support for Python 3.7
* Added support for Python 3.12 and PyPy 3.10
* Fixed an exception when calling ``__str__`` or ``__repr__`` on some JSON
selectors
* Code formatted with ``black``
* CI fixes and improvements
1.8.1 (2023-04-18)
~~~~~~~~~~~~~~~~~~
* Remove a Sphinx reference from NEWS to fix the PyPI description
* Add a ``twine check`` CI check to detect such problems
1.8.0 (2023-04-18)
~~~~~~~~~~~~~~~~~~
* Add support for JMESPath: you can now create a selector for a JSON document
and call ``Selector.jmespath()``. See `the documentation`_ for more
information and examples.
* Selectors can now be constructed from ``bytes`` (using the ``body`` and
``encoding`` arguments) instead of ``str`` (using the ``text`` argument), so
that there is no internal conversion from ``str`` to ``bytes`` and the memory
usage is lower.
* Typing improvements
* The ``pkg_resources`` module (which was absent from the requirements) is no
longer used
* Documentation build fixes
* New requirements:
* ``jmespath``
* ``typing_extensions`` (on Python 3.7)
.. _the documentation: https://parsel.readthedocs.io/en/latest/usage.html
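
The JMESPath support and the ``bytes``-based construction combine like this
(a minimal sketch: the JSON payload is invented, and the explicit
``type="json"`` argument is our assumption for forcing JSON parsing)::

    from parsel import Selector

    # Build the selector from bytes; no internal str round-trip happens,
    # which keeps memory usage lower.
    selector = Selector(
        body=b'{"user": {"name": "Ada"}}', encoding="utf-8", type="json"
    )

    # JMESPath expressions query the parsed JSON document.
    print(selector.jmespath("user.name").get())  # 'Ada'
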
1.7.0 (2022-11-01)
~~~~~~~~~~~~~~~~~~
* Add PEP 561-style type information
* Support for Python 2.7, 3.5 and 3.6 is removed
* Support for Python 3.9-3.11 is added
* Very large documents (with deep nesting or long tag content) can now be
parsed, and ``Selector`` now takes a new ``huge_tree`` argument to disable
this behavior
* Support for new features of cssselect 1.2.0 is added
* The ``Selector.remove()`` and ``SelectorList.remove()`` methods are
deprecated and replaced with the new ``Selector.drop()`` and
``SelectorList.drop()`` methods which don't delete text after the dropped
elements when used in HTML mode.
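
A brief sketch of the ``huge_tree`` argument and the new ``drop()`` methods
(the sample markup is invented)::

    from parsel import Selector

    html = "<body><p>keep me</p><script>var x = 1;</script>tail text</body>"

    # huge_tree is enabled by default; pass False to restore the previous
    # lxml limits on document depth and text length.
    selector = Selector(text=html, huge_tree=False)

    # drop() removes the matched elements but, unlike the deprecated
    # remove(), keeps the text that follows them in HTML mode.
    selector.css("script").drop()
    print(selector.css("body").get())  # <script> gone, "tail text" kept
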
1.6.0 (2020-05-07)
~~~~~~~~~~~~~~~~~~
* Python 3.4 is no longer supported
* New ``Selector.remove()`` and ``SelectorList.remove()`` methods to remove
selected elements from the parsed document tree
* Improvements to error reporting, test coverage and documentation, and code
cleanup
1.5.2 (2019-08-09)
~~~~~~~~~~~~~~~~~~
* ``Selector.remove_namespaces`` received a significant performance improvement
* The value of ``data`` within the printable representation of a selector
(``repr(selector)``) now ends in ``...`` when truncated, to make the
truncation obvious.
* Minor documentation improvements.
1.5.1 (2018-10-25)
~~~~~~~~~~~~~~~~~~
* ``has-class`` XPath function handles newlines and other separators
in class names properly;
* fixed parsing of HTML documents with null bytes;
* documentation improvements;
* Python 3.7 tests are run on CI; other test improvements.
1.5.0 (2018-07-04)
~~~~~~~~~~~~~~~~~~
* New ``Selector.attrib`` and ``SelectorList.attrib`` properties which make
it easier to get attributes of HTML elements.
* CSS selectors became faster: compilation results are cached
(LRU cache is used for ``css2xpath``), so there is
less overhead when the same CSS expression is used several times.
* ``.get()`` and ``.getall()`` selector methods are documented and recommended
over ``.extract_first()`` and ``.extract()``.
* Various documentation tweaks and improvements.
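
For instance (a small sketch with invented markup)::

    from parsel import Selector

    selector = Selector(text='<a href="http://example.com" rel="nofollow">link</a>')

    # .attrib on a SelectorList returns the attributes of its first element.
    print(selector.css("a").attrib["href"])  # 'http://example.com'

    # .get() and .getall() are the recommended spellings of
    # .extract_first() and .extract().
    print(selector.css("a::text").get())     # 'link'
    print(selector.css("a::text").getall())  # ['link']
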
One more change is that ``.extract()`` and ``.extract_first()`` methods
are now implemented using ``.get()`` and ``.getall()``, not the other
way around, and instead of calling ``Selector.extract`` all other methods
now call ``Selector.get`` internally. This can be **backwards incompatible**
for custom Selector subclasses that override ``Selector.extract``
without doing the same for ``Selector.get``. If you have such a Selector
subclass, make sure the ``get`` method is also overridden. For example, this::

    class MySelector(parsel.Selector):
        def extract(self):
            return super().extract() + " foo"

should be changed to this::

    class MySelector(parsel.Selector):
        def get(self):
            return super().get() + " foo"
        extract = get
1.4.0 (2018-02-08)
~~~~~~~~~~~~~~~~~~
* ``Selector`` and ``SelectorList`` can't be pickled because
pickling/unpickling doesn't work for ``lxml.html.HtmlElement``;
parsel now raises TypeError explicitly instead of allowing pickle to
silently produce wrong output. This is technically backwards-incompatible
if you're using Python < 3.6.
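
A quick illustration of the explicit failure (sketch)::

    import pickle

    from parsel import Selector

    selector = Selector(text="<html><body>hi</body></html>")
    try:
        pickle.dumps(selector)
    except TypeError as exc:
        # Raised up front instead of letting pickle silently produce
        # output that would not unpickle into a working selector.
        print(exc)
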
1.3.1 (2017-12-28)
~~~~~~~~~~~~~~~~~~
* Fix artifact uploads to PyPI.
1.3.0 (2017-12-28)
~~~~~~~~~~~~~~~~~~
* ``has-class`` XPath extension function;
* ``parsel.xpathfuncs.set_xpathfunc`` is a simplified way to register
XPath extensions;
* ``Selector.remove_namespaces`` now removes namespace declarations;
* Python 3.3 support is dropped;
* ``make htmlview`` command for easier Parsel docs development.
* CI: PyPy installation is fixed; parsel now runs tests for PyPy3 as well.
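
A sketch of both XPath hooks; ``shout`` is our own example, written to
lxml's extension-function convention (the first argument is the evaluation
context), not something parsel ships::

    from parsel import Selector
    from parsel.xpathfuncs import set_xpathfunc

    selector = Selector(text='<p class="lead intro">Hello</p>')

    # has-class ships with parsel and matches space-separated class names.
    print(selector.xpath('//p[has-class("lead")]/text()').get())  # 'Hello'

    # set_xpathfunc registers a custom XPath function under a name.
    def shout(context, value):
        return value.upper()

    set_xpathfunc("shout", shout)
    print(selector.xpath('shout("hello")').get())  # 'HELLO'
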
1.2.0 (2017-05-17)
~~~~~~~~~~~~~~~~~~
* Add ``SelectorList.get`` and ``SelectorList.getall``
methods as aliases for ``SelectorList.extract_first``
and ``SelectorList.extract`` respectively
* Add default value parameter to ``SelectorList.re_first`` method
* Add ``Selector.re_first`` method
* Add ``replace_entities`` argument on ``.re()`` and ``.re_first()``
to turn off replacing of character entity references
* Bug fix: detect ``None`` result from lxml parsing and fall back to an empty document
* Rearrange XML/HTML examples in the selectors usage docs
* Travis CI:
* Test against Python 3.6
* Test against PyPy using "Portable PyPy for Linux" distribution
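
The ``re_first`` default and the ``replace_entities`` flag look like this
in use (a sketch; the markup is invented)::

    from parsel import Selector

    # The default value is returned when the pattern does not match.
    selector = Selector(text="<p>no digits here</p>")
    print(selector.css("p::text").re_first(r"\d+", default="n/a"))  # 'n/a'

    # replace_entities=False keeps character entity references such as
    # &pound; verbatim in the results instead of decoding them.
    raw = Selector(text="<script>label = '&pound;42'</script>")
    print(raw.css("script::text").re_first(r"'(.+)'", replace_entities=False))
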
1.1.0 (2016-11-22)
~~~~~~~~~~~~~~~~~~
* Change default HTML parser to `lxml.html.HTMLParser <https://lxml.de/api/lxml.html.HTMLParser-class.html>`_,
which makes it easier to use some HTML-specific features
* Add css2xpath function to translate CSS to XPath
* Add support for ad-hoc namespace declarations
* Add support for XPath variables
* Documentation improvements and updates
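
For example, XPath variables and the ``css2xpath`` helper (a sketch)::

    from parsel import Selector, css2xpath

    selector = Selector(
        text='<a href="http://example.com">x</a><a href="http://scrapy.org">y</a>'
    )

    # XPath variables avoid interpolating values into the expression string.
    print(selector.xpath("//a[@href=$url]/text()", url="http://scrapy.org").get())

    # css2xpath translates a CSS expression into its XPath equivalent.
    print(css2xpath("a::text"))  # descendant-or-self::a/text()
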
1.0.3 (2016-07-29)
~~~~~~~~~~~~~~~~~~
* Add BSD-3-Clause license file
* Re-enable PyPy tests
* Integrate py.test runs with setuptools (needed for Debian packaging)
* Changelog is now called ``NEWS``
1.0.2 (2016-04-26)
~~~~~~~~~~~~~~~~~~
* Fix bug in exception handling causing original traceback to be lost
* Added docstrings and other doc fixes
1.0.1 (2015-08-24)
~~~~~~~~~~~~~~~~~~
* Updated PyPI classifiers
* Added docstrings for csstranslator module and other doc fixes
1.0.0 (2015-08-22)
~~~~~~~~~~~~~~~~~~
* Documentation fixes
0.9.6 (2015-08-14)
~~~~~~~~~~~~~~~~~~
* Updated documentation
* Extended test coverage
0.9.5 (2015-08-11)
~~~~~~~~~~~~~~~~~~
* Support for extending SelectorList
0.9.4 (2015-08-10)
~~~~~~~~~~~~~~~~~~
* Try workaround for travis-ci/dpl#253
0.9.3 (2015-08-07)
~~~~~~~~~~~~~~~~~~
* Add base_url argument
0.9.2 (2015-08-07)
~~~~~~~~~~~~~~~~~~
* Rename module ``unified`` -> ``selector`` and promote the ``root`` attribute
* Add create_root_node function
0.9.1 (2015-08-04)
~~~~~~~~~~~~~~~~~~
* Setup Sphinx build and docs structure
* Build universal wheels
* Rename some leftovers from package extraction
0.9.0 (2015-07-30)
~~~~~~~~~~~~~~~~~~
* First release on PyPI.
Raw data
{
"_id": null,
"home_page": "https://github.com/scrapy/parsel",
"name": "parsel",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "parsel",
"author": "Scrapy project",
"author_email": "info@scrapy.org",
"download_url": "https://files.pythonhosted.org/packages/87/bd/b982085f091367ca25ccb61f2d127655a0daac1716ecfde014ab7c538116/parsel-1.9.1.tar.gz",
"platform": null,
"description": "======\nParsel\n======\n\n.. image:: https://github.com/scrapy/parsel/actions/workflows/tests.yml/badge.svg\n :target: https://github.com/scrapy/parsel/actions/workflows/tests.yml\n :alt: Tests\n\n.. image:: https://img.shields.io/pypi/pyversions/parsel.svg\n :target: https://github.com/scrapy/parsel/actions/workflows/tests.yml\n :alt: Supported Python versions\n\n.. image:: https://img.shields.io/pypi/v/parsel.svg\n :target: https://pypi.python.org/pypi/parsel\n :alt: PyPI Version\n\n.. image:: https://img.shields.io/codecov/c/github/scrapy/parsel/master.svg\n :target: https://codecov.io/github/scrapy/parsel?branch=master\n :alt: Coverage report\n\n\nParsel is a BSD-licensed Python_ library to extract data from HTML_, JSON_, and\nXML_ documents.\n\nIt supports:\n\n- CSS_ and XPath_ expressions for HTML and XML documents\n\n- JMESPath_ expressions for JSON documents\n\n- `Regular expressions`_\n\nFind the Parsel online documentation at https://parsel.readthedocs.org.\n\nExample (`open online demo`_):\n\n.. code-block:: python\n\n >>> from parsel import Selector\n >>> text = \"\"\"\n <html>\n <body>\n <h1>Hello, Parsel!</h1>\n <ul>\n <li><a href=\"http://example.com\">Link 1</a></li>\n <li><a href=\"http://scrapy.org\">Link 2</a></li>\n </ul>\n <script type=\"application/json\">{\"a\": [\"b\", \"c\"]}</script>\n </body>\n </html>\"\"\"\n >>> selector = Selector(text=text)\n >>> selector.css('h1::text').get()\n 'Hello, Parsel!'\n >>> selector.xpath('//h1/text()').re(r'\\w+')\n ['Hello', 'Parsel']\n >>> for li in selector.css('ul > li'):\n ... print(li.xpath('.//@href').get())\n http://example.com\n http://scrapy.org\n >>> selector.css('script::text').jmespath(\"a\").get()\n 'b'\n >>> selector.css('script::text').jmespath(\"a\").getall()\n ['b', 'c']\n\n.. _CSS: https://en.wikipedia.org/wiki/Cascading_Style_Sheets\n.. _HTML: https://en.wikipedia.org/wiki/HTML\n.. _JMESPath: https://jmespath.org/\n.. _JSON: https://en.wikipedia.org/wiki/JSON\n.. _open online demo: https://colab.research.google.com/drive/149VFa6Px3wg7S3SEnUqk--TyBrKplxCN#forceEdit=true&sandboxMode=true\n.. _Python: https://www.python.org/\n.. _regular expressions: https://docs.python.org/library/re.html\n.. _XML: https://en.wikipedia.org/wiki/XML\n.. _XPath: https://en.wikipedia.org/wiki/XPath\n\n\n\n\n\nHistory\n-------\n\n1.9.1 (2024-04-08)\n~~~~~~~~~~~~~~~~~~\n\n* Removed the dependency on ``pytest-runner``.\n* Removed the obsolete ``Makefile``.\n\n1.9.0 (2024-03-14)\n~~~~~~~~~~~~~~~~~~\n\n* Now requires ``cssselect >= 1.2.0`` (this minimum version was required since\n 1.8.0 but that wasn't properly recorded)\n* Removed support for Python 3.7\n* Added support for Python 3.12 and PyPy 3.10\n* Fixed an exception when calling ``__str__`` or ``__repr__`` on some JSON\n selectors\n* Code formatted with ``black``\n* CI fixes and improvements\n\n1.8.1 (2023-04-18)\n~~~~~~~~~~~~~~~~~~\n\n* Remove a Sphinx reference from NEWS to fix the PyPI description\n* Add a ``twine check`` CI check to detect such problems\n\n1.8.0 (2023-04-18)\n~~~~~~~~~~~~~~~~~~\n\n* Add support for JMESPath: you can now create a selector for a JSON document\n and call ``Selector.jmespath()``. 
See `the documentation`_ for more\n information and examples.\n* Selectors can now be constructed from ``bytes`` (using the ``body`` and\n ``encoding`` arguments) instead of ``str`` (using the ``text`` argument), so\n that there is no internal conversion from ``str`` to ``bytes`` and the memory\n usage is lower.\n* Typing improvements\n* The ``pkg_resources`` module (which was absent from the requirements) is no\n longer used\n* Documentation build fixes\n* New requirements:\n\n * ``jmespath``\n * ``typing_extensions`` (on Python 3.7)\n\n .. _the documentation: https://parsel.readthedocs.io/en/latest/usage.html\n\n1.7.0 (2022-11-01)\n~~~~~~~~~~~~~~~~~~\n\n* Add PEP 561-style type information\n* Support for Python 2.7, 3.5 and 3.6 is removed\n* Support for Python 3.9-3.11 is added\n* Very large documents (with deep nesting or long tag content) can now be\n parsed, and ``Selector`` now takes a new argument ``huge_tree`` to disable\n this\n* Support for new features of cssselect 1.2.0 is added\n* The ``Selector.remove()`` and ``SelectorList.remove()`` methods are\n deprecated and replaced with the new ``Selector.drop()`` and\n ``SelectorList.drop()`` methods which don't delete text after the dropped\n elements when used in the HTML mode.\n\n\n1.6.0 (2020-05-07)\n~~~~~~~~~~~~~~~~~~\n\n* Python 3.4 is no longer supported\n* New ``Selector.remove()`` and ``SelectorList.remove()`` methods to remove\n selected elements from the parsed document tree\n* Improvements to error reporting, test coverage and documentation, and code\n cleanup\n\n\n1.5.2 (2019-08-09)\n~~~~~~~~~~~~~~~~~~\n\n* ``Selector.remove_namespaces`` received a significant performance improvement\n* The value of ``data`` within the printable representation of a selector\n (``repr(selector)``) now ends in ``...`` when truncated, to make the\n truncation obvious.\n* Minor documentation improvements.\n\n\n1.5.1 (2018-10-25)\n~~~~~~~~~~~~~~~~~~\n\n* ``has-class`` XPath function handles newlines and other separators\n in class names properly;\n* fixed parsing of HTML documents with null bytes;\n* documentation improvements;\n* Python 3.7 tests are run on CI; other test improvements.\n\n\n1.5.0 (2018-07-04)\n~~~~~~~~~~~~~~~~~~\n\n* New ``Selector.attrib`` and ``SelectorList.attrib`` properties which make\n it easier to get attributes of HTML elements.\n* CSS selectors became faster: compilation results are cached\n (LRU cache is used for ``css2xpath``), so there is\n less overhead when the same CSS expression is used several times.\n* ``.get()`` and ``.getall()`` selector methods are documented and recommended\n over ``.extract_first()`` and ``.extract()``.\n* Various documentation tweaks and improvements.\n\nOne more change is that ``.extract()`` and ``.extract_first()`` methods\nare now implemented using ``.get()`` and ``.getall()``, not the other\nway around, and instead of calling ``Selector.extract`` all other methods\nnow call ``Selector.get`` internally. It can be **backwards incompatible**\nin case of custom Selector subclasses which override ``Selector.extract``\nwithout doing the same for ``Selector.get``. If you have such Selector\nsubclass, make sure ``get`` method is also overridden. 
For example, this::\n\n class MySelector(parsel.Selector):\n def extract(self):\n return super().extract() + \" foo\"\n\nshould be changed to this::\n\n class MySelector(parsel.Selector):\n def get(self):\n return super().get() + \" foo\"\n extract = get\n\n\n1.4.0 (2018-02-08)\n~~~~~~~~~~~~~~~~~~\n\n* ``Selector`` and ``SelectorList`` can't be pickled because\n pickling/unpickling doesn't work for ``lxml.html.HtmlElement``;\n parsel now raises TypeError explicitly instead of allowing pickle to\n silently produce wrong output. This is technically backwards-incompatible\n if you're using Python < 3.6.\n\n\n1.3.1 (2017-12-28)\n~~~~~~~~~~~~~~~~~~\n\n* Fix artifact uploads to pypi.\n\n\n1.3.0 (2017-12-28)\n~~~~~~~~~~~~~~~~~~\n\n* ``has-class`` XPath extension function;\n* ``parsel.xpathfuncs.set_xpathfunc`` is a simplified way to register\n XPath extensions;\n* ``Selector.remove_namespaces`` now removes namespace declarations;\n* Python 3.3 support is dropped;\n* ``make htmlview`` command for easier Parsel docs development.\n* CI: PyPy installation is fixed; parsel now runs tests for PyPy3 as well.\n\n\n1.2.0 (2017-05-17)\n~~~~~~~~~~~~~~~~~~\n\n* Add ``SelectorList.get`` and ``SelectorList.getall``\n methods as aliases for ``SelectorList.extract_first``\n and ``SelectorList.extract`` respectively\n* Add default value parameter to ``SelectorList.re_first`` method\n* Add ``Selector.re_first`` method\n* Add ``replace_entities`` argument on ``.re()`` and ``.re_first()``\n to turn off replacing of character entity references\n* Bug fix: detect ``None`` result from lxml parsing and fallback with an empty document\n* Rearrange XML/HTML examples in the selectors usage docs\n* Travis CI:\n\n * Test against Python 3.6\n * Test against PyPy using \"Portable PyPy for Linux\" distribution\n\n\n1.1.0 (2016-11-22)\n~~~~~~~~~~~~~~~~~~\n\n* Change default HTML parser to `lxml.html.HTMLParser <https://lxml.de/api/lxml.html.HTMLParser-class.html>`_,\n which makes easier to use some HTML specific features\n* Add css2xpath function to translate CSS to XPath\n* Add support for ad-hoc namespaces declarations\n* Add support for XPath variables\n* Documentation improvements and updates\n\n\n1.0.3 (2016-07-29)\n~~~~~~~~~~~~~~~~~~\n\n* Add BSD-3-Clause license file\n* Re-enable PyPy tests\n* Integrate py.test runs with setuptools (needed for Debian packaging)\n* Changelog is now called ``NEWS``\n\n\n1.0.2 (2016-04-26)\n~~~~~~~~~~~~~~~~~~\n\n* Fix bug in exception handling causing original traceback to be lost\n* Added docstrings and other doc fixes\n\n\n1.0.1 (2015-08-24)\n~~~~~~~~~~~~~~~~~~\n\n* Updated PyPI classifiers\n* Added docstrings for csstranslator module and other doc fixes\n\n\n1.0.0 (2015-08-22)\n~~~~~~~~~~~~~~~~~~\n\n* Documentation fixes\n\n\n0.9.6 (2015-08-14)\n~~~~~~~~~~~~~~~~~~\n\n* Updated documentation\n* Extended test coverage\n\n\n0.9.5 (2015-08-11)\n~~~~~~~~~~~~~~~~~~\n\n* Support for extending SelectorList\n\n\n0.9.4 (2015-08-10)\n~~~~~~~~~~~~~~~~~~\n\n* Try workaround for travis-ci/dpl#253\n\n\n0.9.3 (2015-08-07)\n~~~~~~~~~~~~~~~~~~\n\n* Add base_url argument\n\n\n0.9.2 (2015-08-07)\n~~~~~~~~~~~~~~~~~~\n\n* Rename module unified -> selector and promoted root attribute\n* Add create_root_node function\n\n\n0.9.1 (2015-08-04)\n~~~~~~~~~~~~~~~~~~\n\n* Setup Sphinx build and docs structure\n* Build universal wheels\n* Rename some leftovers from package extraction\n\n\n0.9.0 (2015-07-30)\n~~~~~~~~~~~~~~~~~~\n\n* First release on PyPI.\n",
"bugtrack_url": null,
"license": "BSD",
"summary": "Parsel is a library to extract data from HTML and XML using XPath and CSS selectors",
"version": "1.9.1",
"project_urls": {
"Homepage": "https://github.com/scrapy/parsel"
},
"split_keywords": [
"parsel"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "857ee3f1a7ff69303a4e08a8742a285406e5786650d8218ff194743eff292a1e",
"md5": "e27b5b33dd76e185750bf11f3362d98b",
"sha256": "c4a777ee6c3ff5e39652b58e351c5cf02c12ff420d05b07a7966aebb68ab1700"
},
"downloads": -1,
"filename": "parsel-1.9.1-py2.py3-none-any.whl",
"has_sig": false,
"md5_digest": "e27b5b33dd76e185750bf11f3362d98b",
"packagetype": "bdist_wheel",
"python_version": "py2.py3",
"requires_python": ">=3.8",
"size": 17116,
"upload_time": "2024-04-08T08:12:23",
"upload_time_iso_8601": "2024-04-08T08:12:23.160745Z",
"url": "https://files.pythonhosted.org/packages/85/7e/e3f1a7ff69303a4e08a8742a285406e5786650d8218ff194743eff292a1e/parsel-1.9.1-py2.py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "87bdb982085f091367ca25ccb61f2d127655a0daac1716ecfde014ab7c538116",
"md5": "cd242530b761e477244eeac2caeadb85",
"sha256": "14e00dc07731c9030db620c195fcae884b5b4848e9f9c523c6119f708ccfa9ac"
},
"downloads": -1,
"filename": "parsel-1.9.1.tar.gz",
"has_sig": false,
"md5_digest": "cd242530b761e477244eeac2caeadb85",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 51225,
"upload_time": "2024-04-08T08:12:24",
"upload_time_iso_8601": "2024-04-08T08:12:24.943643Z",
"url": "https://files.pythonhosted.org/packages/87/bd/b982085f091367ca25ccb61f2d127655a0daac1716ecfde014ab7c538116/parsel-1.9.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-04-08 08:12:24",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "scrapy",
"github_project": "parsel",
"travis_ci": false,
"coveralls": true,
"github_actions": true,
"tox": true,
"lcname": "parsel"
}