Scrapy spiderdocs command
=========================

.. image:: https://img.shields.io/pypi/pyversions/scrapy-spiderdocs.svg
    :target: https://pypi.python.org/pypi/scrapy-spiderdocs/
    :alt: PyPI python versions

.. image:: https://img.shields.io/pypi/l/scrapy-spiderdocs.svg
    :target: https://pypi.python.org/pypi/scrapy-spiderdocs/
    :alt: PyPI license

.. image:: https://badge.fury.io/py/scrapy-spiderdocs.svg
    :target: https://pypi.python.org/pypi/scrapy-spiderdocs/
    :alt: PyPI version

.. image:: https://img.shields.io/pypi/status/scrapy-spiderdocs.svg
    :target: https://pypi.python.org/pypi/scrapy-spiderdocs/
    :alt: PyPI status

.. image:: https://img.shields.io/pypi/dm/scrapy-spiderdocs.svg
    :target: https://pypi.python.org/pypi/scrapy-spiderdocs/
    :alt: PyPI downloads per month

Usage example
-------------

.. code-block:: bash

    pip install scrapy-spiderdocs
    scrapy spiderdocs <module.name>

Example project
---------------

See the ``documented`` project for an example.

.. code-block:: python

    # -*- coding: utf-8 -*-
    import scrapy


    class ExampleSpider(scrapy.Spider):
        """Some text.
        Hi!

        ; Note

        Some note.

        ; Output

        {
            "1": 1
        }
        """

        name = 'example'
        allowed_domains = ('example.com',)
        start_urls = ('http://example.com/',)

        def parse(self, response):
            yield {
                'body_length': len(response.body)
            }


    class ExampleSpider2(scrapy.Spider):
        """Some text.
        Hi!

        ; Info

        Some info.
        """

        name = 'example2'
        allowed_domains = ('example.com',)
        start_urls = ('http://example.com/',)

        def parse(self, response):
            yield {'success': True}

Settings:

.. code-block:: python

    SPIDERDOCS_SECTION_PROCESSORS = {
        'output': lambda name, content: '### {name}\n\n```json\n{content}\n```'.format(name=name, content=content),
        'info': lambda name, content: '{content}'.format(content=content)
    }

Execute the command:

.. code-block:: bash

    scrapy spiderdocs documented.spiders

Output:

.. code-block::

    # documented.spiders spiders

    ## example2 [documented.spiders.example.ExampleSpider2]

    Some info.

    ## example [documented.spiders.example.ExampleSpider]

    ### Note

    Some note.

    ### Output

    ```json
    {
        "1": 1
    }
    ```

Output options
--------------

stdout
~~~~~~

.. code-block:: bash

    scrapy spiderdocs <module.name> > somefile.md

``-o`` (``--output``) option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

    scrapy spiderdocs <module.name> -o somefile.md

Settings
~~~~~~~~

.. code-block:: python

    SPIDERDOCS_LOCATIONS = {
        'module.name': "somefile.md"
    }

This setting is used if no module is specified:

.. code-block:: bash

    scrapy spiderdocs

Docstring syntax
----------------

Use ``;`` to create sections. For example:

.. code-block::

    ; Section 1

    Some text ...

    ; Section 2

    Some text ...

Use ``; end`` to close a section:

.. code-block::

    This text will not be added to the documentation.

    ; Section 1

    Some text ...

    ; end

    And this text also will be skipped.

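
The sectioning rules above can be sketched as a small parser. This is illustrative only: ``split_sections`` is a hypothetical helper, not the actual scrapy-spiderdocs implementation.

```python
def split_sections(docstring):
    """Split a docstring into sections delimited by '; Name' lines.

    Hypothetical helper for illustration only -- not the actual
    scrapy-spiderdocs implementation. Text before the first marker,
    and text after '; end', is discarded.
    """
    sections = {}
    current = None
    for line in docstring.splitlines():
        stripped = line.strip()
        if stripped.startswith(';'):
            name = stripped[1:].strip()
            # '; end' closes the current section; any other name opens one.
            current = None if name.lower() == 'end' else name
            if current is not None:
                sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: '\n'.join(body).strip() for name, body in sections.items()}
```

Applied to the docstring above, only the text between ``; Section 1`` and ``; end`` survives.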
Section processors
~~~~~~~~~~~~~~~~~~

An example:

.. code-block:: python

    SPIDERDOCS_SECTION_PROCESSORS = {
        'output': lambda name, content: '### {name}\n\n```json\n{content}\n```'.format(name=name, content=content)
    }

With this processor, the docstring section

.. code-block::

    ; Output

    {
        "attr": "value"
    }

will be translated into:

.. code-block::

    ### Output

    ```json
    {
        "attr": "value"
    }
    ```

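
A processor need not be a lambda; a named function with the same ``(name, content) -> str`` contract may read more clearly. A sketch (``render_output`` is an illustrative name, equivalent to the lambda above):

```python
def render_output(name, content):
    # Render a section as a level-3 heading followed by a fenced JSON block.
    # Hypothetical name; behaves the same as the lambda shown above.
    return '### {name}\n\n```json\n{content}\n```'.format(name=name, content=content)

SPIDERDOCS_SECTION_PROCESSORS = {
    'output': render_output,
}
```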
Scrapy settings
---------------

``SPIDERDOCS_LOCATIONS: {<module>: <destination>}``, default: ``{}``.

``SPIDERDOCS_SECTION_PROCESSORS: {<section_name>: <function(name, content) -> str>}``, default: ``{}``.

See the usage examples above.

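
Putting both settings together, a project's ``settings.py`` might contain something like the following (the module path and file name are illustrative):

```python
# settings.py (illustrative values)
SPIDERDOCS_LOCATIONS = {
    'documented.spiders': 'docs/spiders.md',
}

SPIDERDOCS_SECTION_PROCESSORS = {
    'output': lambda name, content: '### {name}\n\n```json\n{content}\n```'.format(
        name=name, content=content
    ),
}
```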
Development
-----------

.. code-block:: bash

    git clone git@github.com:nanvel/scrapy-spiderdocs.git
    cd scrapy-spiderdocs
    virtualenv .env --no-site-packages -p /usr/local/bin/python3
    source .env/bin/activate
    pip install scrapy
    scrapy crawl example
    scrapy spiderdocs documented.spiders
    python -m unittest documented.tests

TODO
----

unit tests (e.g. if there is no docstring, ...)