ani-scrapy

- Name: ani-scrapy
- Version: 0.1.8
- Summary: Python library for scraping anime websites, currently supporting AnimeFLV and JKAnime.
- Upload time: 2025-08-24 06:45:23
- Requires Python: >=3.8
- License: MIT
- Keywords: anime, scraping, playwright, async, python
- Requirements: aider-install, aiohappyeyeballs, aiohttp, aiosignal, attrs, beautifulsoup4, bs4, build, certifi, cffi, charset-normalizer, click, cloudscraper, colorama, curl_cffi, docutils, frozenlist, greenlet, h11, id, idna, jaraco.classes, jaraco.context, jaraco.functools, keyring, loguru, lxml, markdown-it-py, mdurl, more-itertools, multidict, nh3, packaging, playwright, propcache, py-anime-scraper, pycparser, pyee, Pygments, pyparsing, pyproject_hooks, PyVirtualDisplay, pywin32-ctypes, readme_renderer, requests, requests-toolbelt, rfc3986, rich, soupsieve, twine, typing_extensions, urllib3, uv, win32_setctime, yarl
# Ani Scrapy

[![PyPI Version](https://img.shields.io/pypi/v/ani-scrapy.svg)](https://pypi.org/project/ani-scrapy/)

[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)

<!-- [![Build Status](https://github.com/your_username/py-anime-scraper/actions/workflows/main.yml/badge.svg)](https://github.com/your_username/py-anime-scraper/actions) -->

**Ani-Scrapy** is a Python library for scraping anime websites, designed to provide both synchronous and asynchronous interfaces. It currently supports **AnimeFLV** and **JKAnime**, and makes it easy to switch between platforms.

Ani-Scrapy helps developers automate anime downloads and build anime-related applications. It exposes detailed anime and episode information, along with download links from multiple servers, and handles both static and JavaScript-rendered content on the supported sites.

## πŸš€ Features

### Core Functionality

- **Dual Interface**: Synchronous and asynchronous APIs for flexible integration.
- **Multi-Platform Support**: Unified interface for different platforms.
- **Comprehensive Data**: Detailed anime metadata, episode information, and download links.

### Content Handling

- **Static Content Extraction**: Direct server links using `requests + cloudscraper + curl_cffi + aiohttp + bs4`
- **Dynamic Content Processing**: JavaScript-rendered links using `Playwright`
- **Mixed Approach**: Smart fallback from static to dynamic methods (sketched after this list)
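
As an illustration of that static-first idea, a minimal fallback sketch might look like the following. This is a hypothetical outline that calls `cloudscraper`, `bs4`, and Playwright directly; it is not ani-scrapy's actual internals, and the real library presumably also falls back when the static HTML simply lacks the links:

```python
import cloudscraper
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright


def get_page_html(url: str) -> str:
    # 1. Static attempt: plain HTTP with Cloudflare handling via cloudscraper
    try:
        resp = cloudscraper.create_scraper().get(url, timeout=30)
        resp.raise_for_status()
        return resp.text
    except Exception:
        pass
    # 2. Dynamic fallback: render the page with a real browser
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        html = page.content()
        browser.close()
    return html


def extract_links(html: str) -> list[str]:
    # Pull candidate links out of the (possibly rendered) HTML
    soup = BeautifulSoup(html, "lxml")
    return [a["href"] for a in soup.select("a[href]")]
```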

### Technical Capabilities

- **Concurrent Scraping**: Built-in support for asynchronous batch processing
- **Automatic Resource Management**: Browser instances are created and cleaned up automatically when not provided (see the example after this list)
- **Custom Browser Support**: Configurable browser paths and headless/headed modes via `executable_path` and `headless` options
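
For example, if the resource-management note above holds, a browser-dependent call can simply omit the `browser` argument and let the scraper manage its own instance. A sketch under that assumption, with illustrative IDs only:

```python
import asyncio

from ani_scrapy.async_api import AnimeFLVScraper


async def main():
    scraper = AnimeFLVScraper(verbose=True)
    results = await scraper.search_anime(query="naruto")
    # No `browser` argument: per the note above, the scraper is expected
    # to launch and dispose of its own Playwright instance internally.
    links = await scraper.get_iframe_download_links(
        anime_id=results.animes[0].id, episode_id=1
    )
    print(links)


asyncio.run(main())
```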

### Development Experience

- **Modular Design**: Easy to extend with new scrapers and platforms
- **Configurable Logging**: Verbose mode and multiple log levels (`DEBUG`, `INFO`, `SUCCESS`, `WARNING`, `ERROR`); see the snippet after this list
- **Performance Optimization**: Connection reuse and caching capabilities
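
The `SUCCESS` level hints that logging is built on `loguru`, which does appear in the dependency list. Assuming that is the case, log output can presumably be tuned through loguru's standard API:

```python
import sys

from loguru import logger

# Assumption: ani-scrapy logs through loguru. Drop the default sink
# and keep only WARNING and above.
logger.remove()
logger.add(sys.stderr, level="WARNING")
```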

## πŸ“¦ Installation

### From PyPI:

```bash
pip install ani-scrapy
```

### From GitHub:

```bash
pip install git+https://github.com/ElPitagoras14/ani-scrapy.git
```

### Development Installation:

```bash
git clone https://github.com/ElPitagoras14/ani-scrapy.git
cd ani-scrapy
pip install -e .
playwright install chromium
```

## 🐍 Requirements

- Python >= 3.9 (tested with 3.12)

Install Chromium (only once):

```bash
playwright install chromium
```

## πŸ“Š Supported Websites

### Currently Supported

- **AnimeFLV**: Full support
- **JKAnime**: Search, info, table download links, and file downloads; ~~iframe downloads~~ not yet supported

## πŸš€ Basic Usage

### Asynchronous API Example

```python
from ani_scrapy.async_api import AnimeFLVScraper, JKAnimeScraper, AsyncBrowser
import asyncio


async def main():
    # Initialize scrapers
    animeflv_scraper = AnimeFLVScraper(verbose=True)
    jkanime_scraper = JKAnimeScraper(verbose=True)

    # Search anime
    an_results = await animeflv_scraper.search_anime(query="naruto", page=1)
    jk_results = await jkanime_scraper.search_anime(query="naruto")
    print(f"AnimeFLV results: {len(an_results.animes)} animes found")
    print(f"JKAnime results: {len(jk_results.animes)} animes found")

    # Get anime info
    an_info = await animeflv_scraper.get_anime_info(
        anime_id=an_results.animes[0].id
    )
    jk_info = await jkanime_scraper.get_anime_info(
        anime_id=jk_results.animes[0].id
    )
    print(f"AnimeFLV info: {an_info.title}")
    print(f"JKAnime info: {jk_info.title}")

    # Get download links (with browser for dynamic content)
    async with AsyncBrowser(headless=False) as browser:
        # Table download links
        an_table_links = await animeflv_scraper.get_table_download_links(
            anime_id=an_info.id, episode_id=1
        )
        jk_table_links = await jkanime_scraper.get_table_download_links(
            anime_id=jk_info.id, episode_id=1, browser=browser
        )

        # Iframe download links (requires browser for JS content)
        an_iframe_links = await animeflv_scraper.get_iframe_download_links(
            anime_id=an_info.id, episode_id=1, browser=browser
        )

        # Get final file download links
        if an_iframe_links.download_links:
            file_links = await animeflv_scraper.get_file_download_link(
                download_info=an_iframe_links.download_links[0],
                browser=browser,
            )
            print(f"Download URL: {file_links.url}")


if __name__ == "__main__":
    asyncio.run(main())

```

### Synchronous API Example

```python
from ani_scrapy.sync_api import AnimeFLVScraper, JKAnimeScraper, SyncBrowser

# Initialize scrapers
animeflv_scraper = AnimeFLVScraper(verbose=True)
jkanime_scraper = JKAnimeScraper(verbose=True)

# Search anime
an_results = animeflv_scraper.search_anime(query="naruto", page=1)
jk_results = jkanime_scraper.search_anime(query="naruto")
print(f"AnimeFLV results: {len(an_results.animes)} animes found")
print(f"JKAnime results: {len(jk_results.animes)} animes found")

# Get anime info
an_info = animeflv_scraper.get_anime_info(anime_id=an_results.animes[0].id)
jk_info = jkanime_scraper.get_anime_info(anime_id=jk_results.animes[0].id)
print(f"AnimeFLV info: {an_info.title}")
print(f"JKAnime info: {jk_info.title}")

# Get download links (with browser for dynamic content)
with SyncBrowser(headless=False) as browser:
    # Table download links
    an_table_links = animeflv_scraper.get_table_download_links(
        anime_id=an_info.id, episode_id=1
    )
    jk_table_links = jkanime_scraper.get_table_download_links(
        anime_id=jk_info.id, episode_id=1, browser=browser
    )

    # Iframe download links (requires browser for JS content)
    an_iframe_links = animeflv_scraper.get_iframe_download_links(
        anime_id=an_info.id, episode_id=1, browser=browser
    )

    # Get final file download links
    if an_iframe_links.download_links:
        file_links = animeflv_scraper.get_file_download_link(
            download_info=an_iframe_links.download_links[0], browser=browser
        )
        print(f"Download URL: {file_links.url}")

```

## πŸ“– API Reference

For complete documentation: [API Reference](https://github.com/ElPitagoras14/ani-scrapy/blob/main/docs/API_REFERENCE.md)

### Methods Overview:

- `search_anime` - Search for anime
- `get_anime_info` - Get detailed anime information
- `get_table_download_links` - Get direct server links
- `get_iframe_download_links` - Get iframe links
- `get_file_download_link` - Get final download URL

### Browser Classes:

- `AsyncBrowser` - Automatic resource management for async operations
- `SyncBrowser` - Context manager for synchronous scraping

## πŸ› οΈ Advanced Usage

### Custom Browser Configuration

```python
import asyncio

from ani_scrapy.async_api import AsyncBrowser

# Path to a custom Chromium-based browser, e.g. Brave (set this for your system)
brave_path = ""


async def main():
    async with AsyncBrowser(
        headless=False,
        executable_path=brave_path,
    ) as browser:
        # Your scraping code here
        pass


asyncio.run(main())
```

### Error Handling Example

```python
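# Inside an async function, with `scraper` already initialized as in the examples above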
try:
    results = await scraper.search_anime("naruto")
    if results.animes:
        anime_info = await scraper.get_anime_info(results.animes[0].id)
        print(f"Success: {anime_info.title}")
except Exception as e:
    print(f"Error occurred: {e}")
    # Implement retry logic or fallback here
```
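
As a concrete version of that retry comment, a small exponential-backoff wrapper might look like this (`search_with_retry` is a hypothetical helper, not part of the library):

```python
import asyncio


async def search_with_retry(scraper, query: str, retries: int = 3):
    # Retry transient failures with exponential backoff: 1s, 2s, 4s, ...
    for attempt in range(retries):
        try:
            return await scraper.search_anime(query)
        except Exception:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(2**attempt)
```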

### Concurrent Scraping

```python
import asyncio

async def scrape_multiple_animes(anime_ids):
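    # `scraper` is an initialized scraper from the examples above (e.g. AnimeFLVScraper)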
    tasks = []
    for anime_id in anime_ids:
        task = scraper.get_anime_info(anime_id)
        tasks.append(task)

    results = await asyncio.gather(*tasks, return_exceptions=True)
    return results
```
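
When scraping many titles, it is polite to cap concurrency rather than firing every request at once. A minimal sketch using `asyncio.Semaphore`; the limit of 5 is an arbitrary choice, not a library requirement:

```python
import asyncio


async def scrape_politely(scraper, anime_ids, limit: int = 5):
    # Allow at most `limit` in-flight requests at a time
    sem = asyncio.Semaphore(limit)

    async def fetch(anime_id):
        async with sem:
            return await scraper.get_anime_info(anime_id)

    return await asyncio.gather(
        *(fetch(anime_id) for anime_id in anime_ids),
        return_exceptions=True,
    )
```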

## 🀝 Contributing

Contributions to Ani-Scrapy are welcome! You can help by:

- Reporting bugs or suggesting new features via GitHub Issues.
- Improving documentation.
- Adding new scrapers or enhancing existing ones.
- Writing tests to ensure code quality.

### How to contribute

1. Fork the repository.
2. Create a new branch for your feature or fix:

```bash
git checkout -b my-feature
```

3. Make your changes and commit with clear messages.
4. Push your branch to your fork.
5. Open a Pull Request against the `main` branch of the original repository.

Please ensure that all tests pass before submitting a PR. Contributions are expected to respect the license and coding style.

## πŸ§ͺ Development and Testing

Install development dependencies:

```bash
pip install -r requirements.txt
```

## 🚧 Coming Soon

Support for more anime websites and further unification of scraper methods is planned.

If you want to contribute by adding new scrapers for other sites, contributions are welcome!

## ⚠️ Disclaimer

This library is intended for **educational and personal use only**. Please respect the terms of service of the websites being scraped and the applicable laws. The author is not responsible for any misuse.

## πŸ“„ License

MIT Β© 2025 El PitΓ‘goras

            
