fadex

Name: fadex
Version: 0.1.5 (PyPI)
Summary: A Powerful WebScraper With Unmatched Performance
Home page: https://github.com/fahad-programmer/fadex
License: MIT
Keywords: web scraping, scraper, async, performance, beautiful soup, lxml
Upload time: 2024-10-09 12:35:19
Requirements: none recorded
## :dart: About ##
**Fadex** is a powerful Python module that provides robust web scraping functionalities, including fetching web pages, extracting metadata, and parsing HTML content. Built with a Rust backend using PyO3, it is optimized for performance and ease of use in web scraping tasks.

## :sparkles: Features ##

:heavy_check_mark: Fetch web pages asynchronously;\
:heavy_check_mark: Extract metadata including title and description;\
:heavy_check_mark: Sanitize and extract all href links from HTML;\
:heavy_check_mark: Fetch elements by ID and class efficiently;

## Installing

Use the following command in your terminal to install the module.
```bash
$ pip install fadex
```

## :rocket: Technologies ##

The following tools were used in this project:

- [Python](https://python.org)
- [Rust](https://www.rust-lang.org/)
- [PyO3](https://pyo3.rs/v0.15.0/)

## :white_check_mark: Requirements ##

Before starting :checkered_flag:, ensure you have [Python](https://python.org) installed.


## :test_tube: How To Use ##

```python
import asyncio
from fadex import fetch_page_py

async def fetch_page(url):
    try:
        content = await fetch_page_py(url)
        print("Page content fetched successfully:")
        print(content)
    except Exception as e:
        print(f"Failed to fetch page: {e}")

# Example usage
url = "http://example.com"
asyncio.run(fetch_page(url))
```
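The wrapper above fetches one page at a time; several pages can be fetched concurrently with `asyncio.gather`. The sketch below substitutes a local stand-in coroutine (`fetch_stub`, a hypothetical name) for `fetch_page_py` so it runs even without fadex installed; in practice you would await the real call instead.

```python
import asyncio

async def fetch_stub(url):
    # Stand-in for fadex's fetch_page_py: returns a fake page body.
    await asyncio.sleep(0)  # yield control, as a real network fetch would
    return f"<html><title>{url}</title></html>"

async def fetch_all(urls):
    # gather() runs the fetches concurrently and preserves input order;
    # return_exceptions=True keeps one failing URL from cancelling the rest.
    results = await asyncio.gather(
        *(fetch_stub(u) for u in urls), return_exceptions=True
    )
    return dict(zip(urls, results))

pages = asyncio.run(fetch_all(["http://example.com", "http://example.org"]))
```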

## :hammer_and_wrench: Functionalities

- Fetch metadata (title and description):
  ```python
  title, description = get_meta_and_title(html_content)
  ```

- Extract links from HTML:
  ```python
  links = extract_links(html_content)
  ```

- Fetch elements by ID:
  ```python
  elements = find_element_by_id(html_content, "your-id")
  ```

- Fetch elements by class:
  ```python
  elements = get_elements_by_cls(html_content, "your-class")
  ```

## :memo: License ##

This project is licensed under the MIT License. For more details, see the [LICENSE](LICENSE.md) file.

Made with :heart: by <a href="https://github.com/fahad-programmer" target="_blank">Fahad Malik</a>

&#xa0;

<a href="#top">Back to top</a>

# Fadex: A Powerful Web Scraper With Unmatched Performance

## Overview

**Fadex** is a Python module that provides powerful web scraping functionalities, including fetching web pages, extracting metadata, and parsing HTML content. Built with a Rust backend using PyO3, it aims to provide high performance and ease of use for web scraping tasks.

## Installation

You can easily install Fadex using pip:

```bash
pip install fadex
```

## Usage

### Basic Example

To fetch the content of a web page asynchronously, you can use the `fetch_page` function:

```python
import asyncio
from fadex import fetch_page

async def fetch_page_content(url):
    try:
        content = await fetch_page(url)
        print("Page content fetched successfully:")
        print(content)
    except Exception as e:
        print(f"Failed to fetch page: {e}")

# Example usage
url = "http://example.com"
asyncio.run(fetch_page_content(url))
```

## API Reference

### Functions

#### `get_meta_and_title(html: str) -> Tuple[Optional[str], Optional[str]]`

Parses the HTML content and extracts the title and meta description.

- **Parameters:**
  - `html`: A string containing the HTML content.
- **Returns:**
  - A tuple containing:
    - `title`: An optional string representing the page title.
    - `description`: An optional string representing the meta description.
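As a point of reference for the behavior described above, a minimal pure-Python equivalent can be sketched with the standard library's `html.parser`. This illustrates what the function returns, not fadex's Rust implementation:

```python
from html.parser import HTMLParser
from typing import Optional, Tuple

class _MetaParser(HTMLParser):
    # Collects the <title> text and the content of <meta name="description">.
    def __init__(self):
        super().__init__()
        self.title: Optional[str] = None
        self.description: Optional[str] = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") == "description":
            self.description = a.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

def get_meta_and_title_ref(html: str) -> Tuple[Optional[str], Optional[str]]:
    p = _MetaParser()
    p.feed(html)
    return p.title, p.description

title, desc = get_meta_and_title_ref(
    '<html><head><title>Hi</title>'
    '<meta name="description" content="demo"></head></html>'
)
```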

#### `extract_links(html: str) -> List[str]`

Extracts and sanitizes all href links from the HTML content.

- **Parameters:**
  - `html`: A string containing the HTML content.
- **Returns:**
  - A list of sanitized URLs extracted from the HTML.
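The sanitization step (resolving relative hrefs against the page URL and keeping only web URLs) can be illustrated with the standard library; fadex's internal rules may differ, so treat this as a behavioral sketch only:

```python
from urllib.parse import urljoin, urlparse

def sanitize_links(hrefs, base_url):
    # Resolve relative hrefs against the page URL, then drop non-web
    # schemes (mailto:, javascript:, etc.).
    absolute = (urljoin(base_url, h) for h in hrefs)
    return [u for u in absolute if urlparse(u).scheme in ("http", "https")]

links = sanitize_links(
    ["/about", "mailto:hi@example.com", "https://other.org/x"],
    "https://example.com/index.html",
)
```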

#### `fetch_page(url: str) -> Awaitable[str]`

Asynchronously fetches the content of a web page.

- **Parameters:**
  - `url`: A string containing the URL of the page to fetch.
- **Returns:**
  - A string containing the content of the fetched page.

#### `find_element_by_id(html: str, id: str) -> List[str]`

Fetches the elements that have the specified `id` in the HTML content.

- **Parameters:**
  - `html`: A string containing the HTML content.
  - `id`: The `id` value to search for.
- **Returns:**
  - A list of matching elements (usually a single element, since `id` values should be unique within a document).

#### `get_elements_by_cls(html: str, class_name: str) -> List[str]`

Fetches the elements that have the specified class in the HTML content.

- **Parameters:**
  - `html`: A string containing the HTML content.
  - `class_name`: The class to search for. (`class` is a reserved word in Python, so the parameter cannot literally be named `class`.)
- **Returns:**
  - A list of elements that carry the given class.
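The class-matching rule these functions imply (a `class` attribute is a space-separated list, so `class="card main"` matches a lookup for `card`) can be sketched with the standard library. Note that fadex returns the matching elements themselves, while this illustration collects only their tag names:

```python
from html.parser import HTMLParser

class _ClassCollector(HTMLParser):
    # Records the tag name of every start tag whose class attribute
    # contains the target class (class is a space-separated list).
    def __init__(self, class_name):
        super().__init__()
        self.class_name = class_name
        self.matches = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.class_name in classes:
            self.matches.append(tag)

def tags_with_class(html, class_name):
    c = _ClassCollector(class_name)
    c.feed(html)
    return c.matches

found = tags_with_class('<div class="card main"><p class="card">x</p></div>', "card")
```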


## Performance Comparison

We conducted a performance comparison between **Fadex**, **BeautifulSoup**, and **lxml** by extracting the metadata (title and description) and extracting all links from 10 popular websites. The results are as follows:

### Metadata Extraction Performance

```
Fadex Metadata Extraction Average Time: 0.56 seconds (Successful Extracts: 100)
BeautifulSoup Metadata Extraction Average Time: 0.78 seconds (Successful Extracts: 100)
lxml Metadata Extraction Average Time: 0.69 seconds (Successful Extracts: 100)

Performance Comparison for Metadata Extraction:
Fadex Time: 0.56 seconds
BeautifulSoup Time: 0.78 seconds
lxml Time: 0.69 seconds

Winner for Metadata Extraction: Fadex
```

### Link Extraction Performance

```
Fadex Link Extraction Average Time: 0.62 seconds (Successful Extracts: 100)
BeautifulSoup Link Extraction Average Time: 0.81 seconds (Successful Extracts: 100)
lxml Link Extraction Average Time: 0.65 seconds (Successful Extracts: 100)

Performance Comparison for Link Extraction:
Fadex Time: 0.62 seconds
BeautifulSoup Time: 0.81 seconds
lxml Time: 0.65 seconds

Winner for Link Extraction: Fadex
```

These results show that **Fadex** outperforms both **BeautifulSoup** and **lxml** in terms of average response time for extracting metadata and links. However, the performance of each library can also depend on factors such as the complexity of the HTML content and the internet connection stability.

## Example Code for Performance Comparison

Below is the code used for the performance comparison:

```python
import asyncio
import time
from fadex import fetch_page_py, get_meta_and_title_py, extract_links_py
from bs4 import BeautifulSoup
from lxml import html as lxml_html
from urllib.parse import urljoin, urlparse

# Function to extract metadata using Fadex
def extract_metadata_with_fadex(html_content):
    try:
        title, description = get_meta_and_title_py(html_content)
        return True, title, description
    except Exception as e:
        return False, None, None

# Function to extract metadata using BeautifulSoup
def extract_metadata_with_beautifulsoup(html_content):
    try:
        soup = BeautifulSoup(html_content, 'html.parser')
        title = soup.title.string if soup.title else None
        description = None
        meta_tag = soup.find('meta', attrs={'name': 'description'})
        if meta_tag:
            description = meta_tag.get('content')
        return True, title, description
    except Exception as e:
        return False, None, None

# Function to extract metadata using lxml
def extract_metadata_with_lxml(html_content):
    try:
        tree = lxml_html.fromstring(html_content)
        title = tree.find('.//title').text if tree.find('.//title') is not None else None
        description = None
        meta = tree.xpath('//meta[@name="description"]')
        if meta and 'content' in meta[0].attrib:
            description = meta[0].attrib['content']
        return True, title, description
    except Exception as e:
        return False, None, None

# Function to extract links using Fadex
def extract_links_with_fadex(html_content, base_url):
    try:
        links = extract_links_py(html_content, base_url)
        return True, links
    except Exception as e:
        return False, []

# Function to extract links using BeautifulSoup
def extract_links_with_beautifulsoup(html_content, base_url):
    try:
        soup = BeautifulSoup(html_content, 'html.parser')
        links = [urljoin(base_url, a['href']) for a in soup.find_all('a', href=True)]
        return True, [link for link in links if urlparse(link).scheme in ["http", "https"]]
    except Exception as e:
        return False, []

# Function to extract links using lxml
def extract_links_with_lxml(html_content, base_url):
    try:
        tree = lxml_html.fromstring(html_content)
        links = [urljoin(base_url, link) for link in tree.xpath('//a/@href')]
        return True, [link for link in links if urlparse(link).scheme in ["http", "https"]]
    except Exception as e:
        return False, []

# Function to measure average performance for each library
def measure_metadata_performance(html_contents, extract_func, iterations=5):
    total_time = 0
    successful_extracts = 0
    for _ in range(iterations):
        for html_content in html_contents:
            start_time = time.time()
            success, title, description = extract_func(html_content)
            total_time += time.time() - start_time
            if success:
                successful_extracts += 1
    average_time = total_time / (len(html_contents) * iterations)
    return average_time, successful_extracts

# Function to measure link extraction performance for each library
def measure_link_extraction_performance(html_contents, base_urls, extract_func, iterations=5):
    total_time = 0
    successful_extracts = 0
    for _ in range(iterations):
        for html_content, base_url in zip(html_contents, base_urls):
            start_time = time.time()
            success, links = extract_func(html_content, base_url)
            total_time += time.time() - start_time
            if success:
                successful_extracts += 1
    average_time = total_time / (len(html_contents) * iterations)
    return average_time, successful_extracts

# Main function to run the tests
async def main():
    # List of popular URLs for testing
    urls = [
        "https://www.google.com",
        "https://www.wikipedia.org",
        "https://www.github.com",
        "https://www.reddit.com",
        "https://www.stackoverflow.com",
        "https://www.nytimes.com",
        "https://www.bbc.com",
        "https://www.amazon.com",
        "https://www.apple.com",
        "https://www.microsoft.com"
    ]

    # Fetch page content using Fadex
    html_contents = []
    for url in urls:
        try:
            content = await fetch_page_py(url)
            html_contents.append(content)
        except Exception as e:
            print(f"Failed to fetch page from {url}: {e}")

    # Define number of iterations for performance measurement
    iterations = 10

    # Measure performance for Fadex (metadata extraction)
    fadex_meta_average_time, fadex_meta_success = measure_metadata_performance(
        html_contents, extract_metadata_with_fadex, iterations
    )

    # Measure performance for BeautifulSoup (metadata extraction)
    bs_meta_average_time, bs_meta_success = measure_metadata_performance(
        html_contents, extract_metadata_with_beautifulsoup, iterations
    )

    # Measure performance for lxml (metadata extraction)
    lxml_meta_average_time, lxml_meta_success = measure_metadata_performance(
        html_contents, extract_metadata_with_lxml, iterations
    )

    # Report the metadata-extraction results
    print(f"Fadex Metadata Extraction Average Time: {fadex_meta_average_time:.2f} seconds (Successful Extracts: {fadex_meta_success})")
    print(f"BeautifulSoup Metadata Extraction Average Time: {bs_meta_average_time:.2f} seconds (Successful Extracts: {bs_meta_success})")
    print(f"lxml Metadata Extraction Average Time: {lxml_meta_average_time:.2f} seconds (Successful Extracts: {lxml_meta_success})")

asyncio.run(main())
```
