Name | par-scrape |
Version | 0.4.8 |
home_page | None |
Summary | A versatile web scraping tool with options for Selenium or Playwright, featuring OpenAI-powered data extraction and formatting. |
upload_time | 2024-11-06 17:07:21 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.11 |
license | MIT License Copyright (c) 2024 Paul Robello Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
keywords | data extraction, openai, playwright, selenium, web scraping |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# PAR Scrape
[![PyPI](https://img.shields.io/pypi/v/par_scrape)](https://pypi.org/project/par_scrape/)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/par_scrape.svg)](https://pypi.org/project/par_scrape/)
![Runs on Linux | MacOS | Windows](https://img.shields.io/badge/runs%20on-Linux%20%7C%20MacOS%20%7C%20Windows-blue)
![Arch x86-64 | ARM | AppleSilicon](https://img.shields.io/badge/arch-x86--64%20%7C%20ARM%20%7C%20AppleSilicon-blue)
![PyPI - License](https://img.shields.io/pypi/l/par_scrape)
PAR Scrape is a versatile web scraping tool with options for Selenium or Playwright, featuring AI-powered data extraction and formatting.
[!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://buymeacoffee.com/probello3)
## Screenshots
![PAR Scrape Screenshot](https://raw.githubusercontent.com/paulrobello/par_scrape/main/Screenshot.png)
## Features
- Web scraping using Playwright or Selenium
- AI-powered data extraction and formatting
- Supports multiple output formats (JSON, Excel, CSV, Markdown)
- Customizable field extraction
- Token usage and cost estimation
- Prompt cache for Anthropic provider
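Whichever output format you choose, the results are easy to post-process with standard tooling. Below is a minimal sketch, assuming the JSON output is a flat list of objects keyed by the requested fields (the actual schema depends on the fields you pass and may differ); it converts such a result to CSV using only the Python standard library:

```python
import csv
import io
import json

# Hypothetical output resembling a scrape run with -f "Title" -f "Price".
# The real schema depends on the fields you request.
raw = '[{"Title": "gpt-4o-mini", "Price": "$0.15"}, {"Title": "gpt-4o", "Price": "$2.50"}]'

rows = json.loads(raw)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)

print(buf.getvalue())
```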
## Known Issues
- Selenium silent mode on Windows still shows a message about a websocket. There is no simple way to get rid of this.
- Providers other than OpenAI are hit-and-miss depending on the provider, model, and data being extracted.
## Prompt Cache
- OpenAI automatically caches prompts that are over 1024 tokens.
- Anthropic only caches prompts if you specify the `--prompt-cache` flag. Because cache writes cost more, enable this only if you intend to run multiple scrape jobs against the same URL. The cache also goes stale within a couple of minutes, so run your jobs as close together as possible to reduce cost.
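To decide whether the flag pays off, a rough back-of-the-envelope model helps. The multipliers below are illustrative assumptions only (cache writes priced at a premium over normal input tokens, cache reads at a steep discount); check your provider's current pricing before relying on them:

```python
def caching_saves_money(n_jobs: int,
                        write_multiplier: float = 1.25,
                        read_multiplier: float = 0.10) -> bool:
    """Return True if caching a shared prompt across n_jobs is cheaper than
    paying full input price every time. Costs are in units of one full
    (uncached) processing of the shared prompt.

    Assumed model (illustrative, not quoted provider pricing): the first job
    writes the cache (write_multiplier), and the remaining n_jobs - 1 jobs
    read it (read_multiplier each).
    """
    without_cache = float(n_jobs)  # pay full price on every job
    with_cache = write_multiplier + (n_jobs - 1) * read_multiplier
    return with_cache < without_cache

# With these assumed multipliers, a single job never pays off,
# but two or more back-to-back jobs do.
print(caching_saves_money(1))  # False: 1.25 > 1.0
print(caching_saves_money(2))  # True: 1.35 < 2.0
```

The same arithmetic also shows why job spacing matters: if the cache goes stale between jobs, each "first" job pays the write premium again.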
## Prerequisites
To install PAR Scrape, make sure you have Python 3.11 or newer.
### [uv](https://pypi.org/project/uv/) is recommended
#### Linux and Mac
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
#### Windows
```bash
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
## Installation
### Installation From Source
Follow these steps:
1. Clone the repository:
```bash
git clone https://github.com/paulrobello/par_scrape.git
cd par_scrape
```
2. Install the package dependencies using uv:
```bash
uv sync
```
### Installation From PyPI
To install PAR Scrape from PyPI, run one of the following commands:
```bash
uv tool install par_scrape
```
```bash
pipx install par_scrape
```
### Playwright Installation
To use Playwright as a scraper, you must install it and its browsers using the following commands:
```bash
uv tool install playwright
playwright install chromium
```
## Usage
PAR Scrape is run from the command line with various options; basic examples follow below.
Ensure you have the AI provider API key in your environment.
You can also store your API keys in the file `~/.par_scrape.env` as follows:
```bash
GROQ_API_KEY= # is required for Groq. Get a free key from https://console.groq.com/
ANTHROPIC_API_KEY= # is required for Anthropic. Get a key from https://console.anthropic.com/
OPENAI_API_KEY= # is required for OpenAI. Get a key from https://platform.openai.com/account/api-keys
GITHUB_TOKEN= # is required for GitHub Models. Get a free key from https://github.com/marketplace/models
GOOGLE_API_KEY= # is required for Google Models. Get a free key from https://console.cloud.google.com
LANGCHAIN_API_KEY= # is required for Langchain Langsmith tracing. Get a free key from https://smith.langchain.com/settings
AWS_PROFILE= # is used for Bedrock authentication. The environment must already be authenticated with AWS.
# No key required to use with Ollama models.
```
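For reference, a file in that simple KEY=value format can be parsed in a few lines of Python. This is a hedged sketch of the format shown above, not par_scrape's actual loader (which may behave differently):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=value lines, skipping blanks, comment lines,
    and dropping inline '#' comments after the value."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        value = value.split("#", 1)[0].strip()  # drop inline comment
        if value:  # ignore keys left empty, as in the template above
            env[key.strip()] = value
    return env

sample = """
OPENAI_API_KEY=sk-example  # hypothetical placeholder key
# a comment line
AWS_PROFILE=default
"""
print(parse_env(sample))
```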
### Running from source
```bash
uv run par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" -f "Cache Price" --model gpt-4o-mini --display-output md
```
### Running if installed from PyPI
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" -f "Cache Price" --model gpt-4o-mini --display-output md
```
### Options
- `--url`, `-u`: The URL to scrape or path to a local file (default: "https://openai.com/api/pricing/")
- `--fields`, `-f`: Fields to extract from the webpage (default: ["Model", "Pricing Input", "Pricing Output"])
- `--scraper`, `-s`: Scraper to use: 'selenium' or 'playwright' (default: "playwright")
- `--headless`, `-h`: Run in headless mode (for Selenium) (default: False)
- `--wait-type`, `-w`: Method to use for page content load waiting [none|pause|sleep|idle|selector|text] (default: sleep).
- `--wait-selector`, `-i`: Selector or text to use for page content load waiting.
- `--sleep-time`, `-t`: Time to sleep (in seconds) before scrolling and closing browser (default: 5)
- `--ai-provider`, `-a`: AI provider to use for processing (default: "OpenAI")
- `--model`, `-m`: AI model to use for processing. If not specified, a default model will be used based on the provider.
- `--prompt-cache`: Enable prompt cache for Anthropic provider. (default: False)
- `--display-output`, `-d`: Display output in terminal (md, csv, or json)
- `--output-folder`, `-o`: Specify the location of the output folder (default: "./output")
- `--silent`, `-q`: Run in silent mode, suppressing output (default: False)
- `--run-name`, `-n`: Specify a name for this run
- `--version`, `-v`: Show the version and exit
- `--pricing`: Enable pricing summary display ('details', 'cost', or 'none') (default: 'none')
- `--cleanup`, `-c`: How to handle cleanup of output folder (choices: none, before, after, both) (default: none)
- `--extraction-prompt`, `-e`: Path to alternate extraction prompt file
- `--ai-base-url`, `-b`: Override the base URL for the AI provider.
### Examples
1. Basic usage with default options:
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Model" -f "Pricing Input" -f "Pricing Output" --pricing -w text -i gpt-4o
```
2. Using Playwright and displaying JSON output:
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" --scraper playwright -d json --pricing -w text -i gpt-4o
```
3. Specifying a custom model and output folder:
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" --model gpt-4 --output-folder ./custom_output --pricing -w text -i gpt-4o
```
4. Running in silent mode with a custom run name:
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" --silent --run-name my_custom_run --pricing -w text -i gpt-4o
```
5. Using the cleanup option to remove the output folder after scraping:
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" --cleanup after --pricing
```
6. Using the pause wait type to wait for user input before scrolling:
```bash
par_scrape --url "https://openai.com/api/pricing/" -f "Title" -f "Description" -f "Price" -w pause --pricing
```
7. Using Anthropic provider with prompt cache enabled and detailed pricing breakdown:
```bash
par_scrape -a Anthropic --prompt-cache -d csv -p details -f "Title" -f "Description" -f "Price" -f "Cache Price"
```
## What's New
- Version 0.4.8:
  - Added Anthropic prompt cache option.
- Version 0.4.7:
  - BREAKING CHANGE: the `--pricing` CLI option now takes a string value of 'details', 'cost', or 'none'.
  - Added a pool of user agents that is randomly selected from.
  - Updated pricing data.
  - Pricing token capture and computation are now much more accurate.
- Version 0.4.6:
  - Minor bug fixes.
  - Updated pricing data.
  - Added support for Amazon Bedrock.
  - Removed some unnecessary dependencies.
  - Code cleanup.
- Version 0.4.5:
  - Added the `--wait-type` option to specify the wait method: pause, sleep, idle, text, or selector.
  - Removed the `--pause` option; it is superseded by `--wait-type`.
  - Playwright scraping now honors headless mode.
  - Playwright is now the default scraper because it is much faster.
- Version 0.4.4:
  - Better Playwright scraping.
- Version 0.4.3:
  - Added an option to override the base URL for the AI provider.
- Version 0.4.2:
  - The `--url` parameter can now point to a local rawData_*.md file, making it easier to test different models without re-fetching the data.
  - Added the ability to specify a file containing the extraction prompt.
  - Tweaked the extraction prompt to work with Groq and Anthropic. Google still does not work.
  - Removed the need for ~/.par-scrape-config.json.
- Version 0.4.1:
  - Minor bug fixes for the pricing summary.
  - Default model for Google changed to "gemini-1.5-pro-exp-0827", which is free and usually works well.
- Version 0.4.0:
  - Added support for Anthropic, Google, Groq, and Ollama (not well tested with providers other than OpenAI).
  - Added a flag for displaying the pricing summary. Defaults to False.
  - Added pricing data for Anthropic.
  - Better error handling for LLM calls.
  - Updated the cleanup flag to handle both before and after cleanup. Removed the --remove-output-folder flag.
- Version 0.3.1:
  - Added pause and sleep-time options to control browser and scraping delays.
  - Defaulted headless mode to False so you can interact with the browser.
- Version 0.3.0:
  - Fixed the location of the config.json file.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Author
Paul Robello - probello@gmail.com