# 🛰️ Scrape It Now!

A web scraper built with AI and simplicity in mind. It runs as a CLI that can be parallelized and outputs high-quality markdown content.

[![GitHub last release date](https://img.shields.io/github/release-date/clemlesne/scrape-it-now)](https://github.com/clemlesne/scrape-it-now/releases)
[![GitHub project license](https://img.shields.io/github/license/clemlesne/scrape-it-now)](https://github.com/clemlesne/scrape-it-now/blob/main/LICENSE)
[![PyPI package version](https://img.shields.io/pypi/v/scrape-it-now)](https://pypi.org/project/scrape-it-now)
[![PyPI supported Python versions](https://img.shields.io/pypi/pyversions/scrape-it-now)](https://pypi.org/project/scrape-it-now)

## Features

Shared:

- 🏗️ Decoupled architecture with [Azure Queue Storage](https://learn.microsoft.com/en-us/azure/storage/queues) or local [SQLite](https://sqlite.org)
- ⚙️ Idempotent operations that can be run in parallel
- 💾 Scraped content is stored in [Azure Blob Storage](https://learn.microsoft.com/en-us/azure/storage/blobs) or local disk

Scraper:

- 🛑 Avoid re-scraping a page if it hasn't changed
- 🚫 Block ads to lower network costs with [The Block List Project](https://github.com/blocklistproject/Lists)
- 🔗 Explore pages in depth by detecting links and de-duplicating them
- ✍️ Extract markdown content from a page with [Pandoc](https://github.com/jgm/pandoc)
- 🏷️ Extract [metadata elements](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/meta) from the page
- 🖥️ Load dynamic JavaScript content with [Playwright](https://github.com/microsoft/playwright-python) and [Chromium](https://www.chromium.org/Home)
- 🕵️‍♂️ Preserve anonymity with a random user agent, random viewport size, and no client hints headers
- 📊 Show progress with a status command
- 🖼️ Store images collected on the page
- 📸 Store a screenshot of the page
- 📡 Track progress of total network usage

Indexer:

- 🧠 AI Search index is created automatically
- ✂️ Chunk markdown while keeping the content coherent
- 📈 Embed chunks with OpenAI embeddings
- 🔍 Indexed content is semantically searchable with [Azure AI Search](https://learn.microsoft.com/en-us/azure/search)

## Installation

### From PyPI

```bash
# Install the package
python3 -m pip install scrape-it-now
# Run the CLI
scrape-it-now --help
```

To configure the CLI (including authentication to the backend services), use environment variables, a `.env` file, or command-line options.

### From sources

The application must be run with Python 3.13 or later. If this version is not installed, an easy way to get it is [pyenv](https://github.com/pyenv/pyenv).
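
A minimal sketch with pyenv (assuming pyenv is already installed and its shims are on your `PATH`):

```bash
# Install the latest Python 3.13.x and pin it for this project
pyenv install 3.13
pyenv local 3.13
# Confirm the interpreter version
python3 --version
```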

```bash
# Download the source code
git clone https://github.com/clemlesne/scrape-it-now.git
# Move to the directory
cd scrape-it-now
# Run install scripts
make install dev
# Run the CLI
scrape-it-now --help
```

## How to use

### Scrape a website

#### Run a job

Usage with Azure Blob Storage and Azure Queue Storage:

```bash
# Azure Storage configuration
export AZURE_STORAGE_ACCESS_KEY=xxx
export AZURE_STORAGE_ACCOUNT_NAME=xxx
# Run the job
scrape-it-now scrape run https://nytimes.com
```

Usage with Local Disk Blob and Local Disk Queue:

```bash
# Local disk configuration
export BLOB_PROVIDER=local_disk
export QUEUE_PROVIDER=local_disk
# Run the job
scrape-it-now scrape run https://nytimes.com
```

Example:

```bash
❯ scrape-it-now scrape run https://nytimes.com
2024-11-08T13:18:49.169320Z [info     ] Start scraping job lydmtyz
2024-11-08T13:18:49.169392Z [info     ] Installing dependencies if needed, this may take a few minutes
2024-11-08T13:18:52.542422Z [info     ] Queued 1/1 URLs
2024-11-08T13:18:58.509221Z [info     ] Start processing https://nytimes.com depth=1 process=scrape-lydmtyz-4 task=63dce50
2024-11-08T13:19:04.173198Z [info     ] Loaded 154554 ads and trackers process=scrape-lydmtyz-4
2024-11-08T13:19:16.393045Z [info     ] Queued 310/311 URLs            depth=1 process=scrape-lydmtyz-4 task=63dce50
2024-11-08T13:19:16.393323Z [info     ] Scraped                        depth=1 process=scrape-lydmtyz-4 task=63dce50
...
```

The most frequent options are:

| `Options` | Description | `Environment variable` |
|-|-|-|
| `--azure-storage-access-key`</br>`-asak` | Azure Storage access key | `AZURE_STORAGE_ACCESS_KEY` |
| `--azure-storage-account-name`</br>`-asan` | Azure Storage account name | `AZURE_STORAGE_ACCOUNT_NAME` |
| `--blob-provider`</br>`-bp` | Blob provider | `BLOB_PROVIDER` |
| `--job-name`</br>`-jn` | Job name | `JOB_NAME` |
| `--max-depth`</br>`-md` | Maximum depth | `MAX_DEPTH` |
| `--queue-provider`</br>`-qp` | Queue provider | `QUEUE_PROVIDER` |
| `--save-images`</br>`-si` | Save images | `SAVE_IMAGES` |
| `--save-screenshot`</br>`-ss` | Save screenshot | `SAVE_SCREENSHOT` |
| `--whitelist`</br>`-w` | Whitelist | `WHITELIST` |
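
For instance, several of these options can be combined in a single invocation (a sketch; values are illustrative and exact flag forms can be confirmed with `--help` below):

```bash
# Scrape two levels deep, keep screenshots, and name the job explicitly
scrape-it-now scrape run https://nytimes.com \
  --max-depth 2 \
  --save-screenshot \
  --job-name my-job
```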

For documentation on all available options, run:

```bash
scrape-it-now scrape run --help
```

#### Show job status

Usage with Azure Blob Storage:

```bash
# Azure Storage configuration
export AZURE_STORAGE_CONNECTION_STRING=xxx
# Show the job status
scrape-it-now scrape status [job_name]
```

Usage with Local Disk Blob:

```bash
# Local disk configuration
export BLOB_PROVIDER=local_disk
# Show the job status
scrape-it-now scrape status [job_name]
```

Example:

```bash
❯ scrape-it-now scrape status lydmtyz
{"created_at":"2024-11-08T13:18:52.839060Z","last_updated":"2024-11-08T13:19:16.528370Z","network_used_mb":2.6666793823242188,"processed":1,"queued":311}
```
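
Since the status is plain JSON, it pipes cleanly into external tools such as `jq` (not part of scrape-it-now; shown as a sketch):

```bash
# Summarize progress fields from the status output
scrape-it-now scrape status lydmtyz | jq '{processed, queued, network_used_mb}'
```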

The most frequent options are:

| `Options` | Description | `Environment variable` |
|-|-|-|
| `--azure-storage-access-key`</br>`-asak` | Azure Storage access key | `AZURE_STORAGE_ACCESS_KEY` |
| `--azure-storage-account-name`</br>`-asan` | Azure Storage account name | `AZURE_STORAGE_ACCOUNT_NAME` |
| `--blob-provider`</br>`-bp` | Blob provider | `BLOB_PROVIDER` |

For documentation on all available options, run:

```bash
scrape-it-now scrape status --help
```

### Index a scraped website

#### Run a job

Usage with Azure Blob Storage, Azure Queue Storage and Azure AI Search:

```bash
# Azure OpenAI configuration
export AZURE_OPENAI_API_KEY=xxx
export AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=xxx
export AZURE_OPENAI_EMBEDDING_DIMENSIONS=xxx
export AZURE_OPENAI_EMBEDDING_MODEL_NAME=xxx
export AZURE_OPENAI_ENDPOINT=xxx

# Azure Search configuration
export AZURE_SEARCH_API_KEY=xxx
export AZURE_SEARCH_ENDPOINT=xxx

# Azure Storage configuration
export AZURE_STORAGE_ACCESS_KEY=xxx
export AZURE_STORAGE_ACCOUNT_NAME=xxx

# Run the job
scrape-it-now index run [job_name]
```

Usage with Local Disk Blob, Local Disk Queue and Azure AI Search:

```bash
# Azure OpenAI configuration
export AZURE_OPENAI_API_KEY=xxx
export AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME=xxx
export AZURE_OPENAI_EMBEDDING_DIMENSIONS=xxx
export AZURE_OPENAI_EMBEDDING_MODEL_NAME=xxx
export AZURE_OPENAI_ENDPOINT=xxx
# Azure Search configuration
export AZURE_SEARCH_API_KEY=xxx
export AZURE_SEARCH_ENDPOINT=xxx
# Local disk configuration
export BLOB_PROVIDER=local_disk
export QUEUE_PROVIDER=local_disk
# Run the job
scrape-it-now index run [job_name]
```

Example:

```bash
❯ scrape-it-now index run lydmtyz
2024-11-08T13:20:37.129411Z [info     ] Start indexing job lydmtyz
2024-11-08T13:20:38.945954Z [info     ] Start processing https://nytimes.com process=index-lydmtyz-4 task=63dce50
2024-11-08T13:20:39.162692Z [info     ] Chunked into 7 parts           process=index-lydmtyz-4 task=63dce50
2024-11-08T13:20:42.407391Z [info     ] Indexed 7 chunks               process=index-lydmtyz-4 task=63dce50
...
```

The most frequent options are:

| `Options` | Description | `Environment variable` |
|-|-|-|
| `--azure-openai-api-key`</br>`-aoak` | Azure OpenAI API key | `AZURE_OPENAI_API_KEY` |
| `--azure-openai-embedding-deployment-name`</br>`-aoedn` | Azure OpenAI embedding deployment name | `AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME` |
| `--azure-openai-embedding-dimensions`</br>`-aoed` | Azure OpenAI embedding dimensions | `AZURE_OPENAI_EMBEDDING_DIMENSIONS` |
| `--azure-openai-embedding-model-name`</br>`-aoemn` | Azure OpenAI embedding model name | `AZURE_OPENAI_EMBEDDING_MODEL_NAME` |
| `--azure-openai-endpoint`</br>`-aoe` | Azure OpenAI endpoint | `AZURE_OPENAI_ENDPOINT` |
| `--azure-search-api-key`</br>`-asak` | Azure Search API key | `AZURE_SEARCH_API_KEY` |
| `--azure-search-endpoint`</br>`-ase` | Azure Search endpoint | `AZURE_SEARCH_ENDPOINT` |
| `--azure-storage-access-key`</br>`-asak` | Azure Storage access key | `AZURE_STORAGE_ACCESS_KEY` |
| `--azure-storage-account-name`</br>`-asan` | Azure Storage account name | `AZURE_STORAGE_ACCOUNT_NAME` |
| `--blob-provider`</br>`-bp` | Blob provider | `BLOB_PROVIDER` |
| `--queue-provider`</br>`-qp` | Queue provider | `QUEUE_PROVIDER` |

For documentation on all available options, run:

```bash
scrape-it-now index run --help
```

## Architecture

### Scrape

```mermaid
---
title: Scrape process with Azure Storage
---
graph LR
  cli["CLI"]
  web["Website"]

  subgraph "Azure Queue Storage"
    to_chunk["To chunk"]
    to_scrape["To scrape"]
  end

  subgraph "Azure Blob Storage"
    subgraph "Container"
      job["job"]
      scraped["scraped"]
      state["state"]
    end
  end

  cli -- (1) Pull message --> to_scrape
  cli -- (2) Get cache --> scraped
  cli -- (3) Browse --> web
  cli -- (4) Update cache --> scraped
  cli -- (5) Push state --> state
  cli -- (6) Add message --> to_scrape
  cli -- (7) Add message --> to_chunk
  cli -- (8) Update state --> job
```

### Index

```mermaid
---
title: Index process with Azure Storage and Azure AI Search
---
graph LR
  search["Azure AI Search"]
  cli["CLI"]
  embeddings["Azure OpenAI Embeddings"]

  subgraph "Azure Queue Storage"
    to_chunk["To chunk"]
  end

  subgraph "Azure Blob Storage"
    subgraph "Container"
      scraped["scraped"]
    end
  end

  cli -- (1) Pull message --> to_chunk
  cli -- (2) Get cache --> scraped
  cli -- (3) Chunk --> cli
  cli -- (4) Embed --> embeddings
  cli -- (5) Push to search --> search
```

## Design

Blob storage is organized in folders:

```txt
[job_name]-scraping/            # Job name (either defined by the user or generated)
    scraped/                    # All the data from the pages
        [page_id]/              # Assets from a page
            screenshot.jpeg     # Screenshot (if enabled)
            [image_id].[ext]    # Image binary (if enabled)
            [image_id].json     # Image metadata (if enabled)
        [page_id].json          # Data from a page
    state/                      # Job states (cache & parallelization)
        [page_id]               # Page state
    job.json                    # Job state (aggregated stats)
```
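
With the Local Disk Blob provider, the same layout appears on disk and can be browsed with standard shell tools (a sketch; the parent directory is an assumption, as local blobs are stored relative to where the job was run):

```bash
# List the files stored for the example job
find lydmtyz-scraping -type f | head
# Read the aggregated job statistics
cat lydmtyz-scraping/job.json
```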

Page data is treated as a stable API (it won't break until the next major version) and is stored in JSON format:

```json
{
  "created_at": "2024-09-11T14:06:43.566187Z",
  "redirect": "https://www.nytimes.com/interactive/2024/podcasts/serial-season-four-guantanamo.html",
  "status": 200,
  "url": "https://www.nytimes.com/interactive/2024/podcasts/serial-season-four-guantanamo.html",
  "content": "## Listen to the trailer for Serial Season 4...",
  "etag": null,
  "links": [
    "https://podcasts.apple.com/us/podcast/serial/id917918570",
    "https://music.amazon.com/podcasts/d1022069-8863-42f3-823e-857fd8a7b616/serial?ref=dm_sh_OVBHkKYvW1poSzCOsBqHFXuLc",
    ...
  ],
  "metas": {
    "description": "“Serial” returns with a history of Guantánamo told by people who lived through key moments in Guantánamo’s evolution, who know things the rest of us don’t about what it’s like to be caught inside an improvised justice system.",
    "articleid": "100000009373583",
    "twitter:site": "@nytimes",
    ...
  },
  "network_used_mb": 1.041460037231445,
  "raw": "<head>...</head><body>...</body>",
  "valid_until": "2024-09-11T14:11:37.790570Z"
}
```
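
Because each page payload is self-describing JSON, its fields can be pulled out with `jq` (external tool; the page ID in the path is illustrative):

```bash
# Extract the markdown body of one page into a file
jq -r '.content' lydmtyz-scraping/scraped/63dce50.json > page.md
# List every link discovered on that page
jq -r '.links[]' lydmtyz-scraping/scraped/63dce50.json
```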

Then, indexed data is stored in Azure AI Search:

| Field | Type | Description |
|-|-|-|
| `chunck_number` | `Edm.Int32` | Chunk number, from `0` to *`x`* |
| `content` | `Edm.String` | Chunk content |
| `created_at` | `Edm.DateTimeOffset` | Source scrape date |
| `id` | `Edm.String` | Chunk ID |
| `title` | `Edm.String` | Source page title |
| `url` | `Edm.String` | Source page URL |
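
Once indexed, documents can be queried like any other Azure AI Search index, for instance over its REST API (a sketch; the index name shown is an assumption, since the index is created automatically, and the API version may differ on your service):

```bash
# Full-text search across indexed chunks (index name is illustrative)
curl -X POST "$AZURE_SEARCH_ENDPOINT/indexes/lydmtyz/docs/search?api-version=2024-07-01" \
  -H "api-key: $AZURE_SEARCH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"search": "guantanamo", "select": "title,url", "top": 5}'
```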

## Advanced usage

### Whitelist

The whitelist option restricts scraping to the listed domains and can ignore sub-paths within them. It is a space-separated list of entries, each made of a domain regular expression followed by optional path regular expressions:

```txt
domain1,regexp1,regexp2 domain2,regexp3
```

Examples:

To whitelist `learn.microsoft.com`:

```txt
learn\.microsoft\.com
```

To whitelist `learn.microsoft.com` and `go.microsoft.com`, but ignore all sub-paths except `/en-us`:

```txt
learn\.microsoft\.com,^/(?!en-us).* go\.microsoft\.com
```
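
Applied to a run, the whitelist is easiest to pass through its environment variable; single quotes keep the shell from interpreting the regular expressions (the target URL is illustrative):

```bash
# Restrict the crawl to learn.microsoft.com, ignoring paths outside /en-us
export WHITELIST='learn\.microsoft\.com,^/(?!en-us).*'
scrape-it-now scrape run https://learn.microsoft.com/en-us
```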

### Source environment variables

To configure the CLI easily, source environment variables from a `.env` file. For example, for the `--azure-storage-access-key` option:

```bash
AZURE_STORAGE_ACCESS_KEY=xxx
```

For arguments that accept multiple values, use a space-separated list. For example, for the `--whitelist` option:

```bash
WHITELIST=learn\.microsoft\.com go\.microsoft\.com
```
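
To export the file's contents into the current shell before running the CLI, a standard pattern works (a sketch):

```bash
# Export every variable defined in .env, then run the job
set -a
source .env
set +a
scrape-it-now scrape run https://nytimes.com
```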

### Application cache directory

The cache directory depends on the operating system:

- `~/.config/scrape-it-now` (Unix)
- `~/Library/Application Support/scrape-it-now` (macOS)
- `C:\Users\<user>\AppData\Roaming\scrape-it-now` (Windows)

### Browser binary installation

Browser binaries are automatically downloaded or updated at each run. The browser is Chromium and is not configurable (feel free to open an issue if you need another browser); it weighs around 450 MB. Binaries are kept in the application cache directory.

### How Local Disk storage works

Local Disk storage is used for both blob and queue. It is not recommended for production use, as it is neither easily scalable nor fault-tolerant. It is useful for testing and development, or when you cannot use Azure services.

Implementation:

- Local Disk Blob uses a directory structure to store blobs. Each blob is stored in a file with the blob name as the file name. Leases are implemented with lock files. By default, files are stored in a directory relative to the command execution directory.
- Local Disk Queue uses a SQLite database to store messages. The database is stored in the cache directory. Visibility timeouts and deletion tokens are implemented in SQL to mirror the consistency guarantees of stateless queue services like Azure Queue Storage.

### Use proxies for anonymity

Proxies are not implemented in the application, as network anonymity cannot be reliably achieved at the application level. Use a VPN (your own or a third-party one) or a proxy service (e.g. residential proxies, Tor) to ensure anonymity, and configure the system firewall to restrict the application's network access to it.

### Bundle with a container

As the application is published to PyPI, it can easily be bundled in a container. At every start, the application downloads its dependencies (browser, etc.) and caches them. You can pre-download them by running `scrape-it-now scrape install`.
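
A minimal sketch of the corresponding image build steps, written as shell commands you might place in a Dockerfile's `RUN` instructions (the base image and layout are up to you):

```bash
# Install the CLI from PyPI
python3 -m pip install scrape-it-now
# Pre-download the browser and other runtime dependencies into the cache
scrape-it-now scrape install
```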

A good technique for performance is also to parallelize the scraping and indexing jobs by running multiple containers of each. This can be achieved with [KEDA](https://keda.sh) by configuring a [queue scaler](https://keda.sh/docs/2.16/scalers/azure-storage-queue).