**scraply** 1.0.2 (PyPI)

- Homepage: https://github.com/ByteBreach/scraply
- Summary: A Python package to scrape and clone websites.
- Author: Fidal
- Requires Python: >=3.6
- Uploaded: 2024-12-20 14:39:37
- Requirements: none recorded
# Scraply

**Scraply** is a Python package designed to scrape websites, extract all internal URLs, and clone pages by saving them as HTML files. You can use it through the command line interface or import it as a library in your Python scripts.

## Features
- Scrape all internal URLs from a given website.
- Clone and save HTML content from all URLs or a specific URL.

## Installation

### Using `pip`

To install **Scraply**, use the following command:

```bash
pip install scraply
```

## Usage

### Scraping All URLs from a Website

```python
import time
from scraply import scrape_urls

# URL of the site to scrape
base_url = 'https://example.com'

# Scrape all internal URLs from the website, timing only the scrape itself
start_time = time.time()
urls = scrape_urls(base_url)
end_time = time.time()

# Print the scraped URLs
for found_url in urls:
    print(found_url)

print(f"Total scraping time: {end_time - start_time:.2f} seconds")
```
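Since `scrape_urls` appears to return plain URL strings, the result can be post-processed with nothing but the standard library. The sketch below (which assumes the `urls` list from the example above) de-duplicates the links and keeps only those under a chosen path:

```python
from urllib.parse import urlparse

# `urls` is assumed to be the list returned by scrape_urls() above
urls = ['https://example.com/blog/post-1',
        'https://example.com/about',
        'https://example.com/blog/post-1']

# Keep only URLs whose path starts with /blog, de-duplicated and sorted
blog_urls = sorted({u for u in urls if urlparse(u).path.startswith('/blog')})

for u in blog_urls:
    print(u)
```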

### Cloning Multiple Pages

```python
from scraply import clone_page

# Clone each URL from a list
urls = ['https://example.com/privacy', 'https://example.com/about']

for url in urls:
    clone_page(url)
```
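When cloning many pages, individual requests can fail. The variant below adds simple error handling; it assumes `clone_page` raises an ordinary Python exception on failure (the package's actual error behavior isn't documented here):

```python
from scraply import clone_page

urls = ['https://example.com/privacy', 'https://example.com/about']

for url in urls:
    try:
        clone_page(url)
        print(f"Cloned: {url}")
    except Exception as exc:  # exact exception types are an assumption
        print(f"Failed to clone {url}: {exc}")
```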

### Cloning a Single Page

```python
from scraply import clone_page

# Clone a single page
clone_page('https://example.com/privacy')
```
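The two functions compose into a full site clone. The sketch below uses only the two calls documented above and assumes `scrape_urls` returns an iterable of URL strings:

```python
import time
from scraply import scrape_urls, clone_page

base_url = 'https://example.com'

# Discover every internal URL, then clone each page as an HTML file
start_time = time.time()
for page_url in scrape_urls(base_url):
    clone_page(page_url)

print(f"Site cloned in {time.time() - start_time:.2f} seconds")
```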


## License

This project is licensed under the MIT License.

## Contributing

Feel free to fork, contribute, or open issues on the [GitHub repository](https://github.com/ByteBreach/scraply).

## Author

Developed by **Fidal**.

- Email: mrfidal@proton.me
- GitHub: [mr-fidal](https://github.com/mr-fidal)

            
