scrappeycom

Name: scrappeycom
Version: 0.3.7
Home page: https://github.com/pim97/scrappey-wrapper-python
Summary: An API wrapper for Scrappey.com written in Python (cloudflare bypass & solver)
Upload time: 2023-06-22 08:37:48
Author: dormic97
License: MIT
Keywords: captcha, shape, web-scraping, data-extraction, akamai, captcha-solver, incapsula, queue-it, scraping-framework, datadome, scraping-tool, cloudflare-bypass, web-scraping-solution, scraping-library, cloudflare-anti-bot, scraping-service, web-data-extraction, anti-bot-api, perimetex
Requirements: No requirements were recorded.
# 🤖 Scrappey Wrapper - Data Extraction Made Easy

Introducing Scrappey, your comprehensive website scraping solution provided by Scrappey.com. With Scrappey's powerful and user-friendly API, you can effortlessly retrieve data from websites, including those protected by Cloudflare. Join Scrappey today and revolutionize your data extraction process. 🚀

**Disclaimer: Please ensure that your web scraping activities comply with the website's terms of service and legal regulations. Scrappey is not responsible for any misuse or unethical use of the library. Use it responsibly and respect the website's policies.**

Website: https://scrappey.com/

## Topics

- [Installation](#installation)
- [Usage](#usage)
- [Example](#example)
- [License](#license)

## Installation

Use pip to install the Scrappey library. 💻

```shell
pip install scrappeycom
```
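
To confirm that pip picked up the package, `pip show` prints the recorded name and version (an optional check, not part of the original instructions):

```shell
# Optional: verify the installed package and its version
pip show scrappeycom
```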

## Usage

Import the Scrappey library in your code. 📦

```python
from scrappeycom.scrappey import Scrappey
```

Create an instance of Scrappey by providing your Scrappey API key. 🔑

```python
api_key = 'YOUR_API_KEY'
scrappey_instance = Scrappey(api_key)
```
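
Hard-coding the key is fine for quick tests. A common alternative, sketched below and not required by the wrapper itself, is to read it from an environment variable (`SCRAPPEY_API_KEY` is an assumed name chosen for this example):

```python
import os

from scrappeycom.scrappey import Scrappey

# SCRAPPEY_API_KEY is an assumed variable name, not one mandated by the library
api_key = os.environ.get('SCRAPPEY_API_KEY')
if api_key is None:
    raise RuntimeError('Set the SCRAPPEY_API_KEY environment variable before running this script')

scrappey_instance = Scrappey(api_key)
```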

### Example

Here's an example of how to use Scrappey. 🚀

```python
from scrappeycom.scrappey import Scrappey
import uuid

scrappey = Scrappey('API_KEY')

def run_test():
    try:
        sessionData = {
            'session': str(uuid.uuid4()),  # A custom session UUID is optional; one is generated by default
            # 'proxy': 'http://username:password@ip:port'  # A proxy is optional; a default proxy is used otherwise
        }
        session = scrappey.create_session(sessionData)
        print('Session created:', session['session'])

        # View all the available options with the request builder:
        # https://app.scrappey.com/#/builder
        # Copy-paste the generated payload below, for example:
        #
        #    {
        #       "cmd":  "request.get",
        #       "url":  "https://httpbin.rs/get"
        #    }

        get_request_result = scrappey.request({
            "cmd":  "request.get",
            'session': session['session'],
            'url': 'https://httpbin.rs/get',
        })
        print('GET Request Result:', get_request_result)

        post_request_result = scrappey.request({
            "cmd": "request.post",
            "url": "https://httpbin.rs/post",
            "postData": "test=test&test2=test2"
        })
        print('POST Request Result:', post_request_result)

        # JSON request example
        post_request_result_json = scrappey.request({
            "cmd": "request.post",
            "url": "https://backend.scrappey.com/api/auth/login",
            "postData": "{\"email\":\"email\",\"password\":\"password\"}",
            "customHeaders": {
                "content-type": "application/json"
            }
        })
        print('POST Request Result:', post_request_result_json)

        sessionDestroyed = scrappey.destroy_session(sessionData)
        print('Session destroyed:', sessionDestroyed)
    except Exception as error:
        print(error)

run_test()

```
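
The example above destroys the session only when no exception is raised. A small variation, sketched below using only the methods shown above, releases the session in a `finally` block so it is cleaned up even if a request fails:

```python
from scrappeycom.scrappey import Scrappey
import uuid

scrappey = Scrappey('API_KEY')

session_data = {'session': str(uuid.uuid4())}
session = scrappey.create_session(session_data)
try:
    result = scrappey.request({
        'cmd': 'request.get',
        'session': session['session'],
        'url': 'https://httpbin.rs/get',
    })
    print('GET Request Result:', result)
finally:
    # Destroy the session even if the request above raised an exception
    print('Session destroyed:', scrappey.destroy_session(session_data))
```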

For more information, please visit the [official Scrappey documentation](https://wiki.scrappey.com/getting-started). 📚

## License

This project is licensed under the MIT License.

## Additional Tags

cloudflare anti bot bypass, cloudflare solver, scraper, scraping, cloudflare scraper, cloudflare turnstile solver, turnstile solver, data extraction, web scraping, website scraping, data scraping, scraping tool, API scraping, scraping solution, web data extraction, website data extraction, web scraping library, website scraping library, cloudflare bypass, scraping API, web scraping API, cloudflare protection, data scraping tool, scraping service, cloudflare challenge solver, web scraping solution, web scraping service, cloudflare scraping, cloudflare bot protection, scraping framework, scraping library, cloudflare bypass tool, cloudflare anti-bot, cloudflare protection bypass, cloudflare solver tool, web scraping tool, data extraction library, website scraping tool, cloudflare turnstile bypass, cloudflare anti-bot solver, turnstile solver tool, cloudflare scraping solution, website data scraper, cloudflare challenge bypass, web scraping framework, cloudflare challenge solver tool, web data scraping, data scraper, scraping data from websites, SEO, data mining, data harvesting, data crawling, web scraping software, website scraping tool, web scraping framework, data extraction tool, web data scraper, data scraping service, scraping automation, scraping tutorial, scraping code, scraping techniques, scraping best practices, scraping scripts, scraping tutorial, scraping examples, scraping challenges, scraping tricks, scraping tips, scraping tricks, scraping strategies, scraping methods, cloudflare protection bypass, cloudflare security bypass, web scraping Python, web scraping JavaScript, web scraping PHP, web scraping Ruby, web scraping Java, web scraping C#, web scraping Node.js, web scraping BeautifulSoup, web scraping Selenium, web scraping Scrapy, web scraping Puppeteer, web scraping requests, web scraping headless browser, web scraping dynamic content, web scraping AJAX, web scraping pagination, web scraping authentication, web scraping cookies, web scraping session management, web scraping data parsing, web scraping data cleaning, web scraping data analysis, web scraping data visualization, web scraping legal issues, web scraping ethics, web scraping compliance, web scraping regulations, web scraping IP blocking, web scraping anti-scraping measures, web scraping proxy, web scraping CAPTCHA solving, web scraping IP rotation, web scraping rate limiting, web scraping data privacy, web scraping consent, web scraping terms of service, web scraping robots.txt, web scraping data storage, web scraping database integration, web scraping data integration, web scraping API integration, web scraping data export, web scraping data processing, web scraping data transformation, web scraping data enrichment, web scraping data validation, web scraping error handling, web scraping scalability, web scraping performance optimization, web scraping distributed scraping, web scraping cloud-based scraping, web scraping serverless scraping.

            
