scrapybara

Name: scrapybara
Version: 2.2.8
Upload time: 2025-02-26 04:54:39
Requires Python: <4.0,>=3.8
Home page: None
Summary: None
Author: None
Maintainer: None
License: None
Keywords: None
Requirements: none recorded
# Scrapybara Python Library

[![fern shield](https://img.shields.io/badge/%F0%9F%8C%BF-Built%20with%20Fern-brightgreen)](https://buildwithfern.com?utm_source=github&utm_medium=github&utm_campaign=readme&utm_source=https%3A%2F%2Fgithub.com%2Fscrapybara%2Fscrapybara-python)
[![pypi](https://img.shields.io/pypi/v/scrapybara)](https://pypi.python.org/pypi/scrapybara)

The Scrapybara Python library provides convenient access to the Scrapybara API from Python.

## Installation

```sh
pip install scrapybara
```

## Reference

Please refer to [docs](https://docs.scrapybara.com) for more information.

## Requirements

- Python >= 3.8
- `requests` >= 2.25.1
- `anthropic` ^0.39.0
- `pydantic` ^2.0.0

## Usage

Instantiate and use the client with the following:

```python
from scrapybara import Scrapybara

client = Scrapybara(
    api_key="YOUR_API_KEY",
)
client.start_ubuntu()
```

## Async Client

The SDK also exports an `async` client so that you can make non-blocking calls to our API.

```python
import asyncio

from scrapybara import AsyncScrapybara

client = AsyncScrapybara(
    api_key="YOUR_API_KEY",
)


async def main() -> None:
    await client.start_ubuntu()


asyncio.run(main())
```

## Exception Handling

When the API returns a non-success status code (4xx or 5xx response), a subclass of the following error
will be raised.

```python
from scrapybara.core.api_error import ApiError

try:
    client.start_ubuntu()
except ApiError as e:
    print(e.status_code)
    print(e.body)
```

## Advanced

### Retries

The SDK automatically retries failed requests with exponential backoff. A request is retried as long
as it is deemed retriable and the number of retry attempts has not exceeded the configured
retry limit (default: 2).

A request is deemed retriable when any of the following HTTP status codes is returned:

- [408](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/408) (Timeout)
- [429](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429) (Too Many Requests)
- [5XX](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500) (Internal Server Errors)
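As a rough illustration (not the SDK's internal implementation), the retriable set above can be expressed as a predicate:

```python
def is_retriable(status_code: int) -> bool:
    """Return True for status codes the SDK treats as retriable."""
    # 408 Request Timeout, 429 Too Many Requests, or any 5XX server error
    return status_code in (408, 429) or 500 <= status_code < 600
```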

Use the `max_retries` request option to configure this behavior.

```python
client.start_ubuntu(..., request_options={
    "max_retries": 1
})
```
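For intuition, a typical exponential backoff schedule doubles the delay before each retry up to some cap. The base delay and cap below are illustrative assumptions, not the SDK's actual values:

```python
from typing import List


def backoff_delays(max_retries: int, base: float = 0.5, cap: float = 8.0) -> List[float]:
    """Delays (in seconds) before each retry attempt, doubling up to a cap."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]
```

With the default retry limit of 2, this schedule would wait 0.5s before the first retry and 1.0s before the second.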

### Timeouts

The SDK defaults to a 60-second timeout. You can configure this with the timeout option at the client or request level.

```python
from scrapybara import Scrapybara

client = Scrapybara(
    ...,
    timeout=20.0,
)


# Override timeout for a specific method
client.start_ubuntu(..., request_options={
    "timeout_in_seconds": 1
})
```

### Custom Client

You can override the `httpx` client to customize it for your use case. Common use cases include support for proxies
and custom transports.

```python
import httpx
from scrapybara import Scrapybara

client = Scrapybara(
    ...,
    httpx_client=httpx.Client(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```

## Contributing

While we value open-source contributions to this SDK, this library is generated programmatically.
Additions made directly to this library would have to be moved over to our generation code,
otherwise they would be overwritten upon the next generated release. Feel free to open a PR as
a proof of concept, but know that we will not be able to merge it as-is. We suggest opening
an issue first to discuss with us!

On the other hand, contributions to the README are always very welcome!

            
