# Apify SDK for Python
The Apify SDK for Python is the official library to create [Apify Actors](https://docs.apify.com/platform/actors)
in Python. It provides useful features like Actor lifecycle management, local storage emulation, and Actor
event handling.
If you just need to access the [Apify API](https://docs.apify.com/api/v2) from your Python applications,
check out the [Apify Client for Python](https://docs.apify.com/api/client/python) instead.
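For instance, a minimal sketch of starting an Actor and reading its results through the client might look like this (the API token and the `apify/hello-world` Actor name below are placeholders):

```python
import asyncio

from apify_client import ApifyClientAsync


async def main() -> None:
    # 'YOUR_APIFY_TOKEN' is a placeholder - substitute your own API token.
    client = ApifyClientAsync(token='YOUR_APIFY_TOKEN')

    # Start the Actor on the Apify platform and wait for it to finish.
    run = await client.actor('apify/hello-world').call()

    # List the items the run stored in its default dataset.
    dataset_items = await client.dataset(run['defaultDatasetId']).list_items()
    print(dataset_items.items)


if __name__ == '__main__':
    asyncio.run(main())
```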
## Installation
The Apify SDK for Python is available on PyPI as the `apify` package.
For a default installation using pip, run the following command:
```bash
pip install apify
```
For users interested in integrating Apify with Scrapy, we provide a package extra called `scrapy`.
To install Apify with the `scrapy` extra, use the following command (the quotes prevent shells such as zsh from expanding the brackets):
```bash
pip install "apify[scrapy]"
```
## Documentation
For usage instructions, check the documentation on [Apify Docs](https://docs.apify.com/sdk/python/).
## Examples
Below are a few examples demonstrating how to use the Apify SDK with popular web scraping libraries.
### Apify SDK with HTTPX and BeautifulSoup
This example illustrates how to integrate the Apify SDK with [HTTPX](https://www.python-httpx.org/) and [BeautifulSoup](https://pypi.org/project/beautifulsoup4/) to scrape data from web pages.
```python
from apify import Actor
from bs4 import BeautifulSoup
from httpx import AsyncClient


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])

        # Open the default request queue for handling URLs to be processed.
        request_queue = await Actor.open_request_queue()

        # Enqueue the start URLs.
        for start_url in start_urls:
            url = start_url.get('url')
            await request_queue.add_request(url)

        # Process the URLs from the request queue.
        while request := await request_queue.fetch_next_request():
            Actor.log.info(f'Scraping {request.url} ...')

            # Fetch the HTTP response from the specified URL using HTTPX.
            async with AsyncClient() as client:
                response = await client.get(request.url)

            # Parse the HTML content using Beautiful Soup.
            soup = BeautifulSoup(response.content, 'html.parser')

            # Extract the desired data.
            data = {
                'url': request.url,
                'title': soup.title.string if soup.title else None,
                'h1s': [h1.text for h1 in soup.find_all('h1')],
                'h2s': [h2.text for h2 in soup.find_all('h2')],
                'h3s': [h3.text for h3 in soup.find_all('h3')],
            }

            # Store the extracted data to the default dataset.
            await Actor.push_data(data)

            # Mark the request as handled so it is not processed again.
            await request_queue.mark_request_as_handled(request)
```
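When you run this example locally, the SDK emulates Apify storages on the filesystem and, by default, reads the Actor input from `storage/key_value_stores/default/INPUT.json` (see the guide on running Actors locally linked below). An input matching the shape this example expects could look like:

```json
{
    "start_urls": [
        { "url": "https://apify.com" }
    ]
}
```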
### Apify SDK with PlaywrightCrawler from Crawlee
This example demonstrates how to use the Apify SDK alongside `PlaywrightCrawler` from [Crawlee](https://crawlee.dev/python) to perform web scraping.
```python
from apify import Actor
from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext


async def main() -> None:
    async with Actor:
        # Retrieve the Actor input, and use default values if not provided.
        actor_input = await Actor.get_input() or {}
        start_urls = [
            start_url.get('url')
            for start_url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])
        ]

        # Exit if no start URLs are provided.
        if not start_urls:
            Actor.log.info('No start URLs specified in Actor input, exiting...')
            await Actor.exit()
            return

        # Create a crawler.
        crawler = PlaywrightCrawler(
            # Limit the crawl to 50 requests. Remove or increase this to crawl all links.
            max_requests_per_crawl=50,
            headless=True,
        )

        # Define a request handler, which will be called for every request.
        @crawler.router.default_handler
        async def request_handler(context: PlaywrightCrawlingContext) -> None:
            url = context.request.url
            Actor.log.info(f'Scraping {url} ...')

            # Extract the desired data.
            data = {
                'url': context.request.url,
                'title': await context.page.title(),
                'h1s': [await h1.text_content() for h1 in await context.page.locator('h1').all()],
                'h2s': [await h2.text_content() for h2 in await context.page.locator('h2').all()],
                'h3s': [await h3.text_content() for h3 in await context.page.locator('h3').all()],
            }

            # Store the extracted data to the default dataset.
            await context.push_data(data)

            # Enqueue additional links found on the current page.
            await context.enqueue_links()

        # Run the crawler with the starting URLs.
        await crawler.run(start_urls)
```
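Both examples define a `main()` coroutine but leave starting it to the surrounding project. In the standard Apify Python templates the coroutine lives in `src/main.py` and is launched from `src/__main__.py`; a minimal sketch of such an entry point, assuming that module layout:

```python
import asyncio

# The relative import assumes main() is defined in main.py of the same package.
from .main import main

asyncio.run(main())
```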
## What are Actors?
Actors are serverless cloud programs that can do almost anything a human can do in a web browser.
They can do anything from small tasks such as filling in forms or unsubscribing from online services,
all the way up to scraping and processing vast numbers of web pages.
They can be run either locally, or on the [Apify platform](https://docs.apify.com/platform/),
where you can run them at scale, monitor them, schedule them, or publish and monetize them.
If you're new to Apify, learn more about [what Apify is](https://docs.apify.com/platform/about)
in the Apify platform documentation.
## Creating Actors
To create and run Actors through Apify Console,
see the [Console documentation](https://docs.apify.com/academy/getting-started/creating-actors#choose-your-template).
To create and run Python Actors locally, check the documentation for
[how to create and run Python Actors locally](https://docs.apify.com/sdk/python/docs/overview/running-locally).
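If you have the [Apify CLI](https://docs.apify.com/cli) installed, the typical local workflow is roughly the following (a sketch; the linked guide is authoritative):

```bash
# Scaffold a new Actor from a template (an interactive prompt lets you pick Python).
apify create my-actor
cd my-actor

# Run the Actor locally, with storages emulated on the local filesystem.
apify run
```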
## Guides
To see how you can use the Apify SDK with other popular libraries used for web scraping,
check out our guides for using
[Requests and HTTPX](https://docs.apify.com/sdk/python/docs/guides/requests-and-httpx),
[Beautiful Soup](https://docs.apify.com/sdk/python/docs/guides/beautiful-soup),
[Playwright](https://docs.apify.com/sdk/python/docs/guides/playwright),
[Selenium](https://docs.apify.com/sdk/python/docs/guides/selenium),
or [Scrapy](https://docs.apify.com/sdk/python/docs/guides/scrapy).
## Usage concepts
To learn more about the features of the Apify SDK and how to use them,
check out the Usage Concepts section in the sidebar,
particularly the guides for the [Actor lifecycle](https://docs.apify.com/sdk/python/docs/concepts/actor-lifecycle),
[working with storages](https://docs.apify.com/sdk/python/docs/concepts/storages),
[handling Actor events](https://docs.apify.com/sdk/python/docs/concepts/actor-events)
or [how to use proxies](https://docs.apify.com/sdk/python/docs/concepts/proxy-management).
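As a small taste of those concepts, here is a minimal sketch combining event handling, storage access, and proxy management; the handler body and the `checkpoint` key are illustrative assumptions, not a prescribed pattern:

```python
from apify import Actor, Event


async def main() -> None:
    async with Actor:
        # Open the default key-value store for persisting state.
        store = await Actor.open_key_value_store()

        async def save_state(event_data) -> None:
            # 'checkpoint' is an illustrative key, not a special name.
            await store.set_value('checkpoint', {'urls_processed': 0})

        # Call the handler whenever the platform emits a PERSIST_STATE event.
        Actor.on(Event.PERSIST_STATE, save_state)

        # Create a proxy configuration and rotate to a new proxy URL.
        proxy_configuration = await Actor.create_proxy_configuration()
        if proxy_configuration:
            proxy_url = await proxy_configuration.new_url()
            Actor.log.info(f'Using proxy: {proxy_url}')
```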