| Field | Value |
| --- | --- |
| Name | scraperapi-sdk |
| Version | 1.5.2 |
| Summary | ScraperAPI Python SDK |
| Author | ScraperAPI |
| Maintainer | None |
| License | None |
| Home page | None |
| Requires Python | >=3.8 |
| Upload time | 2024-06-18 10:12:52 |
| Requirements | No requirements were recorded. |
# ScraperAPI Python SDK
## Install
```
pip install scraperapi-sdk
```
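The client is constructed with your API key as a plain string. A common pattern (an assumption here, not something the SDK requires) is to keep the key out of source code and read it from an environment variable:
```
import os
from scraperapi_sdk import ScraperAPIClient

# SCRAPERAPI_KEY is an assumed variable name; use whatever your environment defines.
client = ScraperAPIClient(os.environ["SCRAPERAPI_KEY"])
```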
## Usage
```
from scraperapi_sdk import ScraperAPIClient
client = ScraperAPIClient("<API-KEY>")
# regular get request
content = client.get('https://amazon.com/')
# get request with premium
content = client.get('https://amazon.com/', params={'premium': True})
# post request
content = client.post('https://webhook.site/403e44ce-5835-4ce9-a648-188a51d9395d', headers={'Content-Type': 'application/x-www-form-urlencoded'}, data={'field1': 'data1'})
# put request
content = client.put('https://webhook.site/403e44ce-5835-4ce9-a648-188a51d9395d', headers={'Content-Type': 'application/json'}, data={'field1': 'data1'})
```
The `content` variable will contain the scraped page.
If you want the `Response` object instead of the content, use `make_request`.
```
response = client.make_request(url='https://webhook.site/403e44ce-5835-4ce9-a648-188a51d9395d', headers={'Content-Type': 'application/json'}, data={'field1': 'data1'})
# response will be <Response [200]>
```
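The repr `<Response [200]>` suggests a `requests`-style response object; assuming that is what `make_request` returns, the usual attributes are available for inspection:
```
response = client.make_request(url='https://example.com')
# Assuming a requests-style Response object (matching the repr shown above):
print(response.status_code)
print(response.headers.get('Content-Type'))
print(response.text[:200])  # first 200 characters of the body
```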
## Exception Handling
```
from scraperapi_sdk import ScraperAPIClient
from scraperapi_sdk.exceptions import ScraperAPIException
client = ScraperAPIClient(
    api_key=api_key,
)
try:
    result = client.post('https://webhook.site/403e44ce-5835-4ce9-a648-188a51d9395d', headers={'Content-Type': 'application/x-www-form-urlencoded'}, data={'field1': 'data1'})
    _ = result
except ScraperAPIException as e:
    print(e.original_exception)  # the original exception is available via the `.original_exception` property
```
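Since every SDK call can raise `ScraperAPIException`, a small retry wrapper is often useful. This is a hedged sketch; the attempt count and delay are arbitrary choices, not SDK defaults:
```
import time
from scraperapi_sdk import ScraperAPIClient
from scraperapi_sdk.exceptions import ScraperAPIException

def get_with_retries(client, url, attempts=3, delay=2):
    # Retry a failed scrape a few times before giving up.
    for attempt in range(1, attempts + 1):
        try:
            return client.get(url)
        except ScraperAPIException as e:
            print(f"attempt {attempt} failed: {e.original_exception}")
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff

content = get_with_retries(ScraperAPIClient(api_key), 'https://example.com')
```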
## Structured Data Collection Methods
### Amazon Endpoints
#### Amazon Product Page API
This method will retrieve product data from an Amazon product page and transform it into usable JSON.
```
result = client.amazon.product("<ASIN>")
result = client.amazon.product("<ASIN>", country="us", tld="com")
```
Read more in docs: [Amazon Product Page API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/amazon-product-page-api)
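The exact schema of each structured-data result is described in the linked docs. Assuming the SDK hands back the parsed JSON as a dict, a quick way to explore what any of these methods returned:
```
import json

result = client.amazon.product("<ASIN>")
# Pretty-print to see which fields the endpoint returned (truncated for readability)
print(json.dumps(result, indent=2)[:1000])
```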
#### Amazon Search API
This method will retrieve products for a specified search term from an Amazon search page and transform them into usable JSON.
```
result = client.amazon.search("<QUERY>")
result = client.amazon.search("<QUERY>", country="us", tld="com")
```
Read more in docs: [Amazon Search API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/amazon-search-api)
#### Amazon Offers API
This method will retrieve offers for a specified product from an Amazon offers page and transform them into usable JSON.
```
result = client.amazon.offers("<ASIN>")
result = client.amazon.offers("<ASIN>", country="us", tld="com")
```
Read more in docs: [Amazon Offers API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/amazon-offers-api)
#### Amazon Reviews API
This method will retrieve reviews for a specified product from an Amazon reviews page and transform them into usable JSON.
```
result = client.amazon.review("<ASIN>")
result = client.amazon.review("<ASIN>", country="us", tld="com")
```
Read more in docs: [Amazon Reviews API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/amazon-reviews-api)
#### Amazon Prices API
This method will retrieve product prices for the given ASINs and transform them into usable JSON.
```
result = client.amazon.prices(['<ASIN1>'])
result = client.amazon.prices(['<ASIN1>', '<ASIN2>'])
result = client.amazon.prices(['<ASIN1>', '<ASIN2>'], country="us", tld="com")
```
Read more in docs: [Amazon Prices API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/amazon-prices-api)
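For large ASIN lists it can help to batch requests. This is a plain-Python sketch; the batch size of 20 is an arbitrary assumption, not a documented limit:
```
def chunked(items, size):
    # Yield fixed-size slices of a list.
    for i in range(0, len(items), size):
        yield items[i:i + size]

asins = ['<ASIN1>', '<ASIN2>', '<ASIN3>']
price_batches = [client.amazon.prices(batch) for batch in chunked(asins, 20)]
```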
### Google API
#### Google SERP API
This method will retrieve search result data from a Google search results page and transform it into usable JSON.
```
result = client.google.search('free hosting')
result = client.google.search('free hosting', country="us", tld="com")
```
Read more in docs: [Google SERP API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/google-serp-api)
#### Google News API
This method will retrieve news data from a Google News results page and transform it into usable JSON.
```
result = client.google.news('tornado')
result = client.google.news('tornado', country="us", tld="com")
```
Read more in docs: [Google News API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/google-news-api)
#### Google Jobs API
This method will retrieve jobs data from a Google Jobs results page and transform it into usable JSON.
```
result = client.google.jobs('Senior Software Developer')
result = client.google.jobs('Senior Software Developer', country="us", tld="com")
```
Read more in docs: [Google Jobs API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/google-jobs-api)
#### Google Shopping API
This method will retrieve shopping data from a Google Shopping results page and transform it into usable JSON.
```
result = client.google.shopping('macbook air')
result = client.google.shopping('macbook air', country="us", tld="com")
```
Read more in docs: [Google Shopping API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/google-shopping-api)
### Walmart API
#### Walmart Search API
This method will retrieve product list data from a Walmart search results page.
```
result = client.walmart.search('hoodie')
result = client.walmart.search('hoodie', page=2)
```
Read more in docs: [Walmart Search API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/walmart-search-api)
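Using the documented `page` parameter, a simple sketch for collecting several result pages (the page count here is arbitrary):
```
# Fetch the first three pages of search results.
pages = [client.walmart.search('hoodie', page=page) for page in range(1, 4)]
```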
#### Walmart Category API
This method will retrieve a Walmart product list for a specified product category.
```
result = client.walmart.category('5438_7712430_8775031_5315201_3279226')
result = client.walmart.category('5438_7712430_8775031_5315201_3279226', page=2)
```
Read more in docs: [Walmart Category API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/walmart-category-api)
#### Walmart Product API
This method will retrieve Walmart product details for a single product.
```
result = client.walmart.product('5053452213')
```
Read more in docs: [Walmart Product API](https://docs.scraperapi.com/making-requests/structured-data-collection-method/walmart-product-api)
## Async Scraping
Basic scraping:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient(api_key)
request_id = None
# request async scraping
try:
    job = client.create('https://example.com')
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
# if job was submitted successfully we can request the result of scraping
if request_id:
    result = client.get(request_id)
```
Read more in docs: [How to use Async Scraping](https://docs.scraperapi.com/making-requests/async-requests-method/how-to-use)
### Webhook Callback
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient(api_key)
request_id = None
# request async scraping
try:
    job = client.create('https://example.com', webhook_url="https://webhook.site/#!/view/c4facc6e-c028-4d9c-9f58-b14c92a381fe")
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
# if job was submitted successfully we can request the result of scraping
if request_id:
    result = client.get(request_id)
```
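On the receiving side you need an HTTP endpoint listening at `webhook_url`. A minimal sketch of one using Flask (not part of the SDK; the payload shape ScraperAPI delivers is documented separately, so it is treated as opaque JSON here):
```
from flask import Flask, request

app = Flask(__name__)

@app.route("/scraperapi-callback", methods=["POST"])
def scraperapi_callback():
    payload = request.get_json(silent=True)  # the delivered job result
    print(payload)
    return "", 204  # acknowledge receipt
```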
### Wait for results
You can use the `wait` method, which polls ScraperAPI for the result until it is ready.
Use `client.wait` with the following arguments:
- `request_id` (required): the ID returned by the `client.create` call
- `cooldown` (optional, default=5): number of seconds between retries
- `max_retries` (optional, default=10): maximum number of retries
- `raise_for_exceeding_max_retries` (optional, default=False): if True, raises an exception when `max_retries` is reached; otherwise returns the response from the API
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient(api_key)
request_id = None
# request async scraping
try:
    job = client.create('https://example.com')
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
# if job was submitted successfully we can request the result of scraping
if request_id:
    result = client.wait(
        request_id,
        cooldown=5,
        max_retries=10,
        raise_for_exceeding_max_retries=False,
    )
```
### Amazon Async Scraping
#### Amazon Product
Scrape a single Amazon Product asynchronously:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.product('B0CHVR5K7C')
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Single Product with params:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.product('B0B5PLT7FZ', api_params=dict(country_code='uk', tld='co.uk'))
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Scrape multiple Amazon Products asynchronously with params:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.products(['B0B5PLT7FZ', 'B00CL6353A'], api_params=dict(country_code='uk', tld='co.uk'))
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Read more in docs: [Async Amazon Product Scraping](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/amazon-product-page-api-async)
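Submitting several independent jobs and then waiting on each combines the `create`-style calls above with `client.wait`. A sketch of that fan-out pattern (the ASIN list is illustrative):
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException

client = ScraperAPIAsyncClient('<api_key>')
job_ids = []
for asin in ['B0CHVR5K7C', 'B0B5PLT7FZ']:
    try:
        job = client.amazon.product(asin)
        job_ids.append(job.get('id'))
    except ScraperAPIException as e:
        print(e.original_exception)

# Poll each job until it completes (see "Wait for results" above).
results = [client.wait(job_id) for job_id in job_ids]
```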
#### Amazon Search
Search Amazon asynchronously:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.search('usb c microphone')
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Search Amazon asynchronously with `api_params`:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.search('usb c microphone', api_params=dict(country_code='uk', tld='co.uk'))
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Read more in docs: [Async Amazon Search Scraping](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/amazon-search-api-async)
#### Amazon Offers for a Product
Scrape Amazon offers for a single product:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.offers('B0CHVR5K7C')
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Scrape Amazon offers for multiple products:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
jobs = []
try:
    # for multiple products, a list of ASINs is assumed here, mirroring `products`
    jobs = client.amazon.offers(['B0CHVR5K7C', 'B00CL6353A'])
except ScraperAPIException as e:
    print(e.original_exception)
for job in jobs:
    result = client.get(job.get('id'))
```
#### Amazon Reviews
Scrape Reviews for a single product asynchronously:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
request_id = None
try:
    job = client.amazon.review('B0B5PLT7FZ', api_params=dict(country_code='uk', tld='co.uk'))
    request_id = job.get('id')
except ScraperAPIException as e:
    print(e.original_exception)
if request_id:
    result = client.get(request_id)
```
Scrape reviews for multiple products asynchronously:
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
jobs = []
try:
    # plural `reviews` is assumed here, mirroring the `products` batch method
    jobs = client.amazon.reviews(['B0B5PLT7FZ', 'B00CL6353A'], api_params=dict(country_code='uk', tld='co.uk'))
except ScraperAPIException as e:
    print(e.original_exception)
for job in jobs:
    result = client.get(job.get('id'))
```
Read more in docs: [Amazon Review Scraping Async](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/amazon-review-details-async)
### Google Async Scraping
#### Google Async Search Scraping
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
jobs = []
try:
    jobs = client.google.search('solar eclipse')
except ScraperAPIException as e:
    print(e.original_exception)
for job in jobs:
    result = client.get(job.get('id'))
```
Read more in docs: [Google Search API (Async)](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/google-search-api-async)
#### Google Async News Scraping
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
jobs = []
try:
    jobs = client.google.news('solar eclipse')
except ScraperAPIException as e:
    print(e.original_exception)
for job in jobs:
    result = client.get(job.get('id'))
```
Read more in docs: [Google News API (Async)](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/google-news-api-async)
#### Google Async Jobs Scraping
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
jobs = []
try:
    jobs = client.google.jobs('senior software developer')
except ScraperAPIException as e:
    print(e.original_exception)
for job in jobs:
    result = client.get(job.get('id'))
```
Read more in docs: [Google Jobs API (Async)](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/google-jobs-api-async)
#### Google Async Shopping Scraping
```
from scraperapi_sdk import ScraperAPIAsyncClient, ScraperAPIException
client = ScraperAPIAsyncClient('<api_key>')
jobs = []
try:
    jobs = client.google.shopping('usb c microphone')
except ScraperAPIException as e:
    print(e.original_exception)
for job in jobs:
    result = client.get(job.get('id'))
```
Read more in docs: [Google Shopping API (Async)](https://docs.scraperapi.com/making-requests/async-structured-data-collection-method/google-shopping-api-async)