scrapy-patchright

Name: scrapy-patchright
Version: 0.0.1
Summary: Patchright Integration For Scrapy
Author: Ehsan U.
Requires-Python: <4.0,>=3.9
Upload time: 2024-12-21 11:18:44

# scrapy-playwright: Playwright integration for Scrapy
[![version](https://img.shields.io/pypi/v/scrapy-playwright.svg)](https://pypi.python.org/pypi/scrapy-playwright)
[![pyversions](https://img.shields.io/pypi/pyversions/scrapy-playwright.svg)](https://pypi.python.org/pypi/scrapy-playwright)
[![Tests](https://github.com/scrapy-plugins/scrapy-playwright/actions/workflows/tests.yml/badge.svg)](https://github.com/scrapy-plugins/scrapy-playwright/actions/workflows/tests.yml)
[![codecov](https://codecov.io/gh/scrapy-plugins/scrapy-playwright/branch/master/graph/badge.svg)](https://codecov.io/gh/scrapy-plugins/scrapy-playwright)


A [Scrapy](https://github.com/scrapy/scrapy) Download Handler which performs requests using
[Playwright for Python](https://github.com/microsoft/playwright-python).
It can be used to handle pages that require JavaScript (among other things),
while adhering to the regular Scrapy workflow (i.e. without interfering
with request scheduling, item processing, etc).


## Requirements

After the release of [version 2.0](https://docs.scrapy.org/en/latest/news.html#scrapy-2-0-0-2020-03-03),
which includes [coroutine syntax support](https://docs.scrapy.org/en/2.0/topics/coroutines.html)
and [asyncio support](https://docs.scrapy.org/en/2.0/topics/asyncio.html), Scrapy makes it
possible to integrate `asyncio`-based projects such as `Playwright`.


### Minimum required versions

* Python >= 3.8
* Scrapy >= 2.0 (!= 2.4.0)
* Playwright >= 1.15


## Installation

`scrapy-playwright` is available on PyPI and can be installed with `pip`:

```
pip install scrapy-playwright
```

`playwright` is defined as a dependency so it gets installed automatically,
however it might be necessary to install the specific browser(s) that will be
used:

```
playwright install
```

It's also possible to install only a subset of the available browsers:

```
playwright install firefox chromium
```

## Changelog

See the [changelog](docs/changelog.md) document.


## Activation

### Download handler

Replace the default `http` and/or `https` Download Handlers through
[`DOWNLOAD_HANDLERS`](https://docs.scrapy.org/en/latest/topics/settings.html):

```python
# settings.py
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
```

Note that the `ScrapyPlaywrightDownloadHandler` class inherits from the default
`http/https` handler. Unless explicitly marked (see [Basic usage](#basic-usage)),
requests will be processed by the regular Scrapy download handler.


### Twisted reactor

[Install the `asyncio`-based Twisted reactor](https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor):

```python
# settings.py
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
```

This is the default in new projects since [Scrapy 2.7](https://github.com/scrapy/scrapy/releases/tag/2.7.0).


## Basic usage

Set the [`playwright`](#playwright) [Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta)
key to download a request using Playwright:

```python
import scrapy

class AwesomeSpider(scrapy.Spider):
    name = "awesome"

    def start_requests(self):
        # GET request
        yield scrapy.Request("https://httpbin.org/get", meta={"playwright": True})
        # POST request
        yield scrapy.FormRequest(
            url="https://httpbin.org/post",
            formdata={"foo": "bar"},
            meta={"playwright": True},
        )

    def parse(self, response, **kwargs):
        # 'response' contains the page as seen by the browser
        return {"url": response.url}
```

### Notes about the User-Agent header

By default, outgoing requests include the `User-Agent` set by Scrapy (either with the
`USER_AGENT` or `DEFAULT_REQUEST_HEADERS` settings or via the `Request.headers` attribute).
This could cause some sites to react in unexpected ways, for instance if the user agent
does not match the running Browser. If you prefer the `User-Agent` sent by
default by the specific browser you're using, set the Scrapy user agent to `None`.
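
For example, a minimal sketch of letting the browser send its own `User-Agent`
by disabling the one set by Scrapy (project-level setting shown):

```python
# settings.py
# Send the browser's default User-Agent instead of Scrapy's
USER_AGENT = None
```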


## Windows support

Windows support is possible by running Playwright in a `ProactorEventLoop` in a separate thread.
This is necessary because it's not possible to run Playwright in the same
asyncio event loop as the Scrapy crawler:
* Playwright runs the driver in a subprocess. Source:
  [Playwright repository](https://github.com/microsoft/playwright-python/blob/v1.44.0/playwright/_impl/_transport.py#L120-L130).
* "On Windows, the default event loop `ProactorEventLoop` supports subprocesses,
  whereas `SelectorEventLoop` does not". Source:
  [Python docs](https://docs.python.org/3/library/asyncio-platforms.html#asyncio-windows-subprocess).
* Twisted's `asyncio` reactor requires the `SelectorEventLoop`. Source:
  [Twisted repository](https://github.com/twisted/twisted/blob/twisted-24.3.0/src/twisted/internet/asyncioreactor.py#L31)


## Supported [settings](https://docs.scrapy.org/en/latest/topics/settings.html)

### `PLAYWRIGHT_BROWSER_TYPE`
Type `str`, default `"chromium"`.

The browser type to be launched, e.g. `chromium`, `firefox`, `webkit`.

```python
PLAYWRIGHT_BROWSER_TYPE = "firefox"
```

### `PLAYWRIGHT_LAUNCH_OPTIONS`
Type `dict`, default `{}`

A dictionary with options to be passed as keyword arguments when launching the
Browser. See the docs for
[`BrowserType.launch`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch)
for a list of supported keyword arguments.

```python
PLAYWRIGHT_LAUNCH_OPTIONS = {
    "headless": False,
    "timeout": 20 * 1000,  # 20 seconds
}
```

### `PLAYWRIGHT_CDP_URL`
Type `Optional[str]`, default `None`

The endpoint of a remote Chromium browser to connect using the
[Chrome DevTools Protocol](https://chromedevtools.github.io/devtools-protocol/),
via [`BrowserType.connect_over_cdp`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect-over-cdp).

```python
PLAYWRIGHT_CDP_URL = "http://localhost:9222"
```

If this setting is used:
* all non-persistent contexts will be created on the connected remote browser
* the `PLAYWRIGHT_LAUNCH_OPTIONS` setting is ignored
* the `PLAYWRIGHT_BROWSER_TYPE` setting must not be set to a value different than "chromium"

**This setting CANNOT be used at the same time as `PLAYWRIGHT_CONNECT_URL`**

### `PLAYWRIGHT_CDP_KWARGS`
Type `dict[str, Any]`, default `{}`

Additional keyword arguments to be passed to
[`BrowserType.connect_over_cdp`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect-over-cdp)
when using `PLAYWRIGHT_CDP_URL`. The `endpoint_url` key is always ignored,
`PLAYWRIGHT_CDP_URL` is used instead.

```python
PLAYWRIGHT_CDP_KWARGS = {
    "slow_mo": 1000,
    "timeout": 10 * 1000
}
```

### `PLAYWRIGHT_CONNECT_URL`
Type `Optional[str]`, default `None`

URL of a remote Playwright browser instance to connect using
[`BrowserType.connect`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect).

From the upstream Playwright docs:
> When connecting to another browser launched via
> [`BrowserType.launchServer`](https://playwright.dev/docs/api/class-browsertype#browser-type-launch-server)
> in Node.js, the major and minor version needs to match the client version (1.2.3 → is compatible with 1.2.x).

```python
PLAYWRIGHT_CONNECT_URL = "ws://localhost:35477/ae1fa0bc325adcfd9600d9f712e9c733"
```

If this setting is used:
* all non-persistent contexts will be created on the connected remote browser
* the `PLAYWRIGHT_LAUNCH_OPTIONS` setting is ignored

**This setting CANNOT be used at the same time as `PLAYWRIGHT_CDP_URL`**

### `PLAYWRIGHT_CONNECT_KWARGS`
Type `dict[str, Any]`, default `{}`

Additional keyword arguments to be passed to
[`BrowserType.connect`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect)
when using `PLAYWRIGHT_CONNECT_URL`. The `ws_endpoint` key is always ignored,
`PLAYWRIGHT_CONNECT_URL` is used instead.

```python
PLAYWRIGHT_CONNECT_KWARGS = {
    "slow_mo": 1000,
    "timeout": 10 * 1000
}
```

### `PLAYWRIGHT_CONTEXTS`
Type `dict[str, dict]`, default `{}`

A dictionary which defines Browser contexts to be created on startup.
It should be a mapping of (name, keyword arguments).

```python
PLAYWRIGHT_CONTEXTS = {
    "foobar": {
        "context_arg1": "value",
        "context_arg2": "value",
    },
    "default": {
        "context_arg1": "value",
        "context_arg2": "value",
    },
    "persistent": {
        "user_data_dir": "/path/to/dir",  # will be a persistent context
        "context_arg1": "value",
    },
}
```

See the section on [browser contexts](#browser-contexts) for more information.
See also the docs for [`Browser.new_context`](https://playwright.dev/python/docs/api/class-browser#browser-new-context).

### `PLAYWRIGHT_MAX_CONTEXTS`
Type `Optional[int]`, default `None`

Maximum amount of allowed concurrent Playwright contexts. If unset or `None`,
no limit is enforced. See the [Maximum concurrent context count](#maximum-concurrent-context-count)
section for more information.

```python
PLAYWRIGHT_MAX_CONTEXTS = 8
```

### `PLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT`
Type `Optional[float]`, default `None`

Timeout to be used when requesting pages by Playwright, in milliseconds. If
`None` or unset, the default value will be used (30000 ms at the time of writing).
See the docs for [BrowserContext.set_default_navigation_timeout](https://playwright.dev/python/docs/api/class-browsercontext#browser-context-set-default-navigation-timeout).

```python
PLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT = 10 * 1000  # 10 seconds
```

### `PLAYWRIGHT_PROCESS_REQUEST_HEADERS`
Type `Optional[Union[Callable, str]]`, default `scrapy_playwright.headers.use_scrapy_headers`

A function (or the path to a function) that processes a Playwright request and returns a
dictionary with headers to be overridden (note that, depending on the browser, additional
default headers could be sent as well). Coroutine functions (`async def`) are supported.

This will be called at least once for each Scrapy request, but it could be called additional times
if Playwright generates more requests (e.g. to retrieve assets like images or scripts).

The function must return a `Dict[str, str]` object, and receives the following three **keyword** arguments:

```python
- browser_type_name: str
- playwright_request: playwright.async_api.Request
- scrapy_request_data: dict
    * method: str
    * url: str
    * headers: scrapy.http.headers.Headers
    * body: Optional[bytes]
    * encoding: str
```

The default function (`scrapy_playwright.headers.use_scrapy_headers`) tries to
emulate Scrapy's behaviour for navigation requests, i.e. overriding headers
with their values from the Scrapy request. For non-navigation requests (e.g.
images, stylesheets, scripts, etc), only the `User-Agent` header is overridden,
for consistency.

Setting `PLAYWRIGHT_PROCESS_REQUEST_HEADERS=None` will give complete control to
Playwright, i.e. headers from Scrapy requests will be ignored and only headers
set by Playwright will be sent. Keep in mind that in this case, headers passed
via the `Request.headers` attribute or set by Scrapy components are ignored
(including cookies set via the `Request.cookies` attribute).

Example:
```python
async def custom_headers(
    *,
    browser_type_name: str,
    playwright_request: playwright.async_api.Request,
    scrapy_request_data: dict,
) -> Dict[str, str]:
    headers = await playwright_request.all_headers()
    scrapy_headers = scrapy_request_data["headers"].to_unicode_dict()
    headers["Cookie"] = scrapy_headers.get("Cookie")
    return headers

PLAYWRIGHT_PROCESS_REQUEST_HEADERS = custom_headers
```

#### Deprecated argument handling

In version 0.0.40 and earlier, arguments were passed to the function positionally,
and only the Scrapy headers were passed instead of a dictionary with data about the
Scrapy request.
This is deprecated since version 0.0.41, and support for this way of handling arguments
will eventually be removed in accordance with the [Deprecation policy](#deprecation-policy).

Passed arguments:
```python
- browser_type: str
- playwright_request: playwright.async_api.Request
- scrapy_headers: scrapy.http.headers.Headers
```

Example:
```python
def custom_headers(
    browser_type: str,
    playwright_request: playwright.async_api.Request,
    scrapy_headers: scrapy.http.headers.Headers,
) -> dict:
    if browser_type == "firefox":
        return {"User-Agent": "foo"}
    return {"User-Agent": "bar"}

PLAYWRIGHT_PROCESS_REQUEST_HEADERS = custom_headers
```

### `PLAYWRIGHT_RESTART_DISCONNECTED_BROWSER`
Type `bool`, default `True`

Whether the browser will be restarted if it gets disconnected, for instance if the local
browser crashes or a remote connection times out.
Implemented by listening to the
[`disconnected` Browser event](https://playwright.dev/python/docs/api/class-browser#browser-event-disconnected),
for this reason it does not apply to persistent contexts since
[BrowserType.launch_persistent_context](https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch-persistent-context)
returns the context directly.
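
For instance, to keep a crashed or disconnected browser from being relaunched automatically:

```python
PLAYWRIGHT_RESTART_DISCONNECTED_BROWSER = False
```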

### `PLAYWRIGHT_MAX_PAGES_PER_CONTEXT`
Type `int`, defaults to the value of Scrapy's `CONCURRENT_REQUESTS` setting

Maximum amount of allowed concurrent Playwright pages for each context.
See the [notes about leaving unclosed pages](#receiving-page-objects-in-callbacks).

```python
PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4
```

### `PLAYWRIGHT_ABORT_REQUEST`
Type `Optional[Union[Callable, str]]`, default `None`

A predicate function (or the path to a function) that receives a
[`playwright.async_api.Request`](https://playwright.dev/python/docs/api/class-request)
object and must return `True` if the request should be aborted, `False` otherwise.
Coroutine functions (`async def`) are supported.

Note that all requests will appear in the DEBUG level logs; however, there will
be no corresponding response log lines for aborted requests. Aborted requests
are counted in the `playwright/request_count/aborted` job stats item.

```python
def should_abort_request(request):
    return (
        request.resource_type == "image"
        or ".jpg" in request.url
    )

PLAYWRIGHT_ABORT_REQUEST = should_abort_request
```

### General note about settings
For settings that accept object paths as strings, passing callable objects is
only supported when using Scrapy>=2.4. With prior versions, only strings are
supported.
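
For instance, the abort predicate from the example above could instead be referenced
by its import path (the module path below is hypothetical):

```python
# Works on older Scrapy versions as well; "myproject.utils" is a hypothetical module.
PLAYWRIGHT_ABORT_REQUEST = "myproject.utils.should_abort_request"
```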


## Supported [`Request.meta`](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta) keys

### `playwright`
Type `bool`, default `False`

If set to a value that evaluates to `True` the request will be processed by Playwright.

```python
return scrapy.Request("https://example.org", meta={"playwright": True})
```

### `playwright_context`
Type `str`, default `"default"`

Name of the context to be used to download the request.
See the section on [browser contexts](#browser-contexts) for more information.

```python
return scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "awesome_context",
    },
)
```

### `playwright_context_kwargs`
Type `dict`, default `{}`

A dictionary with keyword arguments to be used when creating a new context, if a context
with the name specified in the `playwright_context` meta key does not exist already.
See the section on [browser contexts](#browser-contexts) for more information.

```python
return scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "awesome_context",
        "playwright_context_kwargs": {
            "ignore_https_errors": True,
        },
    },
)
```

### `playwright_include_page`
Type `bool`, default `False`

If `True`, the [Playwright page](https://playwright.dev/python/docs/api/class-page)
that was used to download the request will be available in the callback at
`response.meta['playwright_page']`. If `False` (or unset) the page will be
closed immediately after processing the request.

**Important!**

This meta key is entirely optional, it's NOT necessary for the page to load or for any
asynchronous operation to be performed (specifically, it's NOT necessary for `PageMethod`
objects to be applied). Use it only if you need access to the Page object in the callback
that handles the response.

For more information and important notes see
[Receiving Page objects in callbacks](#receiving-page-objects-in-callbacks).

```python
return scrapy.Request(
    url="https://example.org",
    meta={"playwright": True, "playwright_include_page": True},
)
```

### `playwright_page_event_handlers`
Type `Dict[str, Callable]`, default `{}`

A dictionary of handlers to be attached to page events.
See [Handling page events](#handling-page-events).

### `playwright_page_init_callback`
Type `Optional[Union[Callable, str]]`, default `None`

A coroutine function (`async def`) to be invoked for newly created pages.
Called after attaching page event handlers & setting up internal route
handling, before making any request. It receives the Playwright page and the
Scrapy request as positional arguments. Useful for initialization code.
Ignored if the page for the request already exists (e.g. by passing
`playwright_page`).

```python
async def init_page(page, request):
    await page.add_init_script(path="./custom_script.js")

class AwesomeSpider(scrapy.Spider):
    def start_requests(self):
        yield scrapy.Request(
            url="https://httpbin.org/headers",
            meta={
                "playwright": True,
                "playwright_page_init_callback": init_page,
            },
        )
```

**Important!**

`scrapy-playwright` uses `Page.route` & `Page.unroute` internally, avoid using
these methods unless you know exactly what you're doing.

### `playwright_page_methods`
Type `Iterable[PageMethod]`, default `()`

An iterable of [`scrapy_playwright.page.PageMethod`](#pagemethod-class)
objects to indicate actions to be performed on the page before returning the
final response. See [Executing actions on pages](#executing-actions-on-pages).
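
For instance, a minimal sketch of a request that waits for a selector before the
response is returned (the selector is illustrative):

```python
from scrapy_playwright.page import PageMethod

yield scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_page_methods": [
            PageMethod("wait_for_selector", "div.content"),  # illustrative selector
        ],
    },
)
```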

### `playwright_page`
Type `Optional[playwright.async_api.Page]`, default `None`

A [Playwright page](https://playwright.dev/python/docs/api/class-page) to be used to
download the request. If unspecified, a new page is created for each request.
This key could be used in conjunction with `playwright_include_page` to make a chain of
requests using the same page. For instance:

```python
from playwright.async_api import Page

def start_requests(self):
    yield scrapy.Request(
        url="https://httpbin.org/get",
        meta={"playwright": True, "playwright_include_page": True},
    )

def parse(self, response, **kwargs):
    page: Page = response.meta["playwright_page"]
    yield scrapy.Request(
        url="https://httpbin.org/headers",
        callback=self.parse_headers,
        meta={"playwright": True, "playwright_page": page},
    )
```

### `playwright_page_goto_kwargs`
Type `dict`, default `{}`

A dictionary with keyword arguments to be passed to the page's
[`goto` method](https://playwright.dev/python/docs/api/class-page#page-goto)
when navigating to a URL. The `url` key is ignored if present; the request
URL is used instead.

```python
return scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_page_goto_kwargs": {
            "wait_until": "networkidle",
        },
    },
)
```

### `playwright_security_details`
Type `Optional[dict]`, read only

A dictionary with [security information](https://playwright.dev/python/docs/api/class-response#response-security-details)
about the given response. Only available for HTTPS requests. Can be accessed
in the callback via `response.meta['playwright_security_details']`.

```python
def parse(self, response, **kwargs):
    print(response.meta["playwright_security_details"])
    # {'issuer': 'DigiCert TLS RSA SHA256 2020 CA1', 'protocol': 'TLS 1.3', 'subjectName': 'www.example.org', 'validFrom': 1647216000, 'validTo': 1678838399}
```

### `playwright_suggested_filename`
Type `Optional[str]`, read only

The value of the [`Download.suggested_filename`](https://playwright.dev/python/docs/api/class-download#download-suggested-filename)
attribute when the response is the binary contents of a
[download](https://playwright.dev/python/docs/downloads) (e.g. a PDF file).
Only available for responses that exclusively triggered a download. Can be accessed
in the callback via `response.meta['playwright_suggested_filename']`.

```python
def parse(self, response, **kwargs):
    print(response.meta["playwright_suggested_filename"])
    # 'sample_file.pdf'
```
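
A minimal sketch of saving a downloaded file under its suggested name, assuming the
request indeed resulted in a download (the fallback name is illustrative):

```python
def parse(self, response, **kwargs):
    filename = response.meta.get("playwright_suggested_filename") or "download.bin"
    with open(filename, "wb") as fp:
        fp.write(response.body)  # the response body holds the binary contents of the download
```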

## Receiving Page objects in callbacks

Specifying a value that evaluates to `True` in the
[`playwright_include_page`](#playwright_include_page) meta key for a
request will result in the corresponding `playwright.async_api.Page` object
being available in the `playwright_page` meta key in the request callback.
In order to be able to `await` coroutines on the provided `Page` object,
the callback needs to be defined as a coroutine function (`async def`).

**Caution**

Use this carefully, and only if you really need to do things with the Page
object in the callback. If pages are not properly closed after they are no longer
necessary the spider job could get stuck because of the limit set by the
`PLAYWRIGHT_MAX_PAGES_PER_CONTEXT` setting.

```python
from playwright.async_api import Page
import scrapy

class AwesomeSpiderWithPage(scrapy.Spider):
    name = "page_spider"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            callback=self.parse_first,
            meta={"playwright": True, "playwright_include_page": True},
            errback=self.errback_close_page,
        )

    def parse_first(self, response):
        page: Page = response.meta["playwright_page"]
        return scrapy.Request(
            url="https://example.com",
            callback=self.parse_second,
            meta={"playwright": True, "playwright_include_page": True, "playwright_page": page},
            errback=self.errback_close_page,
        )

    async def parse_second(self, response):
        page: Page = response.meta["playwright_page"]
        title = await page.title()  # "Example Domain"
        await page.close()
        return {"title": title}

    async def errback_close_page(self, failure):
        page: Page = failure.request.meta["playwright_page"]
        await page.close()
```

**Notes:**

* When passing `playwright_include_page=True`, make sure pages are always closed
  when they are no longer used. It's recommended to set a Request errback to make
  sure pages are closed even if a request fails (if `playwright_include_page=False`
  pages are automatically closed upon encountering an exception).
  This is important, as open pages count towards the limit set by
  `PLAYWRIGHT_MAX_PAGES_PER_CONTEXT` and crawls could freeze if the limit is reached
  and pages remain open indefinitely.
* Defining callbacks as `async def` is only necessary if you need to `await` things,
  it's NOT necessary if you just need to pass over the Page object from one callback
  to another (see the example above).
* Any network operations resulting from awaiting a coroutine on a Page object
  (`goto`, `go_back`, etc) will be executed directly by Playwright, bypassing the
  Scrapy request workflow (Scheduler, Middlewares, etc).


## Browser contexts

Multiple [browser contexts](https://playwright.dev/python/docs/browser-contexts)
to be launched at startup can be defined via the
[`PLAYWRIGHT_CONTEXTS`](#playwright_contexts) setting.

### Choosing a specific context for a request

Pass the name of the desired context in the `playwright_context` meta key:

```python
yield scrapy.Request(
    url="https://example.org",
    meta={"playwright": True, "playwright_context": "first"},
)
```

### Default context

If a request does not explicitly indicate a context via the `playwright_context`
meta key, it falls back to using a general context called `default`. This `default`
context can also be customized on startup via the `PLAYWRIGHT_CONTEXTS` setting.

### Persistent contexts

Pass a value for the `user_data_dir` keyword argument to launch a context as
persistent. See also [`BrowserType.launch_persistent_context`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch-persistent-context).

Note that persistent contexts are launched independently from the main browser
instance, hence keyword arguments passed in the
[`PLAYWRIGHT_LAUNCH_OPTIONS`](#playwright_launch_options)
setting do not apply.
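
A minimal sketch of defining a persistent context, assuming that launch-related keyword
arguments (e.g. `headless`) are passed in the context definition itself, since persistent
contexts are created via `launch_persistent_context`:

```python
PLAYWRIGHT_CONTEXTS = {
    "persistent": {
        "user_data_dir": "/path/to/profile",  # presence of this key makes the context persistent
        "headless": False,  # launch-related argument, accepted by launch_persistent_context
    },
}
```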

### Creating contexts while crawling

If the context specified in the `playwright_context` meta key does not exist, it will be created.
You can specify keyword arguments to be passed to
[`Browser.new_context`](https://playwright.dev/python/docs/api/class-browser#browser-new-context)
in the `playwright_context_kwargs` meta key:

```python
yield scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "new",
        "playwright_context_kwargs": {
            "java_script_enabled": False,
            "ignore_https_errors": True,
            "proxy": {
                "server": "http://myproxy.com:3128",
                "username": "user",
                "password": "pass",
            },
        },
    },
)
```

Please note that if a context with the specified name already exists,
that context is used and `playwright_context_kwargs` are ignored.

### Closing contexts while crawling

After [receiving the Page object in your callback](#receiving-page-objects-in-callbacks),
you can access a context through the corresponding [`Page.context`](https://playwright.dev/python/docs/api/class-page#page-context)
attribute, and await [`close`](https://playwright.dev/python/docs/api/class-browsercontext#browser-context-close) on it.

```python
def parse(self, response, **kwargs):
    yield scrapy.Request(
        url="https://example.org",
        callback=self.parse_in_new_context,
        errback=self.close_context_on_error,
        meta={
            "playwright": True,
            "playwright_context": "awesome_context",
            "playwright_include_page": True,
        },
    )

async def parse_in_new_context(self, response):
    page = response.meta["playwright_page"]
    title = await page.title()
    await page.close()
    await page.context.close()
    return {"title": title}

async def close_context_on_error(self, failure):
    page = failure.request.meta["playwright_page"]
    await page.close()
    await page.context.close()
```

### Avoid race conditions & memory leaks when closing contexts
Make sure to close the page before closing the context. See
[this comment](https://github.com/scrapy-plugins/scrapy-playwright/issues/191#issuecomment-1548097114)
in [#191](https://github.com/scrapy-plugins/scrapy-playwright/issues/191)
for more information.

### Maximum concurrent context count

Specify a value for the `PLAYWRIGHT_MAX_CONTEXTS` setting to limit the number
of concurrent contexts. Use with caution: it's possible to block the whole crawl
if contexts are not closed after they are no longer used (refer to
[this section](#closing-contexts-while-crawling) to dynamically close contexts).
Make sure to define an errback to still close contexts even if there are errors.


## Proxy support

Proxies are supported at the Browser level by specifying the `proxy` key in
the `PLAYWRIGHT_LAUNCH_OPTIONS` setting:

```python
from scrapy import Spider, Request

class ProxySpider(Spider):
    name = "proxy"
    custom_settings = {
        "PLAYWRIGHT_LAUNCH_OPTIONS": {
            "proxy": {
                "server": "http://myproxy.com:3128",
                "username": "user",
                "password": "pass",
            },
        }
    }

    def start_requests(self):
        yield Request("http://httpbin.org/get", meta={"playwright": True})

    def parse(self, response, **kwargs):
        print(response.text)
```

Proxies can also be set at the context level with the `PLAYWRIGHT_CONTEXTS` setting:

```python
PLAYWRIGHT_CONTEXTS = {
    "default": {
        "proxy": {
            "server": "http://default-proxy.com:3128",
            "username": "user1",
            "password": "pass1",
        },
    },
    "alternative": {
        "proxy": {
            "server": "http://alternative-proxy.com:3128",
            "username": "user2",
            "password": "pass2",
        },
    },
}
```

Proxies can also be set by passing a `proxy` key when [creating contexts while crawling](#creating-contexts-while-crawling).

See also:
* [`zyte-smartproxy-playwright`](https://github.com/zytedata/zyte-smartproxy-playwright):
  seamless support for [Zyte Smart Proxy Manager](https://www.zyte.com/smart-proxy-manager/)
  in the Node.js version of Playwright.
* the [upstream Playwright for Python section](https://playwright.dev/python/docs/network#http-proxy)
  on HTTP Proxies.


## Executing actions on pages

A sorted iterable (e.g. `list`, `tuple`, `dict`) of `PageMethod` objects
could be passed in the `playwright_page_methods`
[Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta)
key to request methods to be invoked on the `Page` object before returning the final
`Response` to the callback.

This is useful when you need to perform certain actions on a page (like scrolling
down or clicking links) and you want to handle only the final result in your callback.

### `PageMethod` class

#### `scrapy_playwright.page.PageMethod(method: str | callable, *args, **kwargs)`:

Represents a method to be called (and awaited if necessary) on a
`playwright.page.Page` object (e.g. "click", "screenshot", "evaluate", etc).
It's also possible to pass callable objects that will be invoked as callbacks
and receive the Playwright Page as an argument.
`method` is the name of the method, `*args` and `**kwargs`
are passed when calling such method. The return value
will be stored in the `PageMethod.result` attribute.

For instance:
```python
def start_requests(self):
    yield Request(
        url="https://example.org",
        meta={
            "playwright": True,
            "playwright_page_methods": [
                PageMethod("screenshot", path="example.png", full_page=True),
            ],
        },
    )

def parse(self, response, **kwargs):
    screenshot = response.meta["playwright_page_methods"][0]
    # screenshot.result contains the image's bytes
```

produces the same effect as:
```python
def start_requests(self):
    yield Request(
        url="https://example.org",
        meta={"playwright": True, "playwright_include_page": True},
    )

async def parse(self, response, **kwargs):
    page = response.meta["playwright_page"]
    screenshot = await page.screenshot(path="example.png", full_page=True)
    # screenshot contains the image's bytes
    await page.close()
```

### Passing callable objects

If a `PageMethod` receives a callable object as its first argument, it will be
called with the page as its first argument. Any additional arguments are passed
to the callable after the page.

```python
async def scroll_page(page: Page) -> str:
    await page.wait_for_selector(selector="div.quote")
    await page.evaluate("window.scrollBy(0, document.body.scrollHeight)")
    await page.wait_for_selector(selector="div.quote:nth-child(11)")
    return page.url


class MySpider(scrapy.Spider):
    name = "scroll"

    def start_requests(self):
        yield Request(
            url="https://quotes.toscrape.com/scroll",
            meta={
                "playwright": True,
                "playwright_page_methods": [PageMethod(scroll_page)],
            },
        )
```
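
Additional arguments given to `PageMethod` are forwarded to the callable after the page.
A minimal sketch (the selector argument is illustrative):

```python
from playwright.async_api import Page
from scrapy_playwright.page import PageMethod

async def wait_and_scroll(page: Page, selector: str) -> None:
    # 'selector' is received after the page
    await page.wait_for_selector(selector)
    await page.evaluate("window.scrollBy(0, document.body.scrollHeight)")

# in a spider's start_requests:
meta = {
    "playwright": True,
    "playwright_page_methods": [PageMethod(wait_and_scroll, "div.quote")],
}
```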

### Supported Playwright methods

Refer to the [upstream docs for the `Page` class](https://playwright.dev/python/docs/api/class-page)
to see available methods.

### Impact on Response objects

Certain `Response` attributes (e.g. `url`, `ip_address`) reflect the state after the last
action performed on a page. If you issue a `PageMethod` with an action that results in
a navigation (e.g. a `click` on a link), the `Response.url` attribute will point to the
new URL, which might be different from the request's URL.
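
For example, a sketch where a click triggers a navigation and the callback observes the
post-navigation URL (the selector is illustrative):

```python
def start_requests(self):
    yield scrapy.Request(
        url="https://example.org",
        meta={
            "playwright": True,
            "playwright_page_methods": [
                PageMethod("click", "a"),  # clicking the link navigates away
            ],
        },
    )

def parse(self, response, **kwargs):
    # response.url reflects the page after the click, not the original request URL
    self.logger.info(response.url)
```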


## Handling page events

A dictionary of Page event handlers can be specified in the `playwright_page_event_handlers`
[Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta) key.
Keys are the name of the event to be handled (e.g. `dialog`, `download`, etc).
Values can be either callables or strings (in which case a spider method with the name will be looked up).

Example:

```python
from playwright.async_api import Dialog

async def handle_dialog(dialog: Dialog) -> None:
    logging.info(f"Handled dialog with message: {dialog.message}")
    await dialog.dismiss()

class EventSpider(scrapy.Spider):
    name = "event"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta={
                "playwright": True,
                "playwright_page_event_handlers": {
                    "dialog": handle_dialog,
                    "response": "handle_response",
                },
            },
        )

    async def handle_response(self, response: PlaywrightResponse) -> None:
        logging.info(f"Received response with URL {response.url}")
```

See the [upstream `Page` docs](https://playwright.dev/python/docs/api/class-page)
for a list of the accepted events and the arguments passed to their handlers.

### Notes about page event handlers

* Event handlers will remain attached to the page and will be called for
  subsequent downloads using the same page unless they are
  [removed later](https://playwright.dev/python/docs/events#addingremoving-event-listener).
  This is usually not a problem, since by default requests are performed in
  single-use pages.
* Event handlers will process Playwright objects, not Scrapy ones. For example,
  for each Scrapy request/response there will be a matching Playwright
  request/response, but not the other way: background requests/responses to get
  images, scripts, stylesheets, etc are not seen by Scrapy.


## Memory usage extension

The default Scrapy memory usage extension
(`scrapy.extensions.memusage.MemoryUsage`) does not include the memory used by
Playwright because the browser is launched as a separate process. The
scrapy-playwright package provides a replacement extension which also considers
the memory used by Playwright. This extension needs the
[`psutil`](https://pypi.org/project/psutil/) package to work.

Update the [EXTENSIONS](https://docs.scrapy.org/en/latest/topics/settings.html#std-setting-EXTENSIONS)
setting to disable the built-in Scrapy extension and replace it with the one
from the scrapy-playwright package:

```python
# settings.py
EXTENSIONS = {
    "scrapy.extensions.memusage.MemoryUsage": None,
    "scrapy_playwright.memusage.ScrapyPlaywrightMemoryUsageExtension": 0,
}
```

Refer to the
[upstream docs](https://docs.scrapy.org/en/latest/topics/extensions.html#module-scrapy.extensions.memusage)
for more information about supported settings.

### Windows support

Just like the [upstream Scrapy extension](https://docs.scrapy.org/en/latest/topics/extensions.html#module-scrapy.extensions.memusage), this custom memory extension does not work
on Windows. This is because the stdlib [`resource`](https://docs.python.org/3/library/resource.html)
module is not available.


## Examples

**Click on a link, save the resulting page as PDF**

```python
class ClickAndSavePdfSpider(scrapy.Spider):
    name = "pdf"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta=dict(
                playwright=True,
                playwright_page_methods={
                    "click": PageMethod("click", selector="a"),
                    "pdf": PageMethod("pdf", path="/tmp/file.pdf"),
                },
            ),
        )

    def parse(self, response, **kwargs):
        pdf_bytes = response.meta["playwright_page_methods"]["pdf"].result
        with open("iana.pdf", "wb") as fp:
            fp.write(pdf_bytes)
        yield {"url": response.url}  # response.url is "https://www.iana.org/domains/reserved"
```

**Scroll down on an infinite scroll page, take a screenshot of the full page**

```python
class ScrollSpider(scrapy.Spider):
    name = "scroll"

    def start_requests(self):
        yield scrapy.Request(
            url="http://quotes.toscrape.com/scroll",
            meta=dict(
                playwright=True,
                playwright_include_page=True,
                playwright_page_methods=[
                    PageMethod("wait_for_selector", "div.quote"),
                    PageMethod("evaluate", "window.scrollBy(0, document.body.scrollHeight)"),
                    PageMethod("wait_for_selector", "div.quote:nth-child(11)"),  # 10 per page
                ],
            ),
        )

    async def parse(self, response, **kwargs):
        page = response.meta["playwright_page"]
        await page.screenshot(path="quotes.png", full_page=True)
        await page.close()
        return {"quote_count": len(response.css("div.quote"))}  # quotes from several pages
```


See the [examples](examples) directory for more.


## Known issues

### No per-request proxy support
Specifying a proxy via the `proxy` Request meta key is not supported.
Refer to the [Proxy support](#proxy-support) section for more information.

### Unsupported signals
The `headers_received` and `bytes_received` signals are not fired by the
scrapy-playwright download handler.


## Reporting issues

Before opening an issue please make sure the unexpected behavior can only be
observed by using this package and not with standalone Playwright. To do this,
translate your spider code to a reasonably close Playwright script: if the
issue also occurs this way, you should instead report it
[upstream](https://github.com/microsoft/playwright-python).
For instance:

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta=dict(
                playwright=True,
                playwright_page_methods=[
                    PageMethod("screenshot", path="example.png", full_page=True),
                ],
            ),
        )
```

translates roughly to:

```python
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as pw:
        browser = await pw.chromium.launch()
        page = await browser.new_page()
        await page.goto("https://example.org")
        await page.screenshot(path="example.png", full_page=True)
        await browser.close()

asyncio.run(main())
```

### Software versions

Be sure to include which versions of Scrapy, Playwright and scrapy-playwright you are using:

```
$ playwright --version
Version 1.44.0
```

```
$ python -c "import scrapy_playwright; print(scrapy_playwright.__version__)"
0.0.34
```

```
$ scrapy version -v
Scrapy       : 2.11.1
lxml         : 5.1.0.0
libxml2      : 2.12.3
cssselect    : 1.2.0
parsel       : 1.8.1
w3lib        : 2.1.2
Twisted      : 23.10.0
Python       : 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
pyOpenSSL    : 24.0.0 (OpenSSL 3.2.1 30 Jan 2024)
cryptography : 42.0.5
Platform     : Linux-6.5.0-35-generic-x86_64-with-glibc2.35
```

### Reproducible code example

When opening an issue please include a
[Minimal, Reproducible Example](https://stackoverflow.com/help/minimal-reproducible-example)
that shows the reported behavior. In addition, please make the code as self-contained as possible
so an active Scrapy project is not required and the spider can be executed directly from a file with
[`scrapy runspider`](https://docs.scrapy.org/en/latest/topics/commands.html#std-command-runspider).
This usually means including the relevant settings in the spider's
[`custom_settings`](https://docs.scrapy.org/en/latest/topics/settings.html#settings-per-spider)
attribute:

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    custom_settings = {
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        },
    }

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta={"playwright": True},
        )
```

#### Minimal code
Please make the effort to reduce the code to the minimum that still displays the issue.
It is very rare that a complete project (including middlewares, pipelines, item processing, etc)
is really needed to reproduce an issue. Reports that do not show an actual debugging attempt
will not be considered.

### Logs and stats

Logs for spider jobs displaying the issue in detail are extremely useful
for understanding possible bugs. Include lines before and after the problem,
not just isolated tracebacks. Job stats displayed at the end of the job
are also important.


## Frequently Asked Questions

See the [FAQ](docs/faq.md) document.


## Deprecation policy

Deprecated features will be supported for at least six months
following the release that deprecated them. After that, they
may be removed at any time. See the [changelog](docs/changelog.md)
for more information about deprecations and removals.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "scrapy-patchright",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.9",
    "maintainer_email": null,
    "keywords": null,
    "author": "Ehsan U.",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/e2/38/b81fc9894c1d9ba993c8173e73e05cf5b0e44fe61fb9308b0719e496372e/scrapy_patchright-0.0.1.tar.gz",
    "platform": null,
    "description": "# scrapy-playwright: Playwright integration for Scrapy\n[![version](https://img.shields.io/pypi/v/scrapy-playwright.svg)](https://pypi.python.org/pypi/scrapy-playwright)\n[![pyversions](https://img.shields.io/pypi/pyversions/scrapy-playwright.svg)](https://pypi.python.org/pypi/scrapy-playwright)\n[![Tests](https://github.com/scrapy-plugins/scrapy-playwright/actions/workflows/tests.yml/badge.svg)](https://github.com/scrapy-plugins/scrapy-playwright/actions/workflows/tests.yml)\n[![codecov](https://codecov.io/gh/scrapy-plugins/scrapy-playwright/branch/master/graph/badge.svg)](https://codecov.io/gh/scrapy-plugins/scrapy-playwright)\n\n\nA [Scrapy](https://github.com/scrapy/scrapy) Download Handler which performs requests using\n[Playwright for Python](https://github.com/microsoft/playwright-python).\nIt can be used to handle pages that require JavaScript (among other things),\nwhile adhering to the regular Scrapy workflow (i.e. without interfering\nwith request scheduling, item processing, etc).\n\n\n## Requirements\n\nAfter the release of [version 2.0](https://docs.scrapy.org/en/latest/news.html#scrapy-2-0-0-2020-03-03),\nwhich includes [coroutine syntax support](https://docs.scrapy.org/en/2.0/topics/coroutines.html)\nand [asyncio support](https://docs.scrapy.org/en/2.0/topics/asyncio.html), Scrapy allows\nto integrate `asyncio`-based projects such as `Playwright`.\n\n\n### Minimum required versions\n\n* Python >= 3.8\n* Scrapy >= 2.0 (!= 2.4.0)\n* Playwright >= 1.15\n\n\n## Installation\n\n`scrapy-playwright` is available on PyPI and can be installed with `pip`:\n\n```\npip install scrapy-playwright\n```\n\n`playwright` is defined as a dependency so it gets installed automatically,\nhowever it might be necessary to install the specific browser(s) that will be\nused:\n\n```\nplaywright install\n```\n\nIt's also possible to install only a subset of the available browsers:\n\n```\nplaywright install firefox chromium\n```\n\n## Changelog\n\nSee the [changelog](docs/changelog.md) document.\n\n\n## Activation\n\n### Download handler\n\nReplace the default `http` and/or `https` Download Handlers through\n[`DOWNLOAD_HANDLERS`](https://docs.scrapy.org/en/latest/topics/settings.html):\n\n```python\n# settings.py\nDOWNLOAD_HANDLERS = {\n    \"http\": \"scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler\",\n    \"https\": \"scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler\",\n}\n```\n\nNote that the `ScrapyPlaywrightDownloadHandler` class inherits from the default\n`http/https` handler. 
Unless explicitly marked (see [Basic usage](#basic-usage)),\nrequests will be processed by the regular Scrapy download handler.\n\n\n### Twisted reactor\n\n[Install the `asyncio`-based Twisted reactor](https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor):\n\n```python\n# settings.py\nTWISTED_REACTOR = \"twisted.internet.asyncioreactor.AsyncioSelectorReactor\"\n```\n\nThis is the default in new projects since [Scrapy 2.7](https://github.com/scrapy/scrapy/releases/tag/2.7.0).\n\n\n## Basic usage\n\nSet the [`playwright`](#playwright) [Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta)\nkey to download a request using Playwright:\n\n```python\nimport scrapy\n\nclass AwesomeSpider(scrapy.Spider):\n    name = \"awesome\"\n\n    def start_requests(self):\n        # GET request\n        yield scrapy.Request(\"https://httpbin.org/get\", meta={\"playwright\": True})\n        # POST request\n        yield scrapy.FormRequest(\n            url=\"https://httpbin.org/post\",\n            formdata={\"foo\": \"bar\"},\n            meta={\"playwright\": True},\n        )\n\n    def parse(self, response, **kwargs):\n        # 'response' contains the page as seen by the browser\n        return {\"url\": response.url}\n```\n\n### Notes about the User-Agent header\n\nBy default, outgoing requests include the `User-Agent` set by Scrapy (either with the\n`USER_AGENT` or `DEFAULT_REQUEST_HEADERS` settings or via the `Request.headers` attribute).\nThis could cause some sites to react in unexpected ways, for instance if the user agent\ndoes not match the running Browser. If you prefer the `User-Agent` sent by\ndefault by the specific browser you're using, set the Scrapy user agent to `None`.\n\n\n## Windows support\n\nWindows support is possible by running Playwright in a `ProactorEventLoop` in a separate thread.\nThis is necessary because it's not possible to run Playwright in the same\nasyncio event loop as the Scrapy crawler:\n* Playwright runs the driver in a subprocess. Source:\n  [Playwright repository](https://github.com/microsoft/playwright-python/blob/v1.44.0/playwright/_impl/_transport.py#L120-L130).\n* \"On Windows, the default event loop `ProactorEventLoop` supports subprocesses,\n  whereas `SelectorEventLoop` does not\". Source:\n  [Python docs](https://docs.python.org/3/library/asyncio-platforms.html#asyncio-windows-subprocess).\n* Twisted's `asyncio` reactor requires the `SelectorEventLoop`. Source:\n  [Twisted repository](https://github.com/twisted/twisted/blob/twisted-24.3.0/src/twisted/internet/asyncioreactor.py#L31)\n\n\n## Supported [settings](https://docs.scrapy.org/en/latest/topics/settings.html)\n\n### `PLAYWRIGHT_BROWSER_TYPE`\nType `str`, default `\"chromium\"`.\n\nThe browser type to be launched, e.g. `chromium`, `firefox`, `webkit`.\n\n```python\nPLAYWRIGHT_BROWSER_TYPE = \"firefox\"\n```\n\n### `PLAYWRIGHT_LAUNCH_OPTIONS`\nType `dict`, default `{}`\n\nA dictionary with options to be passed as keyword arguments when launching the\nBrowser. 
See the docs for\n[`BrowserType.launch`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch)\nfor a list of supported keyword arguments.\n\n```python\nPLAYWRIGHT_LAUNCH_OPTIONS = {\n    \"headless\": False,\n    \"timeout\": 20 * 1000,  # 20 seconds\n}\n```\n\n### `PLAYWRIGHT_CDP_URL`\nType `Optional[str]`, default `None`\n\nThe endpoint of a remote Chromium browser to connect using the\n[Chrome DevTools Protocol](https://chromedevtools.github.io/devtools-protocol/),\nvia [`BrowserType.connect_over_cdp`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect-over-cdp).\n\n```python\nPLAYWRIGHT_CDP_URL = \"http://localhost:9222\"\n```\n\nIf this setting is used:\n* all non-persistent contexts will be created on the connected remote browser\n* the `PLAYWRIGHT_LAUNCH_OPTIONS` setting is ignored\n* the `PLAYWRIGHT_BROWSER_TYPE` setting must not be set to a value different than \"chromium\"\n\n**This settings CANNOT be used at the same time as `PLAYWRIGHT_CONNECT_URL`**\n\n### `PLAYWRIGHT_CDP_KWARGS`\nType `dict[str, Any]`, default `{}`\n\nAdditional keyword arguments to be passed to\n[`BrowserType.connect_over_cdp`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect-over-cdp)\nwhen using `PLAYWRIGHT_CDP_URL`. The `endpoint_url` key is always ignored,\n`PLAYWRIGHT_CDP_URL` is used instead.\n\n```python\nPLAYWRIGHT_CDP_KWARGS = {\n    \"slow_mo\": 1000,\n    \"timeout\": 10 * 1000\n}\n```\n\n### `PLAYWRIGHT_CONNECT_URL`\nType `Optional[str]`, default `None`\n\nURL of a remote Playwright browser instance to connect using\n[`BrowserType.connect`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect).\n\nFrom the upstream Playwright docs:\n> When connecting to another browser launched via\n> [`BrowserType.launchServer`](https://playwright.dev/docs/api/class-browsertype#browser-type-launch-server)\n> in Node.js, the major and minor version needs to match the client version (1.2.3 \u2192 is compatible with 1.2.x).\n\n```python\nPLAYWRIGHT_CONNECT_URL = \"ws://localhost:35477/ae1fa0bc325adcfd9600d9f712e9c733\"\n```\n\nIf this setting is used:\n* all non-persistent contexts will be created on the connected remote browser\n* the `PLAYWRIGHT_LAUNCH_OPTIONS` setting is ignored\n\n**This settings CANNOT be used at the same time as `PLAYWRIGHT_CDP_URL`**\n\n### `PLAYWRIGHT_CONNECT_KWARGS`\nType `dict[str, Any]`, default `{}`\n\nAdditional keyword arguments to be passed to\n[`BrowserType.connect`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-connect)\nwhen using `PLAYWRIGHT_CONNECT_URL`. 
The `ws_endpoint` key is always ignored,\n`PLAYWRIGHT_CONNECT_URL` is used instead.\n\n```python\nPLAYWRIGHT_CONNECT_KWARGS = {\n    \"slow_mo\": 1000,\n    \"timeout\": 10 * 1000\n}\n```\n\n### `PLAYWRIGHT_CONTEXTS`\nType `dict[str, dict]`, default `{}`\n\nA dictionary which defines Browser contexts to be created on startup.\nIt should be a mapping of (name, keyword arguments).\n\n```python\nPLAYWRIGHT_CONTEXTS = {\n    \"foobar\": {\n        \"context_arg1\": \"value\",\n        \"context_arg2\": \"value\",\n    },\n    \"default\": {\n        \"context_arg1\": \"value\",\n        \"context_arg2\": \"value\",\n    },\n    \"persistent\": {\n        \"user_data_dir\": \"/path/to/dir\",  # will be a persistent context\n        \"context_arg1\": \"value\",\n    },\n}\n```\n\nSee the section on [browser contexts](#browser-contexts) for more information.\nSee also the docs for [`Browser.new_context`](https://playwright.dev/python/docs/api/class-browser#browser-new-context).\n\n### `PLAYWRIGHT_MAX_CONTEXTS`\nType `Optional[int]`, default `None`\n\nMaximum amount of allowed concurrent Playwright contexts. If unset or `None`,\nno limit is enforced. See the [Maximum concurrent context count](#maximum-concurrent-context-count)\nsection for more information.\n\n```python\nPLAYWRIGHT_MAX_CONTEXTS = 8\n```\n\n### `PLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT`\nType `Optional[float]`, default `None`\n\nTimeout to be used when requesting pages by Playwright, in milliseconds. If\n`None` or unset, the default value will be used (30000 ms at the time of writing).\nSee the docs for [BrowserContext.set_default_navigation_timeout](https://playwright.dev/python/docs/api/class-browsercontext#browser-context-set-default-navigation-timeout).\n\n```python\nPLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT = 10 * 1000  # 10 seconds\n```\n\n### `PLAYWRIGHT_PROCESS_REQUEST_HEADERS`\nType `Optional[Union[Callable, str]]`, default `scrapy_playwright.headers.use_scrapy_headers`\n\nA function (or the path to a function) that processes a Playwright request and returns a\ndictionary with headers to be overridden (note that, depending on the browser, additional\ndefault headers could be sent as well). Coroutine functions (`async def`) are supported.\n\nThis will be called at least once for each Scrapy request, but it could be called additional times\nif Playwright generates more requests (e.g. to retrieve assets like images or scripts).\n\nThe function must return a `Dict[str, str]` object, and receives the following three **keyword** arguments:\n\n```python\n- browser_type_name: str\n- playwright_request: playwright.async_api.Request\n- scrapy_request_data: dict\n    * method: str\n    * url: str\n    * headers: scrapy.http.headers.Headers\n    * body: Optional[bytes]\n    * encoding: str\n```\n\nThe default function (`scrapy_playwright.headers.use_scrapy_headers`) tries to\nemulate Scrapy's behaviour for navigation requests, i.e. overriding headers\nwith their values from the Scrapy request. For non-navigation requests (e.g.\nimages, stylesheets, scripts, etc), only the `User-Agent` header is overriden,\nfor consistency.\n\nSetting `PLAYWRIGHT_PROCESS_REQUEST_HEADERS=None` will give complete control to\nPlaywright, i.e. headers from Scrapy requests will be ignored and only headers\nset by Playwright will be sent. 
Keep in mind that in this case, headers passed\nvia the `Request.headers` attribute or set by Scrapy components are ignored\n(including cookies set via the `Request.cookies` attribute).\n\nExample:\n```python\nasync def custom_headers(\n    *,\n    browser_type_name: str,\n    playwright_request: playwright.async_api.Request,\n    scrapy_request_data: dict,\n) -> Dict[str, str]:\n    headers = await playwright_request.all_headers()\n    scrapy_headers = scrapy_request_data[\"headers\"].to_unicode_dict()\n    headers[\"Cookie\"] = scrapy_headers.get(\"Cookie\")\n    return headers\n\nPLAYWRIGHT_PROCESS_REQUEST_HEADERS = custom_headers\n```\n\n#### Deprecated argument handling\n\nIn version 0.0.40 and earlier, arguments were passed to the function positionally,\nand only the Scrapy headers were passed instead of a dictionary with data about the\nScrapy request.\nThis is deprecated since version 0.0.41, and support for this way of handling arguments\nwill eventually be removed in accordance with the [Deprecation policy](#deprecation-policy).\n\nPassed arguments:\n```python\n- browser_type: str\n- playwright_request: playwright.async_api.Request\n- scrapy_headers: scrapy.http.headers.Headers\n```\n\nExample:\n```python\ndef custom_headers(\n    browser_type: str,\n    playwright_request: playwright.async_api.Request,\n    scrapy_headers: scrapy.http.headers.Headers,\n) -> dict:\n    if browser_type == \"firefox\":\n        return {\"User-Agent\": \"foo\"}\n    return {\"User-Agent\": \"bar\"}\n\nPLAYWRIGHT_PROCESS_REQUEST_HEADERS = custom_headers\n```\n\n### `PLAYWRIGHT_RESTART_DISCONNECTED_BROWSER`\nType `bool`, default `True`\n\nWhether the browser will be restarted if it gets disconnected, for instance if the local\nbrowser crashes or a remote connection times out.\nImplemented by listening to the\n[`disconnected` Browser event](https://playwright.dev/python/docs/api/class-browser#browser-event-disconnected),\nfor this reason it does not apply to persistent contexts since\n[BrowserType.launch_persistent_context](https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch-persistent-context)\nreturns the context directly.\n\n### `PLAYWRIGHT_MAX_PAGES_PER_CONTEXT`\nType `int`, defaults to the value of Scrapy's `CONCURRENT_REQUESTS` setting\n\nMaximum amount of allowed concurrent Playwright pages for each context.\nSee the [notes about leaving unclosed pages](#receiving-page-objects-in-callbacks).\n\n```python\nPLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4\n```\n\n### `PLAYWRIGHT_ABORT_REQUEST`\nType `Optional[Union[Callable, str]]`, default `None`\n\nA predicate function (or the path to a function) that receives a\n[`playwright.async_api.Request`](https://playwright.dev/python/docs/api/class-request)\nobject and must return `True` if the request should be aborted, `False` otherwise.\nCoroutine functions (`async def`) are supported.\n\nNote that all requests will appear in the DEBUG level logs, however there will\nbe no corresponding response log lines for aborted requests. Aborted requests\nare counted in the `playwright/request_count/aborted` job stats item.\n\n```python\ndef should_abort_request(request):\n    return (\n        request.resource_type == \"image\"\n        or \".jpg\" in request.url\n    )\n\nPLAYWRIGHT_ABORT_REQUEST = should_abort_request\n```\n\n### General note about settings\nFor settings that accept object paths as strings, passing callable objects is\nonly supported when using Scrapy>=2.4. 


## Supported [`Request.meta`](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta) keys

### `playwright`
Type `bool`, default `False`

If set to a value that evaluates to `True`, the request will be processed by Playwright.

```python
return scrapy.Request("https://example.org", meta={"playwright": True})
```

### `playwright_context`
Type `str`, default `"default"`

Name of the context to be used to download the request.
See the section on [browser contexts](#browser-contexts) for more information.

```python
return scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "awesome_context",
    },
)
```

### `playwright_context_kwargs`
Type `dict`, default `{}`

A dictionary with keyword arguments to be used when creating a new context, if a context
with the name specified in the `playwright_context` meta key does not exist already.
See the section on [browser contexts](#browser-contexts) for more information.

```python
return scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "awesome_context",
        "playwright_context_kwargs": {
            "ignore_https_errors": True,
        },
    },
)
```

### `playwright_include_page`
Type `bool`, default `False`

If `True`, the [Playwright page](https://playwright.dev/python/docs/api/class-page)
that was used to download the request will be available in the callback at
`response.meta['playwright_page']`. If `False` (or unset) the page will be
closed immediately after processing the request.

**Important!**

This meta key is entirely optional, it's NOT necessary for the page to load or for any
asynchronous operation to be performed (specifically, it's NOT necessary for `PageMethod`
objects to be applied). Use it only if you need access to the Page object in the callback
that handles the response.

For more information and important notes see
[Receiving Page objects in callbacks](#receiving-page-objects-in-callbacks).

```python
return scrapy.Request(
    url="https://example.org",
    meta={"playwright": True, "playwright_include_page": True},
)
```

### `playwright_page_event_handlers`
Type `Dict[str, Callable]`, default `{}`

A dictionary of handlers to be attached to page events.
See [Handling page events](#handling-page-events).
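As a quick illustration (a sketch only; see [Handling page events](#handling-page-events)
below for a complete spider), a handler for the `console` page event could be attached
directly via the meta dict:

```python
from playwright.async_api import ConsoleMessage

async def handle_console(message: ConsoleMessage) -> None:
    # Log messages printed by the page's own JavaScript.
    print("Console message:", message.text)

yield scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_page_event_handlers": {"console": handle_console},
    },
)
```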
### `playwright_page_init_callback`
Type `Optional[Union[Callable, str]]`, default `None`

A coroutine function (`async def`) to be invoked for newly created pages.
Called after attaching page event handlers & setting up internal route
handling, before making any request. It receives the Playwright page and the
Scrapy request as positional arguments. Useful for initialization code.
Ignored if the page for the request already exists (e.g. by passing
`playwright_page`).

```python
async def init_page(page, request):
    await page.add_init_script(path="./custom_script.js")

class AwesomeSpider(scrapy.Spider):
    def start_requests(self):
        yield scrapy.Request(
            url="https://httpbin.org/headers",
            meta={
                "playwright": True,
                "playwright_page_init_callback": init_page,
            },
        )
```

**Important!**

`scrapy-playwright` uses `Page.route` & `Page.unroute` internally, avoid using
these methods unless you know exactly what you're doing.

### `playwright_page_methods`
Type `Iterable[PageMethod]`, default `()`

An iterable of [`scrapy_playwright.page.PageMethod`](#pagemethod-class)
objects to indicate actions to be performed on the page before returning the
final response. See [Executing actions on pages](#executing-actions-on-pages).

### `playwright_page`
Type `Optional[playwright.async_api.Page]`, default `None`

A [Playwright page](https://playwright.dev/python/docs/api/class-page) to be used to
download the request. If unspecified, a new page is created for each request.
This key could be used in conjunction with `playwright_include_page` to make a chain of
requests using the same page. For instance:

```python
from playwright.async_api import Page

def start_requests(self):
    yield scrapy.Request(
        url="https://httpbin.org/get",
        meta={"playwright": True, "playwright_include_page": True},
    )

def parse(self, response, **kwargs):
    page: Page = response.meta["playwright_page"]
    yield scrapy.Request(
        url="https://httpbin.org/headers",
        callback=self.parse_headers,
        meta={"playwright": True, "playwright_page": page},
    )
```

### `playwright_page_goto_kwargs`
Type `dict`, default `{}`

A dictionary with keyword arguments to be passed to the page's
[`goto` method](https://playwright.dev/python/docs/api/class-page#page-goto)
when navigating to a URL. The `url` key is ignored if present; the request
URL is used instead.

```python
return scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_page_goto_kwargs": {
            "wait_until": "networkidle",
        },
    },
)
```

### `playwright_security_details`
Type `Optional[dict]`, read only

A dictionary with [security information](https://playwright.dev/python/docs/api/class-response#response-security-details)
about the given response. Only available for HTTPS requests. Can be accessed
in the callback via `response.meta['playwright_security_details']`.

```python
def parse(self, response, **kwargs):
    print(response.meta["playwright_security_details"])
    # {'issuer': 'DigiCert TLS RSA SHA256 2020 CA1', 'protocol': 'TLS 1.3', 'subjectName': 'www.example.org', 'validFrom': 1647216000, 'validTo': 1678838399}
```

### `playwright_suggested_filename`
Type `Optional[str]`, read only

The value of the [`Download.suggested_filename`](https://playwright.dev/python/docs/api/class-download#download-suggested-filename)
attribute when the response is the binary contents of a
[download](https://playwright.dev/python/docs/downloads) (e.g. a PDF file).
Only available for responses that only caused a download. Can be accessed
in the callback via `response.meta['playwright_suggested_filename']`.

```python
def parse(self, response, **kwargs):
    print(response.meta["playwright_suggested_filename"])
    # 'sample_file.pdf'
```
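Since for downloads the binary contents are available as the response body, a callback
could also save the file to disk; a minimal sketch (the `download.bin` fallback name is
arbitrary):

```python
def parse(self, response, **kwargs):
    filename = response.meta.get("playwright_suggested_filename", "download.bin")
    with open(filename, "wb") as fp:
        fp.write(response.body)  # for downloads, response.body holds the downloaded bytes
    yield {"file": filename, "size": len(response.body)}
```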

## Receiving Page objects in callbacks

Specifying a value that evaluates to `True` in the
[`playwright_include_page`](#playwright_include_page) meta key for a
request will result in the corresponding `playwright.async_api.Page` object
being available in the `playwright_page` meta key in the request callback.
In order to be able to `await` coroutines on the provided `Page` object,
the callback needs to be defined as a coroutine function (`async def`).

**Caution**

Use this carefully, and only if you really need to do things with the Page
object in the callback. If pages are not properly closed after they are no longer
necessary, the spider job could get stuck because of the limit set by the
`PLAYWRIGHT_MAX_PAGES_PER_CONTEXT` setting.

```python
from playwright.async_api import Page
import scrapy

class AwesomeSpiderWithPage(scrapy.Spider):
    name = "page_spider"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            callback=self.parse_first,
            meta={"playwright": True, "playwright_include_page": True},
            errback=self.errback_close_page,
        )

    def parse_first(self, response):
        page: Page = response.meta["playwright_page"]
        return scrapy.Request(
            url="https://example.com",
            callback=self.parse_second,
            meta={"playwright": True, "playwright_include_page": True, "playwright_page": page},
            errback=self.errback_close_page,
        )

    async def parse_second(self, response):
        page: Page = response.meta["playwright_page"]
        title = await page.title()  # "Example Domain"
        await page.close()
        return {"title": title}

    async def errback_close_page(self, failure):
        page: Page = failure.request.meta["playwright_page"]
        await page.close()
```

**Notes:**

* When passing `playwright_include_page=True`, make sure pages are always closed
  when they are no longer used. It's recommended to set a Request errback to make
  sure pages are closed even if a request fails (if `playwright_include_page=False`
  pages are automatically closed upon encountering an exception).
  This is important, as open pages count towards the limit set by
  `PLAYWRIGHT_MAX_PAGES_PER_CONTEXT` and crawls could freeze if the limit is reached
  and pages remain open indefinitely.
* Defining callbacks as `async def` is only necessary if you need to `await` things,
  it's NOT necessary if you just need to pass over the Page object from one callback
  to another (see the example above).
* Any network operations resulting from awaiting a coroutine on a Page object
  (`goto`, `go_back`, etc) will be executed directly by Playwright, bypassing the
  Scrapy request workflow (Scheduler, Middlewares, etc).

## Browser contexts

Multiple [browser contexts](https://playwright.dev/python/docs/browser-contexts)
to be launched at startup can be defined via the
[`PLAYWRIGHT_CONTEXTS`](#playwright_contexts) setting.

### Choosing a specific context for a request

Pass the name of the desired context in the `playwright_context` meta key:

```python
yield scrapy.Request(
    url="https://example.org",
    meta={"playwright": True, "playwright_context": "first"},
)
```

### Default context

If a request does not explicitly indicate a context via the `playwright_context`
meta key, it falls back to using a general context called `default`. This `default`
context can also be customized on startup via the `PLAYWRIGHT_CONTEXTS` setting.

### Persistent contexts

Pass a value for the `user_data_dir` keyword argument to launch a context as
persistent. See also [`BrowserType.launch_persistent_context`](https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch-persistent-context).

Note that persistent contexts are launched independently from the main browser
instance, hence keyword arguments passed in the
[`PLAYWRIGHT_LAUNCH_OPTIONS`](#playwright_launch_options)
setting do not apply.
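A minimal sketch of a persistent context definition: since `PLAYWRIGHT_LAUNCH_OPTIONS`
is not applied, launch-type arguments would need to be passed here as well (the profile
path and the chosen arguments are illustrative; the accepted keyword arguments are those
of `BrowserType.launch_persistent_context`):

```python
PLAYWRIGHT_CONTEXTS = {
    "persistent": {
        "user_data_dir": "/path/to/profile",  # this key makes the context persistent
        "headless": False,  # launch-type arguments go here, not in PLAYWRIGHT_LAUNCH_OPTIONS
        "java_script_enabled": True,
    },
}
```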
### Creating contexts while crawling

If the context specified in the `playwright_context` meta key does not exist, it will be created.
You can specify keyword arguments to be passed to
[`Browser.new_context`](https://playwright.dev/python/docs/api/class-browser#browser-new-context)
in the `playwright_context_kwargs` meta key:

```python
yield scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "new",
        "playwright_context_kwargs": {
            "java_script_enabled": False,
            "ignore_https_errors": True,
            "proxy": {
                "server": "http://myproxy.com:3128",
                "username": "user",
                "password": "pass",
            },
        },
    },
)
```

Please note that if a context with the specified name already exists,
that context is used and `playwright_context_kwargs` are ignored.

### Closing contexts while crawling

After [receiving the Page object in your callback](#receiving-page-objects-in-callbacks),
you can access a context through the corresponding [`Page.context`](https://playwright.dev/python/docs/api/class-page#page-context)
attribute, and await [`close`](https://playwright.dev/python/docs/api/class-browsercontext#browser-context-close) on it.

```python
def parse(self, response, **kwargs):
    yield scrapy.Request(
        url="https://example.org",
        callback=self.parse_in_new_context,
        errback=self.close_context_on_error,
        meta={
            "playwright": True,
            "playwright_context": "awesome_context",
            "playwright_include_page": True,
        },
    )

async def parse_in_new_context(self, response):
    page = response.meta["playwright_page"]
    title = await page.title()
    await page.close()
    await page.context.close()
    return {"title": title}

async def close_context_on_error(self, failure):
    page = failure.request.meta["playwright_page"]
    await page.close()
    await page.context.close()
```

### Avoid race conditions & memory leaks when closing contexts
Make sure to close the page before closing the context. See
[this comment](https://github.com/scrapy-plugins/scrapy-playwright/issues/191#issuecomment-1548097114)
in [#191](https://github.com/scrapy-plugins/scrapy-playwright/issues/191)
for more information.

### Maximum concurrent context count

Specify a value for the `PLAYWRIGHT_MAX_CONTEXTS` setting to limit the amount
of concurrent contexts. Use with caution: it's possible to block the whole crawl
if contexts are not closed after they are no longer used (refer to
[this section](#closing-contexts-while-crawling) to dynamically close contexts).
Make sure to define an errback to still close contexts even if there are errors.


## Proxy support

Proxies are supported at the Browser level by specifying the `proxy` key in
the `PLAYWRIGHT_LAUNCH_OPTIONS` setting:

```python
from scrapy import Spider, Request

class ProxySpider(Spider):
    name = "proxy"
    custom_settings = {
        "PLAYWRIGHT_LAUNCH_OPTIONS": {
            "proxy": {
                "server": "http://myproxy.com:3128",
                "username": "user",
                "password": "pass",
            },
        }
    }

    def start_requests(self):
        yield Request("http://httpbin.org/get", meta={"playwright": True})

    def parse(self, response, **kwargs):
        print(response.text)
```

Proxies can also be set at the context level with the `PLAYWRIGHT_CONTEXTS` setting:

```python
PLAYWRIGHT_CONTEXTS = {
    "default": {
        "proxy": {
            "server": "http://default-proxy.com:3128",
            "username": "user1",
            "password": "pass1",
        },
    },
    "alternative": {
        "proxy": {
            "server": "http://alternative-proxy.com:3128",
            "username": "user2",
            "password": "pass2",
        },
    },
}
```

Or passing a `proxy` key when [creating contexts while crawling](#creating-contexts-while-crawling).

See also:
* [`zyte-smartproxy-playwright`](https://github.com/zytedata/zyte-smartproxy-playwright):
  seamless support for [Zyte Smart Proxy Manager](https://www.zyte.com/smart-proxy-manager/)
  in the Node.js version of Playwright.
* the [upstream Playwright for Python section](https://playwright.dev/python/docs/network#http-proxy)
  on HTTP Proxies.

## Executing actions on pages

A sorted iterable (e.g. `list`, `tuple`, `dict`) of `PageMethod` objects
could be passed in the `playwright_page_methods`
[Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta)
key to request methods to be invoked on the `Page` object before returning the final
`Response` to the callback.

This is useful when you need to perform certain actions on a page (like scrolling
down or clicking links) and you want to handle only the final result in your callback.

### `PageMethod` class

#### `scrapy_playwright.page.PageMethod(method: str | callable, *args, **kwargs)`:

Represents a method to be called (and awaited if necessary) on a
`playwright.async_api.Page` object (e.g. "click", "screenshot", "evaluate", etc).
It's also possible to pass callable objects that will be invoked as callbacks
and receive the Playwright Page as an argument.
`method` is the name of the method, `*args` and `**kwargs`
are passed when calling such method. The return value
will be stored in the `PageMethod.result` attribute.

For instance:
```python
def start_requests(self):
    yield Request(
        url="https://example.org",
        meta={
            "playwright": True,
            "playwright_page_methods": [
                PageMethod("screenshot", path="example.png", full_page=True),
            ],
        },
    )

def parse(self, response, **kwargs):
    screenshot = response.meta["playwright_page_methods"][0]
    # screenshot.result contains the image's bytes
```

produces the same effect as:
```python
def start_requests(self):
    yield Request(
        url="https://example.org",
        meta={"playwright": True, "playwright_include_page": True},
    )

async def parse(self, response, **kwargs):
    page = response.meta["playwright_page"]
    screenshot = await page.screenshot(path="example.png", full_page=True)
    # screenshot contains the image's bytes
    await page.close()
```

### Passing callable objects

If a `PageMethod` receives a callable object as its first argument, it will be
called with the page as its first argument. Any additional arguments are passed
to the callable after the page.

```python
async def scroll_page(page: Page) -> str:
    await page.wait_for_selector(selector="div.quote")
    await page.evaluate("window.scrollBy(0, document.body.scrollHeight)")
    await page.wait_for_selector(selector="div.quote:nth-child(11)")
    return page.url


class MySpyder(scrapy.Spider):
    name = "scroll"

    def start_requests(self):
        yield Request(
            url="https://quotes.toscrape.com/scroll",
            meta={
                "playwright": True,
                "playwright_page_methods": [PageMethod(scroll_page)],
            },
        )
```

### Supported Playwright methods

Refer to the [upstream docs for the `Page` class](https://playwright.dev/python/docs/api/class-page)
to see available methods.

### Impact on Response objects

Certain `Response` attributes (e.g. `url`, `ip_address`) reflect the state after the last
action performed on a page. If you issue a `PageMethod` with an action that results in
a navigation (e.g. a `click` on a link), the `Response.url` attribute will point to the
new URL, which might be different from the request's URL.
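For instance (a sketch along the lines of the PDF example further below): clicking the only
link on `example.org` triggers a navigation, so the callback receives the post-navigation URL:

```python
def start_requests(self):
    yield scrapy.Request(
        url="https://example.org",
        meta={
            "playwright": True,
            "playwright_page_methods": [
                PageMethod("click", selector="a"),  # clicking the link navigates away
            ],
        },
    )

def parse(self, response, **kwargs):
    # response.url reflects the page reached after the click,
    # not the original "https://example.org"
    print(response.url)
```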

## Handling page events

A dictionary of Page event handlers can be specified in the `playwright_page_event_handlers`
[Request.meta](https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.Request.meta) key.
Keys are the name of the event to be handled (e.g. `dialog`, `download`, etc).
Values can be either callables or strings (in which case a spider method with the name will be looked up).

Example:

```python
import logging

import scrapy
from playwright.async_api import Dialog, Response as PlaywrightResponse

async def handle_dialog(dialog: Dialog) -> None:
    logging.info(f"Handled dialog with message: {dialog.message}")
    await dialog.dismiss()

class EventSpider(scrapy.Spider):
    name = "event"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta={
                "playwright": True,
                "playwright_page_event_handlers": {
                    "dialog": handle_dialog,
                    "response": "handle_response",
                },
            },
        )

    async def handle_response(self, response: PlaywrightResponse) -> None:
        logging.info(f"Received response with URL {response.url}")
```

See the [upstream `Page` docs](https://playwright.dev/python/docs/api/class-page)
for a list of the accepted events and the arguments passed to their handlers.

### Notes about page event handlers

* Event handlers will remain attached to the page and will be called for
  subsequent downloads using the same page unless they are
  [removed later](https://playwright.dev/python/docs/events#addingremoving-event-listener).
  This is usually not a problem, since by default requests are performed in
  single-use pages.
* Event handlers will process Playwright objects, not Scrapy ones. For example,
  for each Scrapy request/response there will be a matching Playwright
  request/response, but not the other way: background requests/responses to get
  images, scripts, stylesheets, etc are not seen by Scrapy.


## Memory usage extension

The default Scrapy memory usage extension
(`scrapy.extensions.memusage.MemoryUsage`) does not include the memory used by
Playwright because the browser is launched as a separate process. The
scrapy-playwright package provides a replacement extension which also considers
the memory used by Playwright. This extension needs the
[`psutil`](https://pypi.org/project/psutil/) package to work.

Update the [EXTENSIONS](https://docs.scrapy.org/en/latest/topics/settings.html#std-setting-EXTENSIONS)
setting to disable the built-in Scrapy extension and replace it with the one
from the scrapy-playwright package:

```python
# settings.py
EXTENSIONS = {
    "scrapy.extensions.memusage.MemoryUsage": None,
    "scrapy_playwright.memusage.ScrapyPlaywrightMemoryUsageExtension": 0,
}
```

Refer to the
[upstream docs](https://docs.scrapy.org/en/latest/topics/extensions.html#module-scrapy.extensions.memusage)
for more information about supported settings.
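For reference, the limits themselves are controlled by the standard Scrapy memory usage
settings; a sketch with illustrative values:

```python
# settings.py
MEMUSAGE_ENABLED = True     # enabled by default
MEMUSAGE_LIMIT_MB = 2048    # close the spider if memory usage exceeds this limit
MEMUSAGE_WARNING_MB = 1536  # log a warning once memory usage passes this threshold
```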

### Windows support

Just like the [upstream Scrapy extension](https://docs.scrapy.org/en/latest/topics/extensions.html#module-scrapy.extensions.memusage), this custom memory extension does not work
on Windows. This is because the stdlib [`resource`](https://docs.python.org/3/library/resource.html)
module is not available.


## Examples

**Click on a link, save the resulting page as PDF**

```python
class ClickAndSavePdfSpider(scrapy.Spider):
    name = "pdf"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta=dict(
                playwright=True,
                playwright_page_methods={
                    "click": PageMethod("click", selector="a"),
                    "pdf": PageMethod("pdf", path="/tmp/file.pdf"),
                },
            ),
        )

    def parse(self, response, **kwargs):
        pdf_bytes = response.meta["playwright_page_methods"]["pdf"].result
        with open("iana.pdf", "wb") as fp:
            fp.write(pdf_bytes)
        yield {"url": response.url}  # response.url is "https://www.iana.org/domains/reserved"
```

**Scroll down on an infinite scroll page, take a screenshot of the full page**

```python
class ScrollSpider(scrapy.Spider):
    name = "scroll"

    def start_requests(self):
        yield scrapy.Request(
            url="http://quotes.toscrape.com/scroll",
            meta=dict(
                playwright=True,
                playwright_include_page=True,
                playwright_page_methods=[
                    PageMethod("wait_for_selector", "div.quote"),
                    PageMethod("evaluate", "window.scrollBy(0, document.body.scrollHeight)"),
                    PageMethod("wait_for_selector", "div.quote:nth-child(11)"),  # 10 per page
                ],
            ),
        )

    async def parse(self, response, **kwargs):
        page = response.meta["playwright_page"]
        await page.screenshot(path="quotes.png", full_page=True)
        await page.close()
        return {"quote_count": len(response.css("div.quote"))}  # quotes from several pages
```


See the [examples](examples) directory for more.


## Known issues

### No per-request proxy support
Specifying a proxy via the `proxy` Request meta key is not supported.
Refer to the [Proxy support](#proxy-support) section for more information.
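A common workaround is to route requests through browser contexts that define their own
proxy, e.g. by creating a context on the fly as described in
[Creating contexts while crawling](#creating-contexts-while-crawling); a sketch with an
illustrative context name and proxy:

```python
yield scrapy.Request(
    url="https://example.org",
    meta={
        "playwright": True,
        "playwright_context": "proxy-a",
        "playwright_context_kwargs": {
            "proxy": {
                "server": "http://myproxy.com:3128",
                "username": "user",
                "password": "pass",
            },
        },
    },
)
```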

### Unsupported signals
The `headers_received` and `bytes_received` signals are not fired by the
scrapy-playwright download handler.


## Reporting issues

Before opening an issue please make sure the unexpected behavior can only be
observed by using this package and not with standalone Playwright. To do this,
translate your spider code to a reasonably close Playwright script: if the
issue also occurs this way, you should instead report it
[upstream](https://github.com/microsoft/playwright-python).
For instance:

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta=dict(
                playwright=True,
                playwright_page_methods=[
                    PageMethod("screenshot", path="example.png", full_page=True),
                ],
            ),
        )
```

translates roughly to:

```python
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as pw:
        browser = await pw.chromium.launch()
        page = await browser.new_page()
        await page.goto("https://example.org")
        await page.screenshot(path="example.png", full_page=True)
        await browser.close()

asyncio.run(main())
```

### Software versions

Be sure to include which versions of Scrapy, Playwright and scrapy-playwright you are using:

```
$ playwright --version
Version 1.44.0
```

```
$ python -c "import scrapy_playwright; print(scrapy_playwright.__version__)"
0.0.34
```

```
$ scrapy version -v
Scrapy       : 2.11.1
lxml         : 5.1.0.0
libxml2      : 2.12.3
cssselect    : 1.2.0
parsel       : 1.8.1
w3lib        : 2.1.2
Twisted      : 23.10.0
Python       : 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
pyOpenSSL    : 24.0.0 (OpenSSL 3.2.1 30 Jan 2024)
cryptography : 42.0.5
Platform     : Linux-6.5.0-35-generic-x86_64-with-glibc2.35
```

### Reproducible code example

When opening an issue please include a
[Minimal, Reproducible Example](https://stackoverflow.com/help/minimal-reproducible-example)
that shows the reported behavior. In addition, please make the code as self-contained as possible
so an active Scrapy project is not required and the spider can be executed directly from a file with
[`scrapy runspider`](https://docs.scrapy.org/en/latest/topics/commands.html#std-command-runspider).
This usually means including the relevant settings in the spider's
[`custom_settings`](https://docs.scrapy.org/en/latest/topics/settings.html#settings-per-spider)
attribute:

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    custom_settings = {
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        },
    }

    def start_requests(self):
        yield scrapy.Request(
            url="https://example.org",
            meta={"playwright": True},
        )
```

#### Minimal code
Please make the effort to reduce the code to the minimum that still displays the issue.
It is very rare that a complete project (including middlewares, pipelines, item processing, etc)
is really needed to reproduce an issue. Reports that do not show an actual debugging attempt
will not be considered.

### Logs and stats

Logs for spider jobs displaying the issue in detail are extremely useful
for understanding possible bugs. Include lines before and after the problem,
not just isolated tracebacks.
Job stats displayed at the end of the job are also important.


## Frequently Asked Questions

See the [FAQ](docs/faq.md) document.


## Deprecation policy

Deprecated features will be supported for at least six months
following the release that deprecated them. After that, they
may be removed at any time. See the [changelog](docs/changelog.md)
for more information about deprecations and removals.
    "bugtrack_url": null,
    "license": null,
    "summary": "Patchright Integration For Scrapy",
    "version": "0.0.1",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "814f1159c6a343327361a1a49cd1b5bbee85fa88975761de9a0324f797a6f3c5",
                "md5": "9f3c3025a7d81fc42128377a01ac8da4",
                "sha256": "839d9726e3e103dd2693243a64d273fbd8a74e0adf5622d6a2e3d72e82d12106"
            },
            "downloads": -1,
            "filename": "scrapy_patchright-0.0.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "9f3c3025a7d81fc42128377a01ac8da4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.9",
            "size": 26598,
            "upload_time": "2024-12-21T11:18:25",
            "upload_time_iso_8601": "2024-12-21T11:18:25.385656Z",
            "url": "https://files.pythonhosted.org/packages/81/4f/1159c6a343327361a1a49cd1b5bbee85fa88975761de9a0324f797a6f3c5/scrapy_patchright-0.0.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e238b81fc9894c1d9ba993c8173e73e05cf5b0e44fe61fb9308b0719e496372e",
                "md5": "5bedeeeac10c72016da69795a1816aa2",
                "sha256": "af4e802d44bac1375063bcff74b9de223ecf2aad115f48911ce48d24369d26b5"
            },
            "downloads": -1,
            "filename": "scrapy_patchright-0.0.1.tar.gz",
            "has_sig": false,
            "md5_digest": "5bedeeeac10c72016da69795a1816aa2",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.9",
            "size": 35525,
            "upload_time": "2024-12-21T11:18:44",
            "upload_time_iso_8601": "2024-12-21T11:18:44.806570Z",
            "url": "https://files.pythonhosted.org/packages/e2/38/b81fc9894c1d9ba993c8173e73e05cf5b0e44fe61fb9308b0719e496372e/scrapy_patchright-0.0.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-21 11:18:44",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "scrapy-patchright"
}
        