aiodataloader 0.4.0 (PyPI)

- **Summary**: Asyncio DataLoader implementation for Python
- **Upload time**: 2023-02-23 12:43:06
- **Requires Python**: >=3.7
- **Keywords**: aiodataloader, concurrent, deferred, future
- **Homepage**: https://github.com/syrusakbary/aiodataloader

# Asyncio DataLoader

DataLoader is a generic utility to be used as part of your application's data
fetching layer to provide a simplified and consistent API over various remote
data sources such as databases or web services via batching and caching.

[![PyPI version](https://badge.fury.io/py/aiodataloader.svg)](https://badge.fury.io/py/aiodataloader)
![Test Status](https://github.com/syrusakbary/aiodataloader/actions/workflows/test.yml/badge.svg)
![Lint Status](https://github.com/syrusakbary/aiodataloader/actions/workflows/lint.yml/badge.svg)
[![Coverage Status](https://coveralls.io/repos/syrusakbary/aiodataloader/badge.svg?branch=master&service=github)](https://coveralls.io/github/syrusakbary/aiodataloader?branch=master)

A port of the "Loader" API originally developed by [@schrockn][] at Facebook in
2010 as a simplifying force to coalesce the sundry key-value store back-end
APIs which existed at the time. At Facebook, "Loader" became one of the
implementation details of the "Ent" framework, a privacy-aware data entity
loading and caching layer within web server product code. This ultimately became
the underpinning for Facebook's GraphQL server implementation and type
definitions.

Asyncio DataLoader is a Python port of the original JavaScript [DataLoader][]
implementation. DataLoader is often used when implementing a [GraphQL][] service,
though it is also broadly useful in other situations.


## Getting Started

First, install DataLoader using pip.

```sh
pip install aiodataloader
```

To get started, create a `DataLoader`. Each `DataLoader` instance represents a
unique cache. Typically instances are created per request when used within a
web server like [Sanic][], since different users may be permitted to see
different things.

> Note: DataLoader assumes an asyncio environment with `async/await`
  support (Python 3.5+); this package requires Python 3.7+.


## Batching

Batching is not an advanced feature; it's DataLoader's primary feature.
Create loaders by providing a batch loading function.

```python
from aiodataloader import DataLoader

class UserLoader(DataLoader):
    async def batch_load_fn(self, keys):
        return await my_batch_get_users(keys)

user_loader = UserLoader()
```

A batch loading function accepts an Iterable of keys and returns a Future which
resolves to a List of values[<sup>*</sup>](#batch-function).

Then load individual values from the loader. DataLoader will coalesce all
individual loads which occur within a single frame of execution (a single tick
of the event loop) and then call your batch function with all requested keys.

```python
user1_future = user_loader.load(1)
user2_future = user_loader.load(2)

user1 = await user1_future
user2 = await user2_future

user1_invitedby = user_loader.load(user1.invited_by_id)
user2_invitedby = user_loader.load(user2.invited_by_id)

print("User 1 was invited by", await user1_invitedby)
print("User 2 was invited by", await user2_invitedby)
```

A naive application may have issued four round-trips to a backend for the
required information, but with DataLoader this application will make at most
two.

DataLoader allows you to decouple unrelated parts of your application without
sacrificing the performance of batch data-loading. While the loader presents an
API that loads individual values, all concurrent requests will be coalesced and
presented to your batch loading function. This allows your application to safely
distribute data fetching requirements throughout your application and maintain
minimal outgoing data requests.
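
To see this coalescing directly, here is a small runnable sketch (the `EchoLoader` name and the printed output are illustrative): two loads issued concurrently in the same tick reach the batch function as a single batch.

```python
import asyncio
from aiodataloader import DataLoader

class EchoLoader(DataLoader):
    async def batch_load_fn(self, keys):
        print("batch:", keys)  # both keys arrive together
        return keys

async def main():
    loader = EchoLoader()
    # Two independent loads scheduled in the same tick of the event loop:
    a, b = await asyncio.gather(loader.load(1), loader.load(2))
    print(a, b)

asyncio.run(main())
# batch: [1, 2]
# 1 2
```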

#### Batch Function

A batch loading function accepts a List of keys, and returns a Future which
resolves to a List of values. There are a few constraints that must be upheld:

 * The List of values must be the same length as the List of keys.
 * Each index in the List of values must correspond to the same index in the List of keys.

For example, if your batch function was provided the List of keys: `[ 2, 9, 6, 1 ]`,
and loading from a back-end service returned the values:

```python
{ 'id': 9, 'name': 'Chicago' }
{ 'id': 1, 'name': 'New York' }
{ 'id': 2, 'name': 'San Francisco' }
```

Our back-end service returned results in a different order than we requested, likely
because it was more efficient for it to do so. Also, it omitted a result for key `6`,
which we can interpret as no value existing for that key.

To uphold the constraints of the batch function, it must return a List of values
the same length as the List of keys, and re-order them to ensure each index aligns
with the original keys `[ 2, 9, 6, 1 ]`:

```python
[
  { 'id': 2, 'name': 'San Francisco' },
  { 'id': 9, 'name': 'Chicago' },
  None,
  { 'id': 1, 'name': 'New York' }
]
```
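
A batch function satisfying both constraints might look like the following sketch, where `fetch_cities` stands in for the hypothetical back-end call above:

```python
async def batch_load_cities(keys):
    rows = await fetch_cities(keys)  # hypothetical back-end call
    cities_by_id = {row['id']: row for row in rows}
    # Re-align results with the requested keys; missing keys yield None.
    return [cities_by_id.get(key) for key in keys]
```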


## Caching

DataLoader provides a memoization cache for all loads which occur in a single
request to your application. After `.load()` is called once with a given key,
the resulting value is cached to eliminate redundant loads.

In addition to relieving pressure on your data storage, caching results per-request
also creates fewer objects which may relieve memory pressure on your application:

```python
user_future1 = user_loader.load(1)
user_future2 = user_loader.load(1)

assert user_future1 is user_future2  # the same cached Future is returned
```

#### Caching per-Request

DataLoader caching *does not* replace Redis, Memcache, or any other shared
application-level cache. DataLoader is first and foremost a data loading mechanism,
and its cache only serves the purpose of not repeatedly loading the same data in
the context of a single request to your Application. To do this, it maintains a
simple in-memory memoization cache (more accurately: `.load()` is a memoized function).

Avoid sharing a single DataLoader instance across requests from different users,
which could result in cached data from one request incorrectly appearing in
another. Typically, DataLoader instances are created when a request begins, and
are discarded once the request ends.

For example, when using with [Sanic][]:

```python
def create_loaders(auth_token):
    return {
        'users': UserLoader(),  # a fresh loader per request
    }


app = Sanic(__name__)

@app.route("/")
async def test(request):
    auth_token = authenticate_user(request)
    loaders = create_loaders(auth_token)
    return render_page(request, loaders)
```

#### Clearing Cache

In certain uncommon cases, clearing the request cache may be necessary.

The most common case where clearing the loader's cache is necessary is after
a mutation or update within the same request, when a cached value could be out of
date and future loads should not use any possibly-cached value.

Here's a simple example using SQL UPDATE to illustrate.

```python
# Request begins...
user_loader = ...

# And a value happens to be loaded (and cached).
user4 = await user_loader.load(4)

# A mutation occurs, invalidating what might be in cache.
await sql_run('UPDATE users SET username="zuck" WHERE id=4')
user_loader.clear(4)

# Later the value is loaded again, so the mutated data appears.
user4 = await user_loader.load(4)

# Request completes.
```

#### Caching Exceptions

If a batch load fails (that is, a batch function raises an exception or returns a
rejected Future), then the requested values will not be cached. However if a batch
function returns an `Exception` instance for an individual value, that `Exception` will
be cached to avoid frequently loading the same `Exception`.
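
For example, a sketch of a batch function that reports per-key failures without failing the whole batch (`fetch_users` is a hypothetical back-end call):

```python
async def batch_load_users(keys):
    rows = await fetch_users(keys)  # hypothetical back-end call
    users_by_id = {row['id']: row for row in rows}
    # Return (do not raise) an Exception instance for keys with no value;
    # only that key's Future is rejected, and the error is cached per key.
    return [
        users_by_id.get(key, ValueError(f"no user with id {key}"))
        for key in keys
    ]
```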

In some circumstances you may wish to clear the cache for these individual exceptions:

```python
try:
    await user_loader.load(1)
except Exception as e:
    user_loader.clear(1)
    raise
```

#### Disabling Cache

In certain uncommon cases, a DataLoader which *does not* cache may be desirable.
Calling `DataLoader(batch_fn, cache=False)` will ensure that every
call to `.load()` will produce a *new* Future, and requested keys will not be
saved in memory.

However, when the memoization cache is disabled, your batch function will
receive a list of keys which may contain duplicates! One key will appear for
each call to `.load()`, and your batch loader should provide a value for every
instance of the requested key.

For example:

```python
from asyncio import gather

class MyLoader(DataLoader):
    cache = False
    async def batch_load_fn(self, keys):
        print(keys)
        return keys

my_loader = MyLoader()

# Await the loads (e.g. inside a coroutine) so the batch dispatches:
await gather(
    my_loader.load('A'),
    my_loader.load('B'),
    my_loader.load('A'),
)

# > [ 'A', 'B', 'A' ]
```

More complex cache behavior can be achieved by calling `.clear()` or `.clear_all()`
rather than disabling the cache completely. For example, this DataLoader will
provide unique keys to a batch function due to the memoization cache being
enabled, but will immediately clear its cache when the batch function is called
so later requests will load new values.

```python
class MyLoader(DataLoader):
    async def batch_load_fn(self, keys):
        # The cache is cleared on every batch, so later requests load fresh values.
        self.clear_all()
        return keys
```


## API

#### class DataLoader

DataLoader creates a public API for loading data from a particular
data back-end with unique keys such as the `id` column of a SQL table or
document name in a MongoDB database, given a batch loading function.

Each `DataLoader` instance contains a unique memoized cache. Use caution when
using it in long-lived applications or those which serve many users with different
access permissions, and consider creating a new instance per web request.

##### `DataLoader(batch_load_fn, **options)`

Create a new `DataLoader` given a batch loading function and options.

- *batch_load_fn*: An async function (coroutine) which accepts a List of keys
  and returns a Future which resolves to a List of values.

- *options*:

  - *batch*: Default `True`. Set to `False` to disable batching, instead
    immediately invoking `batch_load_fn` with a single load key.

  - *max_batch_size*: Default: no limit. Limits the number of keys that get
    passed in to the `batch_load_fn` at once.

  - *cache*: Default `True`. Set to `False` to disable memoization caching,
    instead creating a new Future and a new key in the `batch_load_fn` for every
    load of the same key.

  - *cache_key_fn*: A function to produce a cache key for a given load key.
    Defaults to `lambda key: key`. Useful to provide when Python objects are keys
    and two similarly shaped objects should be considered equivalent (see the
    sketch after this list).

  - *cache_map*: An instance of [dict][] (or an object with a similar API) to be
    used as the underlying cache for this loader. Default `{}`.
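
As a sketch of the *cache_key_fn* option (using the option name documented above; the JSON serialization is just one illustrative strategy):

```python
import json

# Serialize dict keys into a stable string so that two structurally
# equal dicts map to the same cache entry.
user_loader = UserLoader(
    cache_key_fn=lambda key: json.dumps(key, sort_keys=True),
)

future1 = user_loader.load({'id': 1, 'fields': ['name']})
future2 = user_loader.load({'fields': ['name'], 'id': 1})
assert future1 is future2  # same cache entry
```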

##### `load(key)`

Loads a key, returning a `Future` for the value represented by that key.

- *key*: A key value to load.

##### `load_many(keys)`

Loads multiple keys, returning a Future for a list of values:

```python
a, b = await my_loader.load_many(['a', 'b'])
```

This is equivalent to the more verbose:

```python
from asyncio import gather
a, b = await gather(
    my_loader.load('a'),
    my_loader.load('b')
)
```

- *keys*: A list of key values to load.

##### `clear(key)`

Clears the value at `key` from the cache, if it exists. Returns itself for
method chaining.

- *key*: A key value to clear.

##### `clear_all()`

Clears the entire cache. To be used when some event results in unknown
invalidations across this particular `DataLoader`. Returns itself for
method chaining.

##### `prime(key, value)`

Primes the cache with the provided key and value. If the key already exists, no
change is made. (To forcefully prime the cache, clear the key first with
`loader.clear(key).prime(key, value)`.) Returns itself for method chaining.
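
For example:

```python
# Seed the cache with a value obtained elsewhere (e.g. from another query):
user_loader.prime(4, user4)

# prime() makes no change if key 4 is already cached.
# To overwrite a possibly-cached value, clear the key first:
user_loader.clear(4).prime(4, user4)
```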


## Using with GraphQL

DataLoader pairs nicely with [GraphQL][]. GraphQL fields are
designed to be stand-alone functions. Without a caching or batching mechanism,
it's easy for a naive GraphQL server to issue new database requests each time a
field is resolved.

Consider the following GraphQL request:

```
{
  me {
    name
    bestFriend {
      name
    }
    friends(first: 5) {
      name
      bestFriend {
        name
      }
    }
  }
}
```

Naively, if `me`, `bestFriend` and `friends` each need to request the backend,
there could be at most 13 database requests!

When using DataLoader with [graphene][], we could define the `User` type with clearer code and
at most 4 database requests, and possibly fewer if there are cache hits.

```python
class User(graphene.ObjectType):
    name = graphene.String()
    best_friend = graphene.Field(lambda: User)
    friends = graphene.List(lambda: User)

    def resolve_best_friend(self, args, context, info):
        return user_loader.load(self.best_friend_id)

    def resolve_friends(self, args, context, info):
        return user_loader.load_many(self.friend_ids)
```


## Common Patterns

### Creating a new DataLoader per request.

In many applications, a web server using DataLoader serves requests to many
different users with different access permissions. It may be dangerous to use
one cache across many users, so it is encouraged to create a new DataLoader
per request:

```python
def create_loaders(auth_token):
    return {
        'users': DataLoader(lambda ids: gen_users(auth_token, ids)),
        'cdn_urls': DataLoader(lambda raw_urls: gen_cdn_urls(auth_token, raw_urls)),
        'stories': DataLoader(lambda keys: gen_stories(auth_token, keys)),
    }

# When handling an incoming web request:
loaders = create_loaders(request.query.auth_token)

# Then, within application logic:
user = await loaders['users'].load(4)
pic = await loaders['cdn_urls'].load(user.raw_pic_url)
```

Creating a dict where each key is a `DataLoader` is one common pattern which
provides a single value to pass around to code which needs to perform
data loading, such as part of the `root_value` in a [GraphQL][] request.

### Loading by alternative keys.

Occasionally, some kind of value can be accessed in multiple ways. For example,
perhaps a "User" type can be loaded not only by an "id" but also by a "username"
value. If the same user is loaded by both keys, then it may be useful to fill
both caches when a user is loaded from either source:

```python
async def user_by_id_batch_fn(ids):
    users = await gen_users_by_id(ids)
    for user in users:
        username_loader.prime(user.username, user)
    return users

user_by_id_loader = DataLoader(user_by_id_batch_fn)

async def username_batch_fn(names):
    users = await gen_usernames(names)
    for user in users:
        user_by_id_loader.prime(user.id, user)
    return users

username_loader = DataLoader(username_batch_fn)
```


## Custom Caches

DataLoader can optionally be provided a custom dict instance to use as its
memoization cache. More specifically, any object that implements the methods `get()`,
`set()`, `delete()` and `clear()` can be provided. This allows for custom mappings
which implement various [cache algorithms][] to be provided. By default,
DataLoader uses the standard [dict][], which simply grows until the DataLoader
is released. The default is appropriate when requests to your application are
short-lived.
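
For instance, a minimal LRU mapping could be passed as *cache_map*. This is a sketch: the `LRUCacheMap` class and its `max_size` parameter are illustrative, implementing the `get()`/`set()`/`delete()`/`clear()` interface described above.

```python
from collections import OrderedDict

class LRUCacheMap:
    def __init__(self, max_size=256):
        self.max_size = max_size
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used

    def delete(self, key):
        self._data.pop(key, None)

    def clear(self):
        self._data.clear()

user_loader = UserLoader(cache_map=LRUCacheMap(max_size=1024))
```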



## Video Source Code Walkthrough

**DataLoader Source Code Walkthrough (YouTube):**

<a href="https://youtu.be/OQTnXNCDywA" target="_blank" alt="DataLoader Source Code Walkthrough"><img src="https://img.youtube.com/vi/OQTnXNCDywA/0.jpg" /></a>


[@schrockn]: https://github.com/schrockn
[DataLoader]: https://github.com/graphql/dataloader
[GraphQL]: https://graphql.org
[dict]: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
[graphene]: https://github.com/graphql-python/graphene
[graphql-core]: https://github.com/graphql-python/graphql-core
[cache algorithms]: https://en.wikipedia.org/wiki/Cache_algorithms
[Sanic]: https://sanic.readthedocs.io/en/latest/

            
