# AICS Dask Utils

[![Build Status](https://github.com/AllenCellModeling/aics_dask_utils/workflows/Build%20Master/badge.svg)](https://github.com/AllenCellModeling/aics_dask_utils/actions)
[![Documentation](https://github.com/AllenCellModeling/aics_dask_utils/workflows/Documentation/badge.svg)](https://AllenCellModeling.github.io/aics_dask_utils)
[![Code Coverage](https://codecov.io/gh/AllenCellModeling/aics_dask_utils/branch/master/graph/badge.svg)](https://codecov.io/gh/AllenCellModeling/aics_dask_utils)

Documentation related to Dask, Distributed, and related packages.
Utility functions commonly used by AICS projects.

---

## Features
* Distributed handler to manage various debugging or cluster configurations
* Documentation on example cluster deployments

## Basics
Before we jump into quick starts, there are some basic definitions to understand.

#### Task
A task is a single static function call to be processed. Simple enough. However,
relevant to AICS is that when using `aicsimageio` (and/or `dask.array.Array`), your
image (or `dask.array.Array`) is split up into _many_ tasks. The exact count depends
on the image reader and the size of the file you are reading, but in general it is
safe to assume that each image you read is split into many thousands of tasks. To see
how many tasks your image is split into, you can compute either of the following (a
sketch follows the list):

1. Pseudo-code: `sum(2 * size(dim) for dim in dims if dim not in ["Y", "X"])`
2. Dask graph length: `len(AICSImage.dask_data.__dask_graph__())`
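
For example, to check the graph length on a real file (a minimal sketch, assuming
`aicsimageio` is installed; `my_image.ome.tiff` is a placeholder path):

```python
from aicsimageio import AICSImage

# Placeholder path, purely for illustration
img = AICSImage("my_image.ome.tiff")

# Count the tasks in the dask graph backing the lazy image data
n_tasks = len(img.dask_data.__dask_graph__())
print(f"Image is split into {n_tasks} tasks")
```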

#### Map
Apply a given function across the provided iterables, using their items as the
parameters to the function. Given `lambda x: x + 1` and `[1, 2, 3]`, the result of
`map(func, *iterables)` in this case would be `[2, 3, 4]`. Usually, a `map` operation
hands you back an iterable of `future` objects. The results of the map operation are
not guaranteed to arrive in the order of the input iterable: operations start as
resources become available, and item-to-item variance can change the completion
order.
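
The same behavior can be observed with the standard library alone. A minimal sketch
using `concurrent.futures` (`increment` is a stand-in for your own function):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def increment(x):
    return x + 1

with ThreadPoolExecutor() as pool:
    # One future per input item, mirroring a map over the iterable
    futures = [pool.submit(increment, x) for x in [1, 2, 3]]

    # as_completed yields futures as they finish, in completion order,
    # which is not necessarily the submission order
    for future in as_completed(futures):
        print(future.result())
```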

#### Future
An object that will become available but is currently not defined. There is no
guarantee that a future resolves to a valid result rather than an error, so you
should handle errors once the future's state has resolved (usually this means after a
`gather` operation).
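
As a minimal sketch of resolving futures and handling errors, again with the standard
library (`might_fail` is a made-up function that errors on one input):

```python
from concurrent.futures import ThreadPoolExecutor

def might_fail(x):
    # Hypothetical task that errors on one input
    if x == 2:
        raise ValueError("bad input")
    return x + 1

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(might_fail, x) for x in [1, 2, 3]]

    for future in futures:
        try:
            # result() re-raises any exception captured by the future
            print(future.result())
        except ValueError as err:
            print(f"Task failed: {err}")
```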

#### Gather
Block the process from moving forward until all futures are resolved. In terms of
control flow, you could generate thousands of futures and keep moving on locally
while those futures slowly resolve, but if you ever want a hard stop that waits for
some set of futures to complete, you need to gather them.
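
For example, using `distributed` directly (a minimal sketch; assumes running a local
cluster is acceptable):

```python
from distributed import Client

client = Client()  # spins up a local cluster by default

# Kick off the work; map returns immediately with futures
futures = client.map(lambda x: x + 1, range(100))

# ... do other local work while the tasks resolve ...

# gather blocks until every future has resolved, then returns the results
results = client.gather(futures)

client.close()
```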

##### Other Comments
Dask tries to mirror the standard library `concurrent.futures` wherever possible,
which is what allows this library to provide simple wrappers around Dask for easy
debugging: we are simply swapping out `distributed.Client.map` for
`concurrent.futures.ThreadPoolExecutor.map`, for example. If at any point in your code
you don't want to use `dask` for one reason or another, it is equally valid to use
`concurrent.futures.ThreadPoolExecutor` or `concurrent.futures.ProcessPoolExecutor`.
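
For instance, the same mapping pattern works without `dask` at all (a minimal sketch;
`increment` is a stand-in for your own function, and a named function is used because
`ProcessPoolExecutor` cannot pickle lambdas):

```python
from concurrent.futures import ProcessPoolExecutor

def increment(x):
    return x + 1

if __name__ == "__main__":
    # Same map call shape as distributed.Client.map, no scheduler required
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(increment, [1, 2, 3]))
    print(results)  # [2, 3, 4]
```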

### Basic Mapping with Distributed Handler
If you have an iterable (or iterables) that would result in fewer than hundreds of
thousands of tasks, you can simply use the normal `map` provided by
`DistributedHandler.client`.

**Important Note:** Notice, "... iterable that would _result_ in fewer than hundreds
of thousands of tasks...". This matters because of what happens when you `map` over a
thousand image paths, each of which spawns an `AICSImage` object: each one adds
thousands more tasks for the scheduler to complete. This will overwhelm the
scheduler, and you should look to
[Large Iterable Batching](#large-iterable-batching) instead.

```python
from aics_dask_utils import DistributedHandler

# `None` address provided means use local machine threads
with DistributedHandler(None) as handler:
    futures = handler.client.map(
        lambda x: x + 1,
        [1, 2, 3]
    )

    results = handler.gather(futures)

from distributed import LocalCluster
cluster = LocalCluster()

# Actual address provided means use the dask scheduler
with DistributedHandler(cluster.scheduler_address) as handler:
    futures = handler.client.map(
        lambda x: x + 1,
        [1, 2, 3]
    )

    results = handler.gather(futures)
```

### Large Iterable Batching
If you have an iterable (or iterables) that would result in more than hundreds of
thousands of tasks, you should use `handler.batched_map` to reduce the load on the
client. This batches your requests rather than sending them all at once.

```python
from aics_dask_utils import DistributedHandler

# `None` address provided means use local machine threads
with DistributedHandler(None) as handler:
    results = handler.batched_map(
        lambda x: x + 1,
        range(int(1e9))  # 1 billion; range requires an int
    )

from distributed import LocalCluster
cluster = LocalCluster()

# Actual address provided means use the dask scheduler
with DistributedHandler(cluster.scheduler_address) as handler:
    results = handler.batched_map(
        lambda x: x + 1,
        range(int(1e9))  # 1 billion; range requires an int
    )
```

**Note:** Notice that there is no `handler.gather` call after `batched_map`. This is
because `batched_map` gathers results at the end of each batch rather than simply
returning their futures.
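
To make the batching idea concrete, here is a rough, hypothetical illustration of
chunking an iterable; this is not the library's actual implementation:

```python
from itertools import islice

def batched(iterable, batch_size):
    # Yield successive lists of at most batch_size items
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Conceptually: map each batch, gather its results, then move to the next,
# so the scheduler only ever sees one batch of tasks at a time.
for batch in batched(range(10), 4):
    print(batch)  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```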

## Installation
**Stable Release:** `pip install aics_dask_utils`<br>
**Development Head:** `pip install git+https://github.com/AllenCellModeling/aics_dask_utils.git`

## Documentation
For full package documentation please visit
[AllenCellModeling.github.io/aics_dask_utils](https://AllenCellModeling.github.io/aics_dask_utils).

## Development
See [CONTRIBUTING.md](CONTRIBUTING.md) for information related to developing the code.

## Additional Comments
This README, provided tooling, and documentation are not meant to be all-encompassing
of the various operations you can do with `dask` and other similar computing systems.
For further reading, go to [dask.org](https://dask.org/).

**Free software: Allen Institute Software License**
