dm-clrs

- Name: dm-clrs
- Version: 2.0.0
- Home page: https://github.com/deepmind/clrs
- Summary: The CLRS Algorithmic Reasoning Benchmark.
- Upload time: 2024-07-18 10:27:51
- Author: DeepMind
- Requires Python: >=3.6
- License: Apache 2.0
- Keywords: python, machine learning
- Requirements: none recorded
# The CLRS Algorithmic Reasoning Benchmark

Learning representations of algorithms is an emerging area of machine learning,
seeking to bridge concepts from neural networks with classical algorithms. The
CLRS Algorithmic Reasoning Benchmark (CLRS) consolidates and extends previous
work toward evaluating algorithmic reasoning by providing a suite of
implementations of classical algorithms. These algorithms have been selected
from the third edition of the standard *Introduction to Algorithms* by Cormen,
Leiserson, Rivest and Stein.

## Getting started

The CLRS Algorithmic Reasoning Benchmark can be installed with pip, either from
PyPI:

```shell
pip install dm-clrs
```

or directly from GitHub (updated more frequently):

```shell
pip install git+https://github.com/google-deepmind/clrs.git
```

You may prefer to install it in a virtual environment if any requirements
clash with your Python installation:

```shell
python3 -m venv clrs_env
source clrs_env/bin/activate
pip install git+https://github.com/google-deepmind/clrs.git
```

Once installed, you can run our example baseline model:

```shell
python3 -m clrs.examples.run
```

If this is the first run of the example, the dataset will be downloaded and
stored in the directory given by `--dataset_path` (default `/tmp/CLRS30`).
Alternatively, you can download and extract the dataset manually from
https://storage.googleapis.com/dm-clrs/CLRS30_v1.0.0.tar.gz.

## Algorithms as graphs

CLRS implements the selected algorithms in an idiomatic way, aligning as
closely as possible with the original CLRS 3rd-edition pseudocode. By controlling the
input data distribution to conform to the preconditions we are able to
automatically generate input/output pairs. We additionally provide trajectories
of "hints" that expose the internal state of each algorithm, to both optionally
simplify the learning challenge and to distinguish between different algorithms
that solve the same overall task (e.g. sorting).

In the most generic sense, algorithms can be seen as manipulating sets of
objects, along with any relations between them (which can themselves be
decomposed into binary relations). Accordingly, we study all of the algorithms
in this benchmark using a graph representation. Where objects obey a stricter
ordered structure (e.g. arrays or rooted trees), we impose this ordering
through the inclusion of predecessor links.
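As a toy illustration of this idea (a hypothetical sketch, not the library's actual encoding), the ordering of a length-4 array can be imposed on a boolean adjacency matrix via predecessor links:

```python
import numpy as np

# Hypothetical sketch (not the library's actual encoding): a length-4 array
# represented as a graph whose ordering is imposed by predecessor links.
n = 4
adj = np.eye(n, dtype=bool)    # self-loop for each array position
for i in range(1, n):
  adj[i, i - 1] = True         # node i links back to its predecessor i-1

# adj is now an n x n connectivity pattern encoding the array's order.
```

The asymmetry of the predecessor links (i → i-1 but not i-1 → i) is what lets a graph model recover the positional structure of the array.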

## How it works

For each algorithm, we provide a canonical set of *train*, *eval* and *test*
trajectories for benchmarking out-of-distribution generalization.

|       | Trajectories    | Problem Size |
|-------|-----------------|--------------|
| Train | 1000            | 16           |
| Eval  | 32 x multiplier | 16           |
| Test  | 32 x multiplier | 64           |


Here, "problem size" refers to e.g. the length of an array or number of nodes in
a graph, depending on the algorithm. "multiplier" is an algorithm-specific
factor that increases the number of available *eval* and *test* trajectories
to compensate for the paucity of evaluation signals. "multiplier" is 1 for all
algorithms except:

- Maximum subarray (Kadane), for which "multiplier" is 32.
- Quickselect, minimum, binary search, string matchers (both naïve and KMP),
  and segment intersection, for which "multiplier" is 64.
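Putting the table and the multipliers together, the split sizes can be computed as follows (an illustrative helper; the dictionary keys are descriptive labels, not necessarily the exact dataset identifiers):

```python
# Illustrative multipliers, per the list above; keys are descriptive labels.
MULTIPLIERS = {
    'maximum_subarray_kadane': 32,
    'quickselect': 64, 'minimum': 64, 'binary_search': 64,
    'naive_string_matcher': 64, 'kmp_matcher': 64,
    'segment_intersection': 64,
}

def split_sizes(algorithm):
  """Trajectory counts per split, following the table above."""
  m = MULTIPLIERS.get(algorithm, 1)
  return {'train': 1000, 'eval': 32 * m, 'test': 32 * m}
```

For example, binary search gets 32 × 64 = 2048 eval trajectories, while BFS (multiplier 1) gets 32.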

The trajectories can be used like so:

```python
train_ds, num_samples, spec = clrs.create_dataset(
    folder='/tmp/CLRS30', algorithm='bfs',
    split='train', batch_size=32)

for i, feedback in enumerate(train_ds.as_numpy_iterator()):
  if i == 0:
    model.init(feedback.features, initial_seed)
  loss = model.feedback(rng_key, feedback)
```

Here, `feedback` is a `namedtuple` with the following structure:

```python
Feedback = collections.namedtuple('Feedback', ['features', 'outputs'])
Features = collections.namedtuple('Features', ['inputs', 'hints', 'lengths'])
```

where the content of `Features` can be used for training and `outputs` is
reserved for evaluation. Each field of the tuple is an `ndarray` with a leading
batch dimension. Because `hints` are provided for the full algorithm trajectory,
these contain an additional time dimension padded up to the maximum length
`max(T)` of any trajectory within the dataset. The `lengths` field specifies the
true length `t <= max(T)` for each trajectory, which can be used e.g. for loss
masking.
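A masked loss of this kind can be sketched with plain NumPy (dummy per-step losses; the real training loop computes losses per hint):

```python
import numpy as np

# Hedged sketch of loss masking using `lengths` with dummy per-step losses.
batch_size, max_t = 2, 5
hint_losses = np.ones((batch_size, max_t))  # shape: batch x max(T)
lengths = np.array([3, 5])                  # true lengths t <= max(T)

# Boolean mask: True for time steps before each trajectory's true length.
mask = np.arange(max_t)[None, :] < lengths[:, None]

# Average the per-step loss over valid (unpadded) steps only.
masked_loss = (hint_losses * mask).sum() / mask.sum()
```

Without the mask, padded steps beyond each trajectory's true length would dilute the loss for short trajectories.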

The `examples` directory contains a full working Graph Neural Network (GNN)
example using JAX and the DeepMind JAX Ecosystem of libraries. It allows
training of multiple algorithms on a single processor, as described in
["A Generalist Neural Algorithmic Learner"](https://arxiv.org/abs/2209.11142).

## What we provide

### Algorithms

Our initial CLRS-30 benchmark includes the following 30 algorithms. We aim to
support more algorithms in the future.

- Sorting
  - Insertion sort
  - Bubble sort
  - Heapsort (Williams, 1964)
  - Quicksort (Hoare, 1962)
- Searching
  - Minimum
  - Binary search
  - Quickselect (Hoare, 1961)
- Divide and conquer
  - Maximum subarray (Kadane's variant) (Bentley, 1984)
- Greedy
  - Activity selection (Gavril, 1972)
  - Task scheduling (Lawler, 1985)
- Dynamic programming
  - Matrix chain multiplication
  - Longest common subsequence
  - Optimal binary search tree (Aho et al., 1974)
- Graphs
  - Depth-first search (Moore, 1959)
  - Breadth-first search (Moore, 1959)
  - Topological sorting (Knuth, 1973)
  - Articulation points
  - Bridges
  - Kosaraju's strongly connected components algorithm (Aho et al., 1974)
  - Kruskal's minimum spanning tree algorithm (Kruskal, 1956)
  - Prim's minimum spanning tree algorithm (Prim, 1957)
  - Bellman-Ford algorithm for single-source shortest paths (Bellman, 1958)
  - Dijkstra's algorithm for single-source shortest paths (Dijkstra, 1959)
  - Directed acyclic graph single-source shortest paths
  - Floyd-Warshall algorithm for all-pairs shortest-paths (Floyd, 1962)
- Strings
  - Naïve string matching
  - Knuth-Morris-Pratt (KMP) string matcher (Knuth et al., 1977)
- Geometry
  - Segment intersection
  - Graham scan convex hull algorithm (Graham, 1972)
  - Jarvis' march convex hull algorithm (Jarvis, 1973)

### Baselines

Models consist of a *processor* and a number of *encoders* and *decoders*.
We provide JAX implementations of the following GNN baseline processors:

- Deep Sets (Zaheer et al., NIPS 2017)
- End-to-End Memory Networks (Sukhbaatar et al., NIPS 2015)
- Graph Attention Networks (Veličković et al., ICLR 2018)
- Graph Attention Networks v2 (Brody et al., ICLR 2022)
- Message-Passing Neural Networks (Gilmer et al., ICML 2017)
- Pointer Graph Networks (Veličković et al., NeurIPS 2020)

If you want to implement a new processor, the easiest way is to add
it in the `processors.py` file and make it available through the
`get_processor_factory` method there. A processor should have a `__call__`
method like this:

```python
__call__(self,
         node_fts, edge_fts, graph_fts,
         adj_mat, hidden,
         nb_nodes, batch_size)
```

where `node_fts`, `edge_fts` and `graph_fts` will be float arrays of shape
`batch_size` x `nb_nodes` x H, `batch_size` x `nb_nodes` x `nb_nodes` x H,
and `batch_size` x H with encoded features for
nodes, edges and graph respectively, `adj_mat` a
`batch_size` x `nb_nodes` x `nb_nodes` boolean
array of connectivity built from hints and inputs, and `hidden` a
`batch_size` x `nb_nodes` x H float array with the previous-step outputs
of the processor. The method should return a `batch_size` x `nb_nodes` x H
float array.
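To make this contract concrete, here is a minimal hypothetical processor that obeys the shapes above. It is a plain-NumPy mean-aggregation sketch, not one of the provided JAX baselines:

```python
import numpy as np

class MeanAggregationProcessor:
  """Hypothetical processor obeying the shape contract above; a plain-NumPy
  sketch, not one of the provided JAX baselines."""

  def __call__(self, node_fts, edge_fts, graph_fts,
               adj_mat, hidden, nb_nodes, batch_size):
    # Each node receives the mean of its neighbours' hidden states, using
    # adj_mat (batch_size x nb_nodes x nb_nodes) as the connectivity mask.
    deg = adj_mat.sum(axis=-1, keepdims=True)     # neighbour count, (B, N, 1)
    msgs = adj_mat @ hidden / np.maximum(deg, 1)  # mean messages, (B, N, H)
    # Combine messages with the node's own encoded features.
    return np.tanh(node_fts + msgs)               # (B, N, H)

# Usage with dummy data of the stated shapes:
B, N, H = 2, 4, 8
proc = MeanAggregationProcessor()
out = proc(np.zeros((B, N, H)),                     # node_fts
           np.zeros((B, N, N, H)),                  # edge_fts
           np.zeros((B, H)),                        # graph_fts
           np.eye(N, dtype=bool)[None].repeat(B, axis=0),  # adj_mat
           np.zeros((B, N, H)),                     # hidden
           N, B)
```

A real processor would also use `edge_fts` and `graph_fts`; this sketch only demonstrates the expected input and output shapes.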

For more fundamentally different baselines, it is necessary to create a new
class that extends the Model API (as found within `clrs/_src/model.py`).
`clrs/_src/baselines.py` provides one example of how this can be done.

## Creating your own dataset

We provide a `tensorflow_dataset` generator class in `dataset.py`. This file can
be modified to generate different versions of the available algorithms, and it
can be built by using `tfds build` after following the installation instructions
at https://www.tensorflow.org/datasets.

Alternatively, you can generate samples without going through `tfds` by
instantiating samplers with the `build_sampler` method in
`clrs/_src/samplers.py`, like so:

```python
sampler, spec = clrs.build_sampler(
    name='bfs',
    seed=42,
    num_samples=1000,
    length=16)

def _iterate_sampler(batch_size):
  while True:
    yield sampler.next(batch_size)

for feedback in _iterate_sampler(batch_size=32):
  ...
```

More recently, we have added [**CLRS-Text**](https://github.com/google-deepmind/clrs/tree/master/clrs/_src/clrs_text),
a text-based variant of the benchmark suitable for training and evaluating the
algorithmic reasoning capabilities of language models. Please see that
subfolder for a dedicated README file, and the
[companion paper](https://arxiv.org/abs/2406.04229) on CLRS-Text.

## Adding new algorithms

Adding a new algorithm to the task suite requires the following steps:

1. Determine the input/hint/output specification of your algorithm, and include
it within the `SPECS` dictionary of `clrs/_src/specs.py`.
2. Implement the desired algorithm in an abstractified form. Examples of this
   can be found throughout the `clrs/_src/algorithms/` folder.
   - Choose appropriate moments within the algorithm's execution to create probes
     that capture the inputs, outputs and all intermediate state (using
     the `probing.push` function).
   - Once generated, probes must be formatted using the `probing.finalize`
     method, and should be returned together with the algorithm output.
3. Implement an appropriate input data sampler for your algorithm,
and include it in the `SAMPLERS` dictionary within `clrs/_src/samplers.py`.

Once the algorithm has been added in this way, it can be accessed with the
`build_sampler` method, and will also be incorporated into the dataset if it is
regenerated with the generator class in `dataset.py`, as described above.

## Citation

To cite the CLRS Algorithmic Reasoning Benchmark:

```latex
@article{deepmind2022clrs,
  title={The CLRS Algorithmic Reasoning Benchmark},
  author={Petar Veli\v{c}kovi\'{c} and Adri\`{a} Puigdom\`{e}nech Badia and
    David Budden and Razvan Pascanu and Andrea Banino and Misha Dashevskiy and
    Raia Hadsell and Charles Blundell},
  journal={arXiv preprint arXiv:2205.15659},
  year={2022}
}
```

To cite the CLRS-Text Algorithmic Reasoning Language Benchmark:

```latex
@article{deepmind2024clrstext,
  title={The CLRS-Text Algorithmic Reasoning Language Benchmark},
  author={Larisa Markeeva and Sean McLeish and Borja Ibarz and Wilfried Bounsi
    and Olga Kozlova and Alex Vitvitskyi and Charles Blundell and
    Tom Goldstein and Avi Schwarzschild and Petar Veli\v{c}kovi\'{c}},
  journal={arXiv preprint arXiv:2406.04229},
  year={2024}
}
```

            
