<table align="center">
<tbody>
<tr>
<td align="center" width="1182px">
<img src="https://raw.githubusercontent.com/virelay/corelay/refs/heads/main/design/corelay-logo-with-title.png" alt="CoRelAy Logo"/>
# Composing Relevance Analysis
[License](https://github.com/virelay/corelay/blob/main/COPYING.LESSER)
[Tests](https://github.com/virelay/corelay/actions/workflows/tests.yml)
[Documentation](https://corelay.readthedocs.io/en/latest)
[Latest Release](https://github.com/virelay/corelay/releases/latest)
[PyPI](https://pypi.org/project/corelay/)
**CoRelAy** is a library designed for composing efficient, single-machine data analysis pipelines. It enables the rapid implementation of pipelines for analyzing and processing data. CoRelAy is primarily meant for use in explainable artificial intelligence (XAI), often with the goal of producing output suitable for visualization in tools like [**ViRelAy**](https://github.com/virelay/virelay).
</td>
</tr>
</tbody>
</table>
At the core of CoRelAy are **pipelines** (`Pipeline`), which consist of a series of **tasks** (`Task`). Each task is a modular unit that can be populated with an **operation** (`Processor`) to perform a specific data processing step. These operations, known as processors, can be customized by assigning new instances or modifying their default configurations.
Tasks in CoRelAy are highly flexible and can be tailored to meet the needs of your analysis pipeline. By leveraging a wide range of configurable **processors** with their respective **parameters** (`Param`), you can easily adapt and optimize your data processing workflow.
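The pipeline/task/processor pattern described above can be illustrated with a plain-Python sketch. Note that this is a conceptual illustration only, not the corelay API; all class and task names here are made up for the example:

```python
# Conceptual sketch of the pipeline/task/processor pattern (NOT the corelay
# API): a pipeline is an ordered series of tasks, and each task holds a
# processor that can be swapped out or reconfigured.

class SketchProcessor:
    """A single, self-contained data-processing operation."""
    def function(self, data):
        raise NotImplementedError

class Double(SketchProcessor):
    def function(self, data):
        return [x * 2 for x in data]

class Offset(SketchProcessor):
    def __init__(self, amount=1):  # a configurable parameter of the processor
        self.amount = amount
    def function(self, data):
        return [x + self.amount for x in data]

class SketchPipeline:
    def __init__(self, tasks):
        self.tasks = tasks  # ordered mapping of task name -> processor
    def __call__(self, data):
        # Each task's processor transforms the output of the previous task
        for processor in self.tasks.values():
            data = processor.function(data)
        return data

pipeline = SketchPipeline({'scale': Double(), 'shift': Offset(amount=3)})
print(pipeline([1, 2, 3]))  # [5, 7, 9]

# A task can be re-populated by assigning a new processor instance:
pipeline.tasks['shift'] = Offset(amount=0)
print(pipeline([1, 2, 3]))  # [2, 4, 6]
```

In corelay, the same idea is expressed by assigning `Processor` instances to the pre-defined tasks of a `Pipeline`, as shown in the full example below.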
For more information about CoRelAy, getting started guides, in-depth tutorials, and API documentation, please refer to the [documentation](https://corelay.readthedocs.io/en/latest/).
If you find CoRelAy useful for your research, why not cite our related [paper](https://arxiv.org/abs/2106.13200):
```bibtex
@article{anders2021software,
  author  = {Anders, Christopher J. and
             Neumann, David and
             Samek, Wojciech and
             Müller, Klaus-Robert and
             Lapuschkin, Sebastian},
  title   = {Software for Dataset-wide XAI: From Local Explanations to Global Insights with {Zennit}, {CoRelAy}, and {ViRelAy}},
  year    = {2021},
  volume  = {abs/2106.13200},
  journal = {CoRR}
}
```
## Features
- **Pipeline Composition**: CoRelAy allows you to compose pipelines of processors, which can be executed in parallel or sequentially.
- **Task-based Design**: Each step in the pipeline is represented as a task, which can be easily modified or replaced.
- **Processor Library**: CoRelAy comes with a library of built-in processors for common tasks, such as clustering, embedding, and dimensionality reduction.
- **Memoization**: CoRelAy supports memoization of intermediate results, allowing you to reuse previously computed results and speed up your analysis.
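The memoization feature works by keying stored outputs on a hash of the inputs, so identical computations are loaded instead of recomputed. The idea can be sketched in plain Python with an in-memory dictionary standing in for the HDF5-backed store (this is a conceptual sketch, not corelay's `HashedHDF5` implementation):

```python
import hashlib
import pickle

cache = {}  # stands in for the HDF5 group that corelay's HashedHDF5 writes to

def memoized(func):
    """Caches results keyed on a hash of the function name and its input."""
    def wrapper(data):
        key = hashlib.sha256(pickle.dumps((func.__name__, data))).hexdigest()
        if key not in cache:
            cache[key] = func(data)  # computed only once per distinct input
        return cache[key]
    return wrapper

@memoized
def expensive_analysis(data):
    return sum(x * x for x in data)

print(expensive_analysis((1, 2, 3)))  # computed: 14
print(expensive_analysis((1, 2, 3)))  # loaded from the cache: 14
```

In corelay, the cache persists in an HDF5 file, so memoized results survive across separate runs of the same script.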
## Getting Started
### Installation
To get started, you first have to install CoRelAy on your system. The recommended and easiest way to install CoRelAy is to use `pip`, the Python package manager. You can install CoRelAy using the following command:
```shell
$ pip install corelay
```
> [!NOTE]
> CoRelAy depends on the [`metrohash-python`](https://pypi.org/project/metrohash-python/) library, which requires a C++ compiler to be installed. This may mean that you will have to install extra packages (GCC or Clang) for the installation to succeed. For example, on Fedora, you may have to install the `gcc-c++` package in order to make the `c++` command available, which can be done using the following command:
>
> ```shell
> $ sudo dnf install gcc-c++
> ```
To install CoRelAy with optional HDBSCAN and UMAP support, use:

```shell
$ pip install 'corelay[umap,hdbscan]'
```
### Usage
Examples to highlight some features of CoRelAy can be found in [`docs/examples`](https://github.com/virelay/corelay/tree/main/docs/examples).
We mainly use HDF5 files to store results. If you wish to visualize your analysis results using **ViRelAy**, please have a look at the [**ViRelAy documentation**](https://virelay.readthedocs.io/en/latest/contributors-guide/database-specification.html) to find out more about its database specification. An example of creating HDF5 files that can be used with **ViRelAy** is shown in [`docs/examples/hdf5_structure.py`](https://github.com/virelay/corelay/tree/main/docs/examples/hdf5_structure.py).
A more advanced script, which performs a full SpRAy analysis whose results can be visualized with **ViRelAy**, can be found in [`docs/examples/virelay_analysis.py`](https://github.com/virelay/corelay/tree/main/docs/examples/virelay_analysis.py).
The following shows the contents of [`docs/examples/memoize_spectral_pipeline.py`](https://github.com/virelay/corelay/tree/main/docs/examples/memoize_spectral_pipeline.py):
```python
"""An example script, which uses memoization to store (intermediate) results."""

import time
import typing
from collections.abc import Sequence
from typing import Annotated, SupportsIndex

import h5py
import numpy

from corelay.base import Param
from corelay.io.storage import HashedHDF5
from corelay.pipeline.spectral import SpectralClustering
from corelay.processor.base import Processor
from corelay.processor.clustering import KMeans
from corelay.processor.embedding import TSNEEmbedding, EigenDecomposition
from corelay.processor.flow import Sequential, Parallel


class Flatten(Processor):
    """Represents a :py:class:`~corelay.processor.base.Processor`, which flattens its input data."""

    def function(self, data: typing.Any) -> typing.Any:
        """Applies the flattening to the input data.

        Args:
            data (typing.Any): The input data that is to be flattened.

        Returns:
            typing.Any: Returns the flattened data.
        """

        input_data: numpy.ndarray[typing.Any, typing.Any] = data
        return input_data.reshape(input_data.shape[0], numpy.prod(input_data.shape[1:]))


class SumChannel(Processor):
    """Represents a :py:class:`~corelay.processor.base.Processor`, which sums its input data across channels, i.e., its second axis."""

    def function(self, data: typing.Any) -> typing.Any:
        """Applies the summation over the channels to the input data.

        Args:
            data (typing.Any): The input data that is to be summed over its channels.

        Returns:
            typing.Any: Returns the data that was summed up over its channels.
        """

        input_data: numpy.ndarray[typing.Any, typing.Any] = data
        return input_data.sum(axis=1)


class Normalize(Processor):
    """Represents a :py:class:`~corelay.processor.base.Processor`, which normalizes its input data."""

    axes: Annotated[SupportsIndex | Sequence[SupportsIndex], Param((SupportsIndex, Sequence), (1, 2))]
    """A parameter of the :py:class:`~corelay.processor.base.Processor`, which determines the axes over which the data is to be normalized.
    Defaults to the second and third axes.
    """

    def function(self, data: typing.Any) -> typing.Any:
        """Normalizes the specified input data.

        Args:
            data (typing.Any): The input data that is to be normalized.

        Returns:
            typing.Any: Returns the normalized input data.
        """

        input_data: numpy.ndarray[typing.Any, typing.Any] = data
        return input_data / input_data.sum(self.axes, keepdims=True)


def main() -> None:
    """The entrypoint to the :py:mod:`memoize_spectral_pipeline` script."""

    # Fixes the random seed for reproducibility
    numpy.random.seed(0xDEADBEEF)

    # Opens an HDF5 file in append mode for storing the results of the analysis and the memoization of intermediate pipeline results
    with h5py.File('test.analysis.h5', 'a') as analysis_file:

        # Creates a HashedHDF5 IO object, which is an IO object that stores outputs of processors based on hashes in an HDF5 file
        io_object = HashedHDF5(analysis_file.require_group('proc_data'))

        # Generates some exemplary data
        data = numpy.random.normal(size=(64, 3, 32, 32))
        number_of_clusters = range(2, 20)

        # Creates a SpectralClustering pipeline, which is one of the pre-defined built-in pipelines
        pipeline = SpectralClustering(

            # Processors, such as EigenDecomposition, can be assigned to pre-defined tasks
            embedding=EigenDecomposition(n_eigval=8, io=io_object),

            # Flow-based processors, such as Parallel, can combine multiple processors; broadcast=True copies the input as many times as there
            # are processors; broadcast=False instead attempts to match each input to a processor
            clustering=Parallel([
                Parallel([
                    KMeans(n_clusters=k, io=io_object) for k in number_of_clusters
                ], broadcast=True),

                # IO objects will be used during computation when supplied to processors; if a corresponding output value (here identified by
                # hashes) already exists, the value is not computed again but instead loaded from the IO object
                TSNEEmbedding(io=io_object)
            ], broadcast=True, is_output=True)
        )

        # Processors (and Params) can be updated by simply assigning corresponding attributes
        pipeline.preprocessing = Sequential([
            SumChannel(),
            Normalize(),
            Flatten()
        ])

        # Processors flagged with "is_output=True" will be accumulated in the output; the output will be a tree of tuples, with the same
        # hierarchy as the pipeline (i.e., _clusterings here contains a tuple of the k-means outputs)
        start_time = time.perf_counter()
        _clusterings, _tsne = pipeline(data)

        # Since we memoize our results in an HDF5 file, subsequent calls will not compute the values (for the same inputs), but rather load
        # them from the HDF5 file; try running the script multiple times
        duration = time.perf_counter() - start_time
        print(f'Pipeline execution time: {duration:.4f} seconds')


if __name__ == '__main__':
    main()
```
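The `broadcast` semantics used by `Parallel` in the example can be illustrated with a short plain-Python sketch (this is a conceptual illustration of the behavior described in the comments, not the corelay implementation):

```python
def parallel(processors, inputs, broadcast):
    """Illustrates Parallel's two input-distribution modes."""
    if broadcast:
        # broadcast=True: every processor receives a copy of the same input
        return tuple(p(inputs) for p in processors)
    # broadcast=False: inputs are matched one-to-one with the processors
    return tuple(p(x) for p, x in zip(processors, inputs))

double = lambda x: x * 2
negate = lambda x: -x

print(parallel([double, negate], 5, broadcast=True))        # (10, -5)
print(parallel([double, negate], [5, 7], broadcast=False))  # (10, -7)
```

This is why, in the example above, both the inner `Parallel` of k-means processors and the `TSNEEmbedding` receive the full eigendecomposition output: the outer `Parallel` is constructed with `broadcast=True`.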
## Contributing
If you would like to contribute, there are multiple ways you can help out. If you find a bug or have a feature request, please feel free to [open an issue on GitHub](https://github.com/virelay/corelay/issues). If you want to contribute code, please [fork the repository](https://github.com/virelay/corelay/fork) and use a feature branch. Pull requests are always welcome. Before forking, please open an issue where you describe what you want to do. This helps to align your ideas with ours and may prevent you from doing work that we are already planning on doing. If you have contributed to the project, please add yourself to the [contributors list](https://github.com/virelay/corelay/blob/main/CONTRIBUTORS.md).
To help speed up the merging of your pull request, please comment and document your code extensively, try to emulate the coding style of the project, and update the documentation if necessary.
For more information on how to contribute, please refer to our [contributor's guide](https://corelay.readthedocs.io/en/latest/contributors-guide/index.html).
## License
CoRelAy is dual-licensed under the [GNU General Public License Version 3 (GPL-3.0)](https://www.gnu.org/licenses/gpl-3.0.html) or later, and the [GNU Lesser General Public License Version 3 (LGPL-3.0)](https://www.gnu.org/licenses/lgpl-3.0.html) or later. For more information see the [GPL-3.0](https://github.com/virelay/corelay/blob/main/COPYING) and [LGPL-3.0](https://github.com/virelay/corelay/blob/main/COPYING.LESSER) license files.