[![PyPI version](https://badge.fury.io/py/cmind.svg)](https://pypi.org/project/cmind)
[![Python Version](https://img.shields.io/badge/python-3+-blue.svg)](https://github.com/mlcommons/ck/tree/master/cm/cmind)
[![License](https://img.shields.io/badge/License-Apache%202.0-green)](LICENSE.md)
[![Downloads](https://static.pepy.tech/badge/cmind)](https://pepy.tech/project/cmind)
[![arXiv](https://img.shields.io/badge/arXiv-2406.16791-b31b1b.svg)](https://arxiv.org/abs/2406.16791)
[![CM test](https://github.com/mlcommons/ck/actions/workflows/test-cm.yml/badge.svg)](https://github.com/mlcommons/ck/actions/workflows/test-cm.yml)
[![CM script automation features test](https://github.com/mlcommons/ck/actions/workflows/test-cm-script-features.yml/badge.svg)](https://github.com/mlcommons/ck/actions/workflows/test-cm-script-features.yml)
## Collective Mind (CM)
Collective Mind (CM) is a very lightweight [Python-based framework](https://github.com/mlcommons/ck/tree/master/cm)
featuring a unified CLI, Python API, and minimal dependencies. It is available through [PyPI](https://pypi.org/project/cmind).
CM is designed for creating and managing portable and technology-agnostic automations for MLOps, DevOps and ResearchOps.
It aims to assist researchers and engineers in automating their repetitive, tedious and time-consuming tasks
to build, run, benchmark and optimize various applications
across diverse and continuously changing models, data, software and hardware.
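For illustration, here is a minimal sketch of the Python API, assuming `cmind` was installed with `pip install cmind`. It uses `cmind.access`, the single entry point described in the CM API documentation; the returned dictionary is expected to carry a `return` code (0 on success), an `error` message on failure and, for `search`, a `list` of found artifacts — treat the details as a sketch rather than canonical usage.

```python
# Minimal sketch of the CM Python API (assumes `pip install cmind`).
# cmind.access() takes a dictionary with an 'action' and an automation name;
# it returns a dictionary whose 'return' key is 0 on success.
import cmind

result = cmind.access({'action': 'search',
                       'automation': 'repo'})

if result['return'] > 0:
    raise RuntimeError(result.get('error', 'CM call failed'))

# For 'search', the result is expected to contain a 'list' of artifact
# objects exposing their location on disk via a 'path' attribute.
for artifact in result.get('list', []):
    print(artifact.path)
```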
Collective Mind is a part of [Collective Knowledge (CK)](https://github.com/mlcommons/ck) -
an educational community project to learn how to run AI, ML and other emerging workloads
in the most efficient and cost-effective way across diverse
and ever-evolving systems using the MLPerf benchmarking methodology.
## Collective Mind architecture
The diagram below illustrates the primary classes, functions, and internal automations within the Collective Mind framework:
![CM architecture diagram](https://raw.githubusercontent.com/mlcommons/ck/master/docs/specs/cm-diagram-v3.5.1.png)
The CM API documentation is available [here](https://cknowledge.org/docs/cm/api/cmind.html).
## Collective Mind repositories
Collective Mind is continuously enhanced through public and private CM4* Git repositories,
which serve as a unified interface to various collections of reusable automations and artifacts.
The most notable projects and repositories powered by CM are:
#### CM4MLOps
[CM4MLOps repository powered by CM](https://github.com/mlcommons/cm4mlops) -
a collection of portable, extensible and technology-agnostic automation recipes
with a common CLI and Python API (CM scripts) to unify and automate
all the manual steps required to compose, run, benchmark and optimize complex ML/AI applications
on diverse platforms with any software and hardware.
The two key automations are *script" and *cache*:
see [online catalog at CK playground](https://access.cknowledge.org/playground/?action=scripts),
[online MLCommons catalog](https://docs.mlcommons.org/cm4mlops/scripts).
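As a hedged illustration of these two automations, the sketch below runs a simple script by tags and then lists cached results through the same `cmind.access` API. The `detect,os` tags refer to a basic script from the cm4mlops catalog, and the calls assume that repository was pulled first (e.g. via `cm pull repo mlcommons@cm4mlops`).

```python
import cmind

# Sketch: run a CM script by tags ('detect,os' is a simple script from the
# cm4mlops catalog; assumes `cm pull repo mlcommons@cm4mlops` was run first).
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os',
                  'out': 'con'})   # 'out': 'con' streams output to the console
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'CM script failed'))

# Sketch: list cached artifacts produced by previously executed CM scripts
# (mirrors the `cm show cache` CLI command from cm4mlops).
cache = cmind.access({'action': 'show', 'automation': 'cache'})
```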
CM scripts extend the concept of `cmake` with simple Python automations, native scripts
and JSON/YAML meta descriptions. They require Python 3.7+ with minimal dependencies and are
[continuously extended by the community and MLCommons members](https://github.com/mlcommons/ck/blob/master/CONTRIBUTING.md)
to run natively on Ubuntu, macOS, Windows, RHEL, Debian, Amazon Linux
and any other operating system, in a cloud or inside automatically generated containers
while keeping backward compatibility.
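To make the JSON/YAML meta descriptions concrete, here is a hypothetical sketch of such a description, shown as the Python dictionary CM would obtain after loading it. The field names (`alias`, `uid`, `automation_alias`, `tags`, `deps`) mirror conventions used by script meta files in cm4mlops, but all values are invented for illustration.

```python
# Hypothetical CM script meta description after YAML/JSON loading.
# Field names follow cm4mlops conventions; values are purely illustrative.
script_meta = {
    'alias': 'app-image-classification',  # human-readable artifact name
    'uid': '0123456789abcdef',            # invented unique ID
    'automation_alias': 'script',         # the automation this artifact belongs to
    'tags': ['app', 'image-classification', 'onnx'],
    'deps': [                             # scripts resolved before this one runs
        {'tags': 'detect,os'},
        {'tags': 'get,python3'},
    ],
}
```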
CM scripts were originally developed based on the following requirements from
[MLCommons members](https://mlcommons.org)
to help them automatically compose and optimize complex MLPerf benchmarks, applications and systems
across diverse and continuously changing models, data sets, software and hardware
from Nvidia, Intel, AMD, Google, Qualcomm, Amazon and other vendors:
* must work out of the box with the default options and without requiring users to edit paths, environment variables or configuration files;
* must be non-intrusive, easy to debug and must reuse existing
  user scripts and automation tools (such as cmake, make, ML workflows,
  Python Poetry and containers) rather than replacing them;
* must have a very simple and human-friendly command line with a Python API and minimal dependencies;
* must require minimal or zero learning curve by using plain Python, native scripts, environment variables
and simple JSON/YAML descriptions instead of inventing new workflow languages;
* must have the same interface to run all automations natively, in a cloud or inside containers (see the sketch below).
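As a sketch of the last requirement above, the same `run script` call can be redirected into an automatically generated container. The `docker` flag shown here is accepted by many CM scripts in cm4mlops; treat both the flag and its behavior as assumptions taken from that project's documentation rather than a guaranteed core API.

```python
import cmind

# Hedged sketch of the "same interface everywhere" requirement: the identical
# 'run script' call with a 'docker' flag (assumes Docker is installed and the
# script defines a container image; flag support varies per script).
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os',
                  'docker': True,
                  'out': 'con'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'containerized CM run failed'))
```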
See the [online documentation](https://docs.mlcommons.org/inference)
at MLCommons to run MLPerf inference benchmarks across diverse systems using CM.
#### CM4ABTF
[CM4ABTF repository powered by CM](https://github.com/mlcommons/cm4abtf) -
a collection of portable automations and CM scripts to run the upcoming
automotive MLPerf benchmark across different models, data sets, software
and hardware from different vendors.
#### CM4MLPerf-results
[CM4MLPerf-results powered by CM](https://github.com/mlcommons/cm4mlperf-results) -
a simplified and unified representation of the past MLPerf results
in the CM format for further visualization and analysis using [CK graphs](https://access.cknowledge.org/playground/?action=experiments).
#### CM4Research
[CM4Research repository powered by CM](https://github.com/ctuning/cm4research) -
a unified interface designed to streamline the preparation, execution, and reproduction of experiments in research projects.
### Projects powered by Collective Mind
#### Collective Knowledge Playground
[Collective Knowledge Playground](https://access.cKnowledge.org) -
a unified and open-source platform designed to [index all CM scripts](https://access.cknowledge.org/playground/?action=scripts), similar to PyPI,
and to assist users in preparing CM commands to:
* [run MLPerf benchmarks](https://access.cknowledge.org/playground/?action=howtorun)
* aggregate, process, visualize, and compare [benchmarking results](https://access.cknowledge.org/playground/?action=experiments) for AI and ML systems
* organize [open, reproducible optimization challenges and tournaments](https://access.cknowledge.org/playground/?action=challenges).
These initiatives aim to help academia and industry
collaboratively enhance the efficiency and cost-effectiveness of AI systems.
#### Artifact Evaluation
[Artifact Evaluation automation](https://cTuning.org/ae) - a community-driven initiative
leveraging the Collective Mind framework to automate artifact evaluation
and support reproducibility efforts at ML and systems conferences.
### Author and maintainer
* [Grigori Fursin](https://cKnowledge.org/gfursin) (FlexAI, cTuning)
### Repositories powered by CM
* [CM4MLOPS / CM4MLPerf](https://github.com/mlcommons/cm4mlops) -
a collection of portable, extensible and technology-agnostic automation recipes
with a common CLI and Python API (CM scripts) to unify and automate
all the manual steps required to compose, run, benchmark and optimize complex ML/AI applications
on diverse platforms with any software and hardware: see [online catalog at CK playground](https://access.cknowledge.org/playground/?action=scripts),
[online MLCommons catalog](https://docs.mlcommons.org/cm4mlops/scripts)
* [CM interface to run MLPerf inference benchmarks](https://docs.mlcommons.org/inference)
* [CM4ABTF](https://github.com/mlcommons/cm4abtf) - a unified CM interface and automation recipes
  to run the automotive MLPerf benchmark across different models, data sets, software and hardware from different vendors.
* [CM4Research](https://github.com/ctuning/cm4research) - a unified CM interface and automation recipes
  to make it easier to reproduce results from published research papers.
### Resources
* CM v2.x (2022-cur) (stable): [installation on Linux, Windows, macOS](https://access.cknowledge.org/playground/?action=install);
  [docs](https://docs.mlcommons.org/ck); [popular commands](https://github.com/mlcommons/ck/tree/master/cm/docs/demos/some-cm-commands.md);
  [getting started guide](https://github.com/mlcommons/ck/blob/master/docs/getting-started.md)
* CM v3.x aka CMX (2024-cur) (stable): [docs](https://github.com/orgs/mlcommons/projects/46)
* MLPerf inference benchmark automated via CM
* [Run MLPerf for submissions](https://docs.mlcommons.org/inference)
* [Run MLPerf at the Student Cluster Competition'24](https://docs.mlcommons.org/inference/benchmarks/text_to_image/reproducibility/scc24)
* Examples of modular containers and GitHub actions with CM commands:
* [GitHub action with CM commands to test MLPerf inference benchmark](https://github.com/mlcommons/inference/blob/master/.github/workflows/test-bert.yml)
* [Dockerfile to run MLPerf inference benchmark via CM](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/app-mlperf-inference/dockerfiles/bert-99.9/ubuntu_22.04_python_onnxruntime_cpu.Dockerfile)
### License
[Apache 2.0](LICENSE.md)
### Citing Collective Mind
If you found CM automations useful, please cite this article:
[ [ArXiv](https://arxiv.org/abs/2406.16791) ], [ [BibTex](https://github.com/mlcommons/ck/blob/master/citation.bib) ].
You can learn more about the motivation behind these projects from the following presentations:
* "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments": [ [ArXiv](https://arxiv.org/abs/2406.16791) ]
* ACM REP'23 keynote about the MLCommons CM automation framework: [ [slides](https://doi.org/10.5281/zenodo.8105339) ]
* ACM TechTalk'21 about Collective Knowledge project: [ [YouTube](https://www.youtube.com/watch?v=7zpeIVwICa4) ] [ [slides](https://learning.acm.org/binaries/content/assets/leaning-center/webinar-slides/2021/grigorifursin_techtalk_slides.pdf) ]
### Acknowledgments
Collective Mind (CM) was originally developed by [Grigori Fursin](https://cKnowledge.org/gfursin)
as part of the [Collective Knowledge educational initiative](https://cKnowledge.org),
sponsored by [cTuning.org](https://cTuning.org) and [cKnowledge.org](https://cKnowledge.org),
and contributed to MLCommons for the benefit of all.
This open-source technology, including CM4MLOps/CM4MLPerf, CM4ABTF, CM4Research, and more,
is a collaborative project supported by [MLCommons](https://mlcommons.org),
[FlexAI](https://flex.ai), [cTuning](https://cTuning.org)
and our [amazing volunteers, collaborators, and contributors](https://github.com/mlcommons/ck/blob/master/CONTRIBUTING.md)!