<img src="https://zenoml.com/img/zeno.png" width="250px"/>
[PyPI](https://badge.fury.io/py/zenoml)
[License: MIT](https://lbesson.mit-license.org/)
[Paper](https://cabreraalex.com/paper/zeno)
[Discord](https://discord.gg/km62pDKAkE)

Zeno is a general-purpose framework for evaluating machine learning models.
It combines a **Python API** with an **interactive UI** to allow users to discover, explore, and analyze the performance of their models across diverse use cases.
Zeno can be used for any data type or task with [modular views](https://zenoml.com/docs/views/) for everything from object detection to audio transcription.
### Demos
| **Image Classification** | **Audio Transcription** | **Image Generation** | **Dataset Chatbot** | **Sensor Classification** |
| :---------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: |
| Imagenette | Speech Accent Archive | DiffusionDB | LangChain + Notion | MotionSense |
| [demo](https://zeno-ml-imagenette.hf.space/) | [demo](https://zeno-ml-audio-transcription.hf.space/) | [demo](https://zeno-ml-diffusiondb.hf.space/) | [demo](https://zeno-ml-langchain-qa.hf.space/) | [demo](https://zeno-ml-imu-classification.hf.space) |
| [code](https://huggingface.co/spaces/zeno-ml/imagenette/tree/main) | [code](https://huggingface.co/spaces/zeno-ml/audio-transcription/tree/main) | [code](https://huggingface.co/spaces/zeno-ml/diffusiondb/tree/main) | [code](https://huggingface.co/spaces/zeno-ml/langchain-qa/tree/main) | [code](https://huggingface.co/spaces/zeno-ml/imu-classification/tree/main) |
<br />
https://user-images.githubusercontent.com/4563691/220689691-1ad7c184-02db-4615-b5ac-f52b8d5b8ea3.mp4
## Quickstart
Install the Zeno Python package from PyPI (Python 3.8 or newer is required):
```bash
pip install zenoml
```
### Command Line
To get started, run the following command to initialize a Zeno project. It will walk you through creating the `zeno.toml` configuration file:
```bash
zeno init
```
Take a look at the [configuration documentation](https://zenoml.com/docs/configuration) for additional `toml` file options like adding model functions.
Start Zeno with `zeno zeno.toml`.
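For reference, a minimal `zeno.toml` might look like the sketch below. The key names are assumptions that mirror the Python configuration options shown in the Jupyter example further down; consult the [configuration documentation](https://zenoml.com/docs/configuration) for the authoritative schema.

```toml
# zeno.toml — illustrative sketch; key names are assumptions
# mirroring the Python configuration dictionary shown below.
view = "audio-transcription"             # the view for this data/task
metadata = "/path/to/metadata/file.csv"  # metadata file with a row per instance
data_path = "/path/to/raw/data/"         # folder with raw data (images, audio, etc.)
data_column = "id"                       # column with relative paths in data_path
```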
### Jupyter Notebook
You can also run Zeno directly from Jupyter Notebook or JupyterLab. The `zeno` function takes a dictionary of configuration options as input. See [the docs](https://zenoml.com/docs/configuration) for a full list of options. In this example we pass the minimum options for exploring a non-tabular dataset:
```python
import pandas as pd
from zeno import zeno

df = pd.read_csv("/path/to/metadata/file.csv")

zeno({
    "metadata": df,                     # Pandas DataFrame with a row for each instance
    "view": "audio-transcription",      # The type of view for this data/task
    "data_path": "/path/to/raw/data/",  # The folder with raw data (images, audio, etc.)
    "data_column": "id",                # Column with the relative paths of files in data_path
})
```
As you add models and metrics, you can also pass a list of decorated function references directly to Zeno, as sketched below.
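The sketch below is a rough illustration of that pattern. The decorator names (`model`, `metric`), the prediction-function shape, and the `functions` key are assumptions made for this example, not the confirmed API; see the [docs](https://zenoml.com/docs/configuration) for the actual decorators and signatures.

```python
import pandas as pd
from zeno import zeno
# NOTE: `model` and `metric` are assumed decorator names for illustration.
from zeno import model, metric

@model
def load_model(model_path):
    # Return a prediction function for the given checkpoint path.
    def predict(df, ops):
        # Placeholder inference: one transcription string per row.
        return ["" for _ in range(len(df))]
    return predict

@metric
def avg_word_error(df, ops):
    # Aggregate a per-instance column into a single score.
    return df["word_error"].mean()

zeno({
    "metadata": pd.read_csv("/path/to/metadata/file.csv"),
    "view": "audio-transcription",
    "data_path": "/path/to/raw/data/",
    "data_column": "id",
    "functions": [load_model, avg_word_error],  # assumed key name
})
```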
## Citation
Please reference our [CHI'23 paper](https://arxiv.org/pdf/2302.04732.pdf):
```bibtex
@inproceedings{cabrera23zeno,
  author    = {Cabrera, Ángel Alexander and Fu, Erica and Bertucci, Donald and Holstein, Kenneth and Talwalkar, Ameet and Hong, Jason I. and Perer, Adam},
  title     = {Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning},
  year      = {2023},
  isbn      = {978-1-4503-9421-5/23/04},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3544548.3581268},
  doi       = {10.1145/3544548.3581268},
  booktitle = {CHI Conference on Human Factors in Computing Systems},
  location  = {Hamburg, Germany},
  series    = {CHI '23}
}
```
## Community
Chat with us on our [Discord channel](https://discord.gg/km62pDKAkE) or open an issue on this repository if you run into problems or have a request!