<div align="center">
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=0fcbab94-8fbe-4a38-93e8-c2348450a42e" />
<h1 align="center">MLOps for Reliable AI - From Classical ML to Agents</h1>
<h3 align="center">Your unified toolkit for shipping everything from decision trees to complex AI agents, built on the MLOps principles you already trust.</h3>
</div>
<div align="center">
<!-- PROJECT LOGO -->
<br />
<a href="https://zenml.io">
<img alt="ZenML Logo" src="docs/book/.gitbook/assets/header.png">
</a>
<br />
[![PyPi][pypi-shield]][pypi-url]
[![PyPi][pypiversion-shield]][pypi-url]
[![PyPi][downloads-shield]][downloads-url]
[![Contributors][contributors-shield]][contributors-url]
[![License][license-shield]][license-url]
</div>
<!-- MARKDOWN LINKS & IMAGES -->
[pypi-shield]: https://img.shields.io/pypi/pyversions/zenml?color=281158
[pypi-url]: https://pypi.org/project/zenml/
[pypiversion-shield]: https://img.shields.io/pypi/v/zenml?color=361776
[downloads-shield]: https://img.shields.io/pypi/dm/zenml?color=431D93
[downloads-url]: https://pypi.org/project/zenml/
[contributors-shield]: https://img.shields.io/github/contributors/zenml-io/zenml?color=7A3EF4
[contributors-url]: https://github.com/zenml-io/zenml/graphs/contributors
[license-shield]: https://img.shields.io/github/license/zenml-io/zenml?color=9565F6
[license-url]: https://github.com/zenml-io/zenml/blob/main/LICENSE
<div align="center">
<p>
<a href="https://zenml.io/features">Features</a> •
<a href="https://zenml.io/roadmap">Roadmap</a> •
<a href="https://github.com/zenml-io/zenml/issues">Report Bug</a> •
<a href="https://zenml.io/pro">Sign up for ZenML Pro</a> •
<a href="https://www.zenml.io/blog">Blog</a> •
<a href="https://zenml.io/podcast">Podcast</a>
<br />
<br />
🎉 For the latest release, see the <a href="https://github.com/zenml-io/zenml/releases">release notes</a>.
</p>
</div>
---
## 🚨 The Problem: MLOps Works for Models, But What About AI?

You're an ML engineer. You've perfected deploying `scikit-learn` models and wrangling PyTorch jobs. Your MLOps stack is dialed in. But now, you're being asked to build and ship AI agents, and suddenly your trusted toolkit is starting to crack.
- **The Adaptation Struggle:** Your MLOps habits (rigorous testing, versioning, CI/CD) don’t map cleanly onto agent development. How do you version a prompt? How do you regression test a non-deterministic system? The tools that gave you confidence for models now create friction for agents.
- **The Divided Stack:** To cope, teams are building a second, parallel stack just for LLM-based systems. Now you’re maintaining two sets of tools, two deployment pipelines, and two mental models. Your classical models live in one world, your agents in another. It's expensive, complex, and slows everyone down.
- **The Broken Feedback Loop:** Getting an agent from your local environment to production is a slow, painful journey. By the time you get feedback on performance, cost, or quality, the requirements have already changed. Iteration is a guessing game, not a data-driven process.
## 💡 The Solution: One Framework for Your Entire AI Stack
Stop maintaining two separate worlds. ZenML is a unified MLOps framework that extends the battle-tested principles you rely on for classical ML to the new world of AI agents. It’s one platform to develop, evaluate, and deploy your entire AI portfolio.
```python
# Morning: Your sklearn pipeline is still versioned and reproducible.
train_and_deploy_classifier()
# Afternoon: Your new agent evaluation pipeline uses the same logic.
evaluate_and_deploy_agent()
# Same platform. Same principles. New possibilities.
```
With ZenML, you're not replacing your knowledge; you're extending it. Use the pipelines and practices you already know to version, test, deploy, and monitor everything from classic models to the most advanced agents.
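To make that concrete, here is a minimal sketch (step bodies are placeholders, not real training or agent code) of what "one framework" means in practice: a classical ML step and an agent step living side by side in the same pipeline, with every output versioned the same way:

```python
from zenml import pipeline, step

@step
def train_classifier() -> float:
    """Classical ML: stand-in for your real sklearn training code."""
    accuracy = 0.93  # placeholder metric
    return accuracy  # versioned as an artifact, like any step output

@step
def evaluate_agent() -> float:
    """Agents: stand-in for a batch evaluation of your agent."""
    mean_score = 0.87  # placeholder metric
    return mean_score

@pipeline
def nightly_ai_portfolio():
    # One pipeline, one lineage graph, both halves of your AI stack
    train_classifier()
    evaluate_agent()

if __name__ == "__main__":
    nightly_ai_portfolio()
```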
## 💻 See It In Action: Multi-Agent Architecture Comparison
**The Challenge:** Your team built three different customer service agents. Which one should go to production? With ZenML, you can build a reproducible pipeline to test them on real data and make a data-driven decision.
```python
from zenml import pipeline, step
import pandas as pd

@step
def load_real_conversations() -> pd.DataFrame:
    """Load actual customer queries from a feature store."""
    return load_from_feature_store("customer_queries_sample_1k")

@step
def run_architecture_comparison(queries: pd.DataFrame) -> dict:
    """Test three different agent architectures on the same data."""
    architectures = {
        "single_agent": SingleAgentRAG(),
        "multi_specialist": MultiSpecialistAgents(),
        "hierarchical": HierarchicalAgentTeam()
    }

    results = {}
    for name, agent in architectures.items():
        # ZenML automatically versions the agent's code, prompts, and tools
        results[name] = agent.batch_process(queries)
    return results

@step
def evaluate_and_decide(results: dict) -> str:
    """Evaluate results and generate a recommendation report."""
    # Compare architectures on quality, cost, latency, etc.
    evaluation_df = evaluate_results(results)

    # Generate a rich report comparing the architectures
    report = create_comparison_report(evaluation_df)

    # Automatically tag the winning architecture for a staging deployment
    # (sort descending so the top scorer comes first)
    winner = evaluation_df.sort_values("overall_score", ascending=False).iloc[0]
    tag_for_staging(winner["architecture_name"])

    return report

@pipeline
def compare_agent_architectures():
    """Your new Friday afternoon ritual: data-driven agent decisions."""
    queries = load_real_conversations()
    results = run_architecture_comparison(queries)
    report = evaluate_and_decide(results)

if __name__ == "__main__":
    # Run locally, compare results in the ZenML dashboard
    compare_agent_architectures()
```
**The Result:** A clear winner is selected based on data, not opinions. You have full lineage from the test data and agent versions to the final report and deployment decision.
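Because every run is recorded, you can also pull those results back programmatically. A small sketch using ZenML's `Client` (the pipeline and step names match the example above; attribute names follow ZenML's run-fetching docs):

```python
from zenml.client import Client

# Grab the most recent run of the comparison pipeline
run = Client().get_pipeline("compare_agent_architectures").last_run

# Load the report artifact produced by the evaluation step
report = run.steps["evaluate_and_decide"].output.load()
print(report)
```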
## 🔄 The AI Development Lifecycle with ZenML
### From Chaos to Process

<details>
<summary><b>Click to see your new, structured workflow</b></summary>
### Your New Workflow
**Monday: Quick Prototype**
```python
# Start with a local script, just like always
agent = LangGraphAgent(prompt="You are a helpful assistant...")
response = agent.chat("Help me with my order")
```
**Tuesday: Make it a Pipeline**
```python
# Wrap your code in a ZenML step to make it reproducible
@step
def customer_service_agent(query: str) -> str:
    return agent.chat(query)
```
**Wednesday: Add Evaluation**
```python
# Test on real data, not toy examples
@pipeline
def eval_pipeline():
    test_data = load_production_samples()
    responses = customer_service_agent.map(test_data)
    scores = evaluate_responses(responses)
    track_experiment(scores)
```
**Thursday: Compare Architectures**
```python
# Make data-driven architecture decisions
results = compare_architectures(
    baseline="current_prod",
    challenger="new_multiagent_v2"
)
```
**Friday: Ship with Confidence**
```python
# Deploy the new agent with the same command you use for ML models
python agent_deployment.py --env=prod --model="customer_service:challenger"
```
</details>
## 🚀 Get Started (5 minutes)
### For ML Engineers Ready to Tame AI
```bash
# You know this drill
pip install zenml  # The core framework
zenml integration install langchain llamaindex  # Add the integrations you need

# Initialize (your ML pipelines still work!)
zenml init

# Pull our agent evaluation template
zenml init --template agent-evaluation-starter
```
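If you want a quick sanity check before writing any code, the CLI can confirm the install:

```bash
# Prints the installed ZenML client version
zenml version
```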
### Your First AI Pipeline
```python
# look_familiar.py
from zenml import pipeline, step

@step
def run_my_agent(test_queries: list[str]) -> list[str]:
    """Your existing agent code, now with MLOps superpowers."""
    # Use ANY framework - LangGraph, CrewAI, raw OpenAI
    agent = YourExistingAgent()

    # Automatic versioning of prompts, tools, code, and configs
    return [agent.run(q) for q in test_queries]

@step
def evaluate_responses(queries: list[str], responses: list[str]) -> dict:
    """LLM judges + your custom business metrics."""
    quality = llm_judge(queries, responses)
    latency = measure_response_times()
    costs = calculate_token_usage()

    return {
        "quality": quality.mean(),
        "p95_latency": latency.quantile(0.95),
        "cost_per_query": costs.mean()
    }

@pipeline
def my_first_agent_pipeline():
    # Look ma, no YAML!
    queries = ["How do I return an item?", "What's your refund policy?"]
    responses = run_my_agent(queries)
    metrics = evaluate_responses(queries, responses)

    # Metrics are auto-logged, versioned, and comparable in the dashboard
    return metrics

if __name__ == "__main__":
    my_first_agent_pipeline()
    print("Check your dashboard: http://localhost:8080")
```
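The dashboard URL printed above assumes a ZenML server is already running. If you are working purely locally, recent ZenML versions can start one for you (the port may differ from the one printed above):

```bash
# Start a local ZenML server + dashboard and connect the client to it
zenml login --local
```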
## 📚 Learn More
### 🖼️ Getting Started Resources
The best way to learn about ZenML is through our comprehensive documentation and tutorials:
- **[Starter Guide](https://docs.zenml.io/user-guides/starter-guide)** - From zero to production in 30 minutes
- **[LLMOps Guide](https://docs.zenml.io/user-guides/llmops-guide)** - Specific patterns for LLM applications
- **[SDK Reference](https://sdkdocs.zenml.io/)** - Complete API documentation
For visual learners, start with this 11-minute introduction:
[▶️ Watch on YouTube](https://www.youtube.com/watch?v=wEVwIkDvUPs)
### 📖 Production Examples
1. **[E2E Batch Inference](examples/e2e/)** - Complete MLOps pipeline with feature engineering
2. **[LLM RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide)** - Production RAG with evaluation loops
3. **[Agentic Workflow (Deep Research)](https://github.com/zenml-io/zenml-projects/tree/main/deep_research)** - Orchestrate your agents with ZenML
4. **[Fine-tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/gamesense)** - Fine-tune and deploy LLMs
### 🏢 Deployment Options
**For Teams:**
- **[Self-hosted](https://docs.zenml.io/getting-started/deploying-zenml)** - Deploy on your infrastructure with Helm/Docker
- **[ZenML Pro](https://cloud.zenml.io/?utm_source=readme)** - Managed service with enterprise support (free trial)
**Infrastructure Requirements:**
- Kubernetes cluster (or local Docker)
- Object storage (S3/GCS/Azure)
- PostgreSQL database
- _[Complete requirements](https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm)_
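For a first self-hosted trial, Docker is the fastest route. A sketch, assuming the image name from ZenML's deployment docs and a port mapping of your choosing:

```bash
# Run the ZenML server in Docker, with the dashboard on port 8080
docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server
```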
### 🎓 Books & Resources
<div align="center">
<a href="https://www.amazon.com/LLM-Engineers-Handbook-engineering-production/dp/1836200072">
<img src="docs/book/.gitbook/assets/llm_engineering_handbook_cover.jpg" alt="LLM Engineer's Handbook Cover" width="200"/>
</a>
<a href="https://www.amazon.com/-/en/Andrew-McMahon/dp/1837631964">
<img src="docs/book/.gitbook/assets/ml_engineering_with_python.jpg" alt="Machine Learning Engineering with Python Cover" width="200"/>
</a>
</div>
ZenML is featured in these comprehensive guides to production AI systems.
## 🤝 Join ML Engineers Building the Future of AI
**Contribute:**
- 🌟 [Star us on GitHub](https://github.com/zenml-io/zenml/stargazers) - Help others discover ZenML
- 🤝 [Contributing Guide](CONTRIBUTING.md) - Start with [`good-first-issue`](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22)
- 💻 [Write Integrations](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) - Add your favorite tools
**Stay Updated:**
- 🗺 [Public Roadmap](https://zenml.io/roadmap) - See what's coming next
- 📰 [Blog](https://zenml.io/blog) - Best practices and case studies
- 🎙 [Podcast](https://zenml.io/podcast) - Interviews with ML practitioners
## ❓ FAQs from ML Engineers Like You
**Q: "Do I need to rewrite my agents or models to use ZenML?"**
A: No. Wrap your existing code in a `@step`. Keep using `scikit-learn`, PyTorch, LangGraph, LlamaIndex, or raw API calls. ZenML orchestrates your tools; it doesn't replace them.
**Q: "How is this different from LangSmith/Langfuse?"**
A: They provide excellent observability for LLM applications. We orchestrate the **full MLOps lifecycle for your entire AI stack**. With ZenML, you manage both your classical ML models and your AI agents in one unified framework, from development and evaluation all the way to production deployment.
**Q: "Can I use my existing MLflow/W&B setup?"**
A: Yes! We integrate with both. Your experiments, our pipelines.
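As a hedged sketch of what the MLflow integration looks like in practice (the tracker name `mlflow_tracker` is an assumption; register it first with `zenml experiment-tracker register mlflow_tracker --flavor=mlflow` and add it to your active stack):

```python
import mlflow
from zenml import step

# Opt this step into the MLflow experiment tracker registered in your stack
@step(experiment_tracker="mlflow_tracker")
def train_with_tracking() -> float:
    accuracy = 0.93  # placeholder for your real training metric
    mlflow.log_metric("accuracy", accuracy)  # lands in your MLflow UI
    return accuracy
```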
**Q: "Is this just MLflow with extra steps?"**
A: No. MLflow tracks experiments. We orchestrate the entire development process – from training and evaluation to deployment and monitoring – for both models and agents.
**Q: "What about cost? I can't afford another platform."**
A: ZenML's open-source version is free forever. You likely already have the required infrastructure (like a Kubernetes cluster and object storage). We just help you make better use of it for MLOps.
### 🛠 VS Code Extension
Manage pipelines directly from your editor:
<details>
<summary>🖥️ VS Code Extension in Action!</summary>
<div align="center">
<img width="60%" src="docs/book/.gitbook/assets/zenml-extension-shortened.gif" alt="ZenML Extension">
</div>
</details>
Install from [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=ZenML.zenml-vscode).
## 📜 License
ZenML is distributed under the terms of the Apache License Version 2.0. See
[LICENSE](LICENSE) for details.