zenml 0.82.0
ZenML: Write production-ready ML code.
pip install zenml
Requires Python: <3.13,>=3.9
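The Requires-Python range above (`>=3.9,<3.13`) can be checked programmatically. Below is a stdlib-only sketch; the `satisfies` helper is hypothetical, not part of any packaging library:

```python
def satisfies(version: str, lo=(3, 9), hi=(3, 13)) -> bool:
    """Return True if `version` falls in the half-open range [lo, hi)."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return lo <= parts < hi

print(satisfies("3.10"))  # True: 3.10 is within >=3.9,<3.13
print(satisfies("3.13"))  # False: the upper bound is exclusive
```

Note that the upper bound is exclusive, matching the `<3.13` specifier semantics.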
Dependencies
- alembic <1.9.0,>=1.8.1
- bcrypt ==4.0.1
- click <8.1.8,>=8.0.1
- cloudpickle <3,>=2.0.0
- distro <2.0.0,>=1.6.0
- docker <7.2.0,>=7.1.0
- gitpython <4.0.0,>=3.1.18
- packaging >=24.1
- passlib[bcrypt] <1.8.0,>=1.7.4
- psutil >=5.0.0
- pydantic <2.11.2,>=2.0
- pydantic-settings
- pymysql <1.2.0,>=1.1.1
- python-dateutil <3.0.0,>=2.8.1
- pyyaml >=6.0.1
- rich[jupyter] >=12.0.0
- setuptools
- sqlalchemy <3.0.0,>=2.0.0
- sqlalchemy_utils
- sqlmodel ==0.0.18
- importlib_metadata <=7.0.0; python_version < "3.10"
- fastapi <=0.115.8,>=0.100; extra == "server"
- uvicorn[standard] >=0.17.5; extra == "server"
- python-multipart <0.1.0,>=0.0.9; extra == "server"
- pyjwt[crypto] ==2.7.*; extra == "server"
- orjson <3.11.0,>=3.10.0; extra == "server"
- Jinja2; extra == "server"
- ipinfo >=4.4.3; extra == "server"
- secure <0.4.0,>=0.3.0; extra == "server"
- tldextract <5.2.0,>=5.1.0; extra == "server"
- itsdangerous <2.3.0,>=2.2.0; extra == "server"
- copier >=8.1.0; extra == "templates"
- pyyaml-include <2.0; extra == "templates"
- jinja2-time <0.3.0,>=0.2.0; extra == "templates"
- boto3 >=1.16.0; extra == "secrets-aws" or extra == "connectors-aws"
- google-cloud-secret-manager >=2.12.5; extra == "secrets-gcp"
- requests <3.0.0,>=2.27.11; extra == "connectors-azure"
- azure-identity >=1.4.0; extra == "secrets-azure" or extra == "connectors-azure"
- azure-keyvault-secrets >=4.0.0; extra == "secrets-azure"
- hvac >=0.11.2; extra == "secrets-hashicorp"
- aws-profile-manager >=0.5.0; extra == "connectors-aws"
- kubernetes >=18.20.0; extra == "connectors-kubernetes" or extra == "connectors-aws" or extra == "connectors-gcp" or extra == "connectors-azure"
- google-cloud-container >=2.21.0; extra == "connectors-gcp"
- google-cloud-storage >=2.9.0; extra == "connectors-gcp"
- google-cloud-artifact-registry >=1.11.3; extra == "connectors-gcp"
- azure-mgmt-containerservice >=20.0.0; extra == "connectors-azure"
- azure-mgmt-containerregistry >=10.0.0; extra == "connectors-azure"
- azure-mgmt-storage >=20.0.0; extra == "connectors-azure"
- azure-storage-blob >=12.0.0; extra == "connectors-azure"
- azure-mgmt-resource >=21.0.0; extra == "connectors-azure"
- s3fs !=2025.3.1,>=2022.11.0; extra == "s3fs"
- sagemaker >=2.237.3; extra == "sagemaker"
- gcsfs >=2022.11.0; extra == "gcsfs"
- kfp >=2.6.0; extra == "vertex"
- google-cloud-aiplatform >=1.34.0; extra == "vertex"
- google-cloud-pipeline-components >=2.19.0; extra == "vertex"
- adlfs >=2021.10.0; extra == "adlfs"
- azure-ai-ml ==1.23.1; extra == "azureml"
- bandit <2.0.0,>=1.7.5; extra == "dev"
- coverage[toml] <6.0,>=5.5; extra == "dev"
- mypy ==1.7.1; extra == "dev"
- pyment <0.4.0,>=0.3.3; extra == "dev"
- tox <4.0.0,>=3.24.3; extra == "dev"
- hypothesis <7.0.0,>=6.43.1; extra == "dev"
- typing-extensions >=3.7.4; extra == "dev"
- darglint <2.0.0,>=1.8.1; extra == "dev"
- ruff >=0.1.7; extra == "templates" or extra == "dev"
- yamlfix <2.0.0,>=1.16.0; extra == "dev"
- maison <2.0; extra == "dev"
- pytest <8.0.0,>=7.4.0; extra == "dev"
- pytest-randomly <4.0.0,>=3.10.1; extra == "dev"
- pytest-mock <4.0.0,>=3.6.1; extra == "dev"
- pytest-clarity <2.0.0,>=1.0.1; extra == "dev"
- pytest-instafail >=0.5.0; extra == "dev"
- pytest-rerunfailures >=13.0; extra == "dev"
- pytest-split <0.9.0,>=0.8.1; extra == "dev"
- mkdocs <2.0.0,>=1.6.1; extra == "dev"
- mkdocs-material ==9.6.8; extra == "dev"
- mkdocs-awesome-pages-plugin <3.0.0,>=2.10.1; extra == "dev"
- mkdocstrings[python] <0.29.0,>=0.28.1; extra == "dev"
- mkdocs-autorefs <2.0.0,>=1.4.0; extra == "dev"
- mike <2.0.0,>=1.1.2; extra == "dev"
- types-certifi <2022.0.0.0,>=2021.10.8.0; extra == "dev"
- types-croniter <2.0.0,>=1.0.2; extra == "dev"
- types-futures <4.0.0,>=3.3.1; extra == "dev"
- types-Markdown <4.0.0,>=3.3.6; extra == "dev"
- types-paramiko >=3.4.0; extra == "dev"
- types-Pillow <10.0.0,>=9.2.1; extra == "dev"
- types-protobuf <4.0.0,>=3.18.0; extra == "dev"
- types-PyMySQL <2.0.0,>=1.0.4; extra == "dev"
- types-python-dateutil <3.0.0,>=2.8.2; extra == "dev"
- types-python-slugify <6.0.0,>=5.0.2; extra == "dev"
- types-PyYAML <7.0.0,>=6.0.0; extra == "dev"
- types-redis <5.0.0,>=4.1.19; extra == "dev"
- types-requests <3.0.0,>=2.27.11; extra == "dev"
- types-setuptools <58.0.0,>=57.4.2; extra == "dev"
- types-six <2.0.0,>=1.16.2; extra == "dev"
- types-termcolor <2.0.0,>=1.1.2; extra == "dev"
- types-psutil <6.0.0,>=5.8.13; extra == "dev"
- types-passlib <2.0.0,>=1.7.7; extra == "dev"

Beyond The Demo: Production-Grade AI Systems
ZenML brings battle-tested MLOps practices to your AI applications, handling evaluation, monitoring, and deployment at scale
Need help with documentation? Visit our docs site for comprehensive guides and tutorials, or browse the SDK reference to find specific functions and classes.
Show Your Support
If you find ZenML helpful or interesting, please consider giving us a star on GitHub. Your support helps promote the project and lets others know that it's worth checking out.
Thank you for your support!
Quickstart
Install ZenML via PyPI. Python 3.9 - 3.12 is required:
pip install "zenml[server]" notebook
Take a tour with the guided quickstart by running:
zenml go
From Prototype to Production: AI Made Simple
Create AI pipelines with minimal code changes
ZenML is an open-source framework that handles MLOps and LLMOps for engineers scaling AI beyond prototypes. Automate evaluation loops, track performance, and deploy updates across hundreds of pipelines, all while your RAG apps run like clockwork.
from zenml import pipeline, step
@step
def load_rag_documents() -> dict:
# Load and chunk documents for RAG pipeline
documents = extract_web_content(url="https://www.zenml.io/")
return {"chunks": chunk_documents(documents)}
@step
def generate_embeddings(data: dict) -> dict:
# Generate embeddings for RAG pipeline
embeddings = embed_documents(data['chunks'])
return {"embeddings": embeddings}
@step
def index_generator(
embeddings: dict,
) -> str:
# Generate index for RAG pipeline
index = create_index(embeddings)
return index.id
@pipeline
def rag_pipeline() -> str:
documents = load_rag_documents()
embeddings = generate_embeddings(documents)
index = index_generator(embeddings)
return index
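The helper functions in this snippet (`extract_web_content`, `chunk_documents`, `embed_documents`, `create_index`) are placeholders for your own logic. As one illustration, a fixed-size `chunk_documents` with overlap might look like this sketch:

```python
def chunk_documents(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap.

    Overlap helps the retriever match queries that straddle chunk boundaries.
    """
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_documents("a" * 1200, chunk_size=500, overlap=50)
print(len(chunks))  # 3 chunks, starting at offsets 0, 450, 900
```

In practice you would chunk on sentence or token boundaries rather than raw characters, but the overlap principle is the same.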
Easily provision an MLOps stack or reuse your existing infrastructure
The framework is a gentle entry point for practitioners to build complex ML pipelines without needing deep knowledge of the underlying infrastructure. ZenML pipelines can run on AWS, GCP, Azure, Airflow, Kubeflow, and even plain Kubernetes without any code changes or knowledge of the internals.
ZenML also provides features to help you get started quickly in a remote setting. If you want to deploy a remote stack from scratch on your chosen cloud provider, you can use the 1-click deployment feature, either through the dashboard:
Or, through our CLI command:
zenml stack deploy --provider aws
Alternatively, if the necessary pieces of infrastructure are already deployed, you can register a cloud stack seamlessly through the stack wizard:
zenml stack register <STACK_NAME> --provider aws
Read more about ZenML stacks.
Run workloads easily on your production infrastructure
Once you have your MLOps stack configured, you can easily run workloads on it:
zenml stack set <STACK_NAME>
python run.py
from zenml import step
from zenml.config import ResourceSettings, DockerSettings
@step(
settings={
"resources": ResourceSettings(memory="16GB", gpu_count="1", cpu_count="8"),
"docker": DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime")
}
)
def training(...):
...
Track models, pipelines, and artifacts
Create a complete lineage of who produced which data and models, where, and when.
You'll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.
from zenml import Model
@step(model=Model(name="rag_llm", tags=["staging"]))
def deploy_rag(index_id: str) -> str:
deployment_id = deploy_to_endpoint(index_id)
return deployment_id
Key LLMOps Capabilities
Continual RAG Improvement
Build production-ready retrieval systems

ZenML tracks document ingestion, embedding versions, and query patterns. Implement feedback loops and:
- Fix your RAG logic based on production logs
- Automatically re-ingest updated documents
- A/B test different embedding models
- Monitor retrieval quality metrics
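The evaluation side of this feedback loop can be prototyped in plain Python before wiring it into a pipeline. Everything below (the toy `embed_a` embedding, the Jaccard `similarity`, the `recall_at_k` metric) is a hypothetical sketch, not a ZenML API:

```python
def embed_a(text: str) -> set[str]:
    # Toy "embedding": a bag of lowercased words.
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap between two bag-of-words "embeddings".
    return len(a & b) / len(a | b) if a | b else 0.0

def recall_at_k(embed, queries, relevant_docs, corpus, k=1) -> float:
    # Fraction of queries whose relevant doc appears in the top-k results.
    hits = 0
    for query, rel in zip(queries, relevant_docs):
        q = embed(query)
        ranked = sorted(corpus, key=lambda d: similarity(q, embed(d)), reverse=True)
        hits += rel in ranked[:k]
    return hits / len(queries)

corpus = ["zenml tracks pipelines", "bananas are yellow"]
queries = ["how does zenml track pipelines"]
print(recall_at_k(embed_a, queries, ["zenml tracks pipelines"], corpus))  # 1.0
```

Running the same metric over each candidate embedding model gives the comparison an A/B test needs.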
Reproducible Model Fine-Tuning
Confidence in model updates

Maintain full lineage of SLM/LLM training runs:
- Version training data and hyperparameters
- Track performance across iterations
- Automatically promote validated models
- Roll back to previous versions if needed
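The promote-and-rollback rules above can be modeled as operations on a version history. This `ModelRegistry` is purely illustrative, not ZenML's actual API:

```python
class ModelRegistry:
    """Toy registry: versions are append-only, one 'production' pointer."""

    def __init__(self):
        self.versions = []   # list of (version, metrics) tuples
        self.production = None

    def register(self, version: str, metrics: dict) -> None:
        self.versions.append((version, metrics))

    def promote_if_valid(self, version: str, min_accuracy: float = 0.9) -> str:
        # Promote only when the version's tracked accuracy clears the bar.
        metrics = dict(self.versions)[version]
        if metrics["accuracy"] >= min_accuracy:
            self.production = version
        return self.production

    def rollback(self) -> str:
        # Fall back to the previously registered version, if any.
        names = [v for v, _ in self.versions]
        idx = names.index(self.production)
        if idx > 0:
            self.production = names[idx - 1]
        return self.production

reg = ModelRegistry()
reg.register("v1", {"accuracy": 0.92})
reg.register("v2", {"accuracy": 0.95})
reg.promote_if_valid("v1")
reg.promote_if_valid("v2")
print(reg.production)  # v2
reg.rollback()
print(reg.production)  # v1
```

The key design choice is that promotion is gated on tracked metrics, so an underperforming version never reaches the production pointer.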
Purpose-built for machine learning, with integrations to your favorite tools
While ZenML brings a lot of value out of the box, it also integrates into your existing tooling and infrastructure without locking you in.
import mlflow
import pandas as pd
from bentoml._internal.bento import bento
from zenml import step

@step(on_failure=alert_slack, experiment_tracker="mlflow")
def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento:
mlflow.autolog()
...
return bento
Your LLM Framework Isn't Enough for Production
While tools like LangChain and LlamaIndex help you build LLM workflows, ZenML helps you productionize them by adding:
- Artifact Tracking - Every vector store index, fine-tuned model, and evaluation result versioned automatically
- Pipeline History - See exactly what code and data produced each version of your RAG system
- Stage Promotion - Move validated pipelines from staging to production with one click
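Automatic artifact versioning, as in the first point above, is commonly implemented with content addressing: identical bytes map to the same version identifier. A stdlib-only sketch with a hypothetical `ArtifactStore`:

```python
import hashlib

class ArtifactStore:
    """Toy content-addressed store: identical bytes get identical versions."""

    def __init__(self):
        self.objects = {}

    def put(self, name: str, data: bytes) -> str:
        # Version is derived from the content, so re-storing identical
        # bytes is deduplicated rather than creating a new version.
        version = hashlib.sha256(data).hexdigest()[:12]
        self.objects[(name, version)] = data
        return version

store = ArtifactStore()
v1 = store.put("vector_index", b"vectors-v1")
v2 = store.put("vector_index", b"vectors-v2")
v3 = store.put("vector_index", b"vectors-v1")  # same bytes, same version
print(v1 == v3, v1 == v2)  # True False
```

Content addressing makes "which exact index served this query?" answerable by version string alone.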
Learning
The best way to learn about ZenML is the docs. We recommend beginning with the Starter Guide to get up and running quickly.
If you are a visual learner, this 11-minute video tutorial is also a great start:
And finally, here are some other examples and use cases for inspiration:
- E2E Batch Inference: Feature engineering, training, and inference pipelines for tabular machine learning.
- Basic NLP with BERT: Feature engineering, training, and inference focused on NLP.
- LLM RAG Pipeline with Langchain and OpenAI: Using Langchain to create a simple RAG pipeline.
- Huggingface Model to Sagemaker Endpoint: Automated MLOps on Amazon SageMaker and Hugging Face.
- LLMOps: A complete guide to LLMOps with ZenML.
Learn from Books
ZenML is featured in these comprehensive guides to modern MLOps and LLM engineering. Learn how to build production-ready machine learning systems with real-world examples and best practices.
Deploy ZenML
For full functionality, ZenML should be deployed in the cloud, enabling collaborative features as the central MLOps interface for teams.
Read more about various deployment options here.
Or, sign up for ZenML Pro to get a fully managed server on a free trial.
Use ZenML with VS Code
ZenML has a VS Code extension that allows you to inspect your stacks and pipeline runs directly from your editor. The extension also allows you to switch your stacks without needing to type any CLI commands.
VS Code Extension in Action!

Roadmap
ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.
ZenML is managed by a core team of developers that are responsible for making key decisions and incorporating feedback from the community. The team oversees feedback via various channels, and you can directly influence the roadmap as follows:
- Vote on your most wanted feature on our Discussion board.
- Start a thread in our Slack channel.
- Create an issue on our GitHub repo.
Contributing and Community
We would love to develop ZenML together with our community! The best way to get
started is to select any issue from the [good-first-issue
label](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22)
and open up a Pull Request!
If you would like to contribute, please review our Contributing Guide for all relevant details.
Getting Help
The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond. Or, if you prefer, open an issue on our GitHub repo.
LLM-focused Learning Resources
- LLM Complete Guide - Full RAG Pipeline - Document ingestion, embedding management, and query serving
- LLM Fine-Tuning Pipeline - From data prep to deployed model
- LLM Agents Example - Track conversation quality and tool usage
AI-Friendly Documentation with llms.txt
ZenML implements the llms.txt standard to make our documentation more accessible to AI assistants and LLMs. Our implementation includes:
- Base documentation at zenml.io/llms.txt with core user guides
- Specialized files for different documentation aspects:
  - Component guides for integration details
  - How-to guides for practical implementations
  - Complete documentation corpus for comprehensive access
This structured approach helps AI tools better understand and utilize ZenML's documentation, enabling more accurate code suggestions and improved documentation search.
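Since llms.txt is plain markdown with link bullets, extracting the linked guides is straightforward. A minimal sketch; the sample content and `parse_links` helper below are illustrative, not taken from ZenML's actual file:

```python
import re

SAMPLE = """# ZenML
> Open-source MLOps framework.

## Docs
- [Starter Guide](https://docs.zenml.io/user-guides/starter-guide): Getting started
- [SDK Reference](https://sdkdocs.zenml.io): API docs
"""

def parse_links(llms_txt: str) -> list[tuple[str, str]]:
    # Pull (title, url) pairs out of markdown link bullets.
    return re.findall(r"- \[([^\]]+)\]\(([^)]+)\)", llms_txt)

links = parse_links(SAMPLE)
print(len(links))  # 2
```

An AI tool can feed the extracted URLs into its retrieval step instead of crawling the whole docs site.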
License
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.

Features · Roadmap · Report Bug · Sign up for ZenML Pro · Read Blog · Contribute to Open Source · Projects Showcase
Version 0.82.0 is out. Check out the release notes here.
Download our VS Code Extension here.