mosaicml 0.27.0
Composer is a PyTorch library that enables you to train neural networks faster, at lower cost, and to higher accuracy.
pip install mosaicml
Requires Python
>=3.9
Dependencies
- pyyaml <7,>=6.0
- tqdm <5,>=4.62.3
- torchmetrics <1.5.3,>=1.0
- torch-optimizer <0.4,>=0.3.0
- torchvision <0.20.2,>=0.18.0
- torch <2.5.2,>=2.3.0
- requests <3,>=2.26.0
- numpy <2.2.0,>=1.21.5
- psutil <7,>=5.8.0
- coolname <3,>=1.1.0
- tabulate ==0.9.0
- py-cpuinfo <10,>=8.0.0
- packaging <24.3,>=21.3.0
- importlib-metadata <9,>=5.0.0
- mosaicml-cli <0.7,>=0.5.25
- pillow <12,>=10.3.0
- huggingface-hub <0.27,>=0.21.2; extra == "all"
- pytest-httpserver <1.1,>=1.0.4; extra == "all"
- sentencepiece ==0.2.0; extra == "all"
- pydantic <2,>=1.0; extra == "all"
- comet-ml <4.0.0,>=3.31.12; extra == "all"
- myst-parser ==0.16.1; extra == "all"
- datasets <4,>=2.4; extra == "all"
- recommonmark ==0.7.1; extra == "all"
- peft <0.14,>=0.10.0; extra == "all"
- pytest-codeblocks ==0.17.0; extra == "all"
- onnx <2,>=1.12.0; extra == "all"
- sphinx-argparse ==0.4.0; extra == "all"
- ipython ==8.11.0; extra == "all"
- cryptography ==43.0.3; extra == "all"
- nbsphinx ==0.9.1; extra == "all"
- pandas <3.0,>=2.0.0; extra == "all"
- numpy <2; extra == "all"
- custom-inherit ==2.4.1; extra == "all"
- coverage[toml] ==7.6.4; extra == "all"
- GitPython ==3.1.43; extra == "all"
- sphinxemoji ==0.2.0; extra == "all"
- pynvml <12,>=11.5.0; extra == "all"
- mosaicml-streaming <1.0; extra == "all"
- sphinxcontrib-images ==0.9.4; extra == "all"
- sphinxcontrib-qthelp ==1.0.0; extra == "all"
- pre-commit <5,>=3.4.0; extra == "all"
- deepspeed ==0.8.3; extra == "all"
- junitparser ==3.1.2; extra == "all"
- protobuf <5.29; extra == "all"
- pycocotools <3,>=2.0.4; extra == "all"
- tensorboard <3.0.0,>=2.9.1; extra == "all"
- docutils ==0.17.1; extra == "all"
- py-cpuinfo <10,>=8.0.0; extra == "all"
- oci <3.0.0,>=2.88.2; extra == "all"
- wandb <0.19,>=0.13.2; extra == "all"
- pypandoc ==1.14; extra == "all"
- mlflow <3.0,>=2.14.1; extra == "all"
- sphinxcontrib-htmlhelp ==2.0.0; extra == "all"
- traitlets ==5.14.3; extra == "all"
- sphinxcontrib.katex ==0.9.10; extra == "all"
- pandoc ==2.4; extra == "all"
- paramiko <4,>=3.4.0; extra == "all"
- google-cloud-storage <3.0,>=2.0.0; extra == "all"
- slack-sdk <4,>=3.19.5; extra == "all"
- boto3 <2,>=1.21.45; extra == "all"
- sphinx-copybutton ==0.5.2; extra == "all"
- sphinxcontrib-serializinghtml ==1.1.5; extra == "all"
- sphinx ==4.4.0; extra == "all"
- sphinxext.opengraph ==0.9.1; extra == "all"
- setuptools <=59.5.0; extra == "all"
- mock-ssh-server ==0.9.1; extra == "all"
- transformers !=4.34.0,<4.46,>=4.11; extra == "all"
- sphinx-markdown-tables ==0.0.17; extra == "all"
- yamllint ==1.35.1; extra == "all"
- apache-libcloud <4,>=3.3.1; extra == "all"
- neptune <2.0.0,>=1.6.2; extra == "all"
- ipykernel ==6.29.5; extra == "all"
- pytest ==7.4.4; extra == "all"
- sphinxcontrib-applehelp ==1.0.0; extra == "all"
- sphinx-panels ==0.6.0; extra == "all"
- sphinxcontrib-devhelp ==1.0.0; extra == "all"
- onnxruntime <2,>=1.12.1; extra == "all"
- testbook ==0.4.2; extra == "all"
- jupyter ==1.1.1; extra == "all"
- fasteners ==0.18; extra == "all"
- databricks-sdk ==0.36.0; extra == "all"
- furo ==2022.9.29; extra == "all"
- moto[s3] <6,>=5.0.1; extra == "all"
- pycocotools <3,>=2.0.4; extra == "coco"
- comet-ml <4.0.0,>=3.31.12; extra == "comet-ml"
- databricks-sdk ==0.36.0; extra == "databricks"
- numpy <2; extra == "deepspeed"
- deepspeed ==0.8.3; extra == "deepspeed"
- pydantic <2,>=1.0; extra == "deepspeed"
- custom-inherit ==2.4.1; extra == "dev"
- junitparser ==3.1.2; extra == "dev"
- coverage[toml] ==7.6.4; extra == "dev"
- fasteners ==0.18; extra == "dev"
- pytest ==7.4.4; extra == "dev"
- ipython ==8.11.0; extra == "dev"
- ipykernel ==6.29.5; extra == "dev"
- jupyter ==1.1.1; extra == "dev"
- yamllint ==1.35.1; extra == "dev"
- recommonmark ==0.7.1; extra == "dev"
- sphinx ==4.4.0; extra == "dev"
- pre-commit <5,>=3.4.0; extra == "dev"
- docutils ==0.17.1; extra == "dev"
- sphinx-markdown-tables ==0.0.17; extra == "dev"
- sphinx-argparse ==0.4.0; extra == "dev"
- sphinxcontrib.katex ==0.9.10; extra == "dev"
- sphinxcontrib-applehelp ==1.0.0; extra == "dev"
- sphinxcontrib-devhelp ==1.0.0; extra == "dev"
- sphinxcontrib-htmlhelp ==2.0.0; extra == "dev"
- sphinxcontrib-serializinghtml ==1.1.5; extra == "dev"
- sphinxcontrib-qthelp ==1.0.0; extra == "dev"
- sphinxext.opengraph ==0.9.1; extra == "dev"
- sphinxemoji ==0.2.0; extra == "dev"
- furo ==2022.9.29; extra == "dev"
- sphinx-copybutton ==0.5.2; extra == "dev"
- testbook ==0.4.2; extra == "dev"
- myst-parser ==0.16.1; extra == "dev"
- sphinx-panels ==0.6.0; extra == "dev"
- sphinxcontrib-images ==0.9.4; extra == "dev"
- pytest-codeblocks ==0.17.0; extra == "dev"
- traitlets ==5.14.3; extra == "dev"
- nbsphinx ==0.9.1; extra == "dev"
- pandoc ==2.4; extra == "dev"
- pypandoc ==1.14; extra == "dev"
- GitPython ==3.1.43; extra == "dev"
- moto[s3] <6,>=5.0.1; extra == "dev"
- mock-ssh-server ==0.9.1; extra == "dev"
- cryptography ==43.0.3; extra == "dev"
- pytest-httpserver <1.1,>=1.0.4; extra == "dev"
- setuptools <=59.5.0; extra == "dev"
- google-cloud-storage <3.0,>=2.0.0; extra == "gcs"
- apache-libcloud <4,>=3.3.1; extra == "libcloud"
- mlflow <3.0,>=2.14.1; extra == "mlflow"
- databricks-sdk ==0.36.0; extra == "mlflow"
- pynvml <12,>=11.5.0; extra == "mlflow"
- py-cpuinfo <10,>=8.0.0; extra == "mlperf"
- neptune <2.0.0,>=1.6.2; extra == "neptune"
- transformers !=4.34.0,<4.46,>=4.11; extra == "nlp"
- datasets <4,>=2.4; extra == "nlp"
- huggingface-hub <0.27,>=0.21.2; extra == "nlp"
- oci <3.0.0,>=2.88.2; extra == "oci"
- onnx <2,>=1.12.0; extra == "onnx"
- onnxruntime <2,>=1.12.1; extra == "onnx"
- pandas <3.0,>=2.0.0; extra == "pandas"
- peft <0.14,>=0.10.0; extra == "peft"
- protobuf <5.29; extra == "sentencepiece"
- sentencepiece ==0.2.0; extra == "sentencepiece"
- slack-sdk <4,>=3.19.5; extra == "slack"
- mosaicml-streaming <1.0; extra == "streaming"
- boto3 <2,>=1.21.45; extra == "streaming"
- paramiko <4,>=3.4.0; extra == "streaming"
- pynvml <12,>=11.5.0; extra == "system-metrics-monitor"
- tensorboard <3.0.0,>=2.9.1; extra == "tensorboard"
- wandb <0.19,>=0.13.2; extra == "wandb"
Supercharge your Model Training
Deep Learning Framework for Training at Scale
[Website] - [Getting Started] - [Docs] - [We're Hiring!]
👋 Welcome
Composer is an open-source deep learning training library by MosaicML. Built on top of PyTorch, the Composer library makes it easier to implement distributed training workflows on large-scale clusters.
We built Composer to be optimized for scalability and usability, integrating best practices for efficient, multi-node training. By abstracting away low-level complexities like parallelism techniques, distributed data loading, and memory optimization, you can focus on training modern ML models and running experiments without slowing down.
We recommend using Composer to speed up your experimentation workflow if you’re training neural networks of any size, including:
- Large Language Models (LLMs)
- Diffusion models
- Embedding models (e.g. BERT)
- Transformer-based models
- Convolutional Neural Networks (CNNs)
Composer is heavily used by the MosaicML research team to train state-of-the-art models like MPT, and we open-sourced this library to enable the ML community to do the same. This framework is used by organizations in both the tech industry and the academic sphere and is continually updated with new features, bug fixes, and stability improvements for production workloads.
🔑 Key Features
We designed Composer from the ground up for modern deep learning workloads. Gone are the days of AlexNet and ResNet, when state-of-the-art models could be trained on a couple of desktop GPUs. Today, developing the latest and greatest deep learning models often requires cluster-scale hardware — but with Composer’s help, you’ll hardly notice the difference.
The heart of Composer is our Trainer abstraction: a highly optimized PyTorch training loop designed to allow both you and your model to iterate faster. Our trainer has simple ways for you to configure your parallelization scheme, data loaders, metrics, loggers, and more.
Scalability
Whether you’re training on 1 GPU or 512 GPUs, 50MB or 10TB of data - Composer is built to keep your workflow simple.
- FSDP: For models too large to fit on a single GPU, Composer integrates PyTorch FullyShardedDataParallel (FSDP) into our trainer and makes it simple to efficiently parallelize custom models (see the sketch after this list). We’ve found FSDP is competitive performance-wise with much more complex parallelism strategies. Alternatively, Composer also supports standard PyTorch distributed data parallelism (DDP) and DeepSpeed execution.
- Elastic sharded checkpointing: Save on eight GPUs, resume on sixteen. Composer supports elastic sharded checkpointing, so you never have to worry if your sharded saved state is compatible with your new hardware setup.
- Data streaming: Working with large datasets? Download datasets from cloud blob storage on the fly by integrating with MosaicML StreamingDataset during model training.
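For illustration, here is a minimal sketch of enabling FSDP through the Trainer. The parallelism_config keys shown are assumptions based on recent Composer releases and may differ in your version; MyLargeModel and train_dataloader are placeholders.

from composer import Trainer
from composer.models import ComposerClassifier

# Minimal FSDP sketch (assumed config keys; consult the docs for your Composer version).
trainer = Trainer(
    model=ComposerClassifier(module=MyLargeModel(), num_classes=1000),  # placeholder model
    train_dataloader=train_dataloader,                                  # placeholder dataloader
    max_duration="1ep",
    parallelism_config={
        "fsdp": {
            "sharding_strategy": "FULL_SHARD",  # shard parameters, gradients, and optimizer state
            "mixed_precision": "DEFAULT",       # one of Composer's preset mixed-precision modes
        },
    },
)
trainer.fit()
# Launch across GPUs with the composer launcher, e.g.: composer -n 8 train.py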
Customizability
Other high-level deep learning trainers provide simplicity at the cost of rigidity. When you want to add your own features, their abstractions get in your way. Composer, on the other hand, provides simple ways for you to customize our Trainer to your needs.
Fig. 1: Composer’s training loop has a series of events that occur at each stage in the training process. Callbacks are functions that users write to run at specific events. For example, our Learning Rate Monitor Callback logs the learning rate at every BATCH_END event.
- Callbacks: Composer’s callback system allows you to insert custom logic at any point in the training loop. We’ve written callbacks to monitor memory usage, log and visualize images, and estimate your model’s remaining training time, to name a few. This feature is popular among researchers who want to implement and experiment with custom training techniques; a minimal example follows this list.
- Speedup algorithms: We draw from the latest research to create a collection of algorithmic speedups. Stack these speedups into MosaicML recipes to boost your training speeds. Our team has open-sourced the optimal combinations of speedups for different types of models.
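As a minimal illustration of the callback system, the hypothetical callback below logs how many training samples have been seen at the end of each epoch. The class and metric names are our own, but the imports and hook signature follow the documented Callback interface.

from composer.core import Callback, State
from composer.loggers import Logger

class SampleCountMonitor(Callback):
    """Hypothetical callback: log the number of samples seen after each epoch."""

    def epoch_end(self, state: State, logger: Logger) -> None:
        # Each Callback method is named after the training-loop event it hooks (here, EPOCH_END).
        logger.log_metrics({"samples_seen": state.timestamp.sample.value})

# Attach it like any other callback:
# trainer = Trainer(model=model, train_dataloader=train_dataloader, callbacks=[SampleCountMonitor()])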
Better workflows
Composer is built to automate away low-level pain points and headaches so you can focus on the important (and fun) parts of deep learning and iterate faster.
- Auto-resumption: Failed training run? Have no fear — just re-run your code, and Composer will automatically resume from your latest saved checkpoint.
- CUDA OOM Prevention: Say goodbye to out-of-memory errors. Set your microbatch size to “auto”, and Composer will automatically select the biggest one that fits on your GPUs.
- Time Abstractions: Ever messed up your conversion between update steps, epochs, samples, and tokens? Specify your training duration with custom units (epochs, batches, samples, and tokens) in your training loop with our Time class.
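Here is a minimal sketch combining these three conveniences in a single Trainer call; the argument names follow the Trainer API documented for recent Composer releases, and model and train_dataloader are placeholders (see the Quick Start below).

from composer import Trainer

trainer = Trainer(
    model=model,                           # a ComposerModel, as in the Quick Start below
    train_dataloader=train_dataloader,
    max_duration="10ep",                   # durations accept epochs ("ep"), batches ("ba"), samples ("sp"), or tokens ("tok")
    device_train_microbatch_size="auto",   # let Composer find the largest microbatch that fits in GPU memory
    save_folder="./checkpoints",
    autoresume=True,                       # re-running this script resumes from the latest checkpoint
    run_name="my-run",                     # autoresume needs a stable run name to find its checkpoints
)
trainer.fit()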
Integrations
Integrate with the tools you know and love for experiment tracking and data streaming.
- Cloud integrations: Our checkpointing and logging features have first-class support for saving to and loading from cloud buckets (OCI, GCP, AWS S3).
- Experiment tracking: Weights and Biases, MLflow, CometML, and neptune.ai — the choice is yours; easily log your data to your favorite platform.
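As a hedged sketch, assuming an S3 bucket and a Weights & Biases account (the bucket path and project name below are made up), remote checkpointing and experiment tracking can be configured together on the Trainer:

from composer import Trainer
from composer.loggers import WandBLogger

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="1ep",
    save_folder="s3://my-bucket/checkpoints/{run_name}",  # hypothetical bucket; checkpoints upload as they are saved
    loggers=[WandBLogger(project="my-project")],          # swap in MLFlowLogger, CometMLLogger, or NeptuneLogger as preferred
)
trainer.fit()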
🚀 Getting Started
📍Prerequisites
Composer is designed for users who are comfortable with Python and have basic familiarity with deep learning fundamentals and PyTorch.
Software requirements: A recent version of PyTorch.
Hardware requirements: A system with CUDA-compatible GPUs (AMD + ROCm coming soon!). Composer can run on CPUs, but for full benefits, we recommend using it on hardware accelerators.
💾 Installation
Composer can be installed with pip:
pip install mosaicml
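Optional integrations are published as pip extras (see the dependency list above); for example, installing with the wandb extra also pulls in the Weights & Biases logger dependencies:
pip install 'mosaicml[wandb]'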
To simplify environment setup for Composer, we also provide a set of pre-built Docker images, which we highly recommend using.
🏁 Quick Start
Here is a code snippet demonstrating our Trainer on the MNIST dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

from composer import Trainer
from composer.models import ComposerClassifier
from composer.algorithms import LabelSmoothing, CutMix, ChannelsLast


class Model(nn.Module):
    """Toy convolutional neural network architecture in pytorch for MNIST."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.num_classes = num_classes
        self.conv1 = nn.Conv2d(1, 16, (3, 3), padding=0)
        self.conv2 = nn.Conv2d(16, 32, (3, 3), padding=0)
        self.bn = nn.BatchNorm2d(32)
        self.fc1 = nn.Linear(32 * 16, 32)
        self.fc2 = nn.Linear(32, num_classes)

    def forward(self, x):
        out = self.conv1(x)
        out = F.relu(out)
        out = self.conv2(out)
        out = self.bn(out)
        out = F.relu(out)
        out = F.adaptive_avg_pool2d(out, (4, 4))
        out = torch.flatten(out, 1, -1)
        out = self.fc1(out)
        out = F.relu(out)
        return self.fc2(out)


transform = transforms.Compose([transforms.ToTensor()])
dataset = datasets.MNIST("data", train=True, download=True, transform=transform)
train_dataloader = DataLoader(dataset, batch_size=128)

trainer = Trainer(
    model=ComposerClassifier(module=Model(), num_classes=10),
    train_dataloader=train_dataloader,
    max_duration="2ep",
    algorithms=[
        LabelSmoothing(smoothing=0.1),
        CutMix(alpha=1.0),
        ChannelsLast(),
    ],
)
trainer.fit()
Next, check out our Getting Started Colab for a walk-through of Composer’s main features. In this tutorial, we will cover the basics of the Composer Trainer:
- Dataloader
- Trainer
- Optimizer and Scheduler
- Logging
- Training a baseline model
- Speeding up training
📚 Learn more
Once you’ve completed the Quick Start, you can go through the below tutorials or our documentation to further familiarize yourself with Composer.
If you have any questions, please feel free to reach out to us on our Community Slack!
Here are some resources actively maintained by the Composer community to help you get started:
| Resource | Details |
| --- | --- |
| Training BERTs with Composer and 🤗 | A Colab Notebook showing how to train BERT models with Composer and 🤗! |
| Pretraining and Finetuning an LLM Tutorial | A tutorial from MosaicML’s LLM Foundry, using MosaicML Composer, StreamingDataset, and MCLI for training and evaluating LLMs. |
| Migrating from PyTorch Lightning | A tutorial illustrating a path from working in PyTorch Lightning to working in Composer. |
| Finetuning and Pretraining HuggingFace Models | Want to use Hugging Face models with Composer? No problem. Here, we’ll walk through using Composer to fine-tune a pretrained Hugging Face BERT model. |
| Building Speedup Methods | A Colab Notebook showing how to build new training modifications on top of Composer. |
🛠️ For Best Results, Use within the Databricks & MosaicML Ecosystem
Composer can be used on its own, but for the smoothest experience we recommend using it in combination with other components of the MosaicML ecosystem:
- Mosaic AI training (MCLI) - Our proprietary Command Line Interface (CLI) and Python SDK for orchestrating, scaling, and monitoring the GPU nodes and container images executing training and deployment. Used by our customers for training their own Generative AI models.
- To get started, reach out here and check out our Training product pages
- MosaicML LLM Foundry - This open source repository contains code for training, finetuning, evaluating, and preparing LLMs for inference with Composer. Designed to be easy to use, efficient, and flexible, this codebase enables rapid experimentation with the latest techniques.
- MosaicML StreamingDataset - Open-source library for fast, accurate streaming from cloud storage.
- MosaicML Diffusion - Open-source code to train your own Stable Diffusion model on your own data. Learn more via our blogs: (Results, Speedup Details)
🏆 Project Showcase
Here are some projects and experiments that used Composer. Got something to add? Share in our Community Slack!
- MPT Foundation Series: Commercially usable open source LLMs, optimized for fast training and inference and trained with Composer.
- Mosaic Diffusion Models: see how we trained a stable diffusion model from scratch for <$50k
- replit-code-v1-3b: A 2.7B Causal Language Model focused on Code Completion, trained by Replit on Mosaic AI training in 10 days.
- BabyLLM: the first LLM to support both Arabic and English. This 7B model was trained by MetaDialog on the world’s largest Arabic/English dataset to improve customer support workflows (Blog)
- BioMedLM: a domain-specific LLM for Bio Medicine built by MosaicML and Stanford CRFM
💫 Contributors
Composer is part of the broader Machine Learning community, and we welcome any contributions, pull requests, or issues!
To start contributing, see our Contributing page.
P.S.: We're hiring!
❓FAQ
- What is the best tech stack you recommend when training large models?
  - We recommend that users combine components of the MosaicML ecosystem for the smoothest experience:
    - Composer
    - StreamingDataset
    - MCLI (Databricks Mosaic AI Training)
- How can I get community support for using Composer?
  - You can join our Community Slack!
- How does Composer compare to other trainers like NeMo Megatron and PyTorch Lightning?
  - We built Composer to be optimized for both simplicity and efficiency. Community users have shared that they enjoy Composer for its capabilities and ease of use compared to alternative libraries.
- How do I use Composer to train graph neural networks (GNNs), Generative Adversarial Networks (GANs), or models for reinforcement learning (RL)?
  - We recommend you use alternative libraries if you want to train these types of models; many of the assumptions we made when designing Composer are suboptimal for GNNs, RL, and GANs.
✍️ Citation
@misc{mosaicml2022composer,
    author = {The Mosaic ML Team},
    title = {composer},
    year = {2021},
    howpublished = {\url{https://github.com/mosaicml/composer/}},
}