chromadb 1.3.4
Chroma.
pip install chromadb
Requires Python: >=3.9
Dependencies
- build >=1.0.3
- pydantic >=1.9
- pybase64 >=1.4.1
- uvicorn[standard] >=0.18.3
- numpy >=1.22.5
- posthog <6.0.0,>=2.4.0
- typing-extensions >=4.5.0
- onnxruntime >=1.14.1
- opentelemetry-api >=1.2.0
- opentelemetry-exporter-otlp-proto-grpc >=1.2.0
- opentelemetry-sdk >=1.2.0
- tokenizers >=0.13.2
- pypika >=0.48.9
- tqdm >=4.65.0
- overrides >=7.3.1
- importlib-resources
- graphlib-backport >=1.0.3; python_full_version < "3.9"
- grpcio >=1.58.0
- bcrypt >=4.0.1
- typer >=0.9.0
- kubernetes >=28.1.0
- tenacity >=8.2.3
- pyyaml >=6.0.0
- mmh3 >=4.0.1
- orjson >=3.9.12
- httpx >=0.27.0
- rich >=10.11.0
- jsonschema >=4.19.0
- chroma-hnswlib ==0.7.6; extra == "dev"
- fastapi >=0.115.9; extra == "dev"
- opentelemetry-instrumentation-fastapi >=0.41b0; extra == "dev"
Chroma - the open-source embedding database.
The fastest way to build Python or JavaScript LLM apps with memory!
pip install chromadb # python client
# for javascript, npm install chromadb!
# for client-server mode, chroma run --path /chroma_db_path
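If you run Chroma in client-server mode with `chroma run`, the Python client can connect to it over HTTP. A minimal sketch, assuming the server is listening on its default localhost:8000:

import chromadb

# Assumes a server started with: chroma run --path /chroma_db_path
# and listening on the default host and port.
client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())  # returns a timestamp if the server is reachable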
Chroma Cloud
Our hosted service, Chroma Cloud, powers serverless vector and full-text search. It's extremely fast, cost-effective, scalable and painless. Create a DB and try it out in under 30 seconds with $5 of free credits.
API
The core API is only 4 functions (run our 💡 Google Colab):
import chromadb

# setup Chroma in-memory, for easy prototyping. Can add persistence easily!
client = chromadb.Client()

# Create collection. get_collection, get_or_create_collection, delete_collection also available!
collection = client.create_collection("all-my-documents")

# Add docs to the collection. Can also update and delete. Row-based API coming soon!
collection.add(
    documents=["This is document1", "This is document2"],  # we handle tokenization, embedding, and indexing automatically. You can skip that and add your own embeddings as well
    metadatas=[{"source": "notion"}, {"source": "google-docs"}],  # filter on these!
    ids=["doc1", "doc2"],  # unique for each doc
)

# Query/search 2 most similar results. You can also .get by id
results = collection.query(
    query_texts=["This is a query document"],
    n_results=2,
    # where={"metadata_field": "is_equal_to_this"},  # optional filter
    # where_document={"$contains": "search_string"}  # optional filter
)
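The in-memory client above forgets everything when the process exits. A minimal sketch of the persistence mentioned in the comments, assuming a local directory path of your choosing (`./chroma_db` here is just an example):

import chromadb

# Store data on disk so collections survive restarts.
client = chromadb.PersistentClient(path="./chroma_db")

# Reopen (or create) the same collection across runs.
collection = client.get_or_create_collection("all-my-documents")

# Fetch records directly by id instead of by similarity.
print(collection.get(ids=["doc1", "doc2"]))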
Learn about all features on our Docs
Features
- Simple: Fully-typed, fully-tested, fully-documented == happiness
- Integrations: 🦜️🔗 LangChain (Python and JS), 🦙 LlamaIndex, and more soon
- Dev, Test, Prod: the same API that runs in your Python notebook scales to your cluster
- Feature-rich: Queries, filtering, regex and more (see the filtering sketch after this list)
- Free & Open Source: Apache 2.0 Licensed
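To illustrate the filtering mentioned above, here is a short sketch against the collection from the API example; the field names and filter values are illustrative, not required:

# Combine a metadata filter with a full-text filter in one query.
results = collection.query(
    query_texts=["documents from notion"],
    n_results=2,
    where={"source": {"$eq": "notion"}},       # metadata filter
    where_document={"$contains": "document"},  # full-text filter
)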
Use case: ChatGPT for ______
For example, the "Chat your data" use case:
- Add documents to your database. You can pass in your own embeddings, embedding function, or let Chroma embed them for you.
- Query relevant documents with natural language.
- Compose documents into the context window of an LLM like GPT4 for additional summarization or analysis (a rough sketch follows this list).
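A rough sketch of that flow, reusing the collection from the API example above; the prompt format and the final LLM call are placeholders rather than part of Chroma:

question = "What do my notes say about project deadlines?"

# 1. Add documents (done earlier) and retrieve the most relevant ones.
results = collection.query(query_texts=[question], n_results=2)
retrieved_docs = results["documents"][0]  # documents for the first (and only) query

# 2. Compose them into the context window of an LLM.
prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n".join(retrieved_docs) + "\n\n"
    "Question: " + question
)

# 3. Send `prompt` to the LLM of your choice (e.g. GPT4) for summarization or analysis.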
Embeddings?
What are embeddings?
- Read the guide from OpenAI
- Literal: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => [1.2, 2.1, ....]. This process makes documents "understandable" to a machine learning model.
- By analogy: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
- Technical: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
- A small example: Say you search your photos for "famous bridge in San Francisco". By embedding this query and comparing it to the embeddings of your photos and their metadata, it should return photos of the Golden Gate Bridge.
Embeddings databases (also known as vector databases) store embeddings and allow you to search by nearest neighbors rather than by substrings like a traditional database. By default, Chroma uses Sentence Transformers to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
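If you would rather bring your own embeddings, you can pass precomputed vectors directly instead of documents only. A minimal sketch with toy 3-dimensional vectors (a real model produces hundreds of dimensions); the collection name is just an example:

# A separate collection, so the toy vectors don't clash with the
# dimensionality of the default embedding model used earlier.
byo = client.create_collection("my-own-embeddings")

byo.add(
    ids=["doc1"],
    documents=["This is document1"],
    embeddings=[[0.1, 0.25, 0.9]],  # replace with your own model's output
)

# Query with a precomputed vector as well.
results = byo.query(query_embeddings=[[0.1, 0.2, 0.8]], n_results=1)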
Get involved
Chroma is a rapidly developing project. We welcome PR contributors and ideas for how to improve the project.
- Join the conversation on Discord - the #contributing channel
- Review the 🛣️ Roadmap and contribute your ideas
- Grab an issue and open a PR - see the Good first issue tag
- Read our contributing guide
Release Cadence
We currently release new tagged versions of the PyPI and npm packages on Mondays. Hotfixes go out at any time during the week.
