arize-phoenix-otel

Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults. Phoenix OTEL also gives you access to tracing decorators for common GenAI patterns.

Features

arize-phoenix-otel simplifies OpenTelemetry configuration for Phoenix users by providing:

  • Phoenix-aware defaults for common OpenTelemetry primitives
  • Automatic configuration from environment variables
  • Drop-in replacements for OTel classes with enhanced functionality
  • Simplified tracing setup with the register() function
  • Tracing decorators for GenAI patterns

Key Benefits

  • Zero Code Changes: Enable auto_instrument=True to automatically instrument AI libraries
  • Production Ready: Built-in batching and authentication
  • Phoenix Integration: Seamless integration with Phoenix Cloud and self-hosted instances
  • OpenTelemetry Compatible: Works with existing OpenTelemetry infrastructure

These defaults are aware of environment variables you may have set to configure Phoenix:

  • PHOENIX_COLLECTOR_ENDPOINT
  • PHOENIX_PROJECT_NAME
  • PHOENIX_CLIENT_HEADERS
  • PHOENIX_API_KEY
  • PHOENIX_GRPC_PORT
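The resolution order is: an explicit argument to register() wins over the environment variable, which wins over the local default. The hypothetical helper below (resolve_endpoint is not part of the package; the exact defaults are internal to register()) sketches that precedence for the collector endpoint:

```python
import os

def resolve_endpoint(explicit=None):
    """Illustrative only: pick the collector endpoint the way register()
    resolves it -- explicit argument > PHOENIX_COLLECTOR_ENDPOINT > local default."""
    return (
        explicit
        or os.environ.get("PHOENIX_COLLECTOR_ENDPOINT")
        or "http://localhost:6006"
    )

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com/s/my-space"
print(resolve_endpoint())  # env var wins when no explicit endpoint is passed
print(resolve_endpoint("http://otel-collector:4318"))  # explicit argument wins
```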

Installation

Install via pip:

pip install arize-phoenix-otel

Quick Start

Recommended: Enable automatic instrumentation to trace your AI libraries with zero code changes:

from phoenix.otel import register

# Recommended: Automatic instrumentation + production settings
tracer_provider = register(
    auto_instrument=True,  # Auto-trace OpenAI, LangChain, LlamaIndex, etc.
    batch=True,           # Production-ready batching
    project_name="my-app" # Organize your traces
)

That's it! Any AI library with a matching OpenInference instrumentation package installed is now automatically traced, and its spans are sent to Phoenix.

Note: auto_instrument=True only works if the corresponding OpenInference instrumentation libraries are installed. For example, to automatically trace OpenAI calls, you need openinference-instrumentation-openai installed:

pip install openinference-instrumentation-openai
pip install openinference-instrumentation-langchain  # For LangChain
pip install openinference-instrumentation-llama-index  # For LlamaIndex

See the OpenInference repository for the complete list of available instrumentation packages.

Authentication

export PHOENIX_API_KEY="your-api-key"

Or pass the key directly to register():

tracer_provider = register(api_key="your-api-key")

Endpoint Configuration

Configure where to send your traces:

Environment Variables (Recommended):

export PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com/s/your-space"
export PHOENIX_PROJECT_NAME="my-project"

Direct Configuration:

tracer_provider = register(
    endpoint="http://localhost:6006/v1/traces"  # OTLP over HTTP
)

# Or force the gRPC protocol (point the endpoint at the gRPC port):
tracer_provider = register(
    endpoint="http://localhost:4317",
    protocol="grpc"
)

Usage Examples

Simple Setup

from phoenix.otel import register

# Basic setup - sends to localhost
tracer_provider = register(auto_instrument=True)

Production Configuration

tracer_provider = register(
    project_name="my-production-app",
    auto_instrument=True,      # Auto-trace AI/ML libraries
    batch=True,               # Background batching for performance
    api_key="your-api-key",   # Authentication
    endpoint="https://app.phoenix.arize.com/s/your-space"
)

Manual Configuration

For advanced use cases, use Phoenix OTEL components directly:

from phoenix.otel import TracerProvider, BatchSpanProcessor, HTTPSpanExporter

tracer_provider = TracerProvider()
exporter = HTTPSpanExporter(endpoint="http://localhost:6006/v1/traces")
processor = BatchSpanProcessor(span_exporter=exporter)
tracer_provider.add_span_processor(processor)

Using Decorators

from phoenix.otel import register

tracer_provider = register()

# Get a tracer for manual instrumentation
tracer = tracer_provider.get_tracer(__name__)

@tracer.chain
def process_data(data):
    return data + " processed"

@tracer.tool
def weather(location):
    return "sunny"
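Each decorator wraps the function call in a span of the corresponding OpenInference kind (CHAIN, TOOL) and records its inputs and outputs. As a rough, Phoenix-free illustration of the idea (a toy stand-in, not the actual implementation), a chain-style decorator could look like:

```python
import functools
import time

def chain(fn):
    """Toy stand-in for tracer.chain: wrap the call in a 'span' that
    records the function name, kind, and duration (here just printed)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"span name={fn.__name__} kind=CHAIN duration={elapsed:.6f}s")
        return result
    return wrapper

@chain
def process_data(data):
    return data + " processed"

print(process_data("raw"))  # returns the value unchanged, plus a span line
```

The real decorators do the same wrapping but create genuine OpenTelemetry spans that flow through the registered tracer provider to Phoenix.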

Environment Variables

  • PHOENIX_COLLECTOR_ENDPOINT: where to send traces, e.g. https://app.phoenix.arize.com/s/your-space
  • PHOENIX_PROJECT_NAME: project name, e.g. my-llm-app
  • PHOENIX_API_KEY: authentication key, e.g. your-api-key
  • PHOENIX_CLIENT_HEADERS: custom headers, e.g. Authorization=Bearer token
  • PHOENIX_GRPC_PORT: gRPC port override, e.g. 4317

Documentation

Community

Join our community to connect with thousands of AI builders:

  • 🌍 Join our Slack community.
  • πŸ’‘ Ask questions and provide feedback in the #phoenix-support channel.
  • 🌟 Leave a star on our GitHub.
  • 🐞 Report bugs with GitHub Issues.
  • 𝕏 Follow us on 𝕏.
  • πŸ—ΊοΈ Check out our roadmap to see where we're heading next.