OpenInference LangChain Instrumentation
pip install openinference-instrumentation-langchain
Requires Python: <3.15,>=3.10
Dependencies
- openinference-instrumentation>=0.1.27
- openinference-semantic-conventions>=0.1.17
- opentelemetry-api
- opentelemetry-instrumentation
- opentelemetry-semantic-conventions
- wrapt
- langchain-core>=0.3.9; extra == "instruments"
- langchain-classic>=1.0.0; extra == "test"
- langchain-community>=0.4.0; extra == "test"
- langchain-core<2.0.0,>=1.0.0; extra == "test"
- langchain-google-vertexai>=2.0.12; extra == "test"
- langchain-openai>=0.2.14; extra == "test"
- langchain>=1.0.0; extra == "test"
- langsmith; extra == "test"
- opentelemetry-sdk; extra == "test"
- portpicker; extra == "test"
- pytest-recording; extra == "test"
- pytest-rerunfailures; extra == "test"
- respx; extra == "test"
- starlette; extra == "test"
- uvicorn; extra == "test"
- vcrpy>=6.0.1; extra == "test"
- langchain-core==0.3.9; extra == "type-check"
OpenInference LangChain Instrumentation
Python auto-instrumentation library for LangChain.
The traces it emits are fully OpenTelemetry compatible and can be sent to an OpenTelemetry collector for viewing, such as arize-phoenix.
Compatibility
This instrumentation works with:
- LangChain 1.x (langchain>=1.0.0): the modern agent framework built on LangGraph
- LangChain Classic (langchain-classic>=1.0.0): legacy chains and tools (formerly langchain 0.x)
- All LangChain partner packages (langchain-openai, langchain-anthropic, langchain-google-vertexai, etc.)
The instrumentation hooks into langchain-core, which is the shared foundation used by all LangChain packages.
Installation
For LangChain 1.x (Recommended for New Projects)
pip install openinference-instrumentation-langchain langchain langchain-openai
For LangChain Classic (Legacy Applications)
pip install openinference-instrumentation-langchain langchain-classic langchain-openai
For Both (Migration Scenarios)
pip install openinference-instrumentation-langchain langchain langchain-classic langchain-openai
Quickstart
Example with LangChain 1.x (New Agent Framework)
Install packages needed for this demonstration.
pip install openinference-instrumentation-langchain langchain langchain-openai arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp
Start the Phoenix app in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address.
The Phoenix app does not send data over the internet. It only operates locally on your machine.
python -m phoenix.server.main serve
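Alternatively, Phoenix can be launched from within a Python session instead of as a separate process. This is a minimal sketch using the arize-phoenix launch_app API; confirm the call against the Phoenix version you have installed.
import phoenix as px

# Start Phoenix in the current process; the returned session object
# reports the local URL where the app is being served.
session = px.launch_app()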
The following Python code sets up the LangChainInstrumentor to trace langchain and send the traces to Phoenix at the endpoint shown below.
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
trace_api.set_tracer_provider(tracer_provider)
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
LangChainInstrumentor().instrument()
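If you prefer not to set the global tracer provider, the instrumentor can also be given an explicit tracer provider, following the standard OpenTelemetry BaseInstrumentor pattern. A minimal sketch:
from openinference.instrumentation.langchain import LangChainInstrumentor
from opentelemetry.sdk import trace as trace_sdk

# Pass the provider directly instead of relying on the global one.
tracer_provider = trace_sdk.TracerProvider()
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)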
To demonstrate tracing, we'll create a simple agent. First, configure your OpenAI credentials.
import os
os.environ["OPENAI_API_KEY"] = "<your openai key>"
Now we can create an agent and run it.
def get_weather(city: str) -> str:
"""Get the weather for a city."""
return f"The weather in {city} is sunny!"
model = ChatOpenAI(model="gpt-4")
agent = create_agent(model, tools=[get_weather])
result = agent.invoke({"messages": [{"role": "user", "content": "What's the weather in Paris?"}]})
print(result)
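When tracing is no longer needed, for example at the end of a test run, the patches can be detached again. uninstrument() comes from the standard OpenTelemetry BaseInstrumentor interface that LangChainInstrumentor inherits; a minimal sketch:
from openinference.instrumentation.langchain import LangChainInstrumentor

# Detach the instrumentation; subsequent LangChain calls emit no spans.
LangChainInstrumentor().uninstrument()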
Example with LangChain Classic (Legacy Chains)
For legacy applications using LangChain Classic:
from langchain_classic.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
# ... (same instrumentation setup as above)
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
chain = LLMChain(llm=OpenAI(), prompt=prompt, metadata={"category": "jokes"})
completion = chain.predict(adjective="funny", metadata={"variant": "funny"})
print(completion)
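Note that LLMChain is deprecated in favor of runnable composition. If you are migrating, the equivalent pipeline below should produce comparable traces, since the instrumentation hooks into langchain-core; this is a sketch using the langchain-core pipe operator rather than an example from this package's documentation.
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

# Compose prompt and model with the runnable (LCEL) pipe operator.
prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke")
chain = prompt | OpenAI()
print(chain.invoke({"adjective": "funny"}))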
Visit the Phoenix app at http://localhost:6006 to see the traces.