LangSmith can capture traces generated by AutoGen using OpenTelemetry instrumentation. This guide shows you how to automatically capture traces from your AutoGen multi-agent conversations and send them to LangSmith for monitoring and analysis.

Installation

Install the required packages using your preferred package manager:
pip install langsmith autogen-agentchat "autogen-ext[openai]" opentelemetry-sdk opentelemetry-instrumentation-openai

Setup

1. Configure environment variables

Set your API keys and project name:
export LANGSMITH_API_KEY=<your_langsmith_api_key>
export LANGSMITH_PROJECT=<your_project_name>
export OPENAI_API_KEY=<your_openai_api_key>
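If you are working in a notebook or cannot export shell variables, the same configuration can be set in Python before any clients are created. Placeholder values are shown below; substitute your real keys:

```python
import os

# Set credentials and project name programmatically,
# before constructing any model clients or tracer providers.
os.environ["LANGSMITH_API_KEY"] = "<your_langsmith_api_key>"
os.environ["LANGSMITH_PROJECT"] = "<your_project_name>"
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"
```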

2. Configure OpenTelemetry integration

In your AutoGen application, configure the LangSmith OpenTelemetry integration along with the OpenAI instrumentor:
from langsmith.integrations.otel import OtelSpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Set up tracer provider
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(OtelSpanProcessor())
trace.set_tracer_provider(tracer_provider)

# Instrument OpenAI calls
OpenAIInstrumentor().instrument()

3. Create and run your AutoGen application

Once configured, your AutoGen application will automatically send traces to LangSmith. Pass the tracer provider to the runtime for full tracing coverage:
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.ui import Console
from autogen_core import SingleThreadedAgentRuntime
from autogen_ext.models.openai import OpenAIChatCompletionClient
from langsmith.integrations.otel import OtelSpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Set up tracing
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(OtelSpanProcessor())
trace.set_tracer_provider(tracer_provider)
OpenAIInstrumentor().instrument()

# Define a tool
def percentage_change(start: float, end: float) -> float:
    """Calculate percentage change between two values."""
    if start == 0:
        return float("inf")
    return ((end - start) / start) * 100

async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    tracer = trace.get_tracer("autogen-demo")

    with tracer.start_as_current_span("run_team"):
        planning_agent = AssistantAgent(
            "PlanningAgent",
            description="Plans tasks and delegates.",
            model_client=model_client,
            system_message=(
                "You are a planning agent. Plan and delegate tasks.\n"
                "When assigning tasks, use: 1. <agent> : <task>\n"
                'After tasks complete, summarize and end with "TERMINATE".'
            ),
        )

        data_analyst = AssistantAgent(
            "DataAnalystAgent",
            description="Performs calculations.",
            model_client=model_client,
            tools=[percentage_change],
            system_message="You are a data analyst. Use tools to compute results.",
        )

        termination = TextMentionTermination("TERMINATE") | MaxMessageTermination(max_messages=25)

        # Pass tracer_provider to the runtime
        runtime = SingleThreadedAgentRuntime(tracer_provider=trace.get_tracer_provider())
        runtime.start()

        team = SelectorGroupChat(
            [planning_agent, data_analyst],
            model_client=model_client,
            termination_condition=termination,
            allow_repeated_speaker=True,
            runtime=runtime,
        )

        task = "You started with 100 apples, now you have 120 apples. What is the percentage change?"
        await Console(team.run_stream(task=task))

        await runtime.stop()

    await model_client.close()

if __name__ == "__main__":
    asyncio.run(main())
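The task in the example exercises the percentage_change tool directly: going from 100 to 120 apples is ((120 - 100) / 100) * 100 = 20%. You can verify the helper on its own, independent of any agent:

```python
def percentage_change(start: float, end: float) -> float:
    """Calculate percentage change between two values."""
    if start == 0:
        return float("inf")
    return ((end - start) / start) * 100

print(percentage_change(100, 120))  # 20.0
```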

Advanced usage

Custom metadata and tags

You can add custom metadata to your traces by setting span attributes:
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

async def run_with_metadata():
    with tracer.start_as_current_span("autogen_workflow") as span:
        span.set_attribute("langsmith.metadata.session_type", "multi_agent")
        span.set_attribute("langsmith.metadata.agent_count", "2")
        span.set_attribute("langsmith.span.tags", "autogen,planning")

        # Your AutoGen code here (assumes `team` and `task` are defined
        # as in the full example above)
        await Console(team.run_stream(task=task))

Combining with other instrumentors

You can combine AutoGen tracing with other OpenTelemetry instrumentors:
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

# Initialize multiple instrumentors
OpenAIInstrumentor().instrument()
HTTPXClientInstrumentor().instrument()
