LangSmith can capture traces generated by Semantic Kernel using its built-in OpenTelemetry support. This guide shows you how to automatically capture traces from your Semantic Kernel applications and send them to LangSmith for monitoring and analysis.
1. Installation
Install the required packages using your preferred package manager:
pip install langsmith semantic-kernel opentelemetry-instrumentation-openai
2. Setup
Set your API keys and project name:
export LANGSMITH_API_KEY=<your_langsmith_api_key>
export LANGSMITH_PROJECT=<your_project_name>
export OPENAI_API_KEY=<your_openai_api_key>
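If you construct the chat service without arguments, as in the example below, Semantic Kernel's OpenAI connector also needs a model id. It can read one from the environment; the variable name below follows the connector's convention, so confirm it against your Semantic Kernel version:
export OPENAI_CHAT_MODEL_ID=<your_model_id>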
In your Semantic Kernel application, configure the LangSmith OpenTelemetry integration along with the OpenAI instrumentor:
from langsmith.integrations.otel import configure
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
# Configure LangSmith tracing
configure(project_name="semantic-kernel-demo")
# Instrument OpenAI calls
OpenAIInstrumentor().instrument()
You do not need to set any OpenTelemetry environment variables or configure exporters manually; configure() sets up the tracer provider and the exporter to LangSmith for you.
3. Create and run your Semantic Kernel application
Once configured, your Semantic Kernel application will automatically send traces to LangSmith:
import asyncio
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig
from langsmith.integrations.otel import configure
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
# Configure LangSmith tracing
configure(project_name="semantic-kernel-assistant")
# Instrument OpenAI calls
OpenAIInstrumentor().instrument()
# Configure Semantic Kernel
kernel = Kernel()
# OpenAIChatCompletion picks up its API key (and, if you don't pass
# ai_model_id, the model id) from the environment
kernel.add_service(OpenAIChatCompletion())
# Create a prompt template
code_analysis_prompt = """
Analyze the following code and provide insights:
Code: {{$code}}
Please provide:
1. A brief summary of what the code does
2. Any potential improvements
3. Code quality assessment
"""
prompt_template_config = PromptTemplateConfig(
    template=code_analysis_prompt,
    name="code_analyzer",
    template_format="semantic-kernel",
    input_variables=[
        InputVariable(name="code", description="The code to analyze", is_required=True),
    ],
)
# Add the function to the kernel
code_analyzer = kernel.add_function(
    function_name="analyzeCode",
    plugin_name="codeAnalysisPlugin",
    prompt_template_config=prompt_template_config,
)
async def main():
    sample_code = """
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
"""
    result = await kernel.invoke(code_analyzer, code=sample_code)
    print("Code Analysis:")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
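Run the script, then open the project you passed to configure() in LangSmith; the kernel invocation and the underlying OpenAI chat completion should appear there as traced runs.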
Advanced usage
You can add custom metadata to your traces by setting span attributes:
from opentelemetry import trace
tracer = trace.get_tracer(__name__)
async def analyze_with_metadata(code: str):
    with tracer.start_as_current_span("semantic_kernel_workflow") as span:
        # Attributes prefixed with langsmith.metadata.* become trace metadata;
        # langsmith.span.tags takes a comma-separated list of tags
        span.set_attribute("langsmith.metadata.workflow_type", "code_analysis")
        span.set_attribute("langsmith.metadata.user_id", "developer_123")
        span.set_attribute("langsmith.span.tags", "semantic-kernel,code-analysis")
        result = await kernel.invoke(code_analyzer, code=code)
        return result
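You can then call the wrapper in place of a direct kernel.invoke call. A minimal usage sketch, reusing the kernel and code_analyzer from the example above:
import asyncio

sample = "def add(a, b): return a + b"
result = asyncio.run(analyze_with_metadata(sample))
print(result)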
Combining with other instrumentors
You can combine Semantic Kernel tracing with other OpenTelemetry instrumentors:
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
# Initialize multiple instrumentors
OpenAIInstrumentor().instrument()
HTTPXClientInstrumentor().instrument()
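Each instrumentor ships as its own package; if you have not already installed the HTTPX instrumentor used above, add it first (the PyPI package is opentelemetry-instrumentation-httpx):
pip install opentelemetry-instrumentation-httpx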
Resources