# OpenTelemetry

Scorable accepts OpenTelemetry (OTEL) traces from any agent framework. Once traces arrive, Scorable shows a per-trace view of every LLM call, its inputs and outputs, latency, and span count — and can automatically evaluate traces against your configured evaluators and judges.

## Prerequisites

You need a Scorable API key. Find it under **Settings → API Keys** in the dashboard.

***

## Example: pydantic-ai

[pydantic-ai](https://ai.pydantic.dev/) has built-in OTEL support via `InstrumentationSettings`. Configure it to export to Scorable:

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from pydantic_ai import Agent, InstrumentationSettings


def _build_tracer_provider() -> TracerProvider:
    exporter = OTLPSpanExporter(
        endpoint="https://api.scorable.ai/otel/v1/traces",
        headers={"Authorization": "Api-Key <your-api-key>"},
    )
    resource = Resource.create({"service.name": "my-agent"})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    return provider


agent = Agent(
    model="openai:gpt-5.2",
    instrument=InstrumentationSettings(
        tracer_provider=_build_tracer_provider(),
    ),
)
```

Every `agent.run()` call now produces a trace visible in Scorable.

***

## Example: any other framework

Configure the OTEL SDK to point at Scorable's collector endpoint and set the `Authorization` header. The example below works with any framework that supports OTEL instrumentation (LangChain, LlamaIndex, raw `openai` SDK with `opentelemetry-instrumentation-openai`, etc.).

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = OTLPSpanExporter(
    endpoint="https://api.scorable.ai/otel/v1/traces",
    headers={"Authorization": "Api-Key <your-api-key>"},
)

resource = Resource.create({"service.name": "my-agent"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

Then instrument your framework as usual — Scorable receives whatever spans the framework emits.

### Instrumentation libraries

Any OpenTelemetry-compatible instrumentation library works with Scorable. Popular options for AI/LLM workloads:

| Library                                                                                                           | Frameworks covered                                                 |
| ----------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------ |
| [OpenLIT](https://docs.openlit.io)                                                                                | OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, Cohere, and more |
| [OpenLLMetry](https://www.traceloop.com/docs/openllmetry/introduction)                                            | OpenAI, Anthropic, LangChain, LlamaIndex, Haystack, and more       |
| [smolagents](https://huggingface.co/docs/smolagents/en/tutorials/inspect_runs)                                    | Hugging Face smolagents                                            |
| [CrewAI](https://docs.crewai.com/en/observability/opentelemetry)                                                  | CrewAI                                                             |
| [AutoGen](https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/telemetry.html)         | AutoGen                                                            |
| [LlamaIndex](https://docs.llamaindex.ai/en/stable/module_guides/observability/)                                   | LlamaIndex                                                         |
| [Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/concepts/enterprise-readiness/observability/) | Semantic Kernel (Python, .NET, Java)                               |

### Environment variable alternative

If you prefer to configure the exporter through environment variables rather than code, use the signal-specific endpoint variable: OTLP HTTP exporters treat the generic `OTEL_EXPORTER_OTLP_ENDPOINT` as a base URL and append `/v1/traces` to it, which would mangle the full path above.

```bash
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://api.scorable.ai/otel/v1/traces
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Api-Key <your-api-key>"
OTEL_SERVICE_NAME=my-agent
```

***

## Viewing traces

Traces appear in the **Traces** tab in the dashboard. Each row represents one agent run (one `trace_id`), showing the root span name, time, and total span count. Click a trace to see the full span tree.

***

## Automatic evaluation

You can configure Scorable to automatically evaluate incoming traces against an evaluator or judge. See **Settings → Trace Evaluation Filters** to set up filter criteria, sampling rate, and evaluation delay (to allow late-arriving spans before evaluation runs).

Evaluation uses the `gen_ai.input.messages` and `gen_ai.output.messages` span attributes, which pydantic-ai and most OTEL-instrumented LLM frameworks emit automatically.
