# Langfuse

> This example requires langfuse >= v3.0.0.

## Setup
```python
from langfuse import observe, get_client

from scorable import Scorable

# Initialize the Langfuse client using environment variables:
# LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY, LANGFUSE_HOST
langfuse = get_client()

# Initialize the Scorable client
root_signals = Scorable()
```

## Real-Time Evaluation
### Instrumented LLM Function
```python
from openai import OpenAI

# OpenAI client used for generation; reads OPENAI_API_KEY from the environment
client = OpenAI()

# Illustrative prompt template (adapt to your use case)
prompt_template = "Explain the following concept in simple terms: {question}"

@observe(name="explain_concept_generation")  # Name for traces in the Langfuse UI
def explain_concept(topic: str) -> tuple[str | None, str | None]:
    # Get the trace_id for the current operation, created by @observe
    current_trace_id = langfuse.get_current_trace_id()
    prompt = prompt_template.format(question=topic)
    response_obj = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="gpt-4",
    )
    content = response_obj.choices[0].message.content
    return content, current_trace_id
```

### Evaluation Function
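The evaluation step can be sketched as below. The `evaluator.run(...)` call is an assumption about the Scorable evaluator interface; `create_score` is the Langfuse v3 client method for attaching a score to a trace. Both clients are passed in explicitly so the flow is easy to exercise offline:

```python
def evaluate_and_score(langfuse_client, evaluator, response: str, trace_id: str,
                       score_name: str = "answer_quality"):
    """Run an evaluator on a model response and attach the result to the
    Langfuse trace identified by trace_id.

    `evaluator.run(response=...)` is a hypothetical Scorable evaluator call;
    adapt it to the evaluator object you create with the Scorable client.
    """
    result = evaluator.run(response=response)  # hypothetical Scorable API
    langfuse_client.create_score(
        trace_id=trace_id,
        name=score_name,
        value=result.score,  # Scorable scores are floats in [0, 1]
        comment=getattr(result, "justification", None),
    )
    return result.score
```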
### Usage
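A minimal driver for the instrumented function might look like this. It is a sketch: the topic string is illustrative, and the generator is injected with `explain_concept` (defined above) as the default so the flow can also be exercised with a stub:

```python
def demo(topic: str = "vector databases", generate=None):
    """Call the instrumented LLM function and report its Langfuse trace id.

    By default this uses explain_concept from the section above; any callable
    returning (content, trace_id) can be injected for offline testing.
    """
    if generate is None:
        generate = explain_concept  # defined in the section above
    content, trace_id = generate(topic)
    print(f"Langfuse trace: {trace_id}")
    return content, trace_id
```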
## Mapping Scorable to Langfuse

| Scorable | Langfuse | Description in Langfuse Context |
| --- | --- | --- |
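One concrete piece of this mapping can be sketched as a pure helper that converts a Scorable evaluator result into keyword arguments for Langfuse's `create_score`. The result field names (`score`, `justification`) are assumptions about the Scorable result shape; the output keys follow the Langfuse v3 score parameters:

```python
def scorable_result_to_langfuse_score(trace_id: str, result: dict,
                                      name: str = "scorable_eval") -> dict:
    """Map a Scorable evaluator result to kwargs for langfuse.create_score().

    `result` is assumed to carry a numeric `score` in [0, 1] and an optional
    `justification` string; the value is clamped defensively.
    """
    value = max(0.0, min(1.0, float(result["score"])))
    return {
        "trace_id": trace_id,
        "name": name,
        "value": value,
        "comment": result.get("justification", ""),
        "data_type": "NUMERIC",
    }
```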
## Batch Evaluation

### Evaluating Historical Traces
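Historical traces can be evaluated in batch along these lines. This is a sketch: it assumes the Langfuse v3 low-level API (`langfuse_client.api.trace.list`) for fetching traces, and `evaluator.run(...)` is again a hypothetical Scorable evaluator call:

```python
def evaluate_recent_traces(langfuse_client, evaluator, limit: int = 50) -> dict:
    """Fetch recent Langfuse traces and score each one that has an output.

    Returns a mapping of trace id -> score; traces without a recorded
    output are skipped.
    """
    scores = {}
    traces = langfuse_client.api.trace.list(limit=limit)
    for trace in traces.data:
        if trace.output is None:
            continue  # nothing to evaluate for this trace
        result = evaluator.run(response=str(trace.output))  # hypothetical Scorable call
        langfuse_client.create_score(
            trace_id=trace.id,
            name="batch_quality",
            value=result.score,
        )
        scores[trace.id] = result.score
    return scores
```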
