Overview

CrewForm’s task runner supports opt-in observability via OpenTelemetry and Langfuse. When enabled, every task execution, LLM call, tool invocation, and team run is traced with span-level detail — giving you full visibility into multi-agent workflows.
Tracing is entirely opt-in. If no observability env vars are set, there is zero overhead — no SDK is loaded, no spans are emitted.
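The opt-in check amounts to inspecting the environment at startup and doing nothing when no backend is configured. A minimal sketch in Python — the function name, backend labels, and the Langfuse-before-OTLP precedence are assumptions for illustration, not CrewForm's actual code:

```python
import os

def detect_tracing_backend():
    """Pick an observability backend from env vars, or None for a no-op.

    Hypothetical selection logic mirroring the documented behavior: if no
    observability env vars are set, no SDK is loaded and no spans are emitted.
    """
    if os.environ.get("LANGFUSE_PUBLIC_KEY") and os.environ.get("LANGFUSE_SECRET_KEY"):
        return "langfuse"
    if os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT"):
        return "otlp"
    return None  # tracing stays off; zero overhead

# Example: with none of the vars set, tracing is disabled.
for var in ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "OTEL_EXPORTER_OTLP_ENDPOINT"):
    os.environ.pop(var, None)
print(detect_tracing_backend())  # → None
```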

Supported Backends

| Backend | Setup | Best For |
| --- | --- | --- |
| Langfuse | LANGFUSE_PUBLIC_KEY + LANGFUSE_SECRET_KEY | AI-native observability with LLM generation tracking, cost analysis, prompt debugging |
| Datadog | OTEL_EXPORTER_OTLP_ENDPOINT | Enterprise APM with existing Datadog infrastructure |
| Jaeger | OTEL_EXPORTER_OTLP_ENDPOINT | Self-hosted open-source tracing |
| Grafana Tempo | OTEL_EXPORTER_OTLP_ENDPOINT | Grafana stack users |
| Any OTLP-compatible | OTEL_EXPORTER_OTLP_ENDPOINT | Any backend that accepts OTLP HTTP traces |

Quick Start

Set these environment variables on your task runner:
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com  # or your self-hosted URL
That’s it. Restart the task runner and traces will appear in your Langfuse dashboard.

Generic OTLP (Datadog, Jaeger, etc.)

OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318  # Your OTLP collector
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer your-token  # Optional auth
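Per the OpenTelemetry specification, OTEL_EXPORTER_OTLP_HEADERS holds a comma-separated list of key=value pairs. A stdlib sketch of how such a string can be parsed into request headers (the parser itself is illustrative, not CrewForm's code):

```python
def parse_otlp_headers(raw):
    """Parse an OTEL_EXPORTER_OTLP_HEADERS value: comma-separated key=value pairs."""
    headers = {}
    for pair in raw.split(","):
        if "=" in pair:
            # Split on the first '=' only, so values like "Bearer your-token" survive.
            key, _, value = pair.partition("=")
            headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("Authorization=Bearer your-token,x-scope-orgid=tenant-1"))
# → {'Authorization': 'Bearer your-token', 'x-scope-orgid': 'tenant-1'}
```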

What Gets Traced

Single Task Execution

Trace: task.execute
  ├── Generation: llm.call (model, provider, tokens, cost)
  ├── Span: mcp.discover (server_count, tool_count)
  ├── Generation: llm.tool_use_call (if tools enabled)
  └── attributes: task_id, agent_id, workspace_id

Team Runs

Trace: team.run (team_id, mode)
  ├── Span: pipeline.execute
  │   └── (individual task traces nested within)
  ├── Span: orchestrator.execute
  │   └── (brain + delegate task traces)
  └── Span: collaboration.execute
      └── (turn-by-turn task traces)

Attributes on Every Trace

| Attribute | Description |
| --- | --- |
| crewform.workspace_id | Workspace that owns the task |
| crewform.task_id | Unique task identifier |
| crewform.agent_id | Agent executing the task |
| crewform.agent_name | Agent display name |
| crewform.team_id | Team ID (for team runs) |
| crewform.team_mode | pipeline, orchestrator, or collaboration |
| crewform.run_id | Team run ID |
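Conceptually, the attribute set splits into per-task fields and team-run-only fields. A hypothetical helper sketching that split — the task and team-run field names here are assumptions, not CrewForm's internal data model:

```python
def crewform_attributes(task, team_run=None):
    """Assemble the crewform.* trace attributes listed above (illustrative only)."""
    attrs = {
        "crewform.workspace_id": task["workspace_id"],
        "crewform.task_id": task["task_id"],
        "crewform.agent_id": task["agent_id"],
        "crewform.agent_name": task["agent_name"],
    }
    if team_run is not None:
        # Team-run-only attributes: absent on standalone task executions.
        attrs["crewform.team_id"] = team_run["team_id"]
        attrs["crewform.team_mode"] = team_run["mode"]  # pipeline | orchestrator | collaboration
        attrs["crewform.run_id"] = team_run["run_id"]
    return attrs
```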

LLM Generation Attributes (Langfuse)

In Langfuse, LLM calls appear as Generations with:
| Field | Description |
| --- | --- |
| model | Model identifier (e.g. gpt-4o, claude-3.5-sonnet) |
| provider | Provider name (e.g. openai, anthropic) |
| promptTokens | Input token count |
| completionTokens | Output token count |
| totalTokens | Total token count |
| cost | Estimated cost in USD |
| input | First 500 chars of the user prompt |
| output | First 500 chars of the result |
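The input and output fields are capped at 500 characters, so traces never carry full prompts or results. A sketch of that truncation (the helper name and limit parameter are assumptions for illustration):

```python
def truncate_for_trace(text, limit=500):
    """Keep only the first `limit` characters of a prompt or result for the trace."""
    return text if len(text) <= limit else text[:limit]

# A long prompt is cut at the cap; short ones pass through unchanged.
print(len(truncate_for_trace("a" * 1200)))  # → 500
```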

Environment Variables Reference

| Variable | Required | Description |
| --- | --- | --- |
| LANGFUSE_PUBLIC_KEY | For Langfuse | Your Langfuse public key |
| LANGFUSE_SECRET_KEY | For Langfuse | Your Langfuse secret key |
| LANGFUSE_BASE_URL | No | Langfuse server URL (default: https://cloud.langfuse.com) |
| OTEL_EXPORTER_OTLP_ENDPOINT | For OTLP | OTLP HTTP collector endpoint (e.g. http://localhost:4318) |
| OTEL_EXPORTER_OTLP_HEADERS | No | Auth headers for OTLP endpoint |

Docker / Self-Hosted Setup

Add the env vars to your task runner service in docker-compose.yml:
task-runner:
  environment:
    # Langfuse
    - LANGFUSE_PUBLIC_KEY=pk-lf-...
    - LANGFUSE_SECRET_KEY=sk-lf-...
    # Or OTLP
    # - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318
For self-hosted Langfuse, you can run it alongside CrewForm in the same Docker Compose stack. See langfuse.com/docs/deployment/self-host for setup instructions.
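As a rough starting point, a Langfuse service can sit next to the task runner in the same compose file. The fragment below is a minimal sketch only — the image tag, secrets, and dependency on a postgres service are assumptions; Langfuse's own self-hosting docs are the authoritative reference:

```yaml
# Sketch: self-hosted Langfuse alongside CrewForm (values are placeholders).
langfuse:
  image: langfuse/langfuse:2   # assumed v2-style single-container deployment
  depends_on:
    - postgres                 # assumes a postgres service defined elsewhere
  environment:
    - DATABASE_URL=postgresql://postgres:postgres@postgres:5432/langfuse
    - NEXTAUTH_URL=http://localhost:3000
    - NEXTAUTH_SECRET=change-me
    - SALT=change-me
  ports:
    - "3000:3000"
```

With this layout, the task runner would point at the in-stack instance via LANGFUSE_BASE_URL=http://langfuse:3000 rather than the cloud URL.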

Troubleshooting

Traces Not Appearing

  1. Verify env vars are set on the task runner process (not the web app)
  2. Check task runner logs for [Tracing] Langfuse client initialized or [Tracing] OTLP exporter initialized
  3. If you see [Tracing] No observability env vars set, the vars aren’t reaching the process

High Latency

Tracing adds minimal overhead (typically less than 1ms per span). If you notice latency:
  • Ensure your OTLP collector is network-local to the task runner
  • Langfuse batches traces automatically — no additional config needed