Orchestration Teams
Orchestration is CrewForm’s most autonomous team mode. A brain agent receives the task, breaks it down, delegates subtasks to worker agents, evaluates their outputs, requests revisions when needed, and synthesises a final answer — all without human intervention.
How It Works
The brain runs a reasoning loop of delegating, evaluating, and revising until final_answer is called or the loop safety limit (20 iterations) is reached.
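The loop can be sketched in Python. This is an illustrative sketch only — the brain_decide callable, the workers mapping, and the tool-call dict shape are assumptions, not CrewForm’s actual internals:

```python
# Sketch of the orchestration loop. The brain emits one tool call per turn;
# the loop ends on final_answer or after the safety limit.

MAX_ITERATIONS = 20  # loop safety limit


def run_orchestration(task, workers, brain_decide):
    """Loop until the brain calls final_answer or the safety limit is hit."""
    context = {"task": task, "results": []}
    for _ in range(MAX_ITERATIONS):
        action = brain_decide(context)  # brain picks a tool call
        if action["tool"] == "final_answer":
            return action["output"]  # run completes immediately
        if action["tool"] == "delegate_to_worker":
            result = workers[action["worker"]](action["subtask"])
            context["results"].append(result)
        # request_revision and accept_result would be handled analogously
    return None  # safety limit reached without a final answer
```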
Creating an Orchestration Team
- Navigate to Teams → New Team
- Give it a name and description
- Select Orchestrator as the team mode
- Configure the brain and worker agents (see below)
Configuration
Brain Agent
The brain agent is the orchestrator. It receives the original task and is responsible for the full reasoning loop. Choose a capable, instruction-following model here — Claude Opus or GPT-4o work well.
Tip: The brain’s system prompt is overridden by CrewForm’s orchestrator prompt, which teaches it how to use the delegation tools. The agent’s own system prompt is prepended as additional context.
Worker Agents
Workers are the specialists — each receives a focused subtask from the brain and returns a result. You can add as many workers as needed. Each worker’s system prompt defines their expertise.
Configuration Fields
| Field | Description | Default |
|---|---|---|
| Brain Agent | The orchestrator agent that plans and delegates | Required |
| Worker Agents | One or more specialist agents to delegate to | Min 1 |
| Quality Threshold | Minimum acceptable quality score (0.0–1.0). Outputs below this trigger a revision request | 0.7 |
| Max Delegation Depth | Maximum revision rounds per delegation before the brain must accept or skip | 3 |
| Routing Strategy | How the brain selects workers — currently auto (brain decides freely) | auto |
| Planner Enabled | Reserved for future structured planning step (currently unused) | false |
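Putting the fields together, a team configuration might look like the fragment below. The JSON key names are illustrative — they mirror the table above, not a documented schema:

```json
{
  "mode": "orchestrator",
  "brain_agent": "research-director",
  "worker_agents": ["ava", "sam", "smith"],
  "quality_threshold": 0.7,
  "max_delegation_depth": 3,
  "routing_strategy": "auto",
  "planner_enabled": false
}
```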
Brain Agent Tools
The brain agent has four tools available during its reasoning loop:
delegate_to_worker
Sends a subtask to a specific worker agent.
request_revision
Asks a worker to revise their previous output, with specific feedback.
accept_result
Marks a delegation as accepted — no further revision needed.
final_answer
Submits the synthesised final output. The run completes immediately when this is called.
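The brain invokes these tools via JSON tool calls. The payloads below are illustrative shapes only — the exact field names are assumptions, not a documented schema:

```json
{"tool": "delegate_to_worker", "worker": "Ava", "instruction": "Gather sources on the topic"}
{"tool": "request_revision", "worker": "Ava", "feedback": "Add statistics from primary sources"}
{"tool": "accept_result", "worker": "Ava"}
{"tool": "final_answer", "output": "<synthesised report>"}
```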
Delegation Lifecycle
Each delegation follows the same lifecycle: the brain delegates a subtask, evaluates the worker’s output against the quality threshold, requests revisions if needed (up to the max delegation depth), and then accepts the result.
Team Memory
Orchestration teams have persistent memory across runs. After each completed run, the output is stored as a memory entry. On subsequent runs, relevant past memories are automatically retrieved and injected into the brain’s system prompt. This means your team improves over time — the brain learns from previous orchestrations on similar tasks. Memory is scoped to the team — each team has its own memory store that doesn’t bleed into other teams.
Example: Multi-Step Research Report
A three-worker orchestration team for producing research reports:
Brain Agent — Claude Opus
- System prompt: “You are a research director. Break complex research tasks into focused subtasks and synthesise professional reports.”
- Ava (Researcher) — “Search and gather factual information, statistics, and expert sources on the given topic.”
- Sam (Writer) — “Transform research notes into clear, structured prose. Use professional tone and logical flow.”
- Smith (Editor) — “Review and polish written content. Fix grammar, improve clarity, ensure factual accuracy.”
- Brain delegates “Research: AI adoption in healthcare 2025” → Ava
- Brain delegates “Write a 1500-word report from these notes” + Ava’s output → Sam
- Brain evaluates Sam’s draft — requests revision if below quality threshold
- Brain delegates “Final editorial review” → Smith
- Brain calls final_answer with the synthesised report
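In the transcript, the run above might appear as a sequence of tool calls like the following (payload field names are illustrative, and elided content is marked with placeholders):

```json
{"tool": "delegate_to_worker", "worker": "Ava", "instruction": "Research: AI adoption in healthcare 2025"}
{"tool": "delegate_to_worker", "worker": "Sam", "instruction": "Write a 1500-word report from these notes", "context": "<Ava's notes>"}
{"tool": "request_revision", "worker": "Sam", "feedback": "Draft scored below the quality threshold; tighten structure"}
{"tool": "delegate_to_worker", "worker": "Smith", "instruction": "Final editorial review", "context": "<Sam's revised draft>"}
{"tool": "final_answer", "output": "<synthesised report>"}
```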
Visual Workflow Builder (Canvas)
Orchestration teams include a Visual Workflow Builder — an interactive canvas for designing and monitoring your brain + worker graph in real time. See the full Visual Workflow Builder Guide for complete documentation.
Canvas Features
- Drag agents from the sidebar onto the canvas to add them as workers
- Connect nodes by dragging edges to define delegation relationships
- Right-click context menu — Delete, Auto-layout, Set as Brain, Fit View
- Glassmorphism styling — frosted glass nodes with hover lift effects
- Searchable sidebar — filter agents by name or model
Live Execution Visualization
During a team run, the canvas shows live execution state on each node:
- Node states — Idle, Running (blue pulse), Completed (green ✓), Failed (red ✕)
- Camera auto-follow — Canvas pans to the currently executing agent
- Execution timeline — Step-by-step progress rail with clickable steps
- Transcript panel (T) — Real-time brain-to-worker message feed with delegation/result filters
- Tool heatmap — Tool usage stats with success rates
Keyboard Shortcuts
Press ? for the full shortcuts overlay. Key shortcuts: F (fit view), L (auto-layout), T (transcript), ⌘Z (undo), ⌘A (select all).
Auto-Layout
Click Auto-Layout or press L for a top-to-bottom layout — brain at the top, workers fanning out below.
Position Persistence
Node positions are saved automatically and restored when you revisit — stored in the teams.config JSONB column.
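The exact stored shape isn’t documented here; illustratively, the config column might hold per-node coordinates along these lines (key names are guesses, not a documented schema):

```json
{
  "canvas": {
    "positions": {
      "brain": {"x": 400, "y": 80},
      "ava": {"x": 160, "y": 320},
      "sam": {"x": 400, "y": 320}
    }
  }
}
```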
Monitoring
The run detail page shows the full delegation tree:
- Delegations panel — each delegation with status, worker name, instruction, output, and revision count
- Messages feed — real-time log of brain decisions and worker responses
- Token usage — per-delegation breakdown
- Delegation depth — current iteration count in the orchestrator loop
Tips
- Brain model matters. The brain needs to reliably output JSON tool calls. Claude Sonnet 4+ and GPT-4o handle this well. Smaller or older models may produce malformed tool calls.
- Specific worker prompts = better delegation. The brain picks workers based on their name and description. Clear, focused descriptions (e.g. “TypeScript code reviewer” vs “AI assistant”) lead to better routing.
- Set quality threshold thoughtfully. Too high (0.9+) and the brain will loop excessively. Too low (0.3) and poor outputs get accepted. 0.6–0.8 is a good starting range.
- Watch delegation depth. If runs are looping on revisions, consider increasing max_delegation_depth, lowering the quality threshold, or improving the worker’s system prompt.
- Use team memory. After a few runs on similar tasks, team memory kicks in and the brain starts with better context.
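The interplay of quality threshold and delegation depth described in these tips can be sketched as a small decision function. The function name and its inputs are illustrative, not CrewForm internals:

```python
# Sketch of the accept-vs-revise decision implied by the quality threshold
# and max delegation depth settings. Purely illustrative.

def next_action(score, revision_round, quality_threshold=0.7, max_delegation_depth=3):
    """Return the tool the brain would plausibly call for this delegation."""
    if score >= quality_threshold:
        return "accept_result"
    if revision_round < max_delegation_depth:
        return "request_revision"
    return "accept_or_skip"  # revision rounds exhausted; brain must move on
```

Raising the threshold widens the band of scores that trigger revisions, which is why very high thresholds cause excessive looping.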
Related
- Pipeline Teams — Fixed sequential steps, no dynamic routing
- Collaboration Teams — Agents discuss and reach consensus

