OpenAI Symphony Turns Linear Boards Into Autonomous Coding Agent Orchestration

OpenAI released Symphony, an open‑source orchestration spec that turns Linear issue trackers into control planes for autonomous Codex agents. OpenAI reports a 500% increase in landed PRs on some teams, and the framework now supports multi‑model runtimes beyond Codex.

What Symphony Actually Does

OpenAI just published Symphony, an open‑source orchestration spec aimed at a problem every team using AI coding agents has hit: context‑switching overload. Instead of engineers babysitting three to five Codex sessions, Symphony turns your Linear issue tracker into a control plane that dispatches autonomous coding agents to every open task. The result, according to OpenAI's announcement: a 500% increase in landed PRs on some teams in the first three weeks, though that's a directional metric with no published baseline.

Important context: OpenAI describes Symphony as a reference implementation and open‑source spec, not a standalone product. As OpenAI's blog post noted: "We don't plan to maintain Symphony as a standalone product. Think of it as a reference implementation." The Symphony docs also describe it as "prototype software intended for evaluation only."

The core idea is simple: for every open task, guarantee that an agent is running in its own workspace. Symphony continuously watches the Linear board, picks up unblocked issues, creates isolated workspaces, runs Codex sessions, and shepherds changes through CI. If an agent crashes, it restarts. If new work appears, it picks it up. The system never sleeps.
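The outer loop described above can be sketched in a few lines. This is a minimal illustration, not Symphony's actual code (the reference implementation is Elixir): `fetch_unblocked_issues`, `run_agent`, and the sample issue IDs are all hypothetical stand‑ins.

```python
POLL_INTERVAL_S = 30  # Symphony's documented Linear polling interval


def fetch_unblocked_issues():
    """Hypothetical stand-in for a Linear query returning open,
    unblocked issues; sample IDs are illustrative."""
    return ["SYM-1", "SYM-2"]


def run_agent(issue):
    """Hypothetical stand-in: create an isolated workspace and
    start a coding-agent session for this issue."""
    print(f"dispatching agent for {issue}")


def orchestrate(claimed):
    """One tick of the outer loop: claim any new unblocked issue
    exactly once and dispatch an agent for it."""
    for issue in fetch_unblocked_issues():
        if issue not in claimed:
            claimed.add(issue)
            run_agent(issue)
    return claimed
```

Running `orchestrate` repeatedly is idempotent per issue: already‑claimed work is skipped, so only newly unblocked issues spawn agents on each poll.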

The Architecture: Why Elixir and BEAM

Symphony's reference implementation is written in Elixir, running on the BEAM virtual machine. This isn't a random choice: BEAM's OTP supervision trees give you fault‑tolerant process isolation for free. When one agent crashes, a supervised restart fires with full error context while every other agent keeps working. As Stephen Jones notes in his technical breakdown, this is the process isolation, supervision strategy, and graceful degradation you'd spend months building in Python or TypeScript; in Elixir, it's a first‑class language feature.
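To make the supervise‑and‑restart idea concrete, here is a rough Python analogue of the pattern an OTP supervisor provides natively. The function name, restart limit, and failing task are all illustrative, and this toy version lacks the process isolation that makes the BEAM version robust.

```python
import traceback


def supervise(task, max_restarts=3):
    """Run a task; on a crash, restart it with the full error
    context from the previous attempt, up to max_restarts.
    A simplified sketch of what OTP supervisors give Elixir free."""
    last_error = None
    for attempt in range(1, max_restarts + 1):
        try:
            return task(attempt, last_error)
        except Exception:
            last_error = traceback.format_exc()  # context for the next run
    raise RuntimeError(f"task failed after {max_restarts} restarts:\n{last_error}")
```

A task that fails transiently twice then succeeds would be restarted twice and complete on its third attempt, while (in the real system) every other supervised agent keeps running untouched.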

The orchestrator is a GenServer with in‑memory state that polls Linear every 30 seconds, manages concurrency limits (default: 10 concurrent agents), tracks completed and claimed issues, and handles retry with exponential backoff starting at 10 seconds and capping at 300 seconds. No database is required — state is rebuilt from Linear on every restart.
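The retry schedule described above (exponential backoff starting at 10 seconds, capped at 300) reduces to a one‑line formula. A minimal sketch, assuming a standard doubling schedule; the function name is hypothetical:

```python
def retry_delay(attempt, base=10, cap=300):
    """Exponential backoff matching the defaults described above:
    10s, 20s, 40s, ... doubling each attempt, capped at 300s."""
    return min(base * (2 ** (attempt - 1)), cap)
```

So attempts 1 through 6 wait 10, 20, 40, 80, 160, and then 300 seconds (the cap kicks in at the sixth attempt, where doubling would give 320).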

How Agents Actually Work Inside Symphony

Each Linear issue gets its own AgentRunner process with a multi‑turn continuation pattern. Turn 1 delivers the full prompt with the WORKFLOW.md template rendered with issue context. Subsequent turns use a minimal continuation prompt — the workspace persists, so the agent sees its prior commits, partially‑written code, and test results, picking up where it left off without re‑analyzing the entire problem. The default max is 20 turns per agent, configurable per project.
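The multi‑turn continuation pattern can be sketched as a loop that sends the full rendered prompt once, then minimal continuations until the agent signals completion or hits the turn cap. This is an illustrative skeleton, not Symphony's AgentRunner: `run_turn` and the prompt strings are hypothetical.

```python
def run_issue(issue, run_turn, max_turns=20):
    """Turn 1 sends the full workflow prompt rendered with issue
    context; later turns send only a minimal continuation, since
    the persistent workspace already holds commits and test output.
    run_turn is a hypothetical callable returning (done, output)."""
    transcript = []
    for turn in range(1, max_turns + 1):
        if turn == 1:
            prompt = f"workflow template rendered for {issue}"  # full context
        else:
            prompt = "continue"  # minimal prompt; workspace persists
        done, output = run_turn(turn, prompt)
        transcript.append(output)
        if done:
            break
    return transcript
```

The key design point is that state lives in the workspace (files, commits, test results), not in the prompt, so continuation turns stay cheap.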

Symphony uses the Codex App Server — a headless JSON‑RPC 2.0 API mode for programmatic interaction over stdio. This means direct process‑to‑process communication, not HTTP overhead. Symphony also injects a linear_graphql dynamic tool into every Codex session, letting agents query and mutate Linear directly without human mediation — but without exposing the Linear access token to containers.
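For readers unfamiliar with the wire format, a JSON‑RPC 2.0 request is just a JSON object with `jsonrpc`, `id`, `method`, and `params` fields, written as a line over stdio. The method and parameter names below are made up for illustration; they are not the actual Codex App Server API.

```python
import json


def jsonrpc_request(method, params, request_id):
    """Serialize a JSON-RPC 2.0 request of the kind a headless
    app server reads from stdin, one message per line."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,  # e.g. a hypothetical "session/start"
        "params": params,
    })
```

Because both processes share a stdio pipe, a request is a single `write` and the matching response is correlated by `id`, with no HTTP connection setup in between.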

From Codex‑Only to Multi‑Model Support

Symphony started as a Codex‑only orchestrator, but the community has already pushed it further. With v1.1.0, Symphony now supports the Kata CLI (based on pi‑coding‑agent) as an alternative agent runtime, which opens the door to running Claude Code, Gemini, and other models inside the same orchestration framework. As discussed on the Agent Orchestrator GitHub, this makes Symphony a model‑agnostic orchestrator rather than an OpenAI walled garden.

The spec itself is model‑agnostic — OpenAI's blog notes they asked Codex to implement Symphony in TypeScript, Go, Rust, Java, and Python to refine and polish the specification. The Elixir implementation is the reference, but the spec is the real product.

What Developers Are Saying

Reactions on Hacker News are mixed but telling. Some developers see Symphony as a natural evolution: if you're already using Codex, orchestrating it at the ticket level instead of the session level is an obvious upgrade. Others are more skeptical — user exclipy called the specs "inscrutable agent slop" that "lists database fields" instead of explaining what the system does, and questioned whether GPT‑generated specs are publication‑ready from the company that makes GPT.

The most substantive critique comes from developers comparing Symphony to StrongDM's Attractor, an autonomous agent approach that provides the "orchestration harness" Symphony lacks. One commenter on HN described them as complementary: Symphony creates the outer loop (ticket‑to‑workspace), while Attractor provides the inner loop (deterministic workflow orchestration inside each agent run).

The Bigger Picture: From Chat to Always‑On Agents

Symphony represents something bigger than a new dev tool. Wells Fargo analyst Ryan MacWilliams called it "an OS framework for AI agents taking actions based on user project‑level decisions," positioning it as the shift from standalone chat assistants to AI agents embedded directly in business workflows. The traditional model — open a chat, type a prompt, get a response — gives way to a model where AI is a background service triggered by project state changes.

This has implications beyond coding. If the pattern works — issue tracker triggers autonomous agent, agent delivers PR, human reviews — the same architecture applies to support tickets, content pipelines, data analysis tasks, and any repeatable knowledge work. The question is whether the 500% PR claim holds up at scale, or whether it reflects OpenAI's own monorepo where tasks are well‑scoped and tests are thorough.

Limitations and Tradeoffs

Symphony is not a silver bullet. OpenAI's own post acknowledges that not every task fits the model: ambiguous problems and work requiring strong judgment still need interactive Codex sessions. Teams also lose the ability to nudge agents mid‑flight when moving to ticket‑level assignment. And the Elixir reference implementation, while technically sound, requires expertise in a language with a smaller ecosystem than Python or TypeScript.

The spec is early‑stage. According to Digital Applied's analysis, API stability is not guaranteed, documentation is incomplete, and only OpenAI is officially supported as a model provider — Anthropic and Google integrations are community‑contributed and incomplete. If you're a builder considering Symphony, start with well‑scoped, test‑covered tasks in a project where failures are cheap to detect.

Additionally, Symphony works best in codebases that have already adopted what OpenAI calls agent‑friendly repository structures, automated tests, and guardrails — teams without these prerequisites may find Symphony less effective out of the box.
