ORC Control Plane Reference
The ORC project is a working control plane for governed multi-agent execution. It turns operator intent into a tracked workflow, enforces risk and approval policy, routes work into the correct phase pipeline, runs subscription-authenticated coding agents, and stores the full audit trail in SQLite.
This page documents the actual implementation pattern used in /home/ORC, so teams can reproduce the design with the same governance boundaries instead of reverse-engineering it from code.
What ORC Implements
ORC covers the orchestration layer that sits between an operator channel and one or more agent runtimes.
It is responsible for:
- project registration and allowed command families
- workflow planning and risk classification
- approval fan-out by required role group
- safe launch of worker execution
- multi-phase execution pipelines such as read_only, build_test, agent_implementation, and deployment_runbook
- backlog extraction, memory distillation, and audit persistence
In AEEF terms, ORC is the implementation bridge between governance controls and agent execution.
Tech Stack
The live ORC project is implemented with:
- Python 3 for the control-plane runtime
- SQLite for workflow, approvals, executions, phase runs, worker runs, memory, and audit storage
- YAML configuration for project registry, policies, and phase contracts
- Codex CLI and Claude CLI as worker runtimes
- PM2 for long-running service supervision
- a lightweight observability web service for operator visibility
Runtime Topology
Telegram Bot / CLI / UI
|
v
ORC Engine
|
+--> Policy Engine
+--> Project Registry
+--> Phase Contracts / Gates / Hooks
+--> Worker Adapters (codex / claude / dual)
+--> Memory Manager
'--> SQLite Audit Store
The major design choice is that the engine is the single state authority. Operator channels can change later, but they all call the same workflow, approval, status, phase, and backlog interfaces.
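This single-state-authority pattern can be sketched minimally. The class and method names below are illustrative stand-ins, not ORC's real API: the point is that a channel holds no workflow state of its own and only renders engine responses.

```python
# Hypothetical sketch of "engine as single state authority".
# OrcEngine and CliChannel are illustrative names, not the real ORC classes.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class OrcEngine:
    """The only component that owns workflow state."""
    _workflows: dict[str, dict[str, Any]] = field(default_factory=dict)

    def plan(self, project_id: str, action: str) -> str:
        workflow_id = f"wf-{len(self._workflows) + 1}"
        self._workflows[workflow_id] = {
            "project": project_id,
            "action": action,
            "status": "planned",
        }
        return workflow_id

    def status(self, workflow_id: str) -> dict[str, Any]:
        return self._workflows[workflow_id]


class CliChannel:
    """A channel is a stateless client; swapping it for Telegram or a UI
    changes nothing about workflow, approval, or status semantics."""

    def __init__(self, engine: OrcEngine) -> None:
        self.engine = engine

    def handle(self, project_id: str, action: str) -> str:
        workflow_id = self.engine.plan(project_id, action)
        return f"{workflow_id}: {self.engine.status(workflow_id)['status']}"
```

Any future channel reuses the same `plan`/`status` calls, which is what makes channel replacement safe.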
Core Components
Workflow Engine
The engine plans workflows, creates approval requests, validates approvals, and runs the selected pipeline. The planning entry point already shows the core responsibilities:
def plan(
    self,
    project_id: str,
    action: str,
    command_family: str,
    command: str,
    action_flags: dict[str, bool],
    extra_metadata: dict[str, Any] | None = None,
) -> PlanResult:
    if not self.policy.project_registered(project_id):
        raise EngineError("project is not registered")
    if not self.policy.project_allows_family(project_id, command_family):
        raise EngineError("command family not allowed for project")
    if not self.policy.command_allowed(command_family, command):
        raise EngineError("command denied by allowlist policy")
    risk = self.policy.classify_risk(action_flags)
    ...
    self.storage.create_workflow(...)
    ...
    for role_group in required_groups:
        self.storage.create_approval_request(request)
Source: /home/ORC/orc/engine.py
This is the pattern to copy first: every workflow is planned before it is run, and planning is where policy, quotas, risk, and approvals are attached.
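The fail-closed shape of that planning step can be reduced to a small sketch. The project names, families, and allowlists below are invented for illustration; only the ordering (registration check, family check, command allowlist, then plan creation) mirrors the excerpt above.

```python
# Minimal fail-closed planning sketch. ALLOWED_FAMILIES and COMMAND_ALLOWLIST
# are illustrative stand-ins for the real policy configuration.


class EngineError(Exception):
    pass


ALLOWED_FAMILIES = {"cps": {"read_only", "build_test"}}
COMMAND_ALLOWLIST = {"build_test": {"pnpm build", "pnpm test"}}


def plan(project_id: str, command_family: str, command: str) -> dict:
    """No workflow exists, and nothing can run, unless every check passes."""
    if project_id not in ALLOWED_FAMILIES:
        raise EngineError("project is not registered")
    if command_family not in ALLOWED_FAMILIES[project_id]:
        raise EngineError("command family not allowed for project")
    if command not in COMMAND_ALLOWLIST.get(command_family, set()):
        raise EngineError("command denied by allowlist policy")
    return {"project": project_id, "command": command, "status": "planned"}
```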
Policy Engine
The policy layer is the hard boundary around the system. It decides:
- whether a project exists
- which command families are allowed for that project
- whether an operator command is allowed by policy
- which role groups must approve the workflow
- tenant limits for active workflows
- auth guardrails for worker execution
This is what keeps ORC from degrading into an unrestricted task runner.
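As a hedged sketch of the risk-classification and approval fan-out responsibilities: the real rules live in /home/ORC/orc/policy.py, and the flag names, risk classes, and role groups below are assumptions chosen to show the mapping, not ORC's exact vocabulary.

```python
# Illustrative policy-layer sketch; flag names, risk classes, and role groups
# are assumptions, not the real ORC policy configuration.


def classify_risk(action_flags: dict[str, bool]) -> str:
    """Map action flags to a risk class; higher risk widens approval fan-out."""
    if action_flags.get("deploys") or action_flags.get("touches_secrets"):
        return "high"
    if action_flags.get("writes_code"):
        return "medium"
    return "low"


REQUIRED_GROUPS = {
    "low": [],
    "medium": ["tech_lead"],
    "high": ["tech_lead", "super_admin"],
}


def required_approver_groups(action_flags: dict[str, bool]) -> list[str]:
    """One approval request is created per required role group."""
    return REQUIRED_GROUPS[classify_risk(action_flags)]
```

Keeping this mapping in one place is what lets the engine's `plan()` attach approvals without knowing anything about deployment or secrets itself.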
Project Registry
Projects are declared in config/projects.registry.yaml with their:
- root paths
- tech stacks
- owners
- preferred worker engine
- default verify commands
- allowed command families
That registry lets the control plane reason about each connected repo as an explicit asset, not just a free-form directory path.
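A registry entry might look like the fragment below. This is an assumed shape derived from the attribute list above, not the exact schema of config/projects.registry.yaml.

```yaml
# Illustrative shape for config/projects.registry.yaml; field names are
# assumptions based on the attributes listed above, not the exact schema.
projects:
  flowry:
    root: /home/projects/flowry
    stack: [typescript, node]
    owners: [BalrogEG]
    preferred_engine: codex
    verify_commands: ["pnpm test", "pnpm build"]
    allowed_families: [read_only, build_test, agent_implementation]
```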
Phase Pipeline Engine
ORC supports structured phase pipelines instead of one-shot agent runs. In the working design, the engine hands off execution through phase contracts, gates, and hooks.
Typical pipeline shape:
plan -> architect -> develop -> review -> deploy
Each phase can define:
- prompt contract
- pre-gates
- post-gates
- pre-hooks
- post-hooks
- skip conditions
- output schema expectations
This lets the orchestration layer encode SDLC behavior directly.
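The contract-plus-gates structure can be sketched as a small runner. The dataclass fields mirror the bullets above, but the real schema lives in config/phase-contracts.yaml and the names here are illustrative.

```python
# Sketch of phase contracts with gates and skip conditions; field and function
# names are assumptions, not the real ORC phase engine.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class PhaseContract:
    phase_id: str
    run: Callable[[dict], dict]                      # the agent call itself
    pre_gates: list[Callable[[dict], bool]] = field(default_factory=list)
    post_gates: list[Callable[[dict], bool]] = field(default_factory=list)
    skip_if: Callable[[dict], bool] = lambda ctx: False


def run_pipeline(phases: list[PhaseContract], ctx: dict) -> list[dict]:
    """Run phases in order; a failed gate blocks the rest of the pipeline."""
    results = []
    for phase in phases:
        if phase.skip_if(ctx):
            results.append({"phase_id": phase.phase_id, "status": "skipped"})
            continue
        if not all(gate(ctx) for gate in phase.pre_gates):
            results.append({"phase_id": phase.phase_id, "status": "blocked"})
            break
        output = phase.run(ctx)
        ok = all(gate(output) for gate in phase.post_gates)
        results.append({
            "phase_id": phase.phase_id,
            "status": "completed" if ok else "failed",
            "output": output,
        })
        if not ok:
            break
    return results
```

Because every phase result is a plain dict with a status, the same runner output can feed storage, status commands, and operator channels.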
Workers
Worker execution is intentionally thin. ORC does not embed a custom model runtime. It launches external agent CLIs with policy controls around them.
Supported modes in the current implementation:
- codex
- claude
- dual
The important architectural decision is that ORC strips API-key billing paths and expects subscription-authenticated workers.
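A thin adapter with that guardrail might look like the sketch below. The environment-variable names and CLI invocations are assumptions for illustration; the real launch logic lives in /home/ORC/orc/workers.py.

```python
# Illustrative thin worker adapter. Variable names and CLI argument forms are
# assumptions; the point is stripping API-key billing paths from the worker env.
import os
import subprocess

API_KEY_VARS = {"OPENAI_API_KEY", "ANTHROPIC_API_KEY"}  # assumed names


def worker_env() -> dict[str, str]:
    """Remove API-key billing paths so workers must use subscription auth."""
    return {k: v for k, v in os.environ.items() if k not in API_KEY_VARS}


def launch_worker(engine: str, prompt: str) -> subprocess.CompletedProcess:
    """Launch an external agent CLI; ORC embeds no model runtime of its own."""
    cmd = {"codex": ["codex", "exec"], "claude": ["claude", "-p"]}[engine]
    return subprocess.run(cmd + [prompt], env=worker_env(),
                          capture_output=True, text=True)
```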
Storage and Audit
SQLite stores the operational truth for:
- workflows
- approval requests
- executions
- phase runs
- worker runs
- audit events
- project memory items and snapshots
- extracted backlog items
That makes the audit trail queryable without scraping chat logs.
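A minimal audit-first schema in that spirit is sketched below. The real tables are defined in /home/ORC/orc/storage.py and almost certainly carry more columns; this shows only the shape that makes the trail queryable.

```python
# Minimal audit-first SQLite schema sketch; table and column names are
# assumptions, not the real /home/ORC/orc/storage.py schema.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS workflows (
    workflow_id TEXT PRIMARY KEY,
    project_id  TEXT NOT NULL,
    status      TEXT NOT NULL,
    plan_hash   TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS phase_runs (
    workflow_id TEXT NOT NULL REFERENCES workflows(workflow_id),
    phase_id    TEXT NOT NULL,
    status      TEXT NOT NULL,
    output_json TEXT
);
CREATE TABLE IF NOT EXISTS audit_events (
    workflow_id TEXT,
    event_type  TEXT NOT NULL,
    created_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
"""


def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) the operational store with the audit tables present."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```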
Workflow Lifecycle
The full lifecycle is:
- An operator submits intent through Telegram or orcctl.
- plan() validates the project and command family, classifies risk, creates the workflow, and emits approval requests.
- Required approvers approve or reject.
- run() validates the plan hash and idempotency key, then starts execution.
- _execute_phase_pipeline() runs the selected phases in order.
- Results are stored as execution results and per-phase outputs.
- Backlog items and memory entries are extracted.
- Terminal status is reported back to the operator channel.
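The lifecycle above implies a small state machine. The status and event names below are assumptions about ORC's vocabulary, used only to show that every transition is explicit and everything else is rejected.

```python
# Hedged sketch of the workflow state machine implied by the lifecycle;
# status and event names are assumptions, not ORC's exact vocabulary.
TRANSITIONS = {
    ("planned", "approve"): "approved",
    ("planned", "reject"): "rejected",
    ("approved", "run"): "running",
    ("running", "complete"): "completed",
    ("running", "fail"): "failed",
}


def advance(status: str, event: str) -> str:
    """Allow only declared transitions; anything else is an error, not a no-op."""
    try:
        return TRANSITIONS[(status, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {status} -> {event}") from None
```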
The status interface is deliberately simple:
def phase_status(self, workflow_id: str) -> list[dict[str, Any]]:
    """Return phase run details for a workflow."""
    return self.storage.list_phase_runs(workflow_id)
That one interface is what powers detailed phase inspection in external channels like Telegram.
Operator Commands
The current ORC CLI exposes the minimum useful control-plane surface:
python3 scripts/orcctl.py --db /tmp/orc.db --root . plan --project cps --action build --family build_test --command-text "pnpm build"
python3 scripts/orcctl.py --db /tmp/orc.db --root . implement --project flowry --engine codex --task "Add a health endpoint"
python3 scripts/orcctl.py --db /tmp/orc.db --root . approve --request-id <id> --token <token> --user BalrogEG --role super_admin
python3 scripts/orcctl.py --db /tmp/orc.db --root . run --workflow-id <id> --plan-hash <hash> --idempotency-key manual-1
python3 scripts/orcctl.py --db /tmp/orc.db --root . status --workflow-id <id>
python3 scripts/orcctl.py --db /tmp/orc.db --root . audit --workflow-id <id>
python3 scripts/orcctl.py --db /tmp/orc.db --root . overview
python3 scripts/orcctl.py --db /tmp/orc.db --root . workflow --workflow-id <id>
python3 scripts/orcctl.py --db /tmp/orc.db --root . workers --project flowry
For implementation teams, this is important: expose the orchestration engine through a stable operations surface before you build richer UIs.
How the Phase Pipeline Is Implemented
The production value in ORC is not just "run an agent." It is "run a governed sequence of phases with machine-readable outputs."
The architecture overview in the project maps the working behavior:
- plan: scope summary, affected files, risk assessment, task breakdown
- architect: design decisions, pattern compliance, architecture violations
- develop: code changes, verification results, files modified
- review: security findings, quality score, compliance status
- deploy: deployment steps, rollback plan, health check results
This is the AEEF-aligned implementation model:
- every phase emits structured output
- every phase can be blocked by gates
- every important output is persisted
- later channels can present those outputs without rerunning the agent
Representative Structured Output
The engine stores results as structured objects rather than raw chat transcripts. A typical phase result shape looks like:
{
"phase_id": "develop",
"status": "completed",
"output": {
"changes_summary": "Added health endpoint and tests",
"files_modified": [
"src/routes/health.ts",
"tests/health.test.ts"
],
"verification_results": [
"pnpm test",
"pnpm build"
]
}
}
This output-first approach is what makes downstream features practical:
- Telegram can show phase details
- backlog extraction can create new tasks
- dashboards can summarize execution without parsing prose
Implementation Guide
If you want to build the same pattern in another company or repo, use this sequence.
1. Define a project registry first
Start with an explicit registry of allowed projects and command families. Do not accept arbitrary filesystem targets from operators.
2. Put planning ahead of execution
Make planning create a stable workflow_id, a plan_hash, a risk class, and approval requests before any worker runs.
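One plausible way to derive a stable plan_hash (this is an assumption about the mechanism, not ORC's exact implementation) is to hash a canonical JSON serialization of the plan, so key ordering cannot change the hash:

```python
# Assumed plan-hash derivation: hash a canonical JSON form of the plan so the
# hash is stable across dict key orderings. Not the exact ORC implementation.
import hashlib
import json


def plan_hash(plan: dict) -> str:
    canonical = json.dumps(plan, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Any change to the plan after approval produces a different hash, which is what lets run() refuse stale or tampered plans.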
3. Separate policy from execution
Keep risk classification, role-group mapping, auth guardrails, and tenant quotas in the policy layer rather than embedding them inside worker code.
4. Encode your SDLC as phase contracts
Represent the lifecycle as named phases with gates and hooks. This makes orchestration inspectable and testable.
5. Keep worker adapters thin
Treat Codex, Claude, or any future runtime as replaceable workers. The control plane owns workflow state and policy; workers only execute the assigned phase.
6. Persist structured outputs
Do not rely on long free-form transcripts. Persist status, output payloads, gate results, and audit events in tables designed for later retrieval.
7. Expose operational introspection
Implement status, phase_status, audit, overview, workflow, and workers endpoints or commands before scaling agent autonomy.
8. Add operator channels last
Telegram, Slack, dashboards, or internal portals should stay as clients of the engine. Do not let channels become the source of truth.
Security and Governance Controls
The current ORC implementation demonstrates several controls that should remain non-negotiable:
- project allowlisting through the registry
- command-family allowlisting
- risk-class approval fan-out
- plan hash validation before execution
- idempotency keys on runs
- subscription-only worker auth mode
- audit event persistence for planning, approval, execution, and terminal states
These are exactly the controls that turn an agent workflow into an enterprise-capable orchestration system.
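Two of those controls, plan-hash validation and idempotency keys, combine naturally into one run-time guard. The class and method names below are illustrative assumptions, not the real run() internals:

```python
# Illustrative run-time guard combining plan-hash validation with idempotency
# keys; names are assumptions, not the real ORC run() internals.
class RunGuard:
    def __init__(self) -> None:
        self._seen_keys: set[str] = set()

    def check(self, stored_hash: str, presented_hash: str,
              idempotency_key: str) -> bool:
        """Reject stale plans hard; treat duplicate run requests as no-ops."""
        if presented_hash != stored_hash:
            raise ValueError("plan hash mismatch: plan changed since approval")
        if idempotency_key in self._seen_keys:
            return False  # duplicate request: safe no-op, nothing re-runs
        self._seen_keys.add(idempotency_key)
        return True
```

The asymmetry is deliberate: a hash mismatch is a policy violation and fails loudly, while a repeated idempotency key is an expected retry and fails quietly.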
What To Copy First
If another team wants to reproduce ORC quickly, copy these implementation ideas in this order:
- orc/engine.py: workflow lifecycle
- orc/policy.py: risk and approval logic
- orc/registry.py: explicit project inventory
- orc/phases.py, orc/gates.py, and orc/hooks.py: phase contracts
- orc/storage.py: audit-first persistence model
- scripts/orcctl.py: operational surface
Working Project File Map
Key files in the live implementation:
- /home/ORC/orc/engine.py
- /home/ORC/orc/policy.py
- /home/ORC/orc/registry.py
- /home/ORC/orc/storage.py
- /home/ORC/orc/phases.py
- /home/ORC/orc/gates.py
- /home/ORC/orc/hooks.py
- /home/ORC/orc/workers.py
- /home/ORC/orc/memory.py
- /home/ORC/orc/cli.py
- /home/ORC/config/projects.registry.yaml
- /home/ORC/config/phase-contracts.yaml