Integration Patterns

Using AEEF with Any Orchestration Tool


AEEF's governance model -- contracts, quality gates, provenance tracking, and AI-usage disclosure -- is designed to be tool-agnostic. While the AEEF CLI provides native orchestration, teams often adopt external orchestration tools for parallel execution, visual management, or framework-specific features.

This page provides concrete integration patterns for eight orchestration tools, plus a generic template that applies to any orchestrator.

Prerequisites

Before integrating AEEF with an external tool, you need:

  1. AEEF contracts for your agent roles. These are the Markdown files in roles/{role}/rules/contract.md from the AEEF CLI repository.
  2. AEEF quality gate definitions. These are the validation criteria enforced at each handoff stage.
  3. AEEF provenance schema. The JSON schema for tracking AI-generated artifacts (from the config packs).
  4. A working AEEF CLI installation (optional but recommended for testing contract compliance before integrating with external tools).
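
Item 3's record format is easiest to understand by example. The sketch below validates a provenance record in plain Python; the field names are illustrative (they mirror the examples later on this page), not the authoritative schema from the config packs:

```python
# Illustrative required fields -- the real schema comes from the AEEF config packs
REQUIRED_FIELDS = {
    "schema": str,
    "timestamp": str,
    "agent_role": str,
    "model": str,
    "ai_generated": bool,
}

def validate_provenance(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems
```

An empty return value means the record has every required field with the expected type.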

Pattern 1: AEEF + CrewAI

CrewAI's role-based agent model is the closest match to AEEF's 4-agent SDLC among general-purpose frameworks. Each AEEF role maps directly to a CrewAI Agent, and each AEEF handoff maps to a CrewAI Task with validation.

Mapping AEEF Roles to CrewAI Agents

from crewai import Agent, Task, Crew, Process
from crewai.tools import tool

# ─── AEEF Contract Loader ───────────────────────────────────────
def load_aeef_contract(role: str) -> str:
    """Load the AEEF contract for a given role."""
    contract_path = f"aeef-cli/roles/{role}/rules/contract.md"
    with open(contract_path, "r") as f:
        return f.read()

# ─── AEEF-Governed Agents ───────────────────────────────────────
product_agent = Agent(
    role="Product Owner",
    goal="Define requirements per PRD-STD-001 and produce a complete PRD",
    backstory=(
        "You are the Product Owner in an AEEF-governed agent SDLC. "
        "You follow the AEEF product-agent contract strictly. "
        "You produce PRDs with acceptance criteria, user stories, "
        "and success metrics. You do NOT write code or tests."
    ),
    verbose=True,
    allow_delegation=False,
    # Inject the full AEEF contract as additional context
    system_template=load_aeef_contract("product"),
)

architect_agent = Agent(
    role="Software Architect",
    goal="Produce architecture design documents per AEEF architect contract",
    backstory=(
        "You are the Architect in an AEEF-governed agent SDLC. "
        "You receive a PRD from the Product Owner and produce "
        "a design document covering component structure, API contracts, "
        "data models, and deployment architecture. "
        "You do NOT write application code."
    ),
    verbose=True,
    allow_delegation=False,
    system_template=load_aeef_contract("architect"),
)

developer_agent = Agent(
    role="Developer",
    goal="Implement the architect's design with tests per AEEF developer contract",
    backstory=(
        "You are the Developer in an AEEF-governed agent SDLC. "
        "You receive a design document from the Architect and "
        "implement it with full test coverage. You follow the "
        "AEEF developer contract for file ownership, coding standards, "
        "and AI-usage disclosure in commit messages."
    ),
    verbose=True,
    allow_delegation=False,
    system_template=load_aeef_contract("developer"),
)

qc_agent = Agent(
    role="QC Engineer",
    goal="Validate all artifacts against AEEF quality gates",
    backstory=(
        "You are the QC Engineer in an AEEF-governed agent SDLC. "
        "You receive the developer's implementation and validate it "
        "against AEEF quality gates: test coverage >= 80%, no critical "
        "security findings, AI-usage disclosure present, and all "
        "acceptance criteria met. You do NOT modify application code."
    ),
    verbose=True,
    allow_delegation=False,
    system_template=load_aeef_contract("qc"),
)

Mapping AEEF Quality Gates to CrewAI Task Validators

# ─── AEEF Quality Gate as Task Output Validator ─────────────────
def aeef_quality_gate(output: str) -> bool:
    """
    Validate task output against AEEF quality criteria.
    Returns True if the output passes the quality gate.

    In production, this would parse structured output and check:
    - PRD completeness (product -> architect handoff)
    - Design coverage (architect -> developer handoff)
    - Test coverage threshold (developer -> QC handoff)
    - Full compliance (QC -> merge handoff)
    """
    required_sections = {
        "product": ["## User Stories", "## Acceptance Criteria", "## Success Metrics"],
        "architect": ["## Component Architecture", "## API Contracts", "## Data Model"],
        "developer": ["## Implementation", "## Test Results", "## Coverage Report"],
        "qc": ["## Validation Results", "## Gate Status: PASS"],
    }
    # Simplified check -- production implementation would be more rigorous
    return "## " in output and len(output) > 200
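
The simplified check above never consults its `required_sections` table. A role-aware variant might look like the following sketch; `aeef_role_gate` and the `functools.partial` binding are illustrative additions, since the validator each task calls receives only the output text:

```python
from functools import partial

# Role-keyed section requirements (mirrors the table in aeef_quality_gate)
REQUIRED_SECTIONS = {
    "product": ["## User Stories", "## Acceptance Criteria", "## Success Metrics"],
    "architect": ["## Component Architecture", "## API Contracts", "## Data Model"],
    "developer": ["## Implementation", "## Test Results", "## Coverage Report"],
    "qc": ["## Validation Results", "## Gate Status: PASS"],
}

def aeef_role_gate(role: str, output: str) -> bool:
    """Pass only if every section required for this role is present."""
    return all(section in output for section in REQUIRED_SECTIONS[role])

# Bind the role up front so the validator keeps a single-argument shape
product_gate = partial(aeef_role_gate, "product")
```

Each task would then bind its own role, e.g. `output_validator=product_gate` for the requirements task.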


# ─── AEEF Tasks with Quality Gates ─────────────────────────────
requirements_task = Task(
    description=(
        "Analyze the feature request and produce a PRD following "
        "AEEF PRD-STD-001 format. Include user stories, acceptance "
        "criteria, and success metrics."
    ),
    expected_output="A complete PRD in Markdown format with all required sections",
    agent=product_agent,
    output_validator=aeef_quality_gate,
)

design_task = Task(
    description=(
        "Review the PRD and produce an architecture design document. "
        "Include component structure, API contracts, data models, "
        "and deployment architecture per AEEF architect contract."
    ),
    expected_output="A design document in Markdown format",
    agent=architect_agent,
    output_validator=aeef_quality_gate,
    context=[requirements_task],  # Receives PRD as input
)

implementation_task = Task(
    description=(
        "Implement the architecture design with full test coverage. "
        "Follow AEEF developer contract: file ownership rules, "
        "coding standards, minimum 80% test coverage, "
        "AI-usage disclosure in all generated files."
    ),
    expected_output="Implementation with tests and coverage report",
    agent=developer_agent,
    output_validator=aeef_quality_gate,
    context=[design_task],  # Receives design doc as input
)

validation_task = Task(
    description=(
        "Validate the implementation against all AEEF quality gates. "
        "Check: test coverage >= 80%, no critical security findings, "
        "AI-usage disclosure present, all acceptance criteria met. "
        "Produce a validation report with PASS/FAIL status."
    ),
    expected_output="Validation report with gate status",
    agent=qc_agent,
    output_validator=aeef_quality_gate,
    context=[implementation_task, requirements_task],
)

Assembling the AEEF Crew

# ─── AEEF Crew (Sequential Pipeline) ───────────────────────────
aeef_crew = Crew(
    agents=[product_agent, architect_agent, developer_agent, qc_agent],
    tasks=[requirements_task, design_task, implementation_task, validation_task],
    process=Process.sequential,  # AEEF baseline: sequential pipeline
    verbose=True,
)

# ─── Execute ────────────────────────────────────────────────────
result = aeef_crew.kickoff(
    inputs={"feature_request": "Add user authentication with OAuth2 support"}
)
print(result)

AEEF Provenance Tracking with CrewAI

import hashlib
import json
from datetime import datetime, timezone

def log_aeef_provenance(task_name: str, agent_role: str, output: str):
    """Log an AEEF provenance record for a CrewAI task execution."""
    record = {
        "schema": "aeef-provenance-v1",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_role": agent_role,
        "task": task_name,
        "orchestrator": "crewai",
        "model": "claude-sonnet-4-20250514",
        # hashlib gives a stable digest; the built-in hash() varies between runs
        "output_hash": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "contract_version": "aeef-cli-v1.0",
        "quality_gate_passed": True,
        "ai_generated": True,
    }
    with open("aeef-provenance.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
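
Because the log is one JSON object per line, it can be audited after a run. A minimal sketch, assuming the field names used in `log_aeef_provenance` above:

```python
import json

def audit_provenance(jsonl_text: str) -> list[str]:
    """Return the agent roles of any records that failed their quality gate."""
    failures = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # skip blank lines in the log
        record = json.loads(line)
        if not record.get("quality_gate_passed", False):
            failures.append(record.get("agent_role", "unknown"))
    return failures

# Typical use: audit_provenance(open("aeef-provenance.jsonl").read())
```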

Pattern 2: AEEF + claude-flow

claude-flow's queen-led swarm model can be configured to respect AEEF contracts by injecting role constraints into agent definitions and using AEEF quality gates as task validators.

Configuring AEEF Roles as claude-flow Agents

# claude-flow.config.yaml
# AEEF-governed agent definitions for claude-flow

queen:
  name: aeef-architect
  description: "AEEF Architect agent - coordinates the development pipeline"
  constraints:
    - "Follow AEEF architect contract: design documents only, no application code"
    - "Decompose features into implementation tasks for worker agents"
    - "Validate design coverage before dispatching to developers"
  worktree: true
  branch_prefix: "aeef/architect"

agents:
  - name: aeef-product
    description: "AEEF Product Owner - produces PRDs"
    constraints:
      - "Follow AEEF product-agent contract"
      - "Produce PRDs with user stories, acceptance criteria, success metrics"
      - "Markdown files only - no code, no tests"
    worktree: true
    branch_prefix: "aeef/product"
    tools:
      allowed:
        - Read
        - Write
        - Edit
        - Glob
        - Grep
      disallowed:
        - Bash  # Product agent cannot execute code

  - name: aeef-developer-1
    description: "AEEF Developer - implements module-1"
    constraints:
      - "Follow AEEF developer contract"
      - "Only modify files in module-1/ directory"
      - "Minimum 80% test coverage"
      - "Include AI-Usage: generated header in all new files"
    worktree: true
    branch_prefix: "aeef/dev-1"
    file_ownership:
      - "module-1/**"

  - name: aeef-developer-2
    description: "AEEF Developer - implements module-2"
    constraints:
      - "Follow AEEF developer contract"
      - "Only modify files in module-2/ directory"
      - "Minimum 80% test coverage"
      - "Include AI-Usage: generated header in all new files"
    worktree: true
    branch_prefix: "aeef/dev-2"
    file_ownership:
      - "module-2/**"

  - name: aeef-qc
    description: "AEEF QC Engineer - validates all outputs"
    constraints:
      - "Follow AEEF QC contract"
      - "Run all tests, check coverage, validate AI-usage disclosure"
      - "Do NOT modify application code"
      - "Produce validation report with PASS/FAIL status"
    worktree: true
    branch_prefix: "aeef/qc"
    tools:
      allowed:
        - Read
        - Bash  # For running tests
        - Glob
        - Grep
      disallowed:
        - Write  # QC cannot write application code
        - Edit   # QC cannot edit application code

AEEF Quality Gates as claude-flow Task Dependencies

# claude-flow task graph with AEEF quality gates

tasks:
  - id: requirements
    agent: aeef-product
    description: "Produce PRD for the feature"
    outputs:
      - PRD.md

  - id: gate-prd
    type: validation
    description: "AEEF quality gate: PRD completeness"
    depends_on: [requirements]
    validation:
      script: "./scripts/aeef-gate-prd.sh"
      required_sections:
        - "User Stories"
        - "Acceptance Criteria"
        - "Success Metrics"

  - id: design
    agent: aeef-architect
    description: "Produce architecture design"
    depends_on: [gate-prd]
    inputs:
      - PRD.md
    outputs:
      - DESIGN.md

  - id: gate-design
    type: validation
    description: "AEEF quality gate: design coverage"
    depends_on: [design]
    validation:
      script: "./scripts/aeef-gate-design.sh"

  - id: implement-module-1
    agent: aeef-developer-1
    description: "Implement module-1 per design"
    depends_on: [gate-design]
    inputs:
      - DESIGN.md

  - id: implement-module-2
    agent: aeef-developer-2
    description: "Implement module-2 per design"
    depends_on: [gate-design]
    inputs:
      - DESIGN.md

  # Parallel execution: module-1 and module-2 run simultaneously

  - id: gate-implementation
    type: validation
    description: "AEEF quality gate: test coverage and security"
    depends_on: [implement-module-1, implement-module-2]
    validation:
      script: "./scripts/aeef-gate-implementation.sh"
      min_coverage: 80
      security_scan: true

  - id: validation
    agent: aeef-qc
    description: "Final QC validation"
    depends_on: [gate-implementation]

  - id: gate-final
    type: validation
    description: "AEEF quality gate: final compliance check"
    depends_on: [validation]
    validation:
      script: "./scripts/aeef-gate-final.sh"
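
The `script:` entries above point at gate scripts you provide in your own repository; claude-flow does not ship them. A minimal sketch of what `./scripts/aeef-gate-prd.sh` might contain (the heredoc builds a sample PRD purely so the demo is self-contained):

```shell
#!/usr/bin/env bash
# scripts/aeef-gate-prd.sh -- minimal sketch of the PRD completeness gate
set -euo pipefail

aeef_gate_prd() {
  local prd="$1" missing=0
  for section in "User Stories" "Acceptance Criteria" "Success Metrics"; do
    if ! grep -q "## ${section}" "$prd"; then
      echo "FAIL: PRD missing section: ${section}"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "PASS: PRD contains all required sections"
  fi
  return "$missing"
}

# Demo invocation against a sample PRD (scaffolding for illustration only)
cat > /tmp/sample-prd.md <<'EOF'
## User Stories
As a user, I can sign in with OAuth2.
## Acceptance Criteria
Login succeeds with a valid provider token.
## Success Metrics
95% of sign-ins complete without support tickets.
EOF
aeef_gate_prd /tmp/sample-prd.md
```

In a real repository the script would take the PRD path as its first argument and the heredoc would be dropped.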

AEEF .claude/ Directory Sharing with claude-flow

#!/usr/bin/env bash
# scripts/setup-aeef-worktrees.sh
# Pre-configures .claude/ directory in each claude-flow worktree
# with AEEF role-specific settings

ROLES=("product" "architect" "developer" "qc")
AEEF_CLI_DIR="./aeef-cli"

for role in "${ROLES[@]}"; do
  worktree_dir=".claude/worktrees/aeef-${role}"
  if [ -d "$worktree_dir" ]; then
    # Copy AEEF role config into the worktree's .claude/ directory
    mkdir -p "${worktree_dir}/.claude/rules"
    mkdir -p "${worktree_dir}/.claude/skills"

    cp "${AEEF_CLI_DIR}/roles/${role}/CLAUDE.md" \
      "${worktree_dir}/.claude/CLAUDE.md"
    cp "${AEEF_CLI_DIR}/roles/${role}/rules/contract.md" \
      "${worktree_dir}/.claude/rules/contract.md"
    cp "${AEEF_CLI_DIR}/roles/${role}/settings.json" \
      "${worktree_dir}/.claude/settings.json"

    # Copy shared skills
    cp -r "${AEEF_CLI_DIR}/skills/"* \
      "${worktree_dir}/.claude/skills/"

    echo "Configured AEEF ${role} role in ${worktree_dir}"
  fi
done

Pattern 3: AEEF + Composio Agent Orchestrator

Composio and AEEF both use branch-per-agent isolation, making them a natural pairing. The integration focuses on mapping AEEF contracts to Composio agent configurations and AEEF quality gates to Composio CI rules.

Branch Model Alignment

AEEF Branch Model:              Composio Branch Model:
main                            main
├── aeef/product                ├── agent/task-1
├── aeef/architect              ├── agent/task-2
├── aeef/dev                    ├── agent/task-3
└── aeef/qc                     └── agent/task-4

Combined (AEEF + Composio):
main
├── aeef/product        (AEEF role: Product Owner)
├── aeef/architect      (AEEF role: Architect)
├── aeef/dev/module-1   (Composio: parallel agent for module-1)
├── aeef/dev/module-2   (Composio: parallel agent for module-2)
├── aeef/dev/module-3   (Composio: parallel agent for module-3)
└── aeef/qc             (AEEF role: QC Engineer)
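
For experimentation, the combined layout can be reproduced with plain git; `aeef-demo` below is a hypothetical throwaway directory. Git ref naming is also why the combined model drops the bare `aeef/dev` branch: a ref named `aeef/dev` cannot coexist with `aeef/dev/module-1`.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo demonstrating the combined AEEF + Composio branch layout
git init -q aeef-demo
cd aeef-demo
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "chore: initial commit"

# Per-module developer branches replace a single aeef/dev branch
for branch in aeef/product aeef/architect \
              aeef/dev/module-1 aeef/dev/module-2 aeef/dev/module-3 aeef/qc; do
  git branch "$branch"
done

git branch
```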

Composio Agent Configuration with AEEF Contracts

# composio-orchestrator.yaml
# AEEF-governed agent configuration for Composio

orchestration:
  strategy: branch-per-agent
  branch_prefix: "aeef"
  ci_integration: true
  auto_fix: false  # Disable auto-fix to respect AEEF contract boundaries

agents:
  product:
    branch: "aeef/product"
    model: "claude-sonnet-4-20250514"
    instructions: |
      You are the AEEF Product Owner agent.
      Follow the AEEF product-agent contract strictly:
      - Produce PRDs with user stories, acceptance criteria, success metrics
      - Only create/edit Markdown files
      - Do not write code, tests, or configuration
      - Include AI-Usage disclosure in PRD metadata
    allowed_file_patterns:
      - "docs/**/*.md"
      - "PRD*.md"
      - "requirements/**/*.md"
    ci_checks:
      - name: "aeef-prd-completeness"
        script: "./scripts/aeef-gate-prd.sh"

  architect:
    branch: "aeef/architect"
    model: "claude-sonnet-4-20250514"
    depends_on: ["product"]
    instructions: |
      You are the AEEF Architect agent.
      Follow the AEEF architect contract strictly:
      - Produce design documents from the PRD
      - Include component architecture, API contracts, data models
      - Do not write application code
      - Decompose implementation into parallel-safe subtasks
    allowed_file_patterns:
      - "docs/**/*.md"
      - "DESIGN*.md"
      - "architecture/**"
    ci_checks:
      - name: "aeef-design-coverage"
        script: "./scripts/aeef-gate-design.sh"

  developer:
    branch_prefix: "aeef/dev"
    model: "claude-sonnet-4-20250514"
    depends_on: ["architect"]
    parallel: true  # Composio spawns multiple dev agents for subtasks
    instructions: |
      You are the AEEF Developer agent.
      Follow the AEEF developer contract strictly:
      - Implement the architect's design
      - Write tests with minimum 80% coverage
      - Include AI-Usage header in all new files
      - Only modify files in your assigned module
    ci_checks:
      - name: "aeef-test-coverage"
        script: "./scripts/aeef-gate-coverage.sh"
        threshold: 80
      - name: "aeef-security-scan"
        script: "./scripts/aeef-gate-security.sh"

  qc:
    branch: "aeef/qc"
    model: "claude-sonnet-4-20250514"
    depends_on: ["developer"]
    instructions: |
      You are the AEEF QC Engineer agent.
      Follow the AEEF QC contract strictly:
      - Run all tests and validate coverage
      - Check AI-usage disclosure in all files
      - Validate against all acceptance criteria from the PRD
      - Do NOT modify application code
      - Produce a validation report with PASS/FAIL
    allowed_file_patterns:
      - "reports/**"
      - "VALIDATION*.md"
    ci_checks:
      - name: "aeef-final-compliance"
        script: "./scripts/aeef-gate-final.sh"

AEEF CI Gate Scripts for Composio

#!/usr/bin/env bash
# scripts/aeef-gate-coverage.sh
# Quality gate: test coverage threshold for Composio CI integration

set -euo pipefail

THRESHOLD="${1:-80}"
COVERAGE_FILE="coverage/coverage-summary.json"

if [ ! -f "$COVERAGE_FILE" ]; then
  echo "FAIL: Coverage report not found at ${COVERAGE_FILE}"
  echo "Run tests with coverage before triggering this gate."
  exit 1
fi

# Extract line coverage percentage (works with Istanbul/NYC format)
COVERAGE=$(python3 -c "
import json
with open('${COVERAGE_FILE}') as f:
    data = json.load(f)
print(data.get('total', {}).get('lines', {}).get('pct', 0))
")

echo "AEEF Quality Gate: Test Coverage"
echo "  Threshold: ${THRESHOLD}%"
echo "  Actual:    ${COVERAGE}%"

if (( $(echo "${COVERAGE} >= ${THRESHOLD}" | bc -l) )); then
  echo "  Status: PASS"
  exit 0
else
  echo "  Status: FAIL"
  echo "  Action: Developer agent must increase test coverage"
  exit 1
fi

Pattern 4: AEEF + GitHub Agentic Workflows

GitHub Agentic Workflows (tech preview, February 2026) use Markdown files to define multi-agent workflows within a repository. AEEF standards map naturally to workflow constraints and GitHub Actions checks.

AEEF Pipeline as a GitHub Agentic Workflow

<!-- .github/workflows/aeef-feature-pipeline.md -->
# AEEF Feature Pipeline

## Trigger
- New issue with label `feature-request`

## Steps

### 1. Requirements (Product Owner)
- **Agent**: claude-sonnet-4
- **Role**: Product Owner
- **Instructions**: Follow AEEF product-agent contract. Produce a PRD
with user stories, acceptance criteria, and success metrics.
- **Allowed files**: `docs/prd/*.md`
- **Output**: PRD document committed to branch `aeef/product`
- **Gate**: Run `.github/actions/aeef-gate-prd` before proceeding

### 2. Architecture (Architect)
- **Agent**: claude-sonnet-4
- **Role**: Architect
- **Instructions**: Follow AEEF architect contract. Review the PRD
and produce a design document with component structure, API
contracts, and data models.
- **Input**: PRD from step 1
- **Allowed files**: `docs/design/*.md`, `architecture/**`
- **Output**: Design document committed to branch `aeef/architect`
- **Gate**: Run `.github/actions/aeef-gate-design` before proceeding

### 3. Implementation (Developer)
- **Agent**: claude-sonnet-4
- **Role**: Developer
- **Instructions**: Follow AEEF developer contract. Implement the
design with tests. Minimum 80% coverage. Include AI-Usage
disclosure header in all new files.
- **Input**: Design document from step 2
- **Allowed files**: `src/**`, `tests/**`, `package.json`
- **Output**: Implementation committed to branch `aeef/dev`
- **Gate**: Run `.github/actions/aeef-gate-implementation`

### 4. Validation (QC)
- **Agent**: claude-sonnet-4
- **Role**: QC Engineer
- **Instructions**: Follow AEEF QC contract. Run all tests, validate
coverage, check AI-usage disclosure, verify acceptance criteria.
Do NOT modify application code.
- **Input**: Implementation from step 3 + PRD from step 1
- **Allowed files**: `reports/**`, `VALIDATION.md`
- **Output**: Validation report committed to branch `aeef/qc`
- **Gate**: Run `.github/actions/aeef-gate-final`

### 5. Merge
- **Condition**: All gates pass
- **Action**: Create PR from `aeef/qc` to `main`
- **Reviewers**: Auto-assign from CODEOWNERS

AEEF Gate as a GitHub Action

# .github/actions/aeef-gate-prd/action.yml
name: "AEEF Quality Gate: PRD Completeness"
description: "Validates that a PRD meets AEEF PRD-STD-001 requirements"

inputs:
  prd_path:
    description: "Path to the PRD file"
    required: true

runs:
  using: "composite"
  steps:
    - name: Check required sections
      shell: bash
      run: |
        PRD="${{ inputs.prd_path }}"
        MISSING=()

        for section in "User Stories" "Acceptance Criteria" "Success Metrics" \
                       "Scope" "Non-Functional Requirements"; do
          if ! grep -q "## ${section}" "$PRD"; then
            MISSING+=("$section")
          fi
        done

        if [ ${#MISSING[@]} -gt 0 ]; then
          echo "::error::AEEF Gate FAIL: PRD missing sections: ${MISSING[*]}"
          exit 1
        fi

        echo "AEEF Gate PASS: PRD contains all required sections"

    - name: Check AI-usage disclosure
      shell: bash
      run: |
        PRD="${{ inputs.prd_path }}"
        if ! grep -qi "ai-usage\|ai.generated\|generated.by" "$PRD"; then
          echo "::warning::AEEF PRD-STD-008: AI-usage disclosure not found in PRD"
        fi

Pattern 5: AEEF + CodeRabbit

CodeRabbit is an AI code review platform that automatically reviews pull requests. AEEF quality rules can be encoded as CodeRabbit review configurations, providing automated enforcement during the PR review stage.

CodeRabbit Configuration with AEEF Rules

# .coderabbit.yaml
# AEEF-governed CodeRabbit review configuration

language: "en-US"

reviews:
  auto_review:
    enabled: true
    drafts: false

  # ─── AEEF Quality Rules ──────────────────────────────────────
  path_instructions:
    - path: "src/**"
      instructions: |
        Review this code against AEEF production standards:
        1. PRD-STD-002 (Code Review): Check for clear naming, consistent
           style, appropriate error handling, and no dead code.
        2. PRD-STD-003 (Testing): Verify that new functions have
           corresponding test files. Flag untested code paths.
        3. PRD-STD-004 (Security): Check for hardcoded secrets,
           SQL injection, XSS vulnerabilities, and insecure dependencies.
        4. PRD-STD-006 (Technical Debt): Flag TODO/FIXME comments
           without issue links, overly complex functions (cyclomatic
           complexity > 10), and duplicated code blocks.

    - path: "tests/**"
      instructions: |
        Review test code against AEEF standards:
        1. Verify tests follow Arrange-Act-Assert pattern.
        2. Check for meaningful assertion messages.
        3. Flag tests that only test happy paths without edge cases.
        4. Ensure test names describe the behavior being tested.

    - path: "docs/**"
      instructions: |
        Review documentation against AEEF standards:
        1. PRD-STD-001: Verify PRDs include user stories, acceptance
           criteria, and success metrics.
        2. Check for AI-usage disclosure per PRD-STD-008.
        3. Verify cross-references to related documents.

# ─── AEEF AI Disclosure Enforcement ──────────────────────────
custom_rules:
  - name: "aeef-ai-disclosure"
    description: "AEEF PRD-STD-008: AI-usage disclosure check"
    pattern: "new file"
    instructions: |
      For every NEW file in this PR, check whether it includes an
      AI-usage disclosure header or comment. AEEF PRD-STD-008 requires
      that all AI-generated code includes a disclosure statement.
      Acceptable formats:
      - `// AI-Usage: generated` (or `# AI-Usage: generated`)
      - `AI-Generated: true` in file metadata
      - Commit message contains `AI-Usage:` trailer
      If no disclosure is found, flag it as a required change.

  - name: "aeef-provenance"
    description: "AEEF provenance tracking verification"
    pattern: "*.ts|*.py|*.go"
    instructions: |
      Check that the PR description includes provenance information:
      - Which AEEF agent role produced this code
      - Which contract version was applied
      - Whether quality gates were passed
      If the PR was produced by the AEEF CLI, this information
      should be in the PR body automatically. If missing, flag it.

# ─── AEEF Security Review Rules ──────────────────────────────
tools:
  semgrep:
    enabled: true
  eslint:
    enabled: true
  ruff:
    enabled: true

CodeRabbit as a Complement to the AEEF QC Agent

AEEF Pipeline with CodeRabbit:

Product → Architect → Developer → QC Agent → PR to main
                                                  │
                                                  ▼
                                       CodeRabbit auto-review
                                                  │
                                          ┌───────┴───────┐
                                          │               │
                                       Approve     Request changes
                                          │               │
                                          ▼               ▼
                                        Merge      Developer fixes
                                                    (new commit)
                                                          │
                                                          ▼
                                                CodeRabbit re-review

CodeRabbit and the AEEF QC agent serve complementary roles:

  • The QC agent runs tests, checks coverage, and validates artifacts (active validation).
  • CodeRabbit reviews code style, security patterns, and AEEF compliance (passive review).

Together, they provide defense-in-depth for code quality.


Pattern 6: AEEF + Claude Agent SDK

The Claude Agent SDK provides the lowest-level building blocks for constructing AEEF-governed agents. This pattern is for teams building custom orchestration on top of Anthropic's official SDK.

TypeScript: Building an AEEF-Governed Agent

import Anthropic from "@anthropic-ai/sdk";
import { readFileSync, appendFileSync } from "fs";

// ─── AEEF Contract Loader ─────────────────────────────────────
function loadContract(role: string): string {
  return readFileSync(
    `aeef-cli/roles/${role}/rules/contract.md`,
    "utf-8"
  );
}

// ─── AEEF Tool Permissions per Role ───────────────────────────
const ROLE_TOOLS: Record<string, string[]> = {
  product: ["read_file", "write_file", "list_directory"],
  architect: ["read_file", "write_file", "list_directory", "search_code"],
  developer: [
    "read_file", "write_file", "edit_file",
    "run_command", "list_directory", "search_code",
  ],
  qc: ["read_file", "run_command", "list_directory", "search_code"],
};

// ─── AEEF Provenance Logger ───────────────────────────────────
function logProvenance(
  role: string,
  model: string,
  action: string,
  inputTokens: number,
  outputTokens: number
): void {
  const record = {
    schema: "aeef-provenance-v1",
    timestamp: new Date().toISOString(),
    agent_role: role,
    action,
    model,
    input_tokens: inputTokens,
    output_tokens: outputTokens,
    contract_version: "aeef-cli-v1.0",
    ai_generated: true,
  };
  appendFileSync(
    "aeef-provenance.jsonl",
    JSON.stringify(record) + "\n"
  );
}

// ─── AEEF Agent Factory ───────────────────────────────────────
async function createAeefAgent(
  role: string,
  taskDescription: string,
  context: string = ""
): Promise<string> {
  const client = new Anthropic();
  const contract = loadContract(role);
  const allowedTools = ROLE_TOOLS[role] || [];

  const systemPrompt = `You are the ${role} agent in an AEEF-governed Agent SDLC.

## Your Contract
${contract}

## Allowed Tools
You may ONLY use the following tools: ${allowedTools.join(", ")}

## AI-Usage Disclosure
Every file you create must include an AI-usage disclosure header.
Every commit message must include an AI-Usage trailer.

## Quality Requirements
Your output must pass the AEEF quality gate for the ${role} role
before it can be handed off to the next agent in the pipeline.`;

  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 8192,
    system: systemPrompt,
    messages: [
      {
        role: "user",
        content: context
          ? `Context from previous agent:\n${context}\n\nTask: ${taskDescription}`
          : taskDescription,
      },
    ],
  });

  const output =
    response.content[0].type === "text" ? response.content[0].text : "";

  // Log provenance
  logProvenance(
    role,
    "claude-sonnet-4-20250514",
    taskDescription.slice(0, 100),
    response.usage.input_tokens,
    response.usage.output_tokens
  );

  return output;
}

// ─── AEEF Sequential Pipeline ─────────────────────────────────
async function runAeefPipeline(featureRequest: string): Promise<void> {
  console.log("=== AEEF Sequential Pipeline ===\n");

  // Stage 1: Product
  console.log("[1/4] Product Owner: Creating PRD...");
  const prd = await createAeefAgent(
    "product",
    `Create a PRD for: ${featureRequest}`
  );

  // Quality gate: PRD completeness
  if (!prd.includes("Acceptance Criteria")) {
    throw new Error("AEEF Gate FAIL: PRD missing Acceptance Criteria");
  }
  console.log("[GATE] PRD completeness: PASS\n");

  // Stage 2: Architect
  console.log("[2/4] Architect: Creating design...");
  const design = await createAeefAgent(
    "architect",
    "Create an architecture design document from this PRD",
    prd
  );
  console.log("[GATE] Design coverage: PASS\n");

  // Stage 3: Developer
  console.log("[3/4] Developer: Implementing...");
  const implementation = await createAeefAgent(
    "developer",
    "Implement the following design with tests",
    design
  );
  console.log("[GATE] Implementation coverage: PASS\n");

  // Stage 4: QC
  console.log("[4/4] QC: Validating...");
  const validation = await createAeefAgent(
    "qc",
    "Validate this implementation against the original PRD",
    `PRD:\n${prd}\n\nImplementation:\n${implementation}`
  );

  console.log("=== Pipeline Complete ===");
  console.log(validation);
}

// ─── Execute ──────────────────────────────────────────────────
runAeefPipeline("Add user authentication with OAuth2 support");

Python: AEEF Agent with Quality Gate Enforcement

import anthropic
import json
from datetime import datetime, timezone
from pathlib import Path
from dataclasses import dataclass


@dataclass
class AeefGateResult:
    """Result of an AEEF quality gate check."""
    passed: bool
    gate_name: str
    details: str
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()


class AeefAgentSDK:
    """AEEF-governed agent built on Claude Agent SDK."""

    def __init__(self, role: str, model: str = "claude-sonnet-4-20250514"):
        self.role = role
        self.model = model
        self.client = anthropic.Anthropic()
        self.contract = self._load_contract()
        self.provenance_log: list[dict] = []

    def _load_contract(self) -> str:
        contract_path = Path(f"aeef-cli/roles/{self.role}/rules/contract.md")
        if contract_path.exists():
            return contract_path.read_text()
        return f"Default contract for {self.role} role."

    def execute(self, task: str, context: str = "") -> str:
        """Execute a task with AEEF governance."""
        system_prompt = (
            f"You are the {self.role} agent in an AEEF-governed Agent SDLC.\n\n"
            f"## Contract\n{self.contract}\n\n"
            f"## Requirements\n"
            f"- Follow your contract strictly\n"
            f"- Include AI-usage disclosure in all outputs\n"
            f"- Produce structured, handoff-ready artifacts\n"
        )

        messages = []
        if context:
            messages.append({
                "role": "user",
                "content": f"Context from previous agent:\n{context}",
            })
            messages.append({
                "role": "assistant",
                "content": "I have reviewed the context from the previous agent. Ready for my task.",
            })
        messages.append({"role": "user", "content": task})

        response = self.client.messages.create(
            model=self.model,
            max_tokens=8192,
            system=system_prompt,
            messages=messages,
        )

        output = response.content[0].text if response.content else ""

        # Log provenance
        self._log_provenance(task, response.usage)

        return output

    def _log_provenance(self, task: str, usage) -> None:
        record = {
            "schema": "aeef-provenance-v1",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_role": self.role,
            "task": task[:200],
            "model": self.model,
            "input_tokens": usage.input_tokens,
            "output_tokens": usage.output_tokens,
            "ai_generated": True,
        }
        self.provenance_log.append(record)

    def save_provenance(self, path: str = "aeef-provenance.jsonl") -> None:
        with open(path, "a") as f:
            for record in self.provenance_log:
                f.write(json.dumps(record) + "\n")
        self.provenance_log.clear()


def aeef_quality_gate(
    stage: str, output: str, criteria: dict[str, str]
) -> AeefGateResult:
    """
    Run an AEEF quality gate check on agent output.

    Args:
        stage: The pipeline stage name (e.g., "product", "architect")
        output: The agent's output text
        criteria: Dict of check_name -> required_substring pairs
    """
    missing = []
    for check_name, required in criteria.items():
        if required.lower() not in output.lower():
            missing.append(check_name)

    if missing:
        return AeefGateResult(
            passed=False,
            gate_name=f"aeef-gate-{stage}",
            details=f"Missing: {', '.join(missing)}",
        )
    return AeefGateResult(
        passed=True,
        gate_name=f"aeef-gate-{stage}",
        details="All criteria met",
    )


# ─── Pipeline Execution ───────────────────────────────────────
def run_pipeline(feature: str) -> None:
    product = AeefAgentSDK("product")
    architect = AeefAgentSDK("architect")
    developer = AeefAgentSDK("developer")
    qc = AeefAgentSDK("qc")

    # Stage 1
    prd = product.execute(f"Create a PRD for: {feature}")
    gate = aeef_quality_gate("product", prd, {
        "user_stories": "user stories",
        "acceptance_criteria": "acceptance criteria",
    })
    assert gate.passed, f"Gate failed: {gate.details}"

    # Stage 2
    design = architect.execute("Design the architecture", context=prd)
    gate = aeef_quality_gate("architect", design, {
        "components": "component",
        "api": "api",
    })
    assert gate.passed, f"Gate failed: {gate.details}"

    # Stage 3
    impl = developer.execute("Implement with tests", context=design)

    # Stage 4
    report = qc.execute("Validate implementation", context=f"{prd}\n\n{impl}")

    # Save all provenance
    for agent in [product, architect, developer, qc]:
        agent.save_provenance()

    print(report)

Pattern 7: AEEF + LangGraph

LangGraph's state machine model is a natural fit for AEEF's sequential pipeline with quality gates. Each agent becomes a node, each quality gate becomes a conditional edge, and the handoff artifacts are passed through the graph state.

AEEF Agent SDLC as a LangGraph State Machine

from typing import TypedDict
from langgraph.graph import StateGraph, END
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage


# ─── State Definition ──────────────────────────────────────────
class AeefState(TypedDict):
    """State passed through the AEEF pipeline graph."""
    feature_request: str
    prd: str
    design: str
    implementation: str
    validation_report: str
    current_stage: str
    gate_results: dict[str, bool]
    provenance: list[dict]


# ─── Model Setup ───────────────────────────────────────────────
llm = ChatAnthropic(model="claude-sonnet-4-20250514")


# ─── Agent Nodes ───────────────────────────────────────────────
def product_agent(state: AeefState) -> AeefState:
    """AEEF Product Owner agent node."""
    contract = open("aeef-cli/roles/product/rules/contract.md").read()
    messages = [
        SystemMessage(content=f"You are the AEEF Product Owner.\n\n{contract}"),
        HumanMessage(content=f"Create a PRD for: {state['feature_request']}"),
    ]
    response = llm.invoke(messages)
    state["prd"] = response.content
    state["current_stage"] = "product"
    state["provenance"].append({
        "stage": "product",
        "model": "claude-sonnet-4-20250514",
        "timestamp": __import__("datetime").datetime.now().isoformat(),
    })
    return state


def architect_agent(state: AeefState) -> AeefState:
    """AEEF Architect agent node."""
    contract = open("aeef-cli/roles/architect/rules/contract.md").read()
    messages = [
        SystemMessage(content=f"You are the AEEF Architect.\n\n{contract}"),
        HumanMessage(content=f"Design architecture for this PRD:\n\n{state['prd']}"),
    ]
    response = llm.invoke(messages)
    state["design"] = response.content
    state["current_stage"] = "architect"
    return state


def developer_agent(state: AeefState) -> AeefState:
    """AEEF Developer agent node."""
    contract = open("aeef-cli/roles/developer/rules/contract.md").read()
    messages = [
        SystemMessage(content=f"You are the AEEF Developer.\n\n{contract}"),
        HumanMessage(content=f"Implement this design:\n\n{state['design']}"),
    ]
    response = llm.invoke(messages)
    state["implementation"] = response.content
    state["current_stage"] = "developer"
    return state


def qc_agent(state: AeefState) -> AeefState:
    """AEEF QC Engineer agent node."""
    contract = open("aeef-cli/roles/qc/rules/contract.md").read()
    messages = [
        SystemMessage(content=f"You are the AEEF QC Engineer.\n\n{contract}"),
        HumanMessage(
            content=(
                f"Validate this implementation:\n\n{state['implementation']}\n\n"
                f"Against this PRD:\n\n{state['prd']}"
            )
        ),
    ]
    response = llm.invoke(messages)
    state["validation_report"] = response.content
    state["current_stage"] = "qc"
    return state


# ─── Quality Gate Nodes ────────────────────────────────────────
def prd_quality_gate(state: AeefState) -> AeefState:
    """AEEF quality gate: PRD completeness."""
    prd = state["prd"].lower()
    passed = all(
        section in prd
        for section in ["user stories", "acceptance criteria", "success metrics"]
    )
    state["gate_results"]["prd"] = passed
    return state


def design_quality_gate(state: AeefState) -> AeefState:
    """AEEF quality gate: design coverage."""
    design = state["design"].lower()
    passed = all(
        section in design
        for section in ["component", "api", "data model"]
    )
    state["gate_results"]["design"] = passed
    return state


def implementation_quality_gate(state: AeefState) -> AeefState:
    """AEEF quality gate: implementation quality."""
    impl = state["implementation"].lower()
    passed = "test" in impl and len(impl) > 500
    state["gate_results"]["implementation"] = passed
    return state


def final_quality_gate(state: AeefState) -> AeefState:
    """AEEF quality gate: final compliance."""
    report = state["validation_report"].lower()
    passed = "pass" in report
    state["gate_results"]["final"] = passed
    return state


# ─── Conditional Edges (Gate Routing) ──────────────────────────
def route_after_prd_gate(state: AeefState) -> Literal["architect", "product"]:
    """Route based on PRD quality gate result."""
    if state["gate_results"].get("prd", False):
        return "architect"
    return "product"  # Retry if gate fails


def route_after_design_gate(state: AeefState) -> Literal["developer", "architect"]:
    if state["gate_results"].get("design", False):
        return "developer"
    return "architect"


def route_after_impl_gate(state: AeefState) -> Literal["qc", "developer"]:
    if state["gate_results"].get("implementation", False):
        return "qc"
    return "developer"


def route_after_final_gate(state: AeefState) -> Literal["__end__", "developer"]:
    if state["gate_results"].get("final", False):
        return END  # END == "__end__", imported from langgraph.graph
    return "developer"


# ─── Build the Graph ───────────────────────────────────────────
workflow = StateGraph(AeefState)

# Add agent nodes
workflow.add_node("product", product_agent)
workflow.add_node("architect", architect_agent)
workflow.add_node("developer", developer_agent)
workflow.add_node("qc", qc_agent)

# Add quality gate nodes
workflow.add_node("prd_gate", prd_quality_gate)
workflow.add_node("design_gate", design_quality_gate)
workflow.add_node("impl_gate", implementation_quality_gate)
workflow.add_node("final_gate", final_quality_gate)

# Wire the graph: agent -> gate -> conditional routing
workflow.set_entry_point("product")
workflow.add_edge("product", "prd_gate")
workflow.add_conditional_edges("prd_gate", route_after_prd_gate)
workflow.add_edge("architect", "design_gate")
workflow.add_conditional_edges("design_gate", route_after_design_gate)
workflow.add_edge("developer", "impl_gate")
workflow.add_conditional_edges("impl_gate", route_after_impl_gate)
workflow.add_edge("qc", "final_gate")
workflow.add_conditional_edges("final_gate", route_after_final_gate)

# Compile
aeef_pipeline = workflow.compile()

# ─── Execute ───────────────────────────────────────────────────
initial_state: AeefState = {
    "feature_request": "Add user authentication with OAuth2 support",
    "prd": "",
    "design": "",
    "implementation": "",
    "validation_report": "",
    "current_stage": "",
    "gate_results": {},
    "provenance": [],
}

result = aeef_pipeline.invoke(initial_state)
print(result["validation_report"])

LangGraph Visualization

The graph above produces the following flow when visualized:

product ──► prd_gate ──pass──► architect ──► design_gate ──pass──► developer
                │fail                             │fail
                └──► product (retry)              └──► architect (retry)

developer ──► impl_gate ──pass──► qc ──► final_gate ──pass──► END
                  │fail                       │fail
                  └──► developer (retry)      └──► developer (retry)

This is AEEF's sequential pipeline expressed as a state machine with automatic retry on quality gate failure. LangGraph's conditional edges provide the routing logic that AEEF's quality gates require.
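One caveat with the retry edges above: a persistently failing gate loops forever. A bounded-retry router avoids that by escalating to a human after a fixed budget. This is a sketch, not part of AEEF or LangGraph; the `retry_counts` state key and the `human_escalation` node name are illustrative:

```python
def route_with_budget(
    state: dict, gate: str, next_node: str, retry_node: str, max_retries: int = 3
) -> str:
    """Route past a gate on success; retry on failure, but escalate to a
    human after max_retries failed attempts instead of looping forever."""
    if state["gate_results"].get(gate, False):
        return next_node
    attempts = state.setdefault("retry_counts", {})  # illustrative state key
    attempts[gate] = attempts.get(gate, 0) + 1
    if attempts[gate] >= max_retries:
        return "human_escalation"  # hypothetical escalation node
    return retry_node
```

To use this, add `retry_counts` to the state schema, register a `human_escalation` node on the graph, and substitute `route_with_budget` into each router. LangGraph's `invoke(..., config={"recursion_limit": N})` provides a second, coarser safety net against runaway loops.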


Generic Integration Template

For orchestration tools not listed above, use this template to integrate AEEF governance into any multi-agent system.

Step 1: Map AEEF Agent Contracts to Agent Definitions

Every orchestration tool has a concept of "agent definition" or "agent configuration." Map AEEF's contract files to that concept:

AEEF Contract File               Orchestrator Concept
──────────────────               ────────────────────
roles/{role}/rules/contract.md → Agent system prompt / instructions
roles/{role}/CLAUDE.md         → Agent context / backstory
roles/{role}/settings.json     → Agent tool permissions / constraints
skills/                        → Agent tools / capabilities
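In code, the first two rows of this mapping reduce to a small adapter that any orchestrator can call when constructing an agent. A sketch only: the file layout follows the mapping table, and the greeting line is illustrative:

```python
from pathlib import Path


def build_agent_instructions(role: str, base_dir: str = "aeef-cli") -> str:
    """Compose a tool-agnostic system prompt from AEEF contract files.

    contract.md becomes the instructions; the optional CLAUDE.md
    (if present) becomes the context/backstory section.
    """
    root = Path(base_dir) / "roles" / role
    contract = (root / "rules" / "contract.md").read_text()
    context_file = root / "CLAUDE.md"
    context = context_file.read_text() if context_file.exists() else ""
    return f"You are the AEEF {role} agent.\n\n{contract}\n\n{context}".strip()
```

The returned string plugs into whatever field the orchestrator uses for agent instructions (CrewAI `backstory`, LangGraph `SystemMessage`, and so on).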

Step 2: Map AEEF Quality Gates to Validation Hooks

Every orchestration tool has some form of task validation or completion check. Map AEEF's quality gates to that mechanism:

AEEF Quality Gate           Orchestrator Concept
─────────────────           ────────────────────
PRD completeness check    → Task output validator / assertion
Design coverage check     → Task output validator / assertion
Test coverage threshold   → CI check / post-task hook
Security scan             → CI check / post-task hook
AI-usage disclosure check → Output parser / post-task hook
Final compliance check    → Pipeline completion validator
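If the orchestrator exposes no native validator hook, the same effect can be had by wrapping the task function directly. A minimal sketch, assuming a gate function that returns a `(passed, details)` pair; the names are illustrative:

```python
from typing import Callable


def with_gate(
    agent_fn: Callable[..., str],
    gate_fn: Callable[[str], tuple[bool, str]],
    max_attempts: int = 3,
) -> Callable[..., str]:
    """Wrap any task function so its output must pass an AEEF gate
    before it can be handed off; retry up to max_attempts times."""
    def run(*args, **kwargs) -> str:
        details = ""
        for _ in range(max_attempts):
            output = agent_fn(*args, **kwargs)
            passed, details = gate_fn(output)
            if passed:
                return output
        raise RuntimeError(f"AEEF gate failed after {max_attempts} attempts: {details}")
    return run
```

Because the wrapper raises on exhaustion rather than returning a degraded result, the pipeline halts instead of silently handing off non-compliant output.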

Step 3: Map AEEF Branch-per-Role to Isolation Model

Every orchestration tool has some form of agent isolation. Map AEEF's Git-branch model to the tool's isolation mechanism:

AEEF Isolation            Orchestrator Concept
──────────────            ────────────────────
Git branch per role     → Worktree / sandbox / container
PR as handoff artifact  → Task output / message passing
Merge as promotion      → Task completion / state transition
Branch protection rules → Agent file-access permissions
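When the target tool has no sandbox of its own, Git worktrees are the cheapest isolation primitive. The following is a self-contained sketch: it builds a throwaway demo repo so the commands are runnable end to end; in a real project you would run only the loop from your repo root, and the `aeef/<role>` branch names follow the convention used elsewhere on this page:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo scaffolding only: a scratch repo so the sketch runs anywhere.
demo="$(mktemp -d)"
cd "$demo"
git init -q repo && cd repo
git -c user.name=aeef -c user.email=aeef@example.invalid \
    commit -q --allow-empty -m "chore: init"

# The actual pattern: one branch and one worktree per AEEF role.
for role in product architect developer qc; do
  git branch "aeef/$role"
  git worktree add -q "../wt-$role" "aeef/$role"
done

git worktree list
```

Each agent process is then pointed at its own worktree directory, so file-system isolation falls out of the branch model for free.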

Step 4: Feed AEEF Provenance into Audit System

Every orchestration tool produces some form of execution log. Map AEEF's provenance tracking to the tool's audit mechanism:

AEEF Provenance                    Orchestrator Concept
───────────────                    ────────────────────
Provenance JSONL records         → Task execution logs
Agent role + contract version    → Agent metadata
Model + token usage              → Execution metrics
AI-generated flag                → Output classification
Git commit with AI-Usage trailer → Artifact metadata
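A provenance record can be emitted with a few lines of adapter code wherever the orchestrator logs task completion. A sketch; the field names here are illustrative, and the authoritative schema ships in the AEEF config packs:

```python
import json
from datetime import datetime, timezone


def record_provenance(path: str, stage: str, model: str, ai_generated: bool = True) -> dict:
    """Append one provenance record to a JSONL audit file."""
    record = {
        "stage": stage,
        "model": model,
        "ai_generated": ai_generated,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Appending one JSON object per line keeps the file greppable and lets the validation script below count records with a plain `wc -l`.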

Step 5: Validate the Integration

Run the following checks to verify your integration:

#!/usr/bin/env bash
# scripts/validate-aeef-integration.sh

echo "=== AEEF Integration Validation ==="

# 1. Contract injection
echo -n "[1/5] Agent contracts loaded: "
# Check that each agent has AEEF contract content in its configuration.
# Implementation depends on the orchestration tool.
echo "MANUAL (verify per tool)"

# 2. Quality gates
echo -n "[2/5] Quality gates enforced: "
# Verify that failed quality gates prevent handoffs.
# Test with intentionally incomplete output.
echo "MANUAL (verify per tool)"

# 3. Isolation
echo -n "[3/5] Agent isolation: "
# Verify that agents cannot access files outside their scope.
# Test with an agent attempting to write to a restricted path.
echo "MANUAL (verify per tool)"

# 4. Provenance
echo -n "[4/5] Provenance tracking: "
if [ -f "aeef-provenance.jsonl" ]; then
    RECORDS=$(wc -l < aeef-provenance.jsonl)
    echo "PASS (${RECORDS} records)"
else
    echo "FAIL (no provenance file)"
fi

# 5. AI disclosure
echo -n "[5/5] AI-usage disclosure: "
# Check that generated files include disclosure headers.
# Check that commit messages include AI-Usage trailers.
echo "MANUAL (verify per tool)"

echo "=== Validation Complete ==="

Integration Comparison Summary

Pattern                   Integration Effort   Best For                           Key Benefit
AEEF + CrewAI             Low-Medium           Teams already using CrewAI         Natural role mapping
AEEF + claude-flow        Medium               Parallel swarm with governance     Worktree + contract alignment
AEEF + Composio           Low-Medium           Branch-per-agent CI workflows      Native branch model match
AEEF + GitHub Agentic     Low                  GitHub-hosted projects             Zero infrastructure
AEEF + CodeRabbit         Low                  Automated review augmentation      Complement to QC agent
AEEF + Claude Agent SDK   Low                  Custom agent development           Deepest Claude integration
AEEF + LangGraph          Medium               Complex state machine workflows    Conditional gate routing
AEEF + OpenClaw           Low                  Teams adopting the template pack   1:1 concept mapping

Common Pitfalls

1. Ignoring Contract Enforcement at Runtime

Loading AEEF contracts as system prompts is necessary but not sufficient. Contracts must also be enforced at the tool level -- restricting which tools each agent can use and which files each agent can access. System prompts can be ignored by the model; tool restrictions cannot.
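In practice this means a deny-by-default check in the orchestrator's tool-dispatch path, outside the model's control. A sketch; the role and tool names follow the route-policy examples on this page, while the dispatch hook itself is tool-specific:

```python
ALLOWED_TOOLS: dict[str, set[str]] = {
    "product": {"file_read", "file_write", "web_search"},
    "architect": {"file_read", "file_write", "grep", "glob"},
    "developer": {"file_read", "file_write", "bash", "grep", "glob"},
    "qc": {"file_read", "bash", "grep"},
}


def enforce_tool_policy(role: str, tool: str) -> None:
    """Deny-by-default: raise before dispatch if the AEEF contract
    does not grant this role this tool."""
    if tool not in ALLOWED_TOOLS.get(role, set()):
        raise PermissionError(f"AEEF contract violation: {role!r} may not use {tool!r}")
```

Call `enforce_tool_policy(role, tool)` immediately before every tool invocation; a role absent from the map gets nothing, which is the safe failure mode.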

2. Treating Quality Gates as Optional

Quality gates should block handoffs, not just warn. If your integration logs gate failures but allows the pipeline to continue, you do not have quality gates -- you have suggestions.

3. Skipping Provenance for "Internal" Runs

Provenance tracking should be always-on, not just for production deployments. Internal runs, experiments, and prototypes also produce AI-generated artifacts that may end up in production. Track everything.

4. Duplicating AEEF Logic in the Orchestrator

AEEF contracts and quality gates should be the single source of truth. If you duplicate quality-gate logic in both AEEF and your orchestrator, they will drift apart. Instead, have the orchestrator call AEEF's gate scripts rather than reimplementing the checks.
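A thin adapter keeps the gate logic in AEEF's scripts and out of the orchestrator. A sketch; the script path you pass in is whatever your gate scripts are named, and the only contract assumed here is the exit code, zero meaning the gate passed:

```python
import subprocess


def run_aeef_gate(script: str, *args: str) -> bool:
    """Invoke an AEEF gate script; a non-zero exit code means the gate
    failed and the handoff must be blocked."""
    result = subprocess.run([script, *args], capture_output=True, text=True)
    return result.returncode == 0
```

The orchestrator-side validator then becomes a one-liner that calls this adapter, so there is exactly one copy of each gate's criteria.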

5. Over-Customizing per Orchestrator

AEEF's value is its tool-agnostic governance model. If your integration is so customized that switching orchestrators requires rebuilding the governance layer, you have lost that value. Keep the AEEF layer portable and the orchestrator-specific adapters thin.


Pattern 8: AEEF + OpenClaw

OpenClaw is the orchestration tool with the deepest native AEEF integration. AEEF provides a complete template pack for OpenClaw, so this integration requires minimal custom work.

OpenClaw's architecture maps 1:1 to AEEF concepts:

OpenClaw Concept      AEEF Equivalent
Route policies      → Agent contracts + role routing
Command checkpoints → Quality gates
Per-agent sandboxes → Branch-per-role isolation
Monitor loops       → Continuous compliance validation
Task registry       → Agent SDLC handoff artifacts
Orchestrator agent  → AEEF Product/Architect role
Execution agents    → AEEF Developer/QC roles

Step 1: Install the AEEF Template Pack

The template pack is included in the AEEF transform tier repository:

# Clone the transform tier
git clone https://github.com/AEEF-AI/aeef-transform

# Copy the OpenClaw template pack to your project
cp -r aeef-transform/shared/openclaw-templates/ .openclaw/aeef/

Or copy from the reference implementations docs:

reference-implementations/transform/openclaw-template-pack/
├── active-tasks.schema.json # Task registry schema
├── monitor-loop-contract.yaml # Monitor loop behavior contract
├── monitor-loop-checklist.yaml # Operational readiness checklist
├── route-policy-4-agent.yaml # 4-agent routing policy
└── route-policy-11-agent.yaml # 11-agent routing policy

Step 2: Configure the 4-Agent Route Policy

The route-policy-4-agent.yaml defines which agents handle which tasks and what gates they must pass:

# route-policy-4-agent.yaml (excerpt)
#
# This file defines the AEEF 4-agent routing policy for OpenClaw.
# Each agent has: role, allowed tools, file scope, gate criteria.

agents:
  product-agent:
    role: product
    description: "Translates business intent into structured PRDs"
    allowed_tools:
      - file_read
      - file_write        # Markdown only
      - web_search
    file_scope:
      include: ["docs/**", "*.md", "PRD-*.md"]
      exclude: ["src/**", "tests/**", "*.ts", "*.py", "*.go"]
    gate:
      name: prd-completeness
      criteria:
        - "PRD document exists and is non-empty"
        - "Acceptance criteria are defined"
        - "Out-of-scope section is present"
    handoff_to: architect-agent

  architect-agent:
    role: architect
    description: "Produces design documents from PRDs"
    allowed_tools:
      - file_read
      - file_write        # Design docs and config only
      - grep
      - glob
    file_scope:
      include: ["docs/**", "*.md", "*.yaml", "*.json"]
      exclude: ["src/**/*.ts", "src/**/*.py", "app/**"]
    gate:
      name: design-coverage
      criteria:
        - "Design doc addresses all PRD requirements"
        - "API contracts are defined"
        - "Data model is specified"
    handoff_to: developer-agent

  developer-agent:
    role: developer
    description: "Implements code per architect design"
    allowed_tools:
      - file_read
      - file_write
      - bash              # Build, test, lint
      - grep
      - glob
    file_scope:
      include: ["src/**", "tests/**", "app/**", "*.config.*"]
      exclude: ["docs/PRD-*.md"]
    gate:
      name: implementation-quality
      criteria:
        - "All tests pass"
        - "Lint checks pass"
        - "Coverage >= 80%"
    handoff_to: qc-agent

  qc-agent:
    role: qc
    description: "Validates implementation against requirements"
    allowed_tools:
      - file_read
      - bash              # Run tests only
      - grep
    file_scope:
      include: ["**"]     # Can read everything
      write: ["tests/**", "docs/test-report-*.md"]
    gate:
      name: release-readiness
      criteria:
        - "All tests pass including integration"
        - "Security scan clean"
        - "Test report generated"
        - "AI-disclosure checklist complete"
    handoff_to: merge

Step 3: Set Up the Monitor Loop

The monitor loop is OpenClaw's mechanism for continuous compliance checking. AEEF's monitor-loop-contract.yaml defines the behavior:

# monitor-loop-contract.yaml (excerpt)
#
# Deterministic loop that checks agent task state,
# validates quality gates, and triggers escalation.

monitor:
  interval_seconds: 30
  max_iterations: 100

  checks:
    - name: task-state-check
      action: "Read active-tasks.json, verify agent assignment"
      on_fail: "Log warning, retry after interval"

    - name: pr-status-check
      action: "Check open PRs for current agent branch"
      on_fail: "If PR blocked > 5 min, escalate to human"

    - name: ci-status-check
      action: "Check CI pipeline status for agent branch"
      on_fail: "If CI fails, dispatch fix to developer-agent"

    - name: gate-compliance-check
      action: "Validate current agent output against gate criteria"
      on_fail: "Block handoff, log non-compliance, notify agent"

  escalation:
    human_approval_required:
      - "Security scan finds critical vulnerability"
      - "Agent exceeds 3 retry attempts on same task"
      - "Handoff blocked at gate for > 10 minutes"
    auto_retry:
      - "Lint failure (auto-fixable)"
      - "Test failure with clear error message"

Step 4: Validate with the Readiness Checklist

Before going live, run through monitor-loop-checklist.yaml:

# Pre-flight checks (excerpt)
readiness:
  - "[ ] OpenClaw runtime is running and accessible"
  - "[ ] Route policy loaded (4-agent or 11-agent)"
  - "[ ] Task registry schema validated against active-tasks.schema.json"
  - "[ ] Monitor loop contract loaded"
  - "[ ] All 4 agent sandboxes/worktrees created"
  - "[ ] Git branches exist: aeef/product, aeef/architect, aeef/dev, aeef/qc"
  - "[ ] CI pipeline configured for all agent branches"
  - "[ ] Human escalation channel configured (Slack/email)"
  - "[ ] AEEF quality gate scripts accessible from monitor loop"

Step 5: Scale to 11-Agent (When Ready)

When your team reaches Tier 3 maturity, switch from route-policy-4-agent.yaml to route-policy-11-agent.yaml. The 11-agent policy adds:

  • Security Agent: Reviews code for vulnerabilities, runs SAST/SCA
  • Compliance Agent: Validates regulatory controls (sovereign overlays)
  • Platform Agent: Manages infrastructure and deployment configuration
  • DevOps Agent: Handles CI/CD pipeline maintenance
  • Scrum Agent: Tracks sprint state and velocity
  • Ops Agent: Monitors production and handles incident triage
  • Executive Agent: Generates summary reports and KPI dashboards

Each additional agent follows the same pattern: role definition, allowed tools, file scope, and gate criteria.
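As an illustration of that pattern (not a shipped policy file), a Security Agent entry added to the route policy might look like this; every field name mirrors the 4-agent examples above, while the criteria strings are illustrative:

```yaml
  security-agent:
    role: security
    description: "Reviews code for vulnerabilities, runs SAST/SCA"
    allowed_tools:
      - file_read
      - bash              # Scanner invocation only
      - grep
    file_scope:
      include: ["**"]     # Can read everything
      write: ["docs/security-report-*.md"]
    gate:
      name: security-clearance
      criteria:
        - "SAST scan completes with no critical findings"
        - "Dependency (SCA) scan is clean"
        - "Security report generated"
    handoff_to: qc-agent
```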

The Elvis/Zoe Pattern (Real-World Reference)

A widely shared field pattern documented by @elvissun demonstrates this integration in practice:

  1. Orchestrator agent ("Zoe") holds business context, reads meeting notes, builds task prompts
  2. Coding agents (Claude Code / Codex) execute in isolated worktrees via OpenClaw
  3. Deterministic monitor loop checks task state, PRs, CI, and retries
  4. Human review happens late, after automated gates and multi-model reviews

This maps directly to AEEF's model:

  • Zoe = Product Agent + Architect Agent (business context + task decomposition)
  • Coding agents = Developer Agent (isolated execution)
  • Monitor loop = AEEF Quality Gates (automated validation)
  • Human review = QC Agent approval gate (final check before merge)

For the full AEEF analysis of this pattern, see the OpenClaw for AEEF Agent Orchestration guide.


Protocol-Specific Code Paths (MCP Required, A2A Progressive)

All new integration examples in this repository follow:

  1. MCP-required tool integration.
  2. A2A progressive enablement for cross-runtime workflows.

Path A: Internal runtime orchestration (no cross-runtime handoff)

  • Required: AgentContract, HookContract, GateDecision, RunLedgerEntry.
  • Required: MCP-mediated tool access with deny-by-default policy.
  • Optional: A2A profile.

Path B: Cross-runtime orchestration (LangGraph <-> CrewAI or similar)

  • Required: all Path A controls.
  • Required: HandoffArtifact schema validation at runtime boundaries.
  • Required: A2A bridge profile for interoperability handoffs.

Path C: External/vendor agent interaction

  • Required: all Path B controls.
  • Required: signed provenance, explicit policy mapping, and human escalation gate.
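Path B's boundary check can be as small as a required-fields validator run before any artifact crosses runtimes. A sketch; the field names here are illustrative, and the normative HandoffArtifact schema lives in the AEEF config packs:

```python
REQUIRED_FIELDS: dict[str, type] = {
    "artifact_id": str,
    "producer_role": str,
    "consumer_role": str,
    "content": str,
    "provenance": dict,
}


def validate_handoff(artifact: dict) -> list[str]:
    """Return violations; an empty list means the artifact may cross
    the runtime boundary."""
    errors = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in artifact:
            errors.append(f"missing field: {field}")
        elif not isinstance(artifact[field], ftype):
            errors.append(f"wrong type for {field}: expected {ftype.__name__}")
    return errors
```

Run the validator on both sides of the boundary: the producing runtime refuses to emit an invalid artifact, and the consuming runtime refuses to accept one.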

Reference docs:


Next Steps