Developer Certification -- AEEF Certified AI Engineer

The Developer Certification is the most comprehensive track in the AEEF Training & Certification Program. Over 24 hours across 8 modules, software engineers gain hands-on experience with every layer of the AEEF framework -- from configuring AI tool rules in a greenfield project to running multi-agent orchestration workflows in a production-grade environment.

This is not a slide-deck certification. Every module includes a lab exercise using real AEEF repositories. By the end, you will have migrated an actual project from Tier 1 to Tier 2 and demonstrated competency across the full stack of AEEF governance tooling.


Certification Overview

| Attribute | Detail |
| --- | --- |
| Certification Title | AEEF Certified AI Engineer |
| Duration | 24 hours (8 modules, 3 hours each) |
| Format | Self-paced or instructor-led |
| Prerequisites | 1+ year software development, Git proficiency, GitHub account |
| Languages Covered | TypeScript, Python, Go (choose your primary stack) |
| Assessment | 60-question exam + practical capstone |
| Passing Score | 80% on exam; capstone approved by reviewer |
| Validity | 2 years |

Module 1: AI Coding Fundamentals (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Explain the productivity paradox and why AI coding governance is necessary
  • Identify the major AI coding tools and their governance capabilities
  • Describe LLM limitations that create engineering risk
  • Configure Claude Code with basic AEEF settings

Topics

1.1 The Productivity Paradox (45 minutes)

Why governance matters -- and why most organizations get this wrong.

  • The Faros AI study: 10,000+ developers, 98% more PRs, 91% longer review times
  • The METR randomized controlled trial: perceived 20% speedup, actual 19% slowdown
  • Uplevel data: 41% of AI-generated code is reverted within 2 weeks
  • The defect multiplication problem: AI writes bugs at scale
  • Why "move fast and fix later" fails with AI-generated code

1.2 AI Tool Landscape (45 minutes)

Understanding the tools your team is already using.

  • Claude Code: CLI-native, hooks system, CLAUDE.md, settings.json
  • Cursor: IDE-integrated, .cursorrules, composer mode
  • GitHub Copilot: Inline suggestions, copilot-instructions.md, workspace agents
  • Aider: Git-native, architect/editor modes, convention files
  • OpenCode: Terminal-based, multi-provider, .opencode files
  • Windsurf/Cline/Continue: Emerging tools and their governance surfaces
  • Comparison matrix: which tools support which governance controls

1.3 Understanding LLM Limitations (45 minutes)

The failure modes that governance must address.

  • Hallucination: Confident generation of incorrect code, APIs, and dependencies
  • Context windows: What happens when your codebase exceeds the context limit
  • Training cutoffs: Why the model suggests deprecated APIs and outdated patterns
  • Sycophancy: The model agrees with your wrong assumptions
  • Prompt injection: How malicious content in codebases can manipulate AI tools
  • Nondeterminism: The same prompt produces different outputs across runs

1.4 Lab: Set Up Claude Code with AEEF Config (45 minutes)

Exercise: Install Claude Code, clone the aeef-quickstart repository, and configure it for your chosen language stack.

Steps:

  1. Install Claude Code CLI
  2. Clone aeef-quickstart for your language (TypeScript, Python, or Go)
  3. Review the generated .claude/settings.json and CLAUDE.md
  4. Run Claude Code and observe how rules constrain AI behavior
  5. Attempt to generate code that violates a rule and verify enforcement

Deliverable: Screenshot of Claude Code running with AEEF config applied, showing at least one rule enforcement event.

Assessment Criteria

  • Can explain the productivity paradox with at least 3 data points
  • Can name 5+ AI coding tools and their primary governance surface
  • Can describe 4+ LLM failure modes relevant to engineering
  • Successfully configured Claude Code with AEEF settings

Module 2: AEEF Tier 1 -- Quick Start (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Configure Tier 1 governance for any project using AEEF Quick Start
  • Write effective AI tool rules for Claude Code, Cursor, and Copilot
  • Set up a basic CI pipeline with lint, test, and SAST stages
  • Create PR templates with AI disclosure sections

Topics

2.1 Quick Start Architecture (30 minutes)

How Tier 1 is structured and what it enforces.

  • Repository layout and file purposes
  • Standards enforced: PRD-STD-001 (Prompt Engineering), PRD-STD-002 (Code Review), PRD-STD-003 (Testing), PRD-STD-004 (Security), PRD-STD-008 (Dependencies)
  • What Tier 1 does NOT cover (and why that is acceptable for early adoption)

2.2 AI Tool Rules (45 minutes)

Writing rules that constrain AI behavior without killing productivity.

  • .cursorrules -- Cursor-specific behavioral constraints
  • .github/copilot-instructions.md -- GitHub Copilot workspace instructions
  • .claude/settings.json -- Claude Code permissions and hook configuration
  • CLAUDE.md -- Project context and conventions for Claude Code
  • .windsurfrules, .aider.conf.yml, .opencode/ -- Other tool configurations
  • Rule writing patterns: be specific, be testable, include examples
  • Common mistakes: rules too vague, too restrictive, or contradictory
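To make "be specific, be testable, include examples" concrete, here is a hypothetical excerpt from a `.cursorrules` file (the rule content and helper names are invented for illustration, not taken from an AEEF config pack):

```markdown
## Error handling (specific and testable)
- Never use an empty `catch (e) {}` block. Every catch must either
  rethrow or log through the project's `logger` module.
- Correct example:
  try { await save(order); } catch (e) { logger.error("save failed", e); throw e; }

## Too vague (do not write rules like this)
- "Write clean, maintainable code."  <- not testable; gives the model nothing to enforce
```

The first rule can be checked mechanically (a linter or Semgrep rule can flag empty catch blocks); the second cannot, which is the practical test for whether a rule belongs in the file.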

2.3 Basic CI Pipeline (45 minutes)

The minimum viable pipeline for AI-governed development.

  • 3-stage pipeline: lint, test, SAST
  • GitHub Actions workflow configuration
  • Language-specific linter setup (ESLint flat config, ruff + mypy, golangci-lint)
  • Test runner configuration (Vitest, pytest, go test)
  • Semgrep with AEEF rule packs
  • Making the pipeline a required check
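A minimal sketch of the 3-stage pipeline as a GitHub Actions workflow (TypeScript variant; the job layout and the Semgrep invocation are illustrative, and the actual AEEF config packs may differ):

```yaml
name: aeef-tier1
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx eslint .
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx vitest run
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install semgrep && semgrep scan --config auto --error
```

Making the pipeline a required check happens outside the workflow file, in the repository's branch protection settings, so that a red stage actually blocks the merge.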

2.4 PR Templates and AI Disclosure (30 minutes)

Transparency as a governance control.

  • PR template with AI disclosure checkbox
  • AI-Usage trailer format in commit messages
  • Why disclosure matters for code review
  • Audit trail generation from PR metadata
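A disclosed commit might look like the following (the trailer names and values here are illustrative; use whatever vocabulary your tier's disclosure standard defines):

```text
feat(orders): add idempotency key to checkout endpoint

Implements story ORD-142; handler logic AI-generated, tests hand-written.

AI-Usage: assisted
AI-Tool: claude-code
```

Because these lines follow Git's standard trailer format, audit tooling can extract them later with `git interpret-trailers --parse` or `git log --format="%(trailers)"` rather than scraping free text.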

2.5 Lab: Configure Tier 1 for Your Project (30 minutes)

Exercise: Take an existing project (or a provided sample project) and apply Tier 1 governance using AEEF Quick Start.

Steps:

  1. Fork or clone the sample project
  2. Apply config packs from aeef-config-packs
  3. Configure AI tool rules for your team's primary AI tool
  4. Set up the 3-stage CI pipeline
  5. Create a PR using the AEEF template with AI disclosure
  6. Verify all pipeline checks pass

Deliverable: A pull request on your configured project with passing CI checks and completed AI disclosure.

Assessment Criteria

  • Can configure AI tool rules for at least 2 tools
  • CI pipeline runs lint, test, and SAST stages successfully
  • PR template includes AI disclosure section
  • Can explain which PRD-STDs are enforced by Tier 1

Module 3: Prompt Engineering for Production (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Author a production-quality CLAUDE.md file for a real project
  • Apply context engineering principles to maximize AI tool effectiveness
  • Use structured prompting patterns for consistent output
  • Identify and fix common prompt anti-patterns

Topics

3.1 CLAUDE.md Authoring Best Practices (45 minutes)

The single most impactful governance artifact.

  • Purpose: project context, conventions, constraints, and examples
  • Structure: project overview, tech stack, coding conventions, testing requirements, forbidden patterns
  • Length guidelines: enough context to be useful, short enough to leave room in the context budget for the code itself
  • Version control: CLAUDE.md evolves with the project
  • Multi-file strategies: root CLAUDE.md + directory-level CLAUDE.md files
  • Real examples from AEEF reference implementations
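As a starting point, a skeleton following the structure above might look like this (the project details are invented for illustration):

```markdown
# CLAUDE.md

## Project overview
Payment-reconciliation service. Correctness over speed; all money
values are integer cents, never floats.

## Tech stack
TypeScript 5 / Node 20, Fastify, PostgreSQL. No new runtime
dependencies without an approved ADR.

## Coding conventions
- Domain logic lives in src/domain/, never in route handlers.
- All exported functions have explicit return types.

## Testing requirements
- Vitest; every bug fix ships with a regression test.

## Forbidden patterns
- No `any` outside src/vendor/.
- Never build SQL strings by hand; use the query builder.
```

Each section earns its place by changing what the model generates; anything that would not alter output is context-budget waste and belongs in ordinary docs instead.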

3.2 Context Engineering (45 minutes)

Maximizing the value of limited context windows.

  • Spotify's approach to context engineering: structured project knowledge
  • Context hierarchy: system prompt, CLAUDE.md, file contents, conversation history
  • Context budget management: what to include and what to exclude
  • Repository indexing strategies for large codebases
  • The "context engineering > prompt engineering" principle

3.3 Structured Prompting Patterns (45 minutes)

Repeatable patterns for consistent AI output.

  • Role pattern: "You are a senior {language} engineer at a {domain} company..."
  • Context pattern: "This project uses {framework} with {architecture}..."
  • Constraint pattern: "Never use {deprecated API}. Always {required practice}."
  • Output format pattern: "Return your response as {format} with {sections}."
  • Chain-of-thought pattern: "First analyze, then plan, then implement."
  • Example pattern: "Here is an example of correct implementation: ..."
  • Combining patterns for complex tasks
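Combined, the patterns above compose into a single prompt like this (company, framework, and API names are invented for illustration):

```text
You are a senior TypeScript engineer at a payments company.    <- role
This project uses Fastify with a hexagonal architecture.       <- context
Never use the deprecated `request.raw` API. Always validate    <- constraint
input with the project's zod schemas.
First analyze the failing test, then outline a plan, then      <- chain of thought
implement the fix.
Return your response as a diff followed by a one-paragraph     <- output format
explanation.
```

In practice the role, context, and constraint lines migrate into CLAUDE.md so they apply to every task, leaving only the task-specific chain-of-thought and output-format lines in the prompt itself.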

3.4 Common Anti-Patterns (45 minutes)

What to stop doing immediately.

  • The blank slate: Starting Claude Code with no CLAUDE.md or project context
  • The novel: CLAUDE.md files so long they exhaust the context window
  • The wishful thinker: "Write perfect, bug-free code" (not actionable)
  • The micromanager: Constraining every decision, leaving no room for AI judgment
  • The copy-paster: Accepting AI output without review or adaptation
  • The prompt jockey: Spending more time crafting prompts than writing code
  • How to diagnose which anti-pattern your team is exhibiting

3.5 Lab: Write CLAUDE.md for a Real Project (remaining time)

Exercise: Author a CLAUDE.md file for a real project you work on (or a provided complex sample project).

Steps:

  1. Analyze the project structure, dependencies, and conventions
  2. Write a CLAUDE.md covering: overview, stack, conventions, testing, forbidden patterns
  3. Test the CLAUDE.md by running Claude Code against 3 different tasks
  4. Iterate based on output quality
  5. Peer review another participant's CLAUDE.md

Deliverable: A production-quality CLAUDE.md file with evidence of testing (before/after output comparison).

Assessment Criteria

  • CLAUDE.md covers all required sections (overview, stack, conventions, testing, constraints)
  • Can identify 4+ prompt anti-patterns and explain remediation
  • Demonstrates context engineering principles in CLAUDE.md design
  • Lab CLAUDE.md produces measurably better AI output than baseline

Module 4: AEEF Tier 2 -- Transformation (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Describe the 4-agent SDLC model and role responsibilities
  • Write agent contracts and handoff protocols
  • Configure mutation testing for your language stack
  • Run a complete 4-agent workflow using the AEEF CLI

Topics

4.1 The 4-Agent SDLC Model (45 minutes)

How Tier 2 structures AI-assisted development as a pipeline.

  • Product Agent: Story authoring, acceptance criteria, requirements validation
  • Architect Agent: Design decisions, ADRs, interface contracts, dependency approval
  • Developer Agent: Implementation within architectural constraints, test writing
  • QC Agent: Test execution, mutation testing, coverage validation, security scanning
  • Why 4 agents instead of 1: separation of concerns, constraint enforcement, audit trail
  • Mapping to the Agent Orchestration Model

4.2 Agent Contracts and Handoff Protocols (45 minutes)

The glue that makes multi-agent workflows reliable.

  • Contract structure: role, permissions, constraints, required outputs, forbidden actions
  • Handoff format: structured PR body with checklist, artifacts, and next-agent instructions
  • Branch naming conventions: aeef/product, aeef/architect, aeef/dev, aeef/qc
  • PR as handoff mechanism: why Git PRs are the ideal handoff artifact
  • Contract violations: what happens when an agent exceeds its role
  • Writing contracts for custom roles
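A contract following that structure might be sketched as below (the field names and paths are illustrative, not the AEEF contract schema):

```yaml
role: developer
permissions:
  write: [src/**, tests/**]
  read: [docs/adr/**]
constraints:
  - implement only interfaces approved in the architect handoff
  - every new module ships with unit tests
required-outputs:
  - PR from branch aeef/dev with the handoff checklist completed
forbidden-actions:
  - adding dependencies (architect approval required)
  - modifying CI configuration or agent contracts
```

The forbidden-actions list is what makes violations detectable: a drift check or PR diff scan can flag a developer-role PR that touches CI config, turning the contract from documentation into an enforceable gate.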

4.3 Mutation Testing (45 minutes)

The quality gate that catches what coverage metrics miss.

  • Why code coverage alone is insufficient (the "assert true" problem)
  • Mutation testing concepts: mutants, killed, survived, mutation score
  • Stryker (TypeScript): Configuration, thresholds, CI integration
  • mutmut (Python): Configuration, runner setup, baseline management
  • go-mutesting (Go): Configuration, mutation operators, reporting
  • Setting realistic mutation score thresholds for AI-generated code
  • Interpreting mutation testing results: what survived mutants tell you
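The "assert true" problem from the first bullet takes only a few lines to demonstrate. Both tests below give `clamp` 100% line coverage, but a mutation tool (Stryker, mutmut, go-mutesting) would report every mutant surviving the first and most mutants killed by the second (illustrative Python, not a specific tool's output):

```python
def clamp(value, low, high):
    """Tiny function under test."""
    if value < low:
        return low
    if value > high:
        return high
    return value

def weak_test():
    # Executes every line of clamp -- 100% line coverage -- yet asserts
    # nothing, so a mutant like `value > low` or a swapped return value
    # survives: the test still "passes" against the broken code.
    clamp(5, 0, 10)
    clamp(-1, 0, 10)
    clamp(99, 0, 10)
    return True

def strong_test():
    # Same inputs, but the assertions kill those mutants: a mutated
    # clamp returns a different value and the test fails.
    assert clamp(5, 0, 10) == 5
    assert clamp(-1, 0, 10) == 0
    assert clamp(99, 0, 10) == 10
    return True
```

This is why mutation score, not coverage, is the Tier 2 quality gate: coverage measures what ran, mutation score measures what was actually checked.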

4.4 Metrics Pipeline Setup (30 minutes)

Measuring what matters in AI-assisted development.

  • Core metrics: defect density, AI attribution rate, review time, mutation score
  • Collection points: CI pipeline, PR metadata, commit trailers
  • Aggregation and visualization: dashboards for team leads
  • Connecting metrics to the KPI Framework
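As a sketch of the collection step, AI attribution rate can be computed from commit trailers; the `AI-Usage:` trailer name is an assumption carried over from the Tier 1 disclosure format in section 2.4:

```python
def ai_attribution_rate(commit_messages):
    """Fraction of commits that declare AI assistance via an AI-Usage
    trailer (hypothetical trailer name). Input: full commit message
    bodies, e.g. collected from `git log`."""
    if not commit_messages:
        return 0.0
    flagged = sum(
        1 for msg in commit_messages
        if any(line.startswith("AI-Usage:") for line in msg.splitlines())
    )
    return flagged / len(commit_messages)
```

The same pattern extends to the other collection points: each metric is a small pure function over CI artifacts or PR metadata, which keeps the pipeline testable and auditable.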

4.5 Lab: Run a 4-Agent Workflow Using AEEF CLI (remaining time)

Exercise: Install the AEEF CLI and execute a complete 4-agent workflow on a sample project.

Steps:

  1. Install the AEEF CLI wrapper
  2. Initialize a project with aeef init
  3. Run aeef --role=product to create a story
  4. Run aeef --role=architect to design the solution
  5. Run aeef --role=developer to implement the code
  6. Run aeef --role=qc to validate and test
  7. Review the PR chain and handoff artifacts

Deliverable: A completed 4-agent workflow with PR chain showing handoffs between each role.

Assessment Criteria

  • Can describe all 4 agent roles and their boundaries
  • Can author a contract for at least 1 agent role
  • Mutation testing configured and running for chosen language
  • Successfully executed a 4-agent CLI workflow end-to-end

Module 5: Quality Gates & Security (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Write custom Semgrep rules targeting AI-generated code patterns
  • Configure SCA scanning for dependency compliance
  • Implement pre-commit hooks that enforce AEEF standards
  • Design a code review process adapted for AI-assisted development

Topics

5.1 SAST with Semgrep (60 minutes)

Static analysis as the first line of defense against AI-generated vulnerabilities.

  • Semgrep fundamentals: rules, patterns, metavariables
  • AEEF Semgrep rule packs: what ships with each tier
  • Writing custom rules for your project's specific risks:
    • Detecting hardcoded secrets in AI-generated code
    • Catching deprecated API usage
    • Enforcing architectural boundaries (e.g., no direct database calls from handlers)
    • Detecting common AI hallucination patterns (non-existent imports, fabricated APIs)
  • Rule testing with semgrep --test
  • CI integration: making Semgrep a required check

Rule authoring exercise (inline):

```yaml
rules:
  - id: no-console-log-in-production
    patterns:
      - pattern: console.log(...)
      - pattern-not-inside: |
          if ($CONDITION) { ... }
    message: "Remove console.log before merging. Use a structured logger instead."
    languages: [typescript, javascript]
    severity: WARNING
    metadata:
      aeef-standard: PRD-STD-006
      category: technical-debt
```
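A rule like the one above is validated with `semgrep --test` against an annotated fixture file placed alongside it. The `ruleid:` and `ok:` comment annotations are Semgrep's standard test conventions (`ruleid:` marks a line the rule must flag, `ok:` one it must not); the fixture content itself is illustrative:

```typescript
// no-console-log-in-production.ts -- fixture for `semgrep --test`.

// ruleid: no-console-log-in-production
console.log("left over from debugging");

export function handler(debug: boolean): void {
  if (debug) {
    // ok: no-console-log-in-production
    console.log("inside a conditional, excluded by pattern-not-inside");
  }
}
```

If the rule misses a `ruleid:` line or flags an `ok:` line, `semgrep --test` fails, so the fixtures double as regression tests when rules are later edited.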

5.2 SCA and Dependency Management (30 minutes)

Controlling what AI tools pull into your dependency tree.

  • The AI dependency problem: models suggest packages they were trained on, not what is current or safe
  • SCA tools: Dependabot, Renovate, Snyk, OSV-Scanner
  • License compliance checking
  • Dependency pinning strategies for AI-governed projects
  • AEEF config pack: dependency allowlists and blocklists
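A minimal sketch of how an allowlist/blocklist gate could evaluate a project's declared dependencies (the function and the list formats are hypothetical, not the AEEF config-pack schema):

```python
def check_dependencies(declared, allowlist, blocklist):
    """Classify declared dependencies against allow/block lists.
    Returns packages that must fail the build ("blocked") and packages
    that need human review before first use ("needs-review")."""
    declared, allowed, blocked = set(declared), set(allowlist), set(blocklist)
    return {
        "blocked": sorted(declared & blocked),
        "needs-review": sorted(declared - allowed - blocked),
    }
```

The "needs-review" bucket is the important one for AI-governed projects: it catches the packages a model introduced on its own, which is exactly the failure mode described above.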

5.3 Code Review in an AI World (30 minutes)

Adapting review practices for AI-generated code.

  • The review burden problem: 98% more PRs, 91% longer reviews
  • AI disclosure in PRs: what reviewers need to know
  • Review checklists adapted for AI-generated code
  • Reviewer training: patterns that indicate AI hallucination
  • Automated pre-review gates that reduce human review load
  • When to reject AI-generated code entirely

5.4 Pre-Commit Hooks and CI Enforcement (30 minutes)

Shifting quality gates as far left as possible.

  • Pre-commit hook framework setup
  • AEEF hooks: lint, format, type check, Semgrep, secret detection
  • Claude Code hooks: PreToolUse, PostToolUse, Stop
  • Making hooks non-bypassable in CI
  • Performance considerations: keeping hooks fast enough to use
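A `.pre-commit-config.yaml` wiring up some of those hooks might be sketched as follows (the local-hook entries assume a TypeScript project and a `.semgrep/` rule directory; verify repository URLs and `rev` pins against current releases before use):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: detect-private-key    # cheap first-pass secret detection
  - repo: local
    hooks:
      - id: lint
        name: eslint
        entry: npx eslint --fix
        language: system
        types: [ts]
      - id: semgrep
        name: semgrep (AEEF rule pack)
        entry: semgrep scan --config .semgrep/ --error
        language: system
        pass_filenames: false
```

Because developers can always commit with `--no-verify`, the same checks must run again as required CI checks; the pre-commit layer exists for fast feedback, not enforcement.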

5.5 Lab: Write 3 Custom Semgrep Rules for Your Project (remaining time)

Exercise: Identify 3 risk patterns specific to your project and write Semgrep rules to detect them.

Steps:

  1. Audit your project for 3 patterns that AI tools commonly get wrong
  2. Write a Semgrep rule for each pattern with tests
  3. Run rules against your codebase and verify detection
  4. Add rules to your CI pipeline
  5. Document each rule: what it catches, why it matters, how to fix violations

Deliverable: 3 tested Semgrep rules integrated into CI, with documentation.

Assessment Criteria

  • Can write a Semgrep rule from scratch with metavariables and patterns
  • All 3 custom rules pass semgrep --test validation
  • Can explain SCA scanning and dependency compliance strategies
  • Pre-commit hooks configured and running locally

Module 6: AEEF Tier 3 -- Production (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Describe the 11-agent orchestration model and its role hierarchy
  • Configure monitoring for AI-assisted engineering metrics
  • Implement drift detection and baseline management
  • Set up automated incident response workflows

Topics

6.1 The 11-Agent Orchestration Model (45 minutes)

Scaling from 4 agents to enterprise-grade orchestration.

  • The 7 additional roles: Security, Compliance, Release, Infrastructure, Documentation, Metrics, Incident Response
  • Role hierarchy and supervision chains
  • When to use 11 agents vs. 4 agents (decision framework)
  • Contract complexity at scale: inter-agent dependencies
  • The Enterprise Role Pack for AEEF CLI

6.2 Monitoring Stack (45 minutes)

Observability for AI-assisted engineering.

  • Grafana: Dashboard setup for AEEF metrics
  • Prometheus: Metric collection from CI pipelines and agent workflows
  • AlertManager: Alerting on quality gate failures, drift detection, SLA breaches
  • Key dashboards: defect density trend, AI attribution rate, mutation score over time, review latency
  • Integration with existing monitoring infrastructure
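As an illustration of the alerting layer, a Prometheus rule for the review-latency SLA case might look like this (the metric name `aeef_review_latency_hours` is an assumption; wire the expression to whatever your CI exporter actually publishes):

```yaml
groups:
  - name: aeef-quality-gates
    rules:
      - alert: ReviewLatencySLABreach
        expr: avg_over_time(aeef_review_latency_hours[1d]) > 24
        for: 2h
        labels:
          severity: warning
        annotations:
          summary: "Average PR review latency has exceeded the 24h SLA"
```

The `for: 2h` clause keeps a single slow review from paging anyone; the alert fires only when the daily average stays over the SLA for two consecutive hours.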

6.3 Drift Detection and Baseline Management (30 minutes)

Preventing configuration decay in governed projects.

  • What drifts: CI configs, linter rules, Semgrep rule packs, agent contracts
  • Baseline files: checksums of governed configuration
  • Drift detection scripts: comparing current state to baseline
  • CI-integrated drift checks: failing builds on unauthorized changes
  • Remediation workflows: automated PRs to fix drift

6.4 Incident Response Automation (30 minutes)

When things go wrong in AI-assisted development.

  • Incident categories: AI-generated vulnerability, contract violation, quality gate bypass, data exposure
  • Automated response scripts from aeef-production shared/scripts/
  • Escalation paths: agent to human, team to security, security to compliance
  • Post-incident analysis: root cause attribution to AI vs. human decisions
  • Incident response runbooks

6.5 Lab: Set Up Monitoring for a Tier 3 Project (remaining time)

Exercise: Deploy the AEEF monitoring stack for a sample Tier 3 project.

Steps:

  1. Clone aeef-production for your language stack
  2. Deploy the monitoring stack using Docker Compose
  3. Configure Prometheus to collect metrics from the CI pipeline
  4. Set up Grafana dashboards for AEEF core metrics
  5. Configure AlertManager rules for quality gate SLA breaches
  6. Trigger a test alert and verify the pipeline

Deliverable: Running monitoring stack with at least 3 dashboards and 2 alert rules configured.

Assessment Criteria

  • Can describe all 11 agent roles and when to use the full model
  • Monitoring stack deployed and collecting metrics
  • At least 2 alert rules configured and tested
  • Can explain drift detection and baseline management

Module 7: Orchestration Patterns (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Compare multi-agent orchestration architectures and select the right one
  • Implement Git-branch isolation for parallel agent workflows
  • Integrate AEEF governance with third-party orchestration tools
  • Build a custom orchestration workflow for a specific use case

Topics

7.1 Multi-Agent Architectures (45 minutes)

Patterns for coordinating multiple AI agents.

  • Sequential: Agent A finishes, then Agent B starts (AEEF CLI default)
  • Parallel: Multiple agents work simultaneously on different components
  • Hierarchical: Supervisor agent delegates to worker agents
  • Peer Review: Two agents independently solve the same problem, outputs compared
  • Pipeline with Feedback: Later agents can send work back to earlier agents
  • Trade-offs: speed vs. governance, complexity vs. reliability
  • Pattern selection framework: which pattern for which project type

7.2 Git-Branch Isolation Patterns (45 minutes)

Using Git as the coordination layer for multi-agent workflows.

  • Branch-per-role: the AEEF CLI model
  • Branch-per-task: parallel agent execution on different features
  • Branch-per-component: agents own different parts of the system
  • Merge strategies: fast-forward, squash, rebase
  • Conflict resolution in multi-agent workflows
  • PR chains vs. stacked PRs vs. trunk-based development

7.3 Integration with Orchestration Tools (45 minutes)

AEEF governance does not require the AEEF CLI -- it works with any orchestrator.

  • CrewAI: Agent definition, task assignment, AEEF contract integration
  • claude-flow: Multi-agent coordination with Claude Code
  • Composio: Tool orchestration with governance hooks
  • OpenClaw (tmux-based): Terminal multiplexing for parallel agents
  • Custom scripts: Building your own orchestration with bash/Python
  • Governance integration points: where to inject AEEF controls in each tool

7.4 Lab: Implement a Parallel Agent Workflow (remaining time)

Exercise: Build a parallel agent workflow where 2 agents work simultaneously on different components, then a review agent validates both.

Steps:

  1. Define 3 agent roles: Agent A (frontend), Agent B (backend), Agent C (reviewer)
  2. Write contracts for each role with clear boundaries
  3. Implement branch-per-component isolation
  4. Run agents A and B in parallel (using tmux, background processes, or orchestration tool)
  5. Run agent C to review both outputs
  6. Merge validated work to main

Deliverable: A completed parallel workflow with 3 agent roles, branch isolation, and merged output.

Assessment Criteria

  • Can compare 4+ orchestration patterns with trade-offs
  • Git-branch isolation implemented and working
  • At least 1 third-party orchestration tool integrated with AEEF governance
  • Parallel workflow completed with review agent validation

Module 8: Capstone Project (3 hours)

Learning Objectives

By the end of this module, you will be able to:

  • Execute a complete Tier 1 to Tier 2 migration on a real project
  • Demonstrate all governance controls working end-to-end
  • Present your migration to a review panel with metrics and evidence
  • Pass the certification assessment

Project Requirements

The capstone project is the culmination of the Developer Certification. You will take an existing project (provided or your own) and migrate it from Tier 1 to Tier 2 governance, demonstrating competency across all modules.

Required Deliverables

  1. Tier 1 Baseline (already completed in Module 2)

    • AI tool rules configured
    • 3-stage CI pipeline running
    • PR template with AI disclosure
  2. Tier 2 Migration

    • Agent contracts for all 4 roles (product, architect, developer, QC)
    • Handoff protocols defined with PR templates
    • Mutation testing configured and passing thresholds
    • Metrics pipeline collecting at least 4 core metrics
    • At least 1 complete 4-agent workflow executed with evidence
  3. Quality Evidence

    • 3+ custom Semgrep rules running in CI
    • Pre-commit hooks configured and enforced
    • Code review checklist adapted for AI-assisted development
    • CLAUDE.md with demonstrated improvement over baseline
  4. Presentation (15 minutes)

    • Before/after metrics comparison
    • Challenges encountered and how they were resolved
    • Recommendations for the project team
    • Q&A with review panel

Evaluation Rubric

| Criterion | Points | Description |
| --- | --- | --- |
| CI Pipeline | 15 | All stages running, checks required, no bypasses |
| Agent Contracts | 15 | All 4 roles defined with clear constraints |
| Mutation Testing | 10 | Configured, passing threshold, integrated in CI |
| Semgrep Rules | 10 | 3+ custom rules, tested, documented |
| CLAUDE.md Quality | 10 | Comprehensive, tested, measurable improvement |
| Metrics Pipeline | 10 | 4+ metrics collected, dashboard or report |
| Workflow Execution | 15 | Complete 4-agent workflow with PR evidence |
| Presentation | 15 | Clear, evidence-based, actionable recommendations |
| Total | 100 | Pass: 80+ |

Assessment Details

In addition to the capstone project, candidates complete a 60-question multiple-choice exam:

| Section | Questions | Topics |
| --- | --- | --- |
| AI Fundamentals | 10 | Productivity paradox, LLM limitations, tool landscape |
| Tier 1 Configuration | 10 | AI rules, CI pipeline, PR templates |
| Prompt Engineering | 8 | CLAUDE.md, context engineering, anti-patterns |
| Tier 2 Governance | 10 | Agent SDLC, contracts, handoffs, mutation testing |
| Security & Quality | 8 | Semgrep, SCA, code review, pre-commit hooks |
| Tier 3 & Monitoring | 7 | 11-agent model, monitoring, drift, incident response |
| Orchestration | 7 | Patterns, Git isolation, tool integration |
| Total | 60 | Pass: 48/60 (80%) |

Time limit: 90 minutes for the exam. The capstone project must be submitted before you sit the exam.


Preparation Checklist

Before starting the Developer Certification, ensure you have:

  • A computer with Git, Node.js (or Python 3.11+ or Go 1.21+), and Docker installed
  • A GitHub account with permission to create repositories and Actions workflows
  • Claude Code CLI installed (or willingness to install in Module 1)
  • Access to the AEEF public repositories (aeef-quickstart, aeef-config-packs, aeef-transform, aeef-cli)
  • A text editor or IDE you are comfortable with
  • 24 hours of dedicated study time (recommend 3 hours per week over 8 weeks)
  • Optional: a real project to apply governance to during labs

Self-Paced (8 weeks)

| Week | Module | Hours |
| --- | --- | --- |
| 1 | Module 1: AI Coding Fundamentals | 3 |
| 2 | Module 2: Tier 1 Quick Start | 3 |
| 3 | Module 3: Prompt Engineering | 3 |
| 4 | Module 4: Tier 2 Transformation | 3 |
| 5 | Module 5: Quality Gates & Security | 3 |
| 6 | Module 6: Tier 3 Production | 3 |
| 7 | Module 7: Orchestration Patterns | 3 |
| 8 | Module 8: Capstone + Exam | 3 |

Intensive (2 weeks)

| Day | Modules | Hours |
| --- | --- | --- |
| Mon-Tue | Modules 1-2 | 6 |
| Wed-Thu | Modules 3-4 | 6 |
| Fri | Module 5 | 3 |
| Mon | Module 6 | 3 |
| Tue | Module 7 | 3 |
| Wed | Module 8: Capstone + Exam | 3 |

Continuing Education

After completing the Developer Certification, you may pursue:

  • Architect Track: Deepen your knowledge of agent SDLC design, orchestration patterns, and enterprise scaling
  • Workshop Specializations: Take advanced workshops on Multi-Agent Orchestration or AEEF CLI Mastery
  • Community Contributions: Author Semgrep rules, agent contracts, or CLAUDE.md templates for the AEEF community