
Development Manager Agent

Overview

| Field | Value |
| --- | --- |
| Agent ID | devmgr-agent |
| SDLC Stage | Stage 4: Testing and Quality Assurance (oversight) |
| Human Owner | Development Manager |
| Role Guide | Development Manager Guide |
| Prompt Template | prompt-library/by-role/development-manager/quality-risk-enablement-plan.md |
| Contract Version | 1.0.0 |
| Status | Active |

What This Agent Does

The devmgr-agent provides quality oversight across the pipeline. It operates alongside the qa-agent in Stage 4, assessing quality metrics against team baselines and validating that evidence is complete before the work item advances.

Core responsibilities:

  1. Quality metrics assessment — Compare current work item metrics against team baselines and historical trends
  2. Trend deviation detection — Flag when quality metrics are deteriorating (increasing defect rates, declining coverage, growing rework)
  3. Evidence completeness validation — Verify all required artifacts exist before advancing through gates
  4. Team health monitoring — Track AI-assisted delivery patterns for sustainable pace
  5. Risk aggregation — Combine risks from multiple agents into a unified risk view
  6. Reporting — Produce quality dashboards and trend reports for leadership
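Responsibility 5 (risk aggregation) can be sketched in Python. The `AgentRisk` shape and `aggregate_risks` helper below are hypothetical illustrations, not part of the agent's published interface:

```python
from dataclasses import dataclass

@dataclass
class AgentRisk:
    source_agent: str   # e.g. "qa-agent", "security-agent"
    severity: int       # 1 (low) .. 3 (high)
    description: str

def aggregate_risks(risks: list[AgentRisk]) -> dict:
    """Combine per-agent risks into a single unified risk view."""
    by_agent: dict[str, list[str]] = {}
    for r in risks:
        by_agent.setdefault(r.source_agent, []).append(r.description)
    # Overall severity is the worst individual severity reported.
    max_severity = max((r.severity for r in risks), default=0)
    return {
        "overall_severity": max_severity,
        "risk_count": len(risks),
        "by_agent": by_agent,
    }
```

The key design point is that aggregation is read-only: the agent summarizes risks raised elsewhere but never rewrites or suppresses them.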

Agent Contract

```yaml
agent_id: devmgr-agent
contract_version: 1.0.0
role_owner: development-manager

allowed_inputs:
  - test-results
  - coverage-reports
  - release-readiness-recommendation
  - quality-baselines
  - defect-history
  - sprint-velocity-data
  - agent-trust-level-metrics

allowed_outputs:
  - quality-metrics-assessment
  - trend-analysis
  - evidence-completeness-report
  - risk-aggregation-summary
  - team-health-indicators
  - quality-dashboard

forbidden_actions:
  - override-security-findings      # Security decisions belong to security-agent
  - skip-audit-evidence             # Evidence requirements are non-negotiable
  - modify-test-results             # Test results are immutable
  - change-quality-thresholds       # Threshold changes require human approval
  - approve-own-quality-assessments # Self-approval violates governance

required_checks:
  - metrics-compared-to-baseline
  - evidence-completeness-verified
  - no-negative-trend-unaddressed

handoff_targets:
  - agent: platform-agent
    artifact: quality-assessment
    condition: quality-oversight-complete

escalation_path:
  approver_role: development-manager
  triggers:
    - quality-metrics-below-baseline
    - negative-trend-detected
    - evidence-gap-identified
    - team-burnout-indicators
```
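A minimal runtime guard over the contract could look like the Python sketch below. The function names are illustrative; the action and output strings are taken directly from the contract above:

```python
# Strings copied from the contract's forbidden_actions and allowed_outputs.
FORBIDDEN_ACTIONS = {
    "override-security-findings",
    "skip-audit-evidence",
    "modify-test-results",
    "change-quality-thresholds",
    "approve-own-quality-assessments",
}

ALLOWED_OUTPUTS = {
    "quality-metrics-assessment",
    "trend-analysis",
    "evidence-completeness-report",
    "risk-aggregation-summary",
    "team-health-indicators",
    "quality-dashboard",
}

def check_action(action: str) -> bool:
    """True if the action is permitted under the contract."""
    return action not in FORBIDDEN_ACTIONS

def check_output(artifact_type: str) -> bool:
    """True if the agent is allowed to emit this artifact type."""
    return artifact_type in ALLOWED_OUTPUTS
```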

System Prompt Blueprint

```
You are devmgr-agent for [PROJECT_NAME].

Your role: Assess quality metrics, validate evidence completeness, and
provide quality oversight for the delivery pipeline.

Team baselines:
- Test coverage target: [X]%
- Defect escape rate target: <[Y]%
- First-pass gate rate target: >[Z]%

Contract boundaries:
- You MUST NOT override security findings
- You MUST NOT skip or fabricate audit evidence
- You MUST NOT modify test results
- You MUST escalate when metrics fall below baseline

For every work item in Stage 4, assess:
1. Quality metrics vs team baseline (coverage, defect rate, rework rate)
2. Trend analysis (improving, stable, or deteriorating)
3. Evidence completeness (all required artifacts present)
4. Risk aggregation (combined risks from all agents)
5. Quality dashboard update

Standards: PRD-STD-007 (Quality Gates), PRD-STD-009 (Agent Governance)
```
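Assuming the bracketed placeholders are filled in at deployment time, rendering can be plain string substitution. The `render_prompt` helper is hypothetical, shown over a trimmed excerpt of the blueprint:

```python
# Trimmed excerpt of the blueprint; the full text would be used in practice.
BLUEPRINT = """You are devmgr-agent for [PROJECT_NAME].

Team baselines:
- Test coverage target: [X]%
- Defect escape rate target: <[Y]%
- First-pass gate rate target: >[Z]%"""

def render_prompt(project: str, coverage: int, escape: int, gate: int) -> str:
    """Substitute the blueprint's placeholders with team-specific values."""
    return (BLUEPRINT
            .replace("[PROJECT_NAME]", project)
            .replace("[X]", str(coverage))
            .replace("[Y]", str(escape))
            .replace("[Z]", str(gate)))
```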

Handoff Specifications

Receives From (Upstream)

| Source | Artifact | Trigger |
| --- | --- | --- |
| qa-agent | Test results with release readiness recommendation | Testing complete |

Sends To (Downstream)

| Target | Artifact | Condition |
| --- | --- | --- |
| platform-agent (via Gate 5 merge) | Quality metrics assessment and evidence report | Quality oversight complete |

Gate Responsibilities

Co-owns Gate 4 with qa-agent, covering specifically the metrics and evidence aspects.

Trust Level Progression

| Level | Duration | What Changes |
| --- | --- | --- |
| Level 0 | 2 weeks / 20 runs | Dev Manager reviews every quality assessment |
| Level 1 | 4 weeks / 50 runs | Auto-approve when metrics exceed baseline by >10% |
| Level 2 | 8 weeks / 100 runs | Auto-approve for Tier 1-2; human reviews trend deviations |
| Level 3 | Ongoing | Human reviews only significant trend deviations and Tier 3+ |
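One way to encode the progression table as policy is a small decision function. This is a sketch; the actual thresholds, tier semantics, and deviation flags come from your trust model configuration:

```python
def needs_human_review(trust_level: int, metric: float, baseline: float,
                       tier: int, trend_deviation: bool) -> bool:
    """Decide whether a human must review, following the progression table."""
    if trust_level == 0:
        return True                          # every assessment reviewed
    if trust_level == 1:
        return metric < baseline * 1.10      # auto-approve only >10% above baseline
    if trust_level == 2:
        return tier >= 3 or trend_deviation  # Tier 1-2 auto-approved otherwise
    # Level 3: only significant trend deviations and Tier 3+ reach a human.
    return trend_deviation or tier >= 3
```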

Environment Scope

| Environment | Access | Allowed Actions |
| --- | --- | --- |
| Development | Read-only | View sprint metrics for context |
| Staging | Full | Assess quality metrics, validate evidence |
| Production | Read-only | Monitor production quality metrics for feedback |

Implementation Guide

Step 1: Define Quality Baselines

Establish baselines from your last 3-6 sprints:

```yaml
quality_baselines:
  test_coverage: 80%
  defect_escape_rate: 2%
  first_pass_gate_rate: 85%
  rework_rate: 15%
  ai_attribution_completeness: 100%
```
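These baselines can then drive the contract's `quality-metrics-below-baseline` escalation trigger. A sketch, assuming percentages are reported as floats and that `defect_escape_rate` and `rework_rate` are "lower is better":

```python
# Baseline values mirror the YAML config above.
BASELINES = {
    "test_coverage": 80.0,               # percent, minimum
    "defect_escape_rate": 2.0,           # percent, maximum
    "first_pass_gate_rate": 85.0,        # percent, minimum
    "rework_rate": 15.0,                 # percent, maximum
    "ai_attribution_completeness": 100.0 # percent, minimum
}

# Metrics where lower is better; the rest are "higher is better".
LOWER_IS_BETTER = {"defect_escape_rate", "rework_rate"}

def metrics_below_baseline(current: dict[str, float]) -> list[str]:
    """Return the names of reported metrics that breach their baseline."""
    breaches = []
    for name, baseline in BASELINES.items():
        value = current.get(name)
        if value is None:
            continue  # unreported metrics are skipped, not treated as breaches
        if name in LOWER_IS_BETTER:
            if value > baseline:
                breaches.append(name)
        elif value < baseline:
            breaches.append(name)
    return breaches
```

Any non-empty return would fire the escalation path to the development manager.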

Step 2: Configure Trend Detection

```yaml
trend_alerts:
  - metric: "defect_escape_rate"
    window: "4_sprints"
    alert_if: "increasing_for_2_consecutive"
  - metric: "test_coverage"
    window: "4_sprints"
    alert_if: "decreasing_for_2_consecutive"
  - metric: "rework_rate"
    window: "4_sprints"
    alert_if: "exceeds_20_percent"
```
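These rules could be evaluated as follows. The `check_trend_alert` helper is an illustrative sketch, with the window and consecutive-delta logic mirroring the config above:

```python
def increasing_for_n_consecutive(values: list[float], n: int = 2) -> bool:
    """True if the last n deltas in the series are all strictly increasing."""
    if len(values) < n + 1:
        return False
    tail = values[-(n + 1):]
    return all(b > a for a, b in zip(tail, tail[1:]))

def check_trend_alert(metric: str, history: list[float]) -> bool:
    """Evaluate one alert rule over the trailing 4-sprint window."""
    window = history[-4:]
    if metric == "defect_escape_rate":
        return increasing_for_n_consecutive(window, 2)
    if metric == "test_coverage":
        # Decreasing coverage is increasing negated coverage.
        return increasing_for_n_consecutive([-v for v in window], 2)
    if metric == "rework_rate":
        return bool(window) and window[-1] > 20.0
    return False
```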

Step 3: Wire to Quality Dashboard

The devmgr-agent feeds data to your quality dashboard (Grafana, DataDog, etc.). Configure it to track agent-specific metrics per the Trust Model.
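As a sketch, an assessment could be serialized into a generic JSON payload before being shipped by whatever transport your dashboard expects. The payload shape here is an assumption, not a Grafana or DataDog schema:

```python
import json
import time

def build_dashboard_payload(work_item: str, metrics: dict[str, float]) -> str:
    """Serialize a quality assessment into a generic JSON payload.

    How the payload is shipped (Grafana HTTP API, DataDog agent, a metrics
    gateway, etc.) depends on your dashboard stack.
    """
    return json.dumps({
        "agent_id": "devmgr-agent",
        "work_item": work_item,
        "timestamp": int(time.time()),
        "metrics": metrics,
    })
```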

Known Limitations

  • Metrics depend on upstream accuracy — If qa-agent produces inaccurate test results, quality assessment is unreliable.
  • Team health is inferential — The agent infers burnout from metrics patterns, not direct observation.
  • Cross-team comparisons — Baselines are team-specific. Cross-team benchmarking requires normalization.

Standards Compliance

| Standard | Requirement | Evidence This Agent Produces |
| --- | --- | --- |
| PRD-STD-007 | Quality gate enforcement | Metrics assessment, evidence completeness report |
| PRD-STD-009 | Agent governance | Run records, quality dashboards |