Scenario: Python Microservice with AEEF Standards
This walkthrough shows how to apply AEEF production standards together on a Python FastAPI microservice. It follows an entire feature from prompt to production, demonstrating how the standards create a governed delivery pipeline.
Time required: 60-90 minutes (reading + doing)
Prerequisites: Familiarity with Python 3.12+, FastAPI, and basic AEEF concepts from the Startup Quick-Start.
This is a realistic composite scenario showing how standards apply together. Adapt the specifics to your stack — the governance workflow is universal.
The Project
A payment processing microservice built with:
- Framework: FastAPI with Python 3.12
- Database: PostgreSQL via SQLAlchemy 2.0 (async)
- Auth: JWT tokens validated from an auth service
- Testing: pytest + pytest-asyncio + httpx
- CI: GitHub Actions with Semgrep + pip-audit
- Deployment: Docker + Kubernetes
The team has 12 engineers across 3 microservices and has completed the CI/CD Pipeline Starter.
The Feature
User story: As a merchant, I can configure webhook endpoints so I receive real-time notifications when payment events occur.
This feature touches:
- Database schema (webhook configuration model)
- API endpoints (CRUD for webhook configs)
- Async worker (webhook delivery with retry)
- Signature verification (HMAC for webhook payloads)
- Authorization (merchant can only manage their own webhooks)
Phase 1: Prompt Engineering (PRD-STD-001)
Step 1.1: Structured Prompt for API Endpoint
Using the Python Secure REST Endpoint template (prompt-library/by-language/python/secure-endpoint.md):
You are generating production-grade Python API code.
**Context:**
- Python 3.12 with strict type hints (mypy strict mode)
- Framework: FastAPI
- Database: SQLAlchemy 2.0 async with asyncpg
- Validation: Pydantic v2
- Testing: pytest + pytest-asyncio + httpx
**Task:** Create CRUD endpoints for webhook configurations.
**Requirements:**
1. Webhook config: url (HTTPS only), events (list of event types),
secret (auto-generated HMAC key), active (boolean), merchant_id
2. Merchants can only manage their own webhook configs
3. URL must be HTTPS — reject HTTP endpoints
4. Secret key must be generated server-side using secrets module
5. Max 10 webhook configs per merchant
6. Validate URL is reachable with a test ping on creation
**Constraints:**
- Use Pydantic v2 models for all request/response validation
- Enforce authorization via JWT dependency — check merchant_id claim
- Use parameterized queries only (SQLAlchemy handles this)
- Return structured error responses with proper HTTP status codes
- All functions must have complete type hints
- Use secrets.token_hex(32) for webhook signing keys, not random module
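A minimal sketch of what this prompt should yield at the schema layer, assuming Pydantic v2. All names here (`WebhookEvent`, `WebhookConfigCreate`, `generate_webhook_secret`) and the event values are illustrative, not the template's canonical output:

```python
import secrets
from typing import Literal

from pydantic import BaseModel, Field, HttpUrl, field_validator

# Hypothetical event types -- adapt to your payment domain.
WebhookEvent = Literal["payment.succeeded", "payment.failed", "refund.created"]

class WebhookConfigCreate(BaseModel):
    """Request schema for registering a webhook endpoint."""
    url: HttpUrl
    events: list[WebhookEvent] = Field(min_length=1)
    active: bool = True

    @field_validator("url")
    @classmethod
    def require_https(cls, v: HttpUrl) -> HttpUrl:
        # Requirement 3: reject plain-HTTP endpoints outright.
        if v.scheme != "https":
            raise ValueError("webhook URL must use HTTPS")
        return v

def generate_webhook_secret() -> str:
    """Server-side signing key (requirement 4): secrets module, never random."""
    return secrets.token_hex(32)
```

Note that the secret is generated server-side and never accepted from the client; the response schemas (not shown) should exclude it from list/detail payloads.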
Step 1.2: Structured Prompt for Webhook Delivery
You are implementing a webhook delivery worker.
**Context:**
- Python 3.12, asyncio-based
- HTTP client: httpx.AsyncClient
- Queue: [Redis / RabbitMQ / in-process — specify]
**Task:** Implement webhook delivery with retry logic.
**Requirements:**
1. Sign payloads with HMAC-SHA256 using the webhook secret
2. Include signature in X-Webhook-Signature header
3. Retry failed deliveries: 3 attempts with exponential backoff (1s, 10s, 60s)
4. Log each delivery attempt with structured context (webhook_id, event, attempt, status)
5. Mark webhook as inactive after 10 consecutive failures
6. Timeout: 30 seconds per delivery attempt
7. Never log the full payload or signing secret
**Constraints:**
- Use httpx.AsyncClient with explicit timeout
- Use hmac.compare_digest for signature comparison (timing-safe)
- Handle network errors, timeouts, and non-2xx responses separately
- All functions must have complete type hints
Step 1.3: Record Prompt References
AI-Usage: claude
AI-Prompt-Ref: by-language/python/secure-endpoint (webhook CRUD),
by-language/python/secure-endpoint (delivery worker, adapted)
AI-Confidence: high — CRUD endpoints, medium — retry/backoff logic
Phase 2: Human-in-the-Loop Review (PRD-STD-002)
Step 2.1: Review AI Output Against Checklist
Using the Python PR Risk Review prompt (prompt-library/by-language/python/pr-risk-review.md):
Critical items for this feature:
| Check | What to Verify | Status |
|---|---|---|
| Auth bypass | Does every endpoint verify merchant_id from JWT before accessing data? | |
| Secret handling | Is the webhook secret generated with secrets.token_hex(), not random? | |
| SSRF | Is the webhook URL validated to be HTTPS? Does the test ping prevent internal network access? | |
| SQL injection | SQLAlchemy uses parameterized queries — verify no text() with f-strings | |
| Timing attack | Is HMAC signature comparison using hmac.compare_digest(), not ==? | |
| Secret leakage | Is the webhook secret excluded from list/detail API responses? | |
| Async safety | No requests.get() inside async functions? Using httpx.AsyncClient? | |
| Input validation | URL validated as HTTPS? Events list validated against allowed types? | |
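For the timing-attack row, the receiver-side check the reviewer should expect is a constant-time comparison. A minimal sketch (the function name is hypothetical):

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, received_sig: str) -> bool:
    """Timing-safe signature check: hmac.compare_digest, never ==."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_sig)
```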
Step 2.2: Python-Specific AI Pitfalls to Check
From the Python anti-patterns table (prompt-library/by-language/python.md):
- No bare `except:` clauses — all exceptions are specific
- No `print()` statements — using the `logging` module with structured context
- No `os.path` — using `pathlib.Path` if file operations exist
- No mutable default arguments in function signatures
- No `yaml.load()` — using `yaml.safe_load()` if YAML parsing exists
- Type hints on all functions (parameters + returns)
Phase 3: Testing (PRD-STD-003)
Step 3.1: Generate Test Matrix
Use the Python Risk-Based Test Matrix prompt (prompt-library/by-language/python/test-matrix.md):
Feature: Webhook configuration CRUD + delivery worker
Changes: SQLAlchemy model, FastAPI router, Pydantic schemas, delivery worker
Generate a risk-based test matrix covering:
1. Unit tests for Pydantic validation, HMAC signing, retry logic
2. Integration tests for API endpoints (auth states, validation, CRUD)
3. Async tests for webhook delivery (success, retry, failure)
4. Security tests for SSRF prevention and secret handling
Expected test coverage:
| Test Type | Count | What It Covers |
|---|---|---|
| Unit (pytest) | 10-15 | Pydantic schemas, HMAC signing, URL validation, retry backoff |
| API integration | 8-12 | CRUD endpoints + auth boundary + validation errors |
| Async worker | 6-8 | Delivery success, retry, timeout, consecutive failures |
| Security | 3-5 | SSRF prevention, secret non-exposure, timing-safe comparison |
Step 3.2: Verify AI-Generated Tests
Common issues with AI-generated Python tests — verify that:
- Tests use `pytest.mark.asyncio` for async test functions
- Tests use `httpx.AsyncClient` for FastAPI testing, not `requests`
- Tests mock external HTTP calls, not internal service methods
- Tests use `pytest.mark.parametrize` for multiple input scenarios
- No `time.sleep()` — using async patterns or freezegun for time
- Factory functions or fixtures for test data, not inline construction
Phase 4: Security Scanning (PRD-STD-004)
Step 4.1: Automated CI Checks
Your CI pipeline runs these checks automatically on every PR:
- Semgrep: SQL injection, command injection, SSRF, insecure crypto
- pip-audit: Known CVEs in Python dependencies
- mypy --strict: Type safety violations
- ruff: Linting including security-focused rules
- bandit: Python-specific security analysis
Step 4.2: SSRF-Specific Review
Webhook URL registration is an SSRF risk. Verify:
- URL must be HTTPS (HTTP rejected)
- URL must resolve to a public IP (reject 10.x, 172.16-31.x, 192.168.x, 127.x, ::1)
- URL must not point to cloud metadata endpoints (169.254.169.254)
- DNS resolution happens at delivery time, not registration time (prevent DNS rebinding)
- Test ping uses the same restrictions as production delivery
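A registration-time check covering most of these rules can be sketched with the standard library alone. The function name is an assumption, and per the DNS-rebinding point above, the same check must run again at delivery time against the IP actually connected to:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_https_url(url: str) -> None:
    """Reject non-HTTPS URLs and URLs resolving to non-public addresses."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        raise ValueError("webhook URL must be HTTPS with a hostname")
    for info in socket.getaddrinfo(parsed.hostname, 443, proto=socket.IPPROTO_TCP):
        ip = ipaddress.ip_address(info[4][0])
        # Covers 10/8, 172.16/12, 192.168/16, 127/8, ::1, link-local
        # (including the 169.254.169.254 metadata endpoint), and reserved ranges.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            raise ValueError(f"webhook URL resolves to a non-public address: {ip}")
```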
Phase 5: Quality Gates (PRD-STD-007)
Step 5.1: PR Checklist
| Gate | Tool | Pass Criteria |
|---|---|---|
| Type safety | mypy --strict | Zero errors |
| Lint | ruff | Zero errors |
| Unit tests | pytest | 100% passing, new code covered |
| Integration tests | pytest | API tests passing |
| Security scan | Semgrep + bandit | Zero high/critical findings |
| Dependency audit | pip-audit | Zero high/critical CVEs |
| Build | Docker build | Successful |
Step 5.2: PR Metadata
## Changes
- Add WebhookConfig SQLAlchemy model with Alembic migration
- Add CRUD endpoints: POST/GET/PATCH/DELETE /api/v1/webhooks
- Add webhook delivery worker with HMAC signing and retry
- Add Pydantic schemas with URL and event validation
## AI Disclosure
- AI-Usage: claude
- AI-Prompt-Ref: by-language/python/secure-endpoint (CRUD + delivery)
- AI-Review: Used by-language/python/pr-risk-review for self-review
- Human-Review: SSRF protection manually verified, HMAC implementation verified
## Testing
- 13 unit tests (Pydantic validation, HMAC signing, retry logic, URL validation)
- 10 API integration tests (CRUD + auth boundary + validation)
- 7 async worker tests (delivery, retry, timeout, deactivation)
- 4 security tests (SSRF prevention, secret handling)
Phase 6: Dependency Compliance (PRD-STD-008)
Use the Python Dependency Risk Check (prompt-library/by-language/python/dependency-check.md) if new packages were added:
Review these dependency additions:
- httpx[http2]>=0.27 (async HTTP client for webhook delivery)
- celery>=5.4 (task queue for async delivery, if using Celery)
Check: license, CVEs, async compatibility, maintenance status, alternatives.
Phase 7: Documentation (PRD-STD-005)
Use the Python Change Runbook (prompt-library/by-language/python/change-runbook.md) to generate:
- Migration notes: Alembic migration must run before deployment
- Environment variables: `WEBHOOK_DELIVERY_TIMEOUT_SECONDS`, `WEBHOOK_MAX_RETRIES`, `WEBHOOK_MAX_CONSECUTIVE_FAILURES`
- Rollback procedure: Downgrade the Alembic migration, redeploy the previous image
- Monitoring:
  - Alert on webhook delivery failure rate > 10%
  - Alert on webhook delivery latency p99 > 25s
  - Dashboard: delivery success rate by merchant, retry distribution, deactivated webhooks
- Operational notes:
  - Webhook secrets are never logged or returned in API responses
  - Deactivated webhooks require manual reactivation via admin endpoint
Summary: Standards Applied
| Standard | How It Was Applied | Evidence |
|---|---|---|
| PRD-STD-001 (Prompt Engineering) | Structured prompts from Python templates | PR description AI-Prompt-Ref |
| PRD-STD-002 (Code Review) | AI + human review with security focus | Review comments on PR |
| PRD-STD-003 (Testing) | Risk-based test matrix, 34+ tests | CI test results |
| PRD-STD-004 (Security) | Automated scans + SSRF-specific review | CI scan output + review notes |
| PRD-STD-005 (Documentation) | Generated runbook from template | PR description + runbook |
| PRD-STD-007 (Quality Gates) | All gates passing before merge | CI status checks |
| PRD-STD-008 (Dependencies) | Dependency risk check for new packages | PR comment with assessment |
What This Demonstrates
- Security-sensitive features need extra review — webhook URL registration is an SSRF vector; the standards flagged this for manual review
- Python-specific pitfalls are real — async/sync confusion, bare except clauses, and insecure randomness are common AI-generated issues
- Testing strategy adapts to risk — more security tests for a feature that handles external URLs and cryptographic signing
- Prompts save time, not just improve quality — starting from the Python secure endpoint template meant the AI output was closer to production-ready from the first attempt
- Governance overhead scales with risk — a low-risk UI change needs less review than a webhook system handling merchant credentials
Apply This Pattern in Your Repo
Use this scenario as a reference pattern, then choose an implementation path:
- Day 1 / small team: Starter Config Files + CI/CD Pipeline Starter
- Live role-based workflow (same repo, 4-role baseline): AEEF CLI Wrapper
- Transformation rollout (Python teams): Tier 2: Transformation Apply Path then Tier 2 Python
- Production rollout (regulated / enterprise): Tier 3: Production Apply Path then Tier 3 Python
Next Steps
- Walk through the Next.js Full-Stack Scenario for a frontend-inclusive example
- Review the full Production Standards to identify any gaps for your team
- Use the Self-Assessment to measure your maturity level