Scenario: Go Microservice with AEEF Standards
This walkthrough shows how to apply AEEF production standards together on a Go microservice. It follows an entire feature from prompt to production, showing how the standards create a governed delivery pipeline — with specific attention to Go concurrency patterns, struct validation, and table-driven testing.
Time required: 60-90 minutes (reading + doing)
Prerequisites: Familiarity with Go 1.22+, HTTP routers (Chi/Gin), and basic AEEF concepts from the Startup Quick-Start.
This is a realistic composite scenario showing how standards apply together. Adapt the specifics to your stack — the governance workflow is universal.
The Project
An internal rate-limiting service built with:
- Language: Go 1.22+
- Router: Chi v5
- Database: PostgreSQL via pgx (connection pool)
- Cache: Redis for sliding-window counters
- Observability: OpenTelemetry + structured logging (slog)
- Testing: Go standard library + testify + testcontainers-go
- CI: GitHub Actions with golangci-lint + govulncheck
The team has 4 engineers managing 6 microservices and has completed the Starter Config Files and CI/CD Pipeline Starter tutorials.
The Feature
User story: As a platform engineer, I can define rate-limit policies per tenant so that no single tenant can exhaust shared API capacity.
This feature touches:
- Database schema (rate-limit policy model)
- REST API (CRUD for policies + rate-check endpoint)
- Redis integration (sliding-window counter)
- Middleware (rate-limiting middleware for downstream services)
- Concurrency (atomic counter operations under high throughput)
Phase 1: Prompt Engineering (PRD-STD-001)
Step 1.1: Structured Prompt for API Endpoint
Using the Go Secure REST Endpoint template (prompt-library/by-language/go/secure-endpoint.md):
You are generating production-grade Go API code.
**Context:**
- Go 1.22+ with strict linting (golangci-lint, revive, staticcheck)
- Router: Chi v5
- Database: PostgreSQL via pgx/v5 pool
- Validation: go-playground/validator/v10
- Logging: log/slog with structured context
- Config: envconfig or viper for environment-based config
**Task:** Create a rate-limit policy CRUD API.
**Requirements:**
1. Policy model: tenant_id, endpoint_pattern, max_requests, window_seconds,
burst_limit, created_at, updated_at
2. Unique constraint on (tenant_id, endpoint_pattern)
3. Tenant isolation: JWT claim carries tenant_id — enforce at query level
4. Policies can be enabled/disabled without deletion
5. List endpoint supports filtering by tenant_id with pagination
**Constraints:**
- Use struct tags for validation (validate:"required,min=1")
- Use parameterized queries only — never fmt.Sprintf into SQL
- Return proper HTTP status codes (201 created, 409 conflict, 404 not found)
- All errors return a structured JSON envelope:
{"error": {"code": "RATE_POLICY_NOT_FOUND", "message": "..."}}
- Use context.Context for cancellation and timeout propagation
- Close all resources with defer — database rows, response bodies
- All exported functions must have GoDoc comments
Step 1.2: Structured Prompt for Rate-Check Logic
You are implementing a sliding-window rate limiter in Go.
**Context:**
- Go 1.22+, Redis via go-redis/v9
- Must handle 50,000+ checks per second per instance
**Task:** Implement the rate-check logic.
**Requirements:**
1. Sliding window algorithm using Redis sorted sets (ZRANGEBYSCORE + ZADD + ZREMRANGEBYSCORE)
2. Atomic operation — entire check-and-increment must be a single Redis pipeline or Lua script
3. Return: allowed (bool), remaining (int), reset_at (time.Time)
4. Fallback to allow if Redis is unreachable (open-circuit, not closed)
5. Metrics: counter for allowed/denied, histogram for check latency
**Constraints:**
- Use context.Context with timeout (50ms max for Redis call)
- Use sync.Pool for reusable buffers if needed
- No race conditions — the Lua script ensures atomicity in Redis
- All operations must respect context cancellation
- Log Redis errors with slog.Warn, not slog.Error (expected during failover)
Step 1.3: Record Prompt References
AI-Usage: claude
AI-Prompt-Ref: by-language/go/secure-endpoint (policy CRUD),
by-language/go/secure-endpoint (rate-check, adapted)
AI-Confidence: high — CRUD endpoints, medium — Redis sliding window
Phase 2: Human-in-the-Loop Review (PRD-STD-002)
Step 2.1: Review AI Output Against Checklist
Using the Go PR Risk Review prompt (prompt-library/by-language/go/pr-risk-review.md):
Critical items for this feature:
| Check | What to Verify | Status |
|---|---|---|
| Auth bypass | Does every handler extract tenant_id from JWT and filter queries by it? | |
| SQL injection | All queries use $1, $2 placeholders — no fmt.Sprintf into SQL? | |
| Race condition | Is the Redis rate-check atomic (Lua script or pipeline)? | |
| Resource leaks | Are pgx.Rows closed with defer rows.Close()? Are HTTP response bodies closed? | |
| Context propagation | Do all database and Redis calls receive ctx from the request? | |
| Goroutine leaks | Any goroutines started without cancellation or WaitGroup? | |
| Error wrapping | Are errors wrapped with fmt.Errorf("...: %w", err) for stack context? | |
| Nil pointer | Are all struct pointers checked before dereference? | |
Step 2.2: Go-Specific AI Pitfalls to Check
From the Go anti-patterns table (prompt-library/by-language/go.md):
- No `panic` in library code — return errors instead
- No `init()` functions with side effects (database connections, HTTP calls)
- No `interface{}`/`any` where a concrete type is known
- No ignoring errors with `_` on error-returning functions
- No `sync.Mutex` when a channel-based design is cleaner (or vice versa)
- Errors wrapped with `%w` for `errors.Is`/`errors.As` compatibility
- Context passed as first parameter to all functions that do I/O
Phase 3: Testing (PRD-STD-003)
Step 3.1: Generate Test Matrix
Use the Go Risk-Based Test Matrix prompt (prompt-library/by-language/go/test-matrix.md):
Feature: Rate-limit policy CRUD + sliding-window rate checker
Changes: Policy handlers, Redis rate-limiter, Chi middleware
Generate a risk-based test matrix covering:
1. Table-driven unit tests for validation, policy logic, rate-check math
2. Integration tests for API endpoints (auth states, validation, CRUD)
3. Redis integration tests for atomic rate-check behavior
4. Concurrency tests for race conditions under high throughput
Expected test coverage:
| Test Type | Count | What It Covers |
|---|---|---|
| Unit (table-driven) | 12-18 | Struct validation, rate-check math, error wrapping |
| API integration | 8-12 | CRUD endpoints + auth boundary + pagination |
| Redis integration | 6-8 | Sliding window correctness, atomic operations, expiry |
| Concurrency | 3-5 | Parallel rate checks, no race conditions (-race flag) |
Step 3.2: Verify AI-Generated Tests
Verify AI-generated tests against these common gaps:
- Tests use `t.Parallel()` where safe for faster execution
- Tests use the table-driven pattern (on Go 1.22+ the loop variable is per-iteration, so the old `tt := tt` capture is unnecessary)
- Tests use `testify/assert` or `testify/require` consistently (not mixed)
- Integration tests use `testcontainers-go` for real PostgreSQL and Redis
- No `time.Sleep()` — use a ticker, context timeout, or `require.Eventually`
- Concurrent tests run with `go test -race` in CI
- Cleanup with `t.Cleanup()` instead of manual `defer` where appropriate
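As a sketch of the table-driven idiom, the loop below mirrors what a `_test.go` file would contain. `validateWindow` is a hypothetical stand-in for the policy validation; in a real test the loop body runs inside `func TestValidateWindow(t *testing.T)` under `t.Run(tc.name, ...)` with `t.Parallel()`.

```go
package main

import "fmt"

// validateWindow is a tiny stand-in for the policy validation under test.
func validateWindow(maxRequests, windowSeconds int) error {
	if maxRequests < 1 {
		return fmt.Errorf("max_requests must be >= 1, got %d", maxRequests)
	}
	if windowSeconds < 1 {
		return fmt.Errorf("window_seconds must be >= 1, got %d", windowSeconds)
	}
	return nil
}

func main() {
	// One table, many cases: adding a case is one line, not a new test
	// function. On Go 1.22+ no `tc := tc` capture is needed.
	cases := []struct {
		name          string
		maxRequests   int
		windowSeconds int
		wantErr       bool
	}{
		{"valid policy", 100, 60, false},
		{"zero max_requests", 0, 60, true},
		{"negative window", 100, -1, true},
	}
	for _, tc := range cases {
		err := validateWindow(tc.maxRequests, tc.windowSeconds)
		if (err != nil) != tc.wantErr {
			fmt.Printf("FAIL %s: err=%v wantErr=%v\n", tc.name, err, tc.wantErr)
			continue
		}
		fmt.Printf("PASS %s\n", tc.name)
	}
}
```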
Phase 4: Security Scanning (PRD-STD-004)
Step 4.1: Automated CI Checks
Your CI pipeline catches:
# These run automatically on every PR
- golangci-lint: staticcheck, gosec, revive, govet
- govulncheck: Known CVEs in Go module dependencies
- Semgrep: SQL injection, command injection, SSRF
- Trivy: Container image vulnerabilities
Step 4.2: Manual Security Review
Rate-limiting services are security-critical. Verify:
- Rate-check is atomic — no TOCTOU between check and increment
- Redis auth credentials are not hardcoded (loaded from env/secret store)
- Tenant isolation is enforced at the query level (WHERE tenant_id = $1)
- Rate-limit bypass is not possible by omitting headers or spoofing tenant_id
- Open-circuit fallback (allow on Redis failure) is logged and alerted
- No timing side-channel in policy lookup (constant-time tenant lookup)
Phase 5: Quality Gates (PRD-STD-007)
Step 5.1: PR Checklist
| Gate | Tool | Pass Criteria |
|---|---|---|
| Type safety | go vet ./... | Zero issues |
| Lint | golangci-lint run | Zero errors (warnings reviewed) |
| Unit tests | go test ./... | 100% passing, new code covered |
| Race detector | go test -race ./... | Zero race conditions detected |
| Security scan | gosec + govulncheck | Zero high/critical findings |
| Integration tests | testcontainers-go | All passing with real Postgres + Redis |
| Build | go build ./... | Successful, binary runs |
Step 5.2: PR Metadata
## Changes
- Add rate-limit policy model with pgx migration
- Add CRUD handlers: POST/GET/PATCH/DELETE /api/v1/policies
- Add sliding-window rate-check with Redis Lua script
- Add Chi middleware for downstream rate enforcement
- Add OpenTelemetry spans for rate-check latency
## AI Disclosure
- AI-Usage: claude
- AI-Prompt-Ref: by-language/go/secure-endpoint (CRUD + rate-check)
- AI-Review: Used by-language/go/pr-risk-review for self-review
- Human-Review: Redis atomicity manually verified, concurrency tests reviewed
## Testing
- 15 unit tests (table-driven: validation, rate math, error wrapping)
- 10 API integration tests (CRUD + auth boundary + pagination)
- 7 Redis integration tests (sliding window, atomicity, expiry)
- 4 concurrency tests (parallel rate-checks with -race flag)
Phase 6: Dependency Compliance (PRD-STD-008)
Use the Go Dependency Risk Check (prompt-library/by-language/go/dependency-check.md) if new modules were added:
Review these dependency additions:
- github.com/redis/go-redis/v9 (Redis client)
- github.com/testcontainers/testcontainers-go (test infrastructure)
Check: license, CVEs, maintenance status, module size, alternatives.
Phase 7: Documentation (PRD-STD-005)
Use the Go Change Runbook (prompt-library/by-language/go/change-runbook.md) to generate:
- Migration notes: PostgreSQL migration must run before deployment
- Environment variables: `REDIS_URL`, `REDIS_PASSWORD`, `RATE_LIMIT_DEFAULT_WINDOW`, `RATE_LIMIT_REDIS_TIMEOUT_MS`
- Rollback procedure: Revert the migration, redeploy the previous binary; Redis counters auto-expire
- Monitoring:
- Alert on rate-check Redis error rate > 1% (indicates Redis connectivity issue)
- Alert on rate-check latency p99 > 30ms
- Dashboard: allowed/denied ratio by tenant, Redis hit rate, open-circuit events
- Operational notes:
- Open-circuit mode allows all requests when Redis is unreachable — monitor for abuse during Redis outages
- Rate-limit policies take effect immediately — no cache delay
Summary: Standards Applied
| Standard | How It Was Applied | Evidence |
|---|---|---|
| PRD-STD-001 (Prompt Engineering) | Structured prompts from Go templates | PR description AI-Prompt-Ref |
| PRD-STD-002 (Code Review) | AI + human review with concurrency focus | Review comments on PR |
| PRD-STD-003 (Testing) | Table-driven tests + race detector, 36+ tests | CI test results |
| PRD-STD-004 (Security) | Automated scans + atomicity review | CI scan output + review notes |
| PRD-STD-005 (Documentation) | Generated runbook from template | PR description + runbook |
| PRD-STD-007 (Quality Gates) | All gates passing including -race | CI status checks |
| PRD-STD-008 (Dependencies) | Dependency risk check for new modules | PR comment with assessment |
What This Demonstrates
- Concurrency is Go's biggest AI risk — AI-generated Go code frequently has race conditions; the `-race` flag and explicit concurrency tests catch these
- Table-driven tests are the Go idiom — AI often generates one-test-per-case instead of table-driven; the testing strategy prompt corrects this
- Resource leaks are subtle — `defer rows.Close()` and context cancellation are easy for AI to miss; the review checklist catches them
- Open-circuit decisions need governance — allowing all requests when Redis fails is a business decision that needs documentation and alerting, not just code
- Go's simplicity makes governance lightweight — minimal framework magic means code review is straightforward; the standards add structure without bureaucracy
Apply This Pattern in Your Repo
Use this scenario as a reference pattern, then choose an implementation path:
- Day 1 / small team: Starter Config Files + CI/CD Pipeline Starter
- Live role-based workflow (same repo, 4-role baseline): AEEF CLI Wrapper
- Transformation rollout (Go teams): Tier 2: Transformation Apply Path then Tier 2 Go
- Production rollout (regulated / enterprise): Tier 3: Production Apply Path then Tier 3 Go
Next Steps
- Walk through the Spring Boot Enterprise Scenario for a Java enterprise example
- Walk through the Next.js Full-Stack Scenario for a frontend-inclusive example
- Review the full Production Standards to identify any gaps for your team