# Research Evidence & Assumption Register

AEEF uses a mix of external research, regulatory texts, and implementation assumptions. This page separates what is externally evidenced from what must be validated within each organization.

**Last Updated:** April 2026
**Next Review:** July 2026
## How to Use This Page
- Treat this page as the source of truth for framework-level claims used across AEEF.
- For any KPI target used in your organization, replace framework defaults with your own baseline measurement within 90 days.
- Re-validate external sources at least quarterly.
## Evidence Quality Scale
| Grade | Meaning |
|---|---|
| A | Primary source (official standard, regulator, peer-reviewed publication, or original dataset) |
| B | Vendor or industry report with published methodology |
| C | Directional claim requiring local validation before use in governance decisions |
## Key Claim Register
| Claim in AEEF v1.1 | Evidence Status | Grade | Required Action |
|---|---|---|---|
| "74% of developers worldwide adopted AI coding tools by January 2026" | JetBrains Developer Survey 2026 — 74% global adoption | B | Re-check quarterly |
| "95% of developers use AI tools at least weekly" | Pragmatic Engineer Survey 2026 — 95% weekly usage among respondents | B | Re-check quarterly |
| "51% of code commits are AI-assisted" | GitHub 2026 data on AI-assisted commits | B | Monitor for annual update |
| "Claude Code is most-loved tool (46% preference)" | Pragmatic Engineer Survey 2026 — developer preference data | B | Re-check quarterly |
| "AI co-authored code has 1.7x more issues" | GitClear analysis — requires local validation | C | Replace with org-specific baseline |
| "2.74x higher security vulnerability rate" | Stanford/UIUC study — aging, validate against newer research | C | Seek updated primary source |
| "18% of developers use Claude Code at work" | JetBrains 2026 survey — 18% adoption, 6x growth | B | Re-check quarterly |
| "55% of developers use AI agents regularly" | Pragmatic Engineer 2026 — agent usage correlation with enthusiasm | B | Re-check quarterly |
| AI governance obligations increasing | Formal regulatory publications (EU AI Act, etc.) | A | Maintain legal review cadence |
## Recently Updated Claims

**April 2026 Updates:**
- ✅ Updated: "92% US developers" → "74% global adoption" (JetBrains 2026)
- ✅ Added: "95% weekly usage" (Pragmatic Engineer 2026)
- ✅ Updated: "41% AI-generated" → "51% AI-assisted commits" (GitHub 2026)
- ✅ Added: Claude Code adoption metrics
- ✅ Added: Agent usage statistics
## Source Library

**Quarterly validation owner:** [Assign: Standards Liaison or Governance Lead]
**Last full validation:** April 2026
**Next scheduled validation:** July 2026
### Adoption and Productivity
| Source | Last Verified | Key Finding |
|---|---|---|
| JetBrains: "Which AI Coding Tools Do Developers Actually Use" (April 2026) — jetbrains.com/research | Apr 2026 | 74% global adoption; Claude Code at 18% work usage, tied with Cursor; 6x growth for Claude Code |
| Pragmatic Engineer: AI Tooling Survey 2026 (Feb) — pragmaticengineer.com | Apr 2026 | 95% weekly AI usage; Claude Code #1 most-loved (46%); 55% regular agent usage; Codex 60% of Cursor usage in 8 months |
| GitHub Blog: Survey reveals AI's impact on the developer experience — github.blog | Feb 2026 | 92% US developer usage (prior claim) |
| Stack Overflow Developer Survey 2024 (AI tools section) — survey.stackoverflow.co | Feb 2026 | General AI tool awareness and usage |
| Peng et al. (2023), Microsoft Research/GitHub Copilot controlled experiment — arxiv.org | Feb 2026 | 55% faster task completion with Copilot |
| GitHub Octoverse 2025/2026 — github.blog/octoverse | Apr 2026 | 51% of commits AI-assisted; 20M Copilot users |
### Security and Code Risk
| Source | Last Verified | Key Finding |
|---|---|---|
| CodeRabbit: State of AI Code Quality Report 2026 — coderabbit.ai | Apr 2026 | First comprehensive AI code quality dataset — use for local validation |
| Pearce et al. (CACM), Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions — cacm.acm.org | Feb 2026 | 2.74x vulnerability rate claim source |
| Pearce et al. preprint (method details) — arxiv.org | Feb 2026 | Security methodology details |
| GitClear: Coding on Copilot (2024) — gitclear.com | Feb 2026 | 1.7x issue rate claim source |
### Tool-Specific Research
| Source | Last Verified | Key Finding |
|---|---|---|
| JetBrains 2026: Tool Market Share | Apr 2026 | Copilot 76% awareness, 29% usage; Claude Code 57% awareness, 18% usage; Cursor 69% awareness |
| Pragmatic Engineer 2026: Tool Preferences | Apr 2026 | Claude Code 46% "most loved"; Cursor 19%; Copilot 9%; Codex explosive growth |
| Cursor $500M ARR — Industry reports | Apr 2026 | 18% market share, 10x YoY growth |
| Kimi K2.5 Benchmarks — Moonshot AI | Apr 2026 | 256K context, 76.8% SWE-Bench, $0.60/$2.50 pricing |
### Governance, Risk, and Compliance
| Source | Last Verified |
|---|---|
| NIST AI Risk Management Framework 1.0 — nist.gov | Feb 2026 |
| NIST Secure Software Development Framework (SP 800-218) — csrc.nist.gov | Feb 2026 |
| EU AI Act (Regulation (EU) 2024/1689) — eur-lex.europa.eu | Feb 2026 |
| EU AI Act Code of Practice 2025 — digital-strategy.ec.europa.eu | Apr 2026 |
| ISO/IEC 42001 (AI management system standard) — iso.org | Feb 2026 |
| ISO/IEC 42006 (AIMS certification) — iso.org | Feb 2026 |
| ISO/IEC 23894 (AI risk management) — iso.org | Feb 2026 |
| OWASP Top 10 for LLM Applications — owasp.org | Feb 2026 |
| OpenAI Model Spec (Dec 2025) — model-spec.openai.com | Apr 2026 |
| Anthropic AI Safety Framework (July 2025) — anthropic.com | Apr 2026 |
| SDAIA PDPL Knowledge Center (Saudi Arabia) — dgp.sdaia.gov.sa | Feb 2026 |
| National Cybersecurity Authority ECC (Saudi Arabia) — nca.gov.sa | Feb 2026 |
| National Cybersecurity Authority CCC (Saudi Arabia) — nca.gov.sa | Feb 2026 |
| Digital Government Authority IT Governance Controls v2.0 (Saudi Arabia) — dga.gov.sa | Feb 2026 |
## New Sources Pending Validation
The following sources have been identified for potential inclusion:
| Source | Status | Notes |
|---|---|---|
| Stack Overflow Developer Survey 2025 | Awaiting publication | Update adoption statistics |
| GitHub Octoverse 2026 | Awaiting publication | Verify AI-assisted commit trends |
| McKinsey: Economic Impact of AI 2026 | Awaiting publication | Productivity metrics update |
## Governance Rules for Claims
- AEEF policy controls MUST NOT rely exclusively on Grade C claims.
- KPI thresholds used for executive reporting MUST be tied to internal baseline data.
- Any external claim older than 18 months SHOULD be marked for revalidation.
- Security claims from vendor reports MUST be treated as directional unless independently replicated.
- Tool market share data MUST be cross-referenced with multiple sources before policy use.
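Most of the rules above can be made machine-checkable in CI. A minimal sketch, assuming a hypothetical `Claim` record; the field names, the `audit` function, and the day-count approximation of "18 months" are illustrative choices, not part of AEEF:

```python
from dataclasses import dataclass
from datetime import date

STALE_AFTER_DAYS = 18 * 30  # rough approximation of the "older than 18 months" rule


@dataclass
class Claim:
    text: str
    grade: str            # "A", "B", or "C" per the Evidence Quality Scale
    source_date: date     # publication date of the external source
    used_in_policy: bool  # referenced by an AEEF policy control?


def audit(claims: list[Claim], today: date) -> list[str]:
    """Return human-readable violations of the claim-governance rules."""
    findings = []
    for c in claims:
        if c.used_in_policy and c.grade == "C":
            findings.append(f"Grade C claim used in policy control: {c.text!r}")
        if (today - c.source_date).days > STALE_AFTER_DAYS:
            findings.append(f"Stale (>18 months), mark for revalidation: {c.text!r}")
    return findings
```

Running `audit` over the claim register each quarter turns the SHOULD/MUST rules from reviewer judgment into a repeatable report; real usage would load claims from wherever the register is stored.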
## Quarterly Validation Checklist
- Re-run source checks for all Grade B/C claims.
- Confirm links are still accessible and current.
- Replace stale adoption/security multipliers with newer evidence where available.
- Update cross-references in docs/about/index.md, docs/pillar-1-engineering-discipline/index.md, docs/pillar-2-governance-risk/index.md, docs/kpi/index.md, and transformation/index.md.
- Review new tool market entrants for inclusion.
- Verify vendor-published standards (OpenAI, Anthropic) for framework alignment.
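The freshness part of this checklist can be partly automated by scanning this page's own source tables for "Last Verified" cells older than one quarter. A sketch, assuming the "Mon YYYY" cell format used in the tables above; the 92-day threshold and function name are illustrative:

```python
import re
from datetime import date

MONTHS = {m: i for i, m in enumerate(
    ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
     "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"], start=1)}


def overdue_rows(markdown: str, today: date, max_age_days: int = 92) -> list[str]:
    """Flag table rows whose first 'Mon YYYY' cell is older than ~one quarter."""
    flagged = []
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        for cell in cells:
            m = re.fullmatch(r"(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) (\d{4})", cell)
            if m:
                verified = date(int(m.group(2)), MONTHS[m.group(1)], 1)
                if (today - verified).days > max_age_days:
                    flagged.append(cells[0])  # report the row's Source cell
                break
    return flagged
```

This only surfaces candidates for re-verification; confirming that a source is still accessible and current remains a manual step for the validation owner.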
## Changelog
| Date | Change | Author |
|---|---|---|
| Apr 2026 | Updated with JetBrains 2026 survey (74% adoption, Claude Code 18%) | Content Team |
| Apr 2026 | Added Pragmatic Engineer 2026 (95% weekly usage, agent statistics) | Content Team |
| Apr 2026 | Updated GitHub Octoverse 2026 (51% AI-assisted commits) | Content Team |
| Apr 2026 | Added CodeRabbit State of AI Code Quality Report | Content Team |
| Apr 2026 | Added OpenAI Model Spec and Anthropic Safety Framework references | Content Team |
| Feb 2026 | Initial v1.0.0 release | AEEF Team |