Research Evidence & Assumption Register

AEEF uses a mix of external research, regulatory texts, and implementation assumptions. This page separates what is externally evidenced from what must be validated within each organization.

Last Updated: April 2026
Next Review: July 2026

How to Use This Page

  • Treat this page as the source of truth for framework-level claims used across AEEF.
  • For any KPI target used in your organization, replace framework defaults with your own baseline measurement within 90 days.
  • Re-validate external sources at least quarterly.
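The cadence above can be expressed as a simple staleness check per register entry. The sketch below is illustrative only: the `Claim` class, its field names, and the 91-day quarter are assumptions, not part of AEEF.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Claim:
    """One illustrative register entry (shape is an assumption, not AEEF-defined)."""
    text: str
    grade: str          # "A", "B", or "C" per the Evidence Quality Scale
    last_verified: date

    def next_review(self) -> date:
        # "Re-validate external sources at least quarterly" (~91 days).
        return self.last_verified + timedelta(days=91)

    def is_stale(self, today: date) -> bool:
        return today > self.next_review()


claim = Claim("51% of code commits are AI-assisted", "B", date(2026, 4, 1))
print(claim.next_review())               # 2026-07-01
print(claim.is_stale(date(2026, 8, 1)))  # True
```

Note that an April verification date lands the next review in early July, matching this page's own April/July cycle.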

Evidence Quality Scale

| Grade | Meaning |
|---|---|
| A | Primary source (official standard, regulator, peer-reviewed publication, or original dataset) |
| B | Vendor or industry report with published methodology |
| C | Directional claim requiring local validation before use in governance decisions |

Key Claim Register

| Claim in AEEF v1.1 | Evidence Status | Grade | Required Action |
|---|---|---|---|
| "74% of developers worldwide adopted AI coding tools by January 2026" | JetBrains Developer Survey 2026 — 74% global adoption | B | Re-check quarterly |
| "95% of developers use AI tools at least weekly" | Pragmatic Engineer Survey 2026 — 95% weekly usage among respondents | B | Re-check quarterly |
| "51% of code commits are AI-assisted" | GitHub 2026 data on AI-assisted commits | B | Monitor for annual update |
| "Claude Code is most-loved tool (46% preference)" | Pragmatic Engineer Survey 2026 — developer preference data | B | Re-check quarterly |
| "AI co-authored code has 1.7x more issues" | GitClear analysis — requires local validation | C | Replace with org-specific baseline |
| "2.74x higher security vulnerability rate" | Stanford/UIUC study — aging, validate against newer research | C/B | Seek updated primary source |
| "18% of developers use Claude Code at work" | JetBrains 2026 survey — 18% adoption, 6x growth | B | Re-check quarterly |
| "55% of developers use AI agents regularly" | Pragmatic Engineer 2026 — agent usage correlation with enthusiasm | B | Re-check quarterly |
| AI governance obligations increasing | Formal regulatory publications (EU AI Act, etc.) | A | Maintain legal review cadence |

Recently Updated Claims

April 2026 Updates:

  • ✅ Updated: "92% US developers" → "74% global adoption" (JetBrains 2026)
  • ✅ Added: "95% weekly usage" (Pragmatic Engineer 2026)
  • ✅ Updated: "41% AI-generated" → "51% AI-assisted commits" (GitHub 2026)
  • ✅ Added: Claude Code adoption metrics
  • ✅ Added: Agent usage statistics

Source Library

Quarterly validation owner: [Assign: Standards Liaison or Governance Lead]
Last full validation: April 2026
Next scheduled validation: July 2026

Adoption and Productivity

| Source | Last Verified | Key Finding |
|---|---|---|
| JetBrains: "Which AI Coding Tools Do Developers Actually Use" (April 2026) — jetbrains.com/research | Apr 2026 | 74% global adoption; Claude Code at 18% work usage, tied with Cursor; 6x growth for Claude Code |
| Pragmatic Engineer: AI Tooling Survey 2026 (Feb) — pragmaticengineer.com | Apr 2026 | 95% weekly AI usage; Claude Code #1 most-loved (46%); 55% regular agent usage; Codex reached 60% of Cursor's usage within 8 months |
| GitHub Blog: "Survey reveals AI's impact on the developer experience" — github.blog | Feb 2026 | 92% US developer usage (prior claim) |
| Stack Overflow Developer Survey 2024 (AI tools section) — survey.stackoverflow.co | Feb 2026 | General AI tool awareness and usage |
| Peng et al. (2023), GitHub/Microsoft Copilot controlled experiment — arxiv.org | Feb 2026 | 55% faster task completion with Copilot |
| GitHub Octoverse 2025/2026 — github.blog/octoverse | Apr 2026 | 51% of commits AI-assisted; 20M Copilot users |

Security and Code Risk

| Source | Last Verified | Key Finding |
|---|---|---|
| CodeRabbit: State of AI Code Quality Report 2026 — coderabbit.ai | Apr 2026 | First comprehensive AI code quality dataset — use for local validation |
| Pearce et al. (CACM), "Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions" — cacm.acm.org | Feb 2026 | 2.74x vulnerability rate claim source |
| Pearce et al. preprint (method details) — arxiv.org | Feb 2026 | Security methodology details |
| GitClear: Coding on Copilot (2024) — gitclear.com | Feb 2026 | 1.7x issue rate claim source |

Tool-Specific Research

| Source | Last Verified | Key Finding |
|---|---|---|
| JetBrains 2026: Tool Market Share | Apr 2026 | Copilot 76% awareness, 29% usage; Claude Code 57% awareness, 18% usage; Cursor 69% awareness |
| Pragmatic Engineer 2026: Tool Preferences | Apr 2026 | Claude Code 46% "most loved"; Cursor 19%; Copilot 9%; Codex explosive growth |
| Cursor $500M ARR — industry reports | Apr 2026 | 18% market share, 10x YoY growth |
| Kimi K2.5 benchmarks — Moonshot AI | Apr 2026 | 256K context, 76.8% SWE-Bench, $0.60/$2.50 pricing |

Governance, Risk, and Compliance

| Source | Last Verified |
|---|---|
| NIST AI Risk Management Framework 1.0 — nist.gov | Feb 2026 |
| NIST Secure Software Development Framework (SP 800-218) — csrc.nist.gov | Feb 2026 |
| EU AI Act (Regulation (EU) 2024/1689) — eur-lex.europa.eu | Feb 2026 |
| EU AI Act Code of Practice 2025 — digital-strategy.ec.europa.eu | Apr 2026 |
| ISO/IEC 42001 (AI management system standard) — iso.org | Feb 2026 |
| ISO/IEC 42006 (AIMS certification) — iso.org | Feb 2026 |
| ISO/IEC 23894 (AI risk management) — iso.org | Feb 2026 |
| OWASP Top 10 for LLM Applications — owasp.org | Feb 2026 |
| OpenAI Model Spec (Dec 2025) — model-spec.openai.com | Apr 2026 |
| Anthropic AI Safety Framework (July 2025) — anthropic.com | Apr 2026 |
| SDAIA PDPL Knowledge Center (Saudi Arabia) — dgp.sdaia.gov.sa | Feb 2026 |
| National Cybersecurity Authority ECC (Saudi Arabia) — nca.gov.sa | Feb 2026 |
| National Cybersecurity Authority CCC (Saudi Arabia) — nca.gov.sa | Feb 2026 |
| Digital Government Authority IT Governance Controls v2.0 (Saudi Arabia) — dga.gov.sa | Feb 2026 |

New Sources Pending Validation

The following sources have been identified for potential inclusion:

| Source | Status | Notes |
|---|---|---|
| Stack Overflow Developer Survey 2025 | Awaiting publication | Update adoption statistics |
| GitHub Octoverse 2026 | Awaiting publication | Verify AI-assisted commit trends |
| McKinsey: Economic Impact of AI 2026 | Awaiting publication | Productivity metrics update |

Governance Rules for Claims

  1. AEEF policy controls MUST NOT rely exclusively on Grade C claims.
  2. KPI thresholds used for executive reporting MUST be tied to internal baseline data.
  3. Any external claim older than 18 months SHOULD be marked for revalidation.
  4. Security claims from vendor reports MUST be treated as directional unless independently replicated.
  5. Tool market share data MUST be cross-referenced with multiple sources before policy use.
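Rules 1 and 3 are mechanical enough to lint automatically during the quarterly review. The sketch below assumes an illustrative register shape (claim text, grade, last-verified date); it is not an AEEF-mandated tool.

```python
from datetime import date

# Illustrative register entries: (claim text, grade, last verified).
register = [
    ("74% global adoption", "B", date(2026, 4, 1)),
    ("1.7x more issues", "C", date(2024, 1, 15)),
]


def months_old(last_verified: date, today: date) -> int:
    return (today.year - last_verified.year) * 12 + (today.month - last_verified.month)


def lint(register, today: date, control_grades: list[str]) -> list[str]:
    findings = []
    # Rule 1: a policy control MUST NOT rely exclusively on Grade C claims.
    if control_grades and all(g == "C" for g in control_grades):
        findings.append("control relies only on Grade C evidence")
    # Rule 3: claims older than 18 months SHOULD be marked for revalidation.
    for text, _grade, verified in register:
        if months_old(verified, today) > 18:
            findings.append(f"revalidate: {text}")
    return findings


print(lint(register, date(2026, 4, 30), ["C", "C"]))
```

Running this with a control backed only by Grade C evidence flags both the control and the aging GitClear claim.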

Quarterly Validation Checklist

  • Re-run source checks for all Grade B/C claims.
  • Confirm links are still accessible and current.
  • Replace stale adoption/security multipliers with newer evidence where available.
  • Update cross-references in: docs/about/index.md, docs/pillar-1-engineering-discipline/index.md, docs/pillar-2-governance-risk/index.md, docs/kpi/index.md, and transformation/index.md.
  • Review new tool market entrants for inclusion.
  • Verify vendor-published standards (OpenAI, Anthropic) for framework alignment.
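The cross-reference step can be partly automated by searching the docs tree for superseded claim phrasings. In this sketch the file contents are inlined for illustration; a real run would read the files listed in the checklist. The scanner and its sample data are assumptions, not part of AEEF.

```python
# Hypothetical snapshot of two docs (a real scan would read files from disk).
docs = {
    "docs/about/index.md": "Adoption reached 74% globally per JetBrains 2026.",
    "docs/kpi/index.md": "Baseline: 41% AI-generated commits.",
}

# Phrasings superseded in the April 2026 update cycle (see "Recently Updated Claims").
stale_phrases = ["41% AI-generated", "92% US developers"]


def find_stale(docs: dict[str, str], phrases: list[str]) -> dict[str, list[str]]:
    """Map each file to the superseded phrases it still contains."""
    hits = {}
    for path, text in docs.items():
        found = [p for p in phrases if p in text]
        if found:
            hits[path] = found
    return hits


print(find_stale(docs, stale_phrases))  # flags docs/kpi/index.md
```

A plain substring match keeps the sketch simple; a production check might normalize whitespace or use regular expressions to catch reworded variants.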

Changelog

| Date | Change | Author |
|---|---|---|
| Apr 2026 | Updated with JetBrains 2026 survey (74% adoption, Claude Code 18%) | Content Team |
| Apr 2026 | Added Pragmatic Engineer 2026 (95% weekly usage, agent statistics) | Content Team |
| Apr 2026 | Updated GitHub Octoverse 2026 (51% AI-assisted commits) | Content Team |
| Apr 2026 | Added CodeRabbit State of AI Code Quality Report | Content Team |
| Apr 2026 | Added OpenAI Model Spec and Anthropic Safety Framework references | Content Team |
| Feb 2026 | Initial v1.0.0 release | AEEF Team |