# External Resources
Curated links to external research, documentation, tutorials, and community resources that complement the AEEF framework. Resources are organized by topic and verified for relevance.
This page is updated quarterly. If you find a valuable resource that should be listed here, contribute via the Contributing Guide.
## Research and Evidence

### Academic and Industry Research
- [GitClear: "Coding on Copilot" (2024)](https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality) — The study behind AEEF's "1.7x more issues" statistic. Analyzes code quality trends across 150M+ lines of AI-assisted code.
- [Stanford/UIUC: Security of AI-Generated Code (2023)](https://arxiv.org/abs/2211.03622) — Found that AI-assisted developers produce less secure code while being more confident in its security. Source for the "2.74x vulnerability rate" statistic.
- [GitHub: The State of the Octoverse (annual)](https://github.blog/news-insights/octoverse/) — Tracks AI tool adoption rates across the developer ecosystem.
- [McKinsey: "The Economic Potential of Generative AI" (2023)](https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier) — Estimates that generative AI could increase developer productivity by 20-45%.
- [Google DeepMind: "Large Language Models for Code" (2024)](https://arxiv.org/abs/2311.10372) — Survey of LLM capabilities and limitations for code generation.
### Security Research

- [OWASP: AI Security and Privacy Guide](https://owasp.org/www-project-ai-security-and-privacy-guide/) — Comprehensive guide to security and privacy risks in AI applications.
- [OWASP: Top 10 for LLM Applications](https://owasp.org/www-project-top-10-for-large-language-model-applications/) — The most critical security risks for applications using LLMs.
- [NIST AI Risk Management Framework](https://www.nist.gov/artificial-intelligence/executive-order-safe-secure-and-trustworthy-artificial-intelligence) — US government framework for AI risk management.
## Tool Documentation

### AI Coding Assistants
| Tool | Official Docs | Getting Started |
|---|---|---|
| GitHub Copilot | docs.github.com/copilot | Quickstart |
| Cursor | docs.cursor.com | Getting Started |
| Claude Code | docs.anthropic.com/claude-code | Quickstart |
| Cody (Sourcegraph) | sourcegraph.com/docs/cody | Getting Started |
| Continue.dev | docs.continue.dev | Quickstart |
### Security Scanning Tools (Free)
| Tool | What It Does | Docs |
|---|---|---|
| Semgrep | SAST — finds security patterns in code | semgrep.dev/docs |
| Trivy | Vulnerability scanner for containers and filesystems | aquasecurity.github.io/trivy |
| npm audit | Node.js dependency vulnerability checking | docs.npmjs.com/cli/audit |
| pip-audit | Python dependency vulnerability checking | github.com/pypa/pip-audit |
| govulncheck | Go vulnerability checking | pkg.go.dev/golang.org/x/vuln |
| TruffleHog | Secret detection in code | github.com/trufflesecurity/trufflehog |
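As a rough sketch, several of the scanners above can be chained into a single pre-merge check. The script below is illustrative, not prescriptive: the tool names are real CLIs, but the invocations shown are their common defaults, and the exact flags vary by version, so verify against each tool's docs before adopting.

```shell
#!/usr/bin/env sh
# Illustrative pre-merge scan combining the free scanners listed above.
# Each tool is invoked only if installed, so the script degrades gracefully
# on machines that have only a subset of them.
set -eu

run_if_present() {
  # Run a scanner only when its binary is on PATH; otherwise note the skip.
  tool="$1"; shift
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" "$@"
  else
    echo "skipping: $tool not installed"
  fi
}

run_if_present semgrep scan --config auto .   # SAST pattern matching
run_if_present trivy fs .                     # filesystem vulnerability scan
run_if_present pip-audit                      # Python dependency advisories
run_if_present govulncheck ./...              # Go vulnerability check
run_if_present trufflehog filesystem .        # secret detection
```

In a CI pipeline, each of these would typically run as its own job so a failure in one scanner does not mask the others.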
## Standards and Frameworks

### AI Governance Standards
| Standard | Scope | Relevance |
|---|---|---|
| ISO/IEC 42001 | AI Management System | Foundation for AEEF's governance structure |
| EU AI Act | European AI regulation | Compliance requirements for EU-serving organizations |
| NIST AI RMF | US AI risk management | Risk management approach referenced in Pillar 2 |
| IEEE 7000 | Ethical AI design | Ethics-by-design principles |
### Software Development Standards
| Standard | Scope | Relevance |
|---|---|---|
| ISO 27001 | Information security management | Security controls for AI tool data handling |
| SOC 2 Type II | Service organization controls | Audit evidence for AI governance |
| OWASP ASVS | Application security verification | Security testing requirements |
### KSA-Specific Regulations
| Regulation | Authority | AEEF Reference |
|---|---|---|
| SAMA CSF | Saudi Central Bank | SAMA-CSF Integration |
| SDAIA AI Ethics | Saudi Data and AI Authority | SDAIA Ethics & Traceability |
| PDPL (Personal Data Protection Law) | Saudi Data and AI Authority | Data classification requirements |
## Learning Resources

### Free Courses and Tutorials
- [GitHub Copilot Fundamentals](https://learn.microsoft.com/en-us/training/modules/introduction-to-github-copilot/) — Official learning path for Copilot best practices.
- [Anthropic: Prompt Engineering Guide](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) — Comprehensive guide to effective prompting.
- [DeepLearning.AI: ChatGPT Prompt Engineering for Developers](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/) — Free course on structured prompting.
- [Semgrep Academy](https://academy.semgrep.dev/) — Free training on SAST and security scanning.
### Books
- "AI-Assisted Programming" by Tom Taulli (O'Reilly, 2024) — Practical guide covering Copilot, ChatGPT, and other tools for daily development
- "Software Engineering at Google" (O'Reilly, 2020) — While pre-AI, its chapters on code review, testing culture, and engineering productivity directly inform AEEF's Pillar 1 and Pillar 3
### Conference Talks (Recommended)
- "The Hidden Costs of AI-Generated Code" (StrangeLoop 2024) — Analysis of AI code quality in production environments
- "Responsible AI Engineering at Scale" (QCon 2024) — Enterprise AI governance practices
## Community

### Discussion Forums
- [GitHub Discussions](https://github.com/orgs/community/discussions) — Active community around Copilot, Codespaces, and AI development tools.
- [r/ChatGPTCoding](https://www.reddit.com/r/ChatGPTCoding/) — Reddit community focused on AI-assisted programming.
- [Cursor Forum](https://forum.cursor.com/) — Official community forum for Cursor users.
### Newsletters

- The AI Coding Report — Weekly digest of AI coding tool updates and best practices.
- [TLDR AI](https://tldr.tech/ai) — Daily AI news, including development tool updates.
## Related Frameworks
| Framework | Focus | How It Relates to AEEF |
|---|---|---|
| DORA Metrics | DevOps performance | AEEF's KPI framework incorporates DORA-style metrics |
| SPACE Framework | Developer productivity | Informs AEEF's Pillar 3 productivity measurement approach |
| Microsoft's Responsible AI Standard | Enterprise AI governance | Similar governance structure, broader scope (not code-specific) |
| Google's SAIF | Secure AI Framework | Security-focused complement to AEEF's Pillar 2 |