- https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/introduction-prompt-design
- https://learnprompting.org/docs/introduction
- https://promptz2h.com/
- https://github.com/untamed-theory/vibesec
- https://github.com/wiz-sec-public/secure-rules-files
- https://atlas.mitre.org/matrices/ATLAS
- https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- https://saif.google/secure-ai-framework/saif-map
- https://www.pillar.security/ai-risks/inadequate-ai-policy
- https://airisk.mit.edu/
- https://astrix.security/
- https://aembit.io/
- https://www.keycard.sh/
- https://www.descope.com/
- https://www.oasis.security/
- https://www.nccgroup.com/us/research-blog/analyzing-secure-ai-architectures/
- https://techcommunity.microsoft.com/blog/microsoft-security-blog/best-practices-to-architect-secure-generative-ai-applications/4116661
- https://www.nccgroup.com/us/research-blog/analyzing-secure-ai-design-principles/
- https://www.nccgroup.com/us/research-blog/where-you-inject-matters-the-role-specific-impact-of-prompt-injection-attacks-on-openai-models/
- https://www.nccgroup.com/us/research-blog/analyzing-ai-application-threat-models/
- https://simonwillison.net/2023/Apr/25/dual-llm-pattern/
- https://simonwillison.net/2025/Apr/11/camel/
- https://github.com/corca-ai/awesome-llm-security
- https://github.com/ydyjya/Awesome-LLM-Safety
- https://github.com/christiancscott/awesome-LLM-security
- https://github.com/ShenaoW/awesome-llm-supply-chain-security
- https://github.com/wearetyomsmnv/Awesome-LLMSecOps
- https://github.com/wearetyomsmnv/Awesome-LLM-agent-Security
- https://github.com/ThuCCSLab/Awesome-LM-SSP
- https://github.com/asgeirtj/system_prompts_leaks
- https://arxiv.org/html/2505.08807v1 - Security of Internet of Agents: Attacks and Countermeasures
- https://arxiv.org/html/2505.00047v1 - Base Models Beat Aligned Models at Randomness and Creativity
- https://arxiv.org/pdf/2412.06090 - Trust No AI: Prompt Injection Along The CIA Security Triad
- https://github.com/Arcanum-Sec/arc_pi_taxonomy
- https://github.com/elder-plinius/L1B3RT4S
- https://assets.crowdstrike.com/is/content/crowdstrikeinc/Prompt-Injection-Taxonomy-Posterpdf
- https://genai.owasp.org/resource/genai-red-teaming-guide/
- https://elder-plinius.github.io/P4RS3LT0NGV3/
- https://www.pillar.security/ai-red-teaming-introduction
- https://github.com/lakeraai/pint-benchmark - Lakera's PINT (Prompt Injection Test) benchmark
- https://gentellab.github.io/gentel-safe.github.io/
- https://hiddenlayer.com/innovation-hub/evaluating-prompt-injection-datasets/
- https://github.com/microsoft/BIPIA
- https://github.com/promptfoo/promptfoo
- https://huggingface.co/datasets/qualifire/Qualifire-prompt-injection-benchmark
- https://huggingface.co/datasets/xxz224/prompt-injection-attack-dataset
- https://huggingface.co/datasets/yanismiraoui/prompt_injections
- https://huggingface.co/datasets/jayavibhav/prompt-injection-safety
- https://huggingface.co/datasets/jayavibhav/prompt-injection
- https://huggingface.co/datasets/deepset/prompt-injections
- https://huggingface.co/datasets/hackaprompt/hackaprompt-dataset
- https://arxiv.org/pdf/2403.02691 - INJECAGENT: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents
- https://arxiv.org/html/2505.00843 - OET: Optimization-based prompt injection Evaluation Toolkit
- https://arxiv.org/pdf/2312.14197 - Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models
- https://arxiv.org/pdf/2503.18813 - Defeating Prompt Injections by Design
- https://arxiv.org/pdf/2404.13208 - The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions
- https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/ - Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations
- https://github.com/Defend-AI-Tech-Inc/wozway
- https://github.com/openshieldai/openshield
- https://github.com/eunomatix/llminspect-gateway
- https://www.lasso.security/
- https://www.lakera.ai/
- https://www.prompt.security/
- https://www.troj.ai/
- https://trust3.ai/
- https://github.com/privacera/paig
- https://github.com/openlit/openlit
- https://www.aim.security/lp/aim-labs-echoleak-blogpost
- https://simonwillison.net/2025/Jun/11/echoleak/
- https://drive.google.com/drive/folders/1dk96P80X8b2di57XyI8R9Co1XsmsnH4Z
- https://www.dbreunig.com/2025/08/01/does-the-bitter-lesson-have-limits.html
- https://www.dbreunig.com/2025/05/27/will-the-model-eat-your-stack.html
- https://solmaz.io/typed-languages-are-better-suited-for-vibecoding
- https://fortune.com/2024/04/16/ai-hallucinations-solvable-year-ex-google-researcher/
- https://etsd.tech/posts/rtfc/
- https://lore.kernel.org/all/CACzwLxg=vQeQKA1mPiYV9biu=swo7QDmjB3i=UhYmv+fGRBA4Q@mail.gmail.com/
- https://biilmann.blog/articles/introducing-ax/
- https://www.latent.space/p/ai-engineer
- https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/
- https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/
- https://www.lasso.security/blog/identitymesh-exploiting-agentic-ai
- https://learn.convo-lang.ai/
- https://www.nccgroup.com/us/research-blog/5-mcp-security-tips/
- https://zed.dev/blog/why-llms-cant-build-software
- https://nousresearch.com/measuring-thinking-efficiency-in-reasoning-models-the-missing-benchmark/
- https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/ai-s-security-crisis-why-your-assistant-might-betray-you/
- https://desfontain.es/blog/bfdi-consultation-ai.html
- https://github.com/awslabs/mcp/blob/main/VIBE_CODING_TIPS_TRICKS.md
- https://garymarcus.substack.com/p/llms-coding-agents-security-nightmare
- https://research.kudelskisecurity.com/2023/05/25/reducing-the-impact-of-prompt-injection-attacks-through-design/
- https://vlaaad.github.io/mcp-tools-with-dependent-types
- https://playtechnique.io/blog/ai-doesnt-lighten-the-burden-of-mastery.html
- https://churchofturing.github.io/the-enterprise-experience.html
- https://aws.amazon.com/blogs/machine-learning/introducing-amazon-bedrock-agentcore-gateway-transforming-enterprise-ai-agent-tool-development/
- https://opcode.sh/
- https://research.trychroma.com/context-rot
- https://microsoft.github.io/VibeVoice/
- https://semgrep.dev/blog/2025/finding-vulnerabilities-in-modern-web-apps-using-claude-code-and-openai-codex/
- https://gist.github.com/fr0gger/0386018f67c2bc780fbd852697014c8b
- https://www.dryrun.security/blog/beyond-pattern-matching-why-context-is-the-future-of-application-security
- https://www.thestack.technology/target-turns-ai-get-out-sales-spiral/
- https://devblogs.microsoft.com/blog/protecting-against-indirect-injection-attacks-mcp
- https://cloudsecurityalliance.org/blog/2025/03/24/threat-modeling-openai-s-responses-api-with-the-maestro-framework
- https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro
- https://blog.trailofbits.com/2025/08/21/weaponizing-image-scaling-against-production-ai-systems/
- https://guard.io/labs/scamlexity-we-put-agentic-ai-browsers-to-the-test-they-clicked-they-paid-they-failed
- https://guard.io/labs/vibescamming-from-prompt-to-phish-benchmarking-popular-ai-agents-resistance-to-the-dark-side
- https://joshuavaldez.com/the-unbearable-slowness-of-ai-coding/
- https://noperator.dev/posts/slice/
- https://www.seangoedecke.com/ai-security/
- https://www.vibecodingchecklist.com/
- https://learnhowtovibecode.com/
- https://vercel.com/blog/a-proposal-for-inline-llm-instructions-in-html
- https://giansegato.com/essays/probabilistic-era
- https://www.microsoft.com/en-us/research/podcast/ai-testing-and-evaluation-learnings-from-cybersecurity/
- https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/
- https://every.to/source-code/my-ai-had-already-fixed-the-code-before-i-saw-it
- https://promptql.io/blog/being-confidently-wrong-is-holding-ai-back
- https://medium.com/quantumblack/how-we-enabled-agents-at-scale-in-the-enterprise-with-the-agentic-ai-mesh-architecture-baf4290daf48
- https://catskull.net/what-the-hell-is-going-on-right-now.html
- https://rootly.com/blog/ai-sre-needs-more-than-ai-it-needs-operational-context
- https://simonwillison.net/2025/Aug/15/the-summer-of-johann/
- https://aisecurity.forum/
- https://embracethered.com/blog/images/2025/github-agent-e2e-zombai.png
- https://aisecurityforum.substack.com/p/quick-list-of-high-impact-ai-security
- https://www.youtube.com/watch?v=xTBysNqET4U
- https://edward-playground.github.io/aidefense-framework/
- https://www.youtube.com/watch?v=Z3WMt_ncgUI
- https://blog.trailofbits.com/2025/08/08/buttercup-is-now-open-source/
- https://www.ssp.sh/brain/will-ai-replace-humans/
- https://embracethered.com/blog/tags/month-of-ai-bugs/
- https://nsfocusglobal.com/prompt-word-injection-an-analysis-of-recent-llm-security-incidents/
- https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html
- https://www.youtube.com/watch?v=EsCNkDrIGCw
- https://www.stilldrinking.org/stop-talking-to-technology-executives-like-they-have-anything-to-say
- https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama
- https://www.sanity.io/blog/first-attempt-will-be-95-garbage
- https://www.seangoedecke.com/good-system-design/
- https://marketsaintefficient.substack.com/p/vibe-debugging-enterprises-up-and
- https://www.stochasticlifestyle.com/a-guide-to-gen-ai-llm-vibecoding-for-expert-programmers/