AI agents are autonomous systems that use LLMs to reason, plan, and take actions.

Visit the following resources to learn more:

- [@official@Tool use overview - Anthropic](https://platform.claude.com/docs/en/agents-and-tools/tool-use/overview)
- [@article@Introduction to AI Agents - DAIR.AI](https://www.promptingguide.ai/agents/introduction)

AI red teaming involves deliberately testing AI systems to find vulnerabilities.

Visit the following resources to learn more:

- [@official@Define success and build evaluations - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/develop-tests)
- [@official@OWASP Top 10 for LLM Applications 2025](https://genai.owasp.org/llmrisk/)
- [@opensource@Microsoft PyRIT - Risk Identification for GenAI](https://github.com/microsoft/PyRIT)
- [@roadmap@Visit the Dedicated AI Red Teaming Roadmap](https://roadmap.sh/ai-red-teaming)

AI (Artificial Intelligence) refers to systems that perform specific tasks intelligently.

Visit the following resources to learn more:

- [@article@Artificial general intelligence - Wikipedia](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

Anthropic develops Claude, a family of large language models focused on safety.

Visit the following resources to learn more:

- [@official@Claude API Documentation](https://docs.anthropic.com/en/docs/intro)
- [@official@Anthropic Research](https://www.anthropic.com/research)

Automatic Prompt Engineering (APE) uses LLMs to generate and optimize prompts automatically.

Visit the following resources to learn more:

- [@article@Automatic Prompt Engineer - DAIR.AI](https://www.promptingguide.ai/techniques/ape)

Calibrating LLMs involves adjusting models so their confidence scores accurately reflect how often their answers are correct.

Visit the following resources to learn more:

- [@article@Calibrating LLMs - LearnPrompting](https://learnprompting.org/docs/reliability/calibration)

Visit the following resources to learn more:

- [@article@Chain-of-Thought Prompting - DAIR.AI](https://www.promptingguide.ai/techniques/cot)
- [@article@Chain-of-Thought Prompting - LearnPrompting](https://learnprompting.org/docs/intermediate/chain_of_thought)
- [@article@Reasoning LLMs Guide - DAIR.AI](https://www.promptingguide.ai/guides/reasoning-llms)
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=Y6MCLPzjmhMB4jSu&t=203)
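
The technique itself is just a change in prompt shape; a minimal sketch (the trailing cue is the classic zero-shot chain-of-thought trigger, and the actual model call is left out):

```python
def direct_prompt(question: str) -> str:
    # Ask for the answer only.
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # Ask the model to produce intermediate reasoning before the answer.
    # "Let's think step by step." is the classic zero-shot CoT cue.
    return f"Q: {question}\nA: Let's think step by step."
```

Either string would then be sent to any chat-completion API; chain-of-thought only changes what the model is nudged to emit, not how it is called.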

Context window refers to the maximum number of tokens an LLM can process in a single request.

Visit the following resources to learn more:

- [@official@Context windows - Anthropic](https://platform.claude.com/docs/en/build-with-claude/context-windows)
- [@article@What is a context window? - IBM](https://www.ibm.com/think/topics/context-window)
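
When a conversation outgrows the window, clients typically drop the oldest turns first. A toy sketch, assuming a whitespace word count stands in for a real tokenizer:

```python
def trim_history(messages, budget, count_tokens=lambda s: len(s.split())):
    # Keep the most recent messages whose combined token count fits the
    # context budget; older turns are dropped first. A real tokenizer
    # (e.g. tiktoken) should replace the whitespace approximation.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```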

Contextual prompting provides specific background information or situational details.

Visit the following resources to learn more:

- [@official@Prompting Best Practices - Anthropic](https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-prompting-best-practices)
- [@article@Prompt Structure and Key Parts - LearnPrompting](https://learnprompting.org/docs/basics/prompt_structure)

Fine-tuning trains models on specific data to specialize behavior, while prompt engineering shapes outputs through carefully designed inputs.

Visit the following resources to learn more:

- [@article@When to use prompt engineering vs. fine-tuning - TechTarget](https://www.techtarget.com/searchEnterpriseAI/tip/Prompt-engineering-vs-fine-tuning-Whats-the-difference)
- [@article@Prompt Engineering vs Fine Tuning: When to Use Each - Codecademy](https://www.codecademy.com/article/prompt-engineering-vs-fine-tuning)

Frequency penalty reduces a token's probability based on how frequently it has appeared in the output so far.

Visit the following resources to learn more:

- [@article@Frequency Penalty - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/frequency-penalty)
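
The adjustment can be sketched in plain Python (a toy view of what inference servers apply to logit tensors; the subtract-count-times-penalty formula follows OpenAI's published description):

```python
from collections import Counter

def apply_frequency_penalty(logits, generated_tokens, penalty=0.5):
    # Subtract penalty * count from each token's logit, so tokens that
    # appear often in the output are progressively discouraged.
    counts = Counter(generated_tokens)
    return {tok: logit - penalty * counts.get(tok, 0)
            for tok, logit in logits.items()}
```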

Google develops Gemini, a family of multimodal AI models.

Visit the following resources to learn more:

- [@official@Google AI Studio](https://ai.google.dev/)
- [@official@Gemini API Documentation](https://ai.google.dev/gemini-api/docs)

Hallucination in LLMs refers to generating plausible-sounding but factually incorrect information.

Visit the following resources to learn more:

- [@official@Reduce hallucinations - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/reduce-hallucinations)
- [@article@What are AI hallucinations? - IBM](https://www.ibm.com/think/topics/ai-hallucinations)

Prompt engineering is the practice of designing effective inputs for Large Language Models (LLMs).

Visit the following resources to learn more:

- [@article@What is Generative AI? - LearnPrompting](https://learnprompting.org/docs/basics/generative_ai)

LLM self-evaluation involves prompting models to assess their own outputs for quality.

Visit the following resources to learn more:

- [@article@LLM Self-Evaluation - LearnPrompting](https://learnprompting.org/docs/reliability/lm_self_eval)

Large Language Models (LLMs) are AI systems trained on vast text data to understand and generate natural language.

Visit the following resources to learn more:

- [@official@LLM - Anthropic Glossary](https://platform.claude.com/docs/en/about-claude/glossary)
- [@article@Differences Between Chatbots and LLMs - LearnPrompting](https://learnprompting.org/docs/basics/chatbot_basics)

Visit the following resources to learn more:

- [@article@What are large language models (LLMs)? - IBM](https://www.ibm.com/think/topics/large-language-models)
- [@article@Large language model - Wikipedia](https://en.wikipedia.org/wiki/Large_language_model)
- [@article@How Large Language Models Work: Explained Simply](https://justainews.com/applications/chatbots-and-virtual-assistants/how-large-language-models-work/)
- [@video@How Large Language Models Work](https://youtu.be/5sLYAQS9sWQ)

The max tokens setting controls the maximum number of tokens an LLM can generate in a single response.

Visit the following resources to learn more:

- [@official@Token Counting - Anthropic](https://platform.claude.com/docs/en/build-with-claude/token-counting)
- [@article@Max Tokens - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/max-tokens)
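
A sketch of what the cap does inside a generation loop (the token stream stands in for a real decoder; OpenAI-style APIs report the cutoff as a `length` finish reason):

```python
def generate(token_stream, max_tokens):
    # Collect tokens until the stream ends naturally ("stop") or the
    # max-tokens budget is exhausted ("length").
    out = []
    finish_reason = "stop"
    for tok in token_stream:
        if len(out) >= max_tokens:
            finish_reason = "length"
            break
        out.append(tok)
    return out, finish_reason
```

Note that hitting the cap truncates mid-thought; checking the finish reason is how callers detect (and, for instance, continue) a cut-off response.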

Meta develops the Llama family of open-source large language models.

Visit the following resources to learn more:

- [@official@Llama](https://www.llama.com/)
- [@opensource@Llama Models (GitHub)](https://github.com/meta-llama/llama-models)

Model weights and parameters are the learned values that define an LLM's behavior.

Visit the following resources to learn more:

- [@article@What are LLM parameters? - IBM](https://www.ibm.com/think/topics/llm-parameters)

Visit the following resources to learn more:

- [@article@Few-Shot Prompting - DAIR.AI](https://www.promptingguide.ai/techniques/fewshot)
- [@article@Few-Shot Prompting - LearnPrompting](https://learnprompting.org/docs/basics/few_shot)
- [@article@Few-Shot Introduction - LearnPrompting](https://learnprompting.org/docs/advanced/few_shot/introduction)
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=Fi2igdPTBUocqnX7&t=177)
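
Few-shot prompting is ultimately string assembly; a minimal sketch with made-up sentiment examples:

```python
def few_shot_prompt(examples, query):
    # Prepend labelled input/output pairs so the model can infer the
    # task and the answer format from the demonstrations alone.
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"
```

The prompt ends at `Output:` so the model's completion is exactly the label, in the same format the demonstrations established.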

OpenAI develops leading language models including GPT-5.4, o3, and Codex.

Visit the following resources to learn more:

- [@official@OpenAI API Documentation](https://developers.openai.com/api/docs)
- [@official@OpenAI Cookbook (GitHub)](https://github.com/openai/openai-cookbook)

Output control encompasses techniques and parameters for managing LLM responses.

Visit the following resources to learn more:

- [@official@Increase Output Consistency - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/increase-consistency)
- [@article@General Tips for Designing Prompts - DAIR.AI](https://www.promptingguide.ai/introduction/tips)

Presence penalty reduces the likelihood of repeating tokens that have already appeared in the output.

Visit the following resources to learn more:

- [@article@Presence Penalty - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/presence-penalty)
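
Unlike frequency penalty, the subtraction is flat: one appearance is enough. A toy sketch over a logit dictionary:

```python
def apply_presence_penalty(logits, generated_tokens, penalty=0.5):
    # One-time subtraction for any token already present in the output,
    # regardless of how many times it has appeared.
    seen = set(generated_tokens)
    return {tok: logit - (penalty if tok in seen else 0.0)
            for tok, logit in logits.items()}
```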

Prompt debiasing involves techniques to reduce unwanted biases in LLM outputs.

Visit the following resources to learn more:

- [@article@Prompt Debiasing - LearnPrompting](https://learnprompting.org/docs/reliability/debiasing)

Prompt ensembling combines multiple different prompts or prompt variations to improve the reliability of results.

Visit the following resources to learn more:

- [@article@Introduction to Ensembling - LearnPrompting](https://learnprompting.org/docs/advanced/ensembling/introduction)

Visit the following resources to learn more:

- [@official@Mitigate jailbreaks and prompt injections - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/mitigate-jailbreaks)
- [@official@LLM01:2025 Prompt Injection - OWASP](https://genai.owasp.org/llmrisk/llm01-prompt-injection/)
- [@video@What Is a Prompt Injection Attack?](https://www.youtube.com/watch?v=jrHRe9lSqqA)

Retrieval-Augmented Generation (RAG) combines LLMs with external knowledge retrieval.

Visit the following resources to learn more:

- [@article@Retrieval Augmented Generation (RAG) - DAIR.AI](https://www.promptingguide.ai/techniques/rag)
- [@opensource@Introduction to RAG - LlamaIndex](https://developers.llamaindex.ai/python/framework/understanding/rag/)
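
The pipeline can be sketched end to end with a deliberately naive keyword retriever (production systems use embeddings and a vector index instead; the documents here are made up):

```python
def retrieve(query, docs, k=2):
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query, docs):
    # Stuff the retrieved passages into the prompt as grounding context.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The resulting prompt would then go to any LLM; grounding the answer in retrieved text is what reduces hallucination and lets the model use knowledge outside its training data.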

Visit the following resources to learn more:

- [@article@ReAct - DAIR.AI](https://www.promptingguide.ai/techniques/react)
- [@article@ReAct: Synergizing Reasoning and Acting - LearnPrompting](https://learnprompting.org/docs/techniques/react)
- [@video@4 Methods of Prompt Engineering](https://youtu.be/vD0E3EUb8-8?si=Y6MCLPzjmhMB4jSu&t=203)

Repetition penalties discourage LLMs from repeating words or phrases by reducing the probability of tokens that have already been generated.

Visit the following resources to learn more:

- [@article@Tips for Writing Better Prompts - LearnPrompting](https://learnprompting.org/docs/basics/ai_prompt_tips)
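
One common formulation (popularized by the CTRL paper and exposed as `repetition_penalty` in Hugging Face's generation API) divides positive logits and multiplies negative ones for already-seen tokens; a toy sketch:

```python
def apply_repetition_penalty(logits, generated_tokens, penalty=1.2):
    # CTRL-style penalty: shrink the logit of any already-generated
    # token. Dividing positive logits and multiplying negative ones
    # moves both toward lower probability.
    seen = set(generated_tokens)
    out = {}
    for tok, logit in logits.items():
        if tok in seen:
            logit = logit / penalty if logit > 0 else logit * penalty
        out[tok] = logit
    return out
```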

Visit the following resources to learn more:

- [@article@Assigning Roles to Chatbots - LearnPrompting](https://learnprompting.org/docs/basics/roles)
- [@article@Role Prompting - LearnPrompting](https://learnprompting.org/docs/advanced/zero_shot/role_prompting)
- [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=9orzEniOGmRD7g-o&t=136)

Sampling parameters (temperature, top-K, top-P) control how LLMs select tokens from the probability distribution.

Visit the following resources to learn more:

- [@article@LLM Settings (Temperature, Top-K, Top-P) - DAIR.AI](https://www.promptingguide.ai/introduction/settings)

Self-consistency prompting generates multiple reasoning paths for the same problem and selects the most consistent answer.

Visit the following resources to learn more:

- [@article@Self-Consistency - DAIR.AI](https://www.promptingguide.ai/techniques/consistency)
- [@article@Self-Consistency - LearnPrompting](https://learnprompting.org/docs/intermediate/self_consistency)
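
The aggregation step is a majority vote; a sketch where `sample_fn` stands in for a temperature-sampled model call that returns a final answer string:

```python
from collections import Counter

def self_consistent_answer(sample_fn, prompt, n=5):
    # Sample n independent reasoning paths (temperature > 0) and keep
    # the final answer that appears most often across them.
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```

In practice each sample is a full chain-of-thought completion, and only the extracted final answers are voted on.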

Step-back prompting improves LLM performance by first asking a more general question before tackling the specific task.

Visit the following resources to learn more:

- [@article@Step-Back Prompting - LearnPrompting](https://learnprompting.org/docs/advanced/thought_generation/step_back_prompting)

Stop sequences are specific strings that signal the LLM to stop generating text.

Visit the following resources to learn more:

- [@official@Handling Stop Reasons - Anthropic](https://platform.claude.com/docs/en/build-with-claude/handling-stop-reasons)
- [@article@Stop Sequence - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/stop-sequence)
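
A sketch of how a client or server applies stop sequences to a streamed response (the token stream is hypothetical):

```python
def generate_with_stop(token_stream, stop_sequences):
    # Accumulate tokens until the text contains a stop sequence, then
    # return everything before it; the stop string itself is trimmed,
    # matching how most APIs behave.
    text = ""
    for tok in token_stream:
        text += tok
        for stop in stop_sequences:
            if stop in text:
                return text[:text.index(stop)]
    return text
```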

Visit the following resources to learn more:

- [@official@Structured Output - Google Gemini API](https://ai.google.dev/gemini-api/docs/structured-output)
- [@official@Structured Outputs - Anthropic](https://platform.claude.com/docs/en/build-with-claude/structured-outputs)
- [@opensource@Instructor - Structured Output Library](https://github.com/jxnl/instructor)
- [@article@Structured Outputs - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/structured-outputs)
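
Even with schema-constrained decoding, callers usually validate the reply before using it; a minimal sketch using only the standard library (libraries like Instructor wrap this pattern with Pydantic models and automatic retries):

```python
import json

def parse_structured(raw, required_keys):
    # Parse the model's reply as JSON and check the expected keys are
    # present; callers can re-prompt on failure.
    data = json.loads(raw)
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data
```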

System prompting sets the overall context, purpose, and operational guidelines for an LLM.

Visit the following resources to learn more:

- [@official@Prompt Engineering Overview - Anthropic](https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/overview)
- [@article@Instructions - LearnPrompting](https://learnprompting.org/docs/basics/instructions)
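
In OpenAI-style chat APIs the system prompt is simply the first message in the conversation (Anthropic's Messages API instead takes it as a separate top-level `system` parameter); a sketch of the typical shape:

```python
def build_messages(system_prompt, user_message):
    # One system message fixes the role and rules for the whole
    # conversation; user turns follow it.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]
```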

Temperature controls the randomness in token selection during text generation. Lower values make outputs more deterministic, while higher values increase variety.

Visit the following resources to learn more:

- [@article@What is LLM Temperature? - IBM](https://www.ibm.com/think/topics/llm-temperature)
- [@article@Temperature - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/temperature)
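
The mechanism is a single division applied before the softmax; a self-contained sketch:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by T before softmax: T < 1 sharpens the distribution
    # toward the top token, T > 1 flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```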

Tokens are fundamental units of text that LLMs process, created by breaking text into words or subword pieces.

Visit the following resources to learn more:

- [@article@Understanding tokens - Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/ai/conceptual/understanding-tokens)
- [@article@What Are Tokens in LLMs and Why They Matter - LLM Guides](https://llmguides.ai/learn/what-are-tokens/)
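
A toy splitter makes the word/token gap visible (a crude stand-in for a real subword tokenizer such as BPE, which also breaks rare words into pieces, so exact counts vary by model):

```python
import re

def toy_tokenize(text):
    # Split into word runs and individual punctuation marks, so
    # "don't" becomes three tokens: "don", "'", "t".
    return re.findall(r"\w+|[^\w\s]", text)
```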

Top-K restricts token selection to the K most likely tokens from the probability distribution.

Visit the following resources to learn more:

- [@official@Gemini API Prompting Strategies - Google](https://ai.google.dev/gemini-api/docs/prompting-strategies)
- [@article@Top K - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/top-k)
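
A sketch of the filter over an explicit probability table (real implementations operate on logit tensors):

```python
def top_k_filter(token_probs, k):
    # Keep only the k most probable tokens and renormalise so the
    # remaining probabilities sum to 1; sampling then happens over
    # this reduced set.
    top = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}
```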

Top-P (nucleus sampling) selects tokens from the smallest set whose cumulative probability exceeds the threshold P.

Visit the following resources to learn more:

- [@article@Top P - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/top-p)
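
A sketch over an explicit probability table; unlike top-K, the size of the kept set adapts to how concentrated the distribution is:

```python
def top_p_filter(token_probs, p=0.9):
    # Take tokens in descending probability until their cumulative mass
    # reaches p, then renormalise over that "nucleus".
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cum = [], 0.0
    for tok, prob in ranked:
        nucleus.append((tok, prob))
        cum += prob
        if cum >= p:
            break
    total = sum(prob for _, prob in nucleus)
    return {tok: prob / total for tok, prob in nucleus}
```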

Tree of Thoughts (ToT) generalizes Chain of Thought by allowing LLMs to explore multiple reasoning branches.

Visit the following resources to learn more:

- [@article@Tree of Thoughts - DAIR.AI](https://www.promptingguide.ai/techniques/tot)