diff --git a/src/data/roadmaps/prompt-engineering/content/agents@Pw5LWA9vNRY0N2M0FW16f.md b/src/data/roadmaps/prompt-engineering/content/agents@Pw5LWA9vNRY0N2M0FW16f.md index 4528ac857488..a2defbbd3717 100644 --- a/src/data/roadmaps/prompt-engineering/content/agents@Pw5LWA9vNRY0N2M0FW16f.md +++ b/src/data/roadmaps/prompt-engineering/content/agents@Pw5LWA9vNRY0N2M0FW16f.md @@ -1,3 +1,8 @@ # Agents -AI agents are autonomous systems that use LLMs to reason, plan, and take actions to achieve specific goals. They combine language understanding with tool usage, memory, and decision-making to perform complex, multi-step tasks. Agents can interact with external APIs and services while maintaining context across interactions. \ No newline at end of file +AI agents are autonomous systems that use LLMs to reason, plan, and take actions to achieve specific goals. They combine language understanding with tool usage, memory, and decision-making to perform complex, multi-step tasks. Agents can interact with external APIs and services while maintaining context across interactions. + +Visit the following resources to learn more: + +- [@official@Tool use overview - Anthropic](https://platform.claude.com/docs/en/agents-and-tools/tool-use/overview) +- [@article@Introduction to AI Agents - DAIR.AI](https://www.promptingguide.ai/agents/introduction) diff --git a/src/data/roadmaps/prompt-engineering/content/ai-red-teaming@Wvu9Q_kNhH1_JlOgxAjP6.md b/src/data/roadmaps/prompt-engineering/content/ai-red-teaming@Wvu9Q_kNhH1_JlOgxAjP6.md index 90c53897f3dc..8048f2cf40af 100644 --- a/src/data/roadmaps/prompt-engineering/content/ai-red-teaming@Wvu9Q_kNhH1_JlOgxAjP6.md +++ b/src/data/roadmaps/prompt-engineering/content/ai-red-teaming@Wvu9Q_kNhH1_JlOgxAjP6.md @@ -1,3 +1,9 @@ # AI Red Teaming -AI red teaming involves deliberately testing AI systems to find vulnerabilities, biases, or harmful behaviors through adversarial prompting. Teams attempt to make models produce undesired outputs, bypass safety measures, or exhibit problematic behaviors. This process helps identify weaknesses and improve AI safety and robustness before deployment. \ No newline at end of file +AI red teaming involves deliberately testing AI systems to find vulnerabilities, biases, or harmful behaviors through adversarial prompting. Teams attempt to make models produce undesired outputs, bypass safety measures, or exhibit problematic behaviors. This process helps identify weaknesses and improve AI safety and robustness before deployment. + +Visit the following resources to learn more: + +- [@official@Define success and build evaluations - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/develop-tests) +- [@official@OWASP Top 10 for LLM Applications 2025](https://genai.owasp.org/llmrisk/) +- [@opensource@Microsoft PyRIT - Risk Identification for GenAI](https://github.com/microsoft/PyRIT) diff --git a/src/data/roadmaps/prompt-engineering/content/ai-vs-agi@Sj1CMZzZp8kF-LuHcd_UU.md b/src/data/roadmaps/prompt-engineering/content/ai-vs-agi@Sj1CMZzZp8kF-LuHcd_UU.md index 1702891ac1cc..8f817ddc15e2 100644 --- a/src/data/roadmaps/prompt-engineering/content/ai-vs-agi@Sj1CMZzZp8kF-LuHcd_UU.md +++ b/src/data/roadmaps/prompt-engineering/content/ai-vs-agi@Sj1CMZzZp8kF-LuHcd_UU.md @@ -1,3 +1,7 @@ # AI vs AGI -AI (Artificial Intelligence) refers to systems that perform specific tasks intelligently, while AGI (Artificial General Intelligence) represents hypothetical AI with human-level reasoning across all domains. 
Current LLMs are narrow AI - powerful at language tasks but lacking true understanding or general intelligence like AGI would possess. \ No newline at end of file +AI (Artificial Intelligence) refers to systems that perform specific tasks intelligently, while AGI (Artificial General Intelligence) represents hypothetical AI with human-level reasoning across all domains. Current LLMs are narrow AI - powerful at language tasks but lacking true understanding or general intelligence like AGI would possess. + +Visit the following resources to learn more: + +- [@article@Artificial general intelligence - Wikipedia](https://en.wikipedia.org/wiki/Artificial_general_intelligence) diff --git a/src/data/roadmaps/prompt-engineering/content/anthropic@V8pDOwrRKKcHBTd4qlSsH.md b/src/data/roadmaps/prompt-engineering/content/anthropic@V8pDOwrRKKcHBTd4qlSsH.md index 535909af5923..867f6c4aeebb 100644 --- a/src/data/roadmaps/prompt-engineering/content/anthropic@V8pDOwrRKKcHBTd4qlSsH.md +++ b/src/data/roadmaps/prompt-engineering/content/anthropic@V8pDOwrRKKcHBTd4qlSsH.md @@ -1,3 +1,8 @@ # Anthropic -Anthropic created Claude, a family of large language models known for safety features and constitutional AI training. Claude models excel at following instructions, maintaining context, and avoiding harmful outputs. Their strong instruction-following capabilities and built-in safety measures make them valuable for reliable, ethical AI applications. \ No newline at end of file +Anthropic develops Claude, a family of large language models focused on safety and helpfulness. The lineup spans three tiers: Claude Opus (the most capable, for complex reasoning and agentic coding), Claude Sonnet (the best balance of speed and intelligence), and Claude Haiku (the fastest, with near-frontier intelligence). Recent models support extended thinking and vision, with 200K-token context windows (up to 1M tokens on select models). + +Visit the following resources to learn more: + +- [@official@Claude API Documentation](https://docs.anthropic.com/en/docs/intro) +- [@official@Anthropic Research](https://www.anthropic.com/research) diff --git a/src/data/roadmaps/prompt-engineering/content/automatic-prompt-engineering@diHNCiuKHeMVgvJ4OMwVh.md b/src/data/roadmaps/prompt-engineering/content/automatic-prompt-engineering@diHNCiuKHeMVgvJ4OMwVh.md index 98c5c54859ae..7944ff7da7df 100644 --- a/src/data/roadmaps/prompt-engineering/content/automatic-prompt-engineering@diHNCiuKHeMVgvJ4OMwVh.md +++ b/src/data/roadmaps/prompt-engineering/content/automatic-prompt-engineering@diHNCiuKHeMVgvJ4OMwVh.md @@ -1,3 +1,7 @@ # Automatic Prompt Engineering -Automatic Prompt Engineering (APE) uses LLMs to generate and optimize prompts automatically, reducing human effort while enhancing model performance. The process involves prompting a model to create multiple prompt variants, evaluating them using metrics like BLEU or ROUGE, then selecting the highest-scoring candidate. For example, generating 10 variants of customer order phrases for chatbot training, then testing and refining the best performers. This iterative approach helps discover effective prompts that humans might not consider, automating the optimization process. \ No newline at end of file +Automatic Prompt Engineering (APE) uses LLMs to generate and optimize prompts automatically, reducing human effort while enhancing model performance. The process involves prompting a model to create multiple prompt variants, evaluating them using metrics like BLEU or ROUGE, then selecting the highest-scoring candidate.
For example, generating 10 variants of customer order phrases for chatbot training, then testing and refining the best performers. This iterative approach helps discover effective prompts that humans might not consider, automating the optimization process. + +Visit the following resources to learn more: + +- [@article@Automatic Prompt Engineer - DAIR.AI](https://www.promptingguide.ai/techniques/ape) diff --git a/src/data/roadmaps/prompt-engineering/content/calibrating-llms@P5nDyQbME53DOEfSkcY6I.md b/src/data/roadmaps/prompt-engineering/content/calibrating-llms@P5nDyQbME53DOEfSkcY6I.md index 60dc6d3f0f51..f30ed12263e2 100644 --- a/src/data/roadmaps/prompt-engineering/content/calibrating-llms@P5nDyQbME53DOEfSkcY6I.md +++ b/src/data/roadmaps/prompt-engineering/content/calibrating-llms@P5nDyQbME53DOEfSkcY6I.md @@ -1,3 +1,7 @@ # Calibrating LLMs -Calibrating LLMs involves adjusting models so their confidence scores accurately reflect their actual accuracy. Well-calibrated models express appropriate uncertainty - being confident when correct and uncertain when likely wrong. This helps users better trust and interpret model outputs, especially in critical applications where uncertainty awareness is crucial. \ No newline at end of file +Calibrating LLMs involves adjusting models so their confidence scores accurately reflect their actual accuracy. Well-calibrated models express appropriate uncertainty - being confident when correct and uncertain when likely wrong. This helps users better trust and interpret model outputs, especially in critical applications where uncertainty awareness is crucial. + +Visit the following resources to learn more: + +- [@article@Calibrating LLMs - LearnPrompting](https://learnprompting.org/docs/reliability/calibration) diff --git a/src/data/roadmaps/prompt-engineering/content/chain-of-thought-cot-prompting@weRaJxEplhKDyFWSMeoyI.md b/src/data/roadmaps/prompt-engineering/content/chain-of-thought-cot-prompting@weRaJxEplhKDyFWSMeoyI.md index 7341c650ca12..8a32c4ac1b61 100644 --- a/src/data/roadmaps/prompt-engineering/content/chain-of-thought-cot-prompting@weRaJxEplhKDyFWSMeoyI.md +++ b/src/data/roadmaps/prompt-engineering/content/chain-of-thought-cot-prompting@weRaJxEplhKDyFWSMeoyI.md @@ -4,4 +4,7 @@ Chain of Thought prompting improves LLM reasoning by generating intermediate rea Visit the following resources to learn more: +- [@article@Chain-of-Thought Prompting - DAIR.AI](https://www.promptingguide.ai/techniques/cot) +- [@article@Chain-of-Thought Prompting - LearnPrompting](https://learnprompting.org/docs/intermediate/chain_of_thought) +- [@article@Reasoning LLMs Guide - DAIR.AI](https://www.promptingguide.ai/guides/reasoning-llms) - [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=Y6MCLPzjmhMB4jSu&t=203) diff --git a/src/data/roadmaps/prompt-engineering/content/context-window@b-Xtkv6rt8QgzJXSShOX-.md b/src/data/roadmaps/prompt-engineering/content/context-window@b-Xtkv6rt8QgzJXSShOX-.md index ecc3242050ed..e852c98ea5d1 100644 --- a/src/data/roadmaps/prompt-engineering/content/context-window@b-Xtkv6rt8QgzJXSShOX-.md +++ b/src/data/roadmaps/prompt-engineering/content/context-window@b-Xtkv6rt8QgzJXSShOX-.md @@ -1,3 +1,8 @@ # Context Window -Context window refers to the maximum number of tokens an LLM can process in a single interaction, including both input prompt and generated output. When exceeded, older parts are truncated. 
Understanding this constraint is crucial for prompt engineering—you must balance providing sufficient context with staying within token limits. \ No newline at end of file +Context window refers to the maximum number of tokens an LLM can process in a single interaction, including both input prompt and generated output. When exceeded, older parts are truncated. Understanding this constraint is crucial for prompt engineering—you must balance providing sufficient context with staying within token limits. + +Visit the following resources to learn more: + +- [@official@Context windows - Anthropic](https://platform.claude.com/docs/en/build-with-claude/context-windows) +- [@article@What is a context window? - IBM](https://www.ibm.com/think/topics/context-window) diff --git a/src/data/roadmaps/prompt-engineering/content/contextual-prompting@5TNK1KcSzh9GTKiEJnM-y.md b/src/data/roadmaps/prompt-engineering/content/contextual-prompting@5TNK1KcSzh9GTKiEJnM-y.md index 1ca6538d8a6d..0a1ed0849a72 100644 --- a/src/data/roadmaps/prompt-engineering/content/contextual-prompting@5TNK1KcSzh9GTKiEJnM-y.md +++ b/src/data/roadmaps/prompt-engineering/content/contextual-prompting@5TNK1KcSzh9GTKiEJnM-y.md @@ -1,3 +1,8 @@ # Contextual Prompting -Contextual prompting provides specific background information or situational details relevant to the current task, helping LLMs understand nuances and tailor responses accordingly. Unlike system or role prompts, contextual prompts supply immediate, task-specific information that's dynamic and changes based on the situation. For example: "Context: You are writing for a blog about retro 80's arcade video games. Suggest 3 topics to write articles about." This technique ensures responses are relevant, accurate, and appropriately framed for the specific context provided. \ No newline at end of file +Contextual prompting provides specific background information or situational details relevant to the current task, helping LLMs understand nuances and tailor responses accordingly. Unlike system or role prompts, contextual prompts supply immediate, task-specific information that's dynamic and changes based on the situation. For example: "Context: You are writing for a blog about retro 80's arcade video games. Suggest 3 topics to write articles about." This technique ensures responses are relevant, accurate, and appropriately framed for the specific context provided. + +Visit the following resources to learn more: + +- [@official@Prompting Best Practices - Anthropic](https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-prompting-best-practices) +- [@article@Prompt Structure and Key Parts - LearnPrompting](https://learnprompting.org/docs/basics/prompt_structure) diff --git a/src/data/roadmaps/prompt-engineering/content/fine-tuning-vs-prompt-engg@Ke5GT163k_ek9SzbcbBGE.md b/src/data/roadmaps/prompt-engineering/content/fine-tuning-vs-prompt-engg@Ke5GT163k_ek9SzbcbBGE.md index f4f4c79939f1..e39ed6806ab3 100644 --- a/src/data/roadmaps/prompt-engineering/content/fine-tuning-vs-prompt-engg@Ke5GT163k_ek9SzbcbBGE.md +++ b/src/data/roadmaps/prompt-engineering/content/fine-tuning-vs-prompt-engg@Ke5GT163k_ek9SzbcbBGE.md @@ -1,3 +1,8 @@ # Fine-tuning vs Prompt Engineering -Fine-tuning trains models on specific data to specialize behavior, while prompt engineering achieves customization through input design without model modification. Prompt engineering is faster, cheaper, and more accessible. 
Fine-tuning offers deeper customization but requires significant resources and expertise. \ No newline at end of file +Fine-tuning trains models on specific data to specialize behavior, while prompt engineering achieves customization through input design without model modification. Prompt engineering is faster, cheaper, and more accessible. Fine-tuning offers deeper customization but requires significant resources and expertise. + +Visit the following resources to learn more: + +- [@article@When to use prompt engineering vs. fine-tuning - TechTarget](https://www.techtarget.com/searchEnterpriseAI/tip/Prompt-engineering-vs-fine-tuning-Whats-the-difference) +- [@article@Prompt Engineering vs Fine Tuning: When to Use Each - Codecademy](https://www.codecademy.com/article/prompt-engineering-vs-fine-tuning) diff --git a/src/data/roadmaps/prompt-engineering/content/frequency-penalty@YIVNjkmTOY61VmL0md9Pj.md b/src/data/roadmaps/prompt-engineering/content/frequency-penalty@YIVNjkmTOY61VmL0md9Pj.md index 25dff903bf9e..375ff4d5b699 100644 --- a/src/data/roadmaps/prompt-engineering/content/frequency-penalty@YIVNjkmTOY61VmL0md9Pj.md +++ b/src/data/roadmaps/prompt-engineering/content/frequency-penalty@YIVNjkmTOY61VmL0md9Pj.md @@ -1,3 +1,7 @@ # Frequency Penalty -Frequency penalty reduces token probability based on how frequently they've appeared in the text, with higher penalties for more frequent tokens. This prevents excessive repetition and encourages varied language use. The penalty scales with usage frequency, making overused words less likely to be selected again, improving content diversity. \ No newline at end of file +Frequency penalty reduces a token's probability based on how frequently it has appeared in the text, with higher penalties for more frequent tokens. This prevents excessive repetition and encourages varied language use. The penalty scales with usage frequency, making overused words less likely to be selected again, improving content diversity. + +Visit the following resources to learn more: + +- [@article@Frequency Penalty - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/frequency-penalty) diff --git a/src/data/roadmaps/prompt-engineering/content/google@o-6UKLZ6oCRbAKgRjH2uI.md b/src/data/roadmaps/prompt-engineering/content/google@o-6UKLZ6oCRbAKgRjH2uI.md index bc83093905cd..c26582b28106 100644 --- a/src/data/roadmaps/prompt-engineering/content/google@o-6UKLZ6oCRbAKgRjH2uI.md +++ b/src/data/roadmaps/prompt-engineering/content/google@o-6UKLZ6oCRbAKgRjH2uI.md @@ -1,3 +1,8 @@ # Google -Google develops influential LLMs including Gemini, PaLM, and Bard. Through Vertex AI and Google Cloud Platform, they provide enterprise-grade model access with extensive prompt testing via Vertex AI Studio. Google's research has advanced many prompt engineering techniques, including Chain of Thought reasoning methods. \ No newline at end of file +Google develops Gemini, a family of multimodal AI models. The latest flagship, Gemini 3, supports text, image, video, and audio through the Gemini API and Google AI Studio. Google also offers specialized models including Imagen for image generation, Veo for video, and Lyria for music. Their research has advanced many prompt engineering techniques, including Chain of Thought reasoning.
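To make the API access concrete, here is a minimal sketch of one Gemini call from Python. It assumes the `google-genai` package, a `GEMINI_API_KEY` environment variable, and an illustrative model id; none of these come from the roadmap content itself.

```python
# Minimal sketch: one text-generation call to the Gemini API.
# Assumes `pip install google-genai` and GEMINI_API_KEY set in the environment.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model id; swap in a current one
    contents="Suggest 3 article topics for a blog about retro arcade games.",
)
print(response.text)
```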
+ +Visit the following resources to learn more: + +- [@official@Google AI Studio](https://ai.google.dev/) +- [@official@Gemini API Documentation](https://ai.google.dev/gemini-api/docs) diff --git a/src/data/roadmaps/prompt-engineering/content/hallucination@SWDa3Su3VS815WQbvvNsa.md b/src/data/roadmaps/prompt-engineering/content/hallucination@SWDa3Su3VS815WQbvvNsa.md index 6a985609b995..8d9de038132d 100644 --- a/src/data/roadmaps/prompt-engineering/content/hallucination@SWDa3Su3VS815WQbvvNsa.md +++ b/src/data/roadmaps/prompt-engineering/content/hallucination@SWDa3Su3VS815WQbvvNsa.md @@ -1,3 +1,8 @@ # Hallucination -Hallucination in LLMs refers to generating plausible-sounding but factually incorrect or fabricated information. This occurs when models fill knowledge gaps or present uncertain information with apparent certainty. Mitigation techniques include requesting sources, asking for confidence levels, providing context, and always verifying critical information independently. \ No newline at end of file +Hallucination in LLMs refers to generating plausible-sounding but factually incorrect or fabricated information. This occurs when models fill knowledge gaps or present uncertain information with apparent certainty. Mitigation techniques include requesting sources, asking for confidence levels, providing context, and always verifying critical information independently. + +Visit the following resources to learn more: + +- [@official@Reduce hallucinations - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/reduce-hallucinations) +- [@article@What are AI hallucinations? - IBM](https://www.ibm.com/think/topics/ai-hallucinations) diff --git a/src/data/roadmaps/prompt-engineering/content/introduction@jrH1qE6EnFXL4fTyYU8gR.md b/src/data/roadmaps/prompt-engineering/content/introduction@jrH1qE6EnFXL4fTyYU8gR.md index d3a3150d1f25..46175bffd11f 100644 --- a/src/data/roadmaps/prompt-engineering/content/introduction@jrH1qE6EnFXL4fTyYU8gR.md +++ b/src/data/roadmaps/prompt-engineering/content/introduction@jrH1qE6EnFXL4fTyYU8gR.md @@ -1,3 +1,7 @@ # Introduction -Prompt engineering is the practice of designing effective inputs for Large Language Models to achieve desired outputs. This roadmap covers fundamental concepts, core techniques, model parameters, and advanced methods. It's a universal skill accessible to anyone, requiring no programming background, yet crucial for unlocking AI potential across diverse applications and domains. \ No newline at end of file +Prompt engineering is the practice of designing effective inputs for Large Language Models to achieve desired outputs. This roadmap covers fundamental concepts, core techniques, model parameters, and advanced methods. It's a universal skill accessible to anyone, requiring no programming background, yet crucial for unlocking AI potential across diverse applications and domains. + +Visit the following resources to learn more: + +- [@article@What is Generative AI? - LearnPrompting](https://learnprompting.org/docs/basics/generative_ai)
diff --git a/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md b/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md index 675f67db8f8f..70e664f29ba2 100644 --- a/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md +++ b/src/data/roadmaps/prompt-engineering/content/llm-self-evaluation@CvV3GIvQhsTvE-TQjTpIQ.md @@ -4,4 +4,4 @@ LLM self-evaluation involves prompting models to assess their own outputs for qu Visit the following resources to learn more: -- [@article@LLM Self-Evaluation](https://learnprompting.org/docs/reliability/lm_self_eval) +- [@article@LLM Self-Evaluation - LearnPrompting](https://learnprompting.org/docs/reliability/lm_self_eval) diff --git a/src/data/roadmaps/prompt-engineering/content/llm@pamV5Z8DRKk2ioZbg6QVK.md b/src/data/roadmaps/prompt-engineering/content/llm@pamV5Z8DRKk2ioZbg6QVK.md index bed8df2a8c0e..ae9f278d6453 100644 --- a/src/data/roadmaps/prompt-engineering/content/llm@pamV5Z8DRKk2ioZbg6QVK.md +++ b/src/data/roadmaps/prompt-engineering/content/llm@pamV5Z8DRKk2ioZbg6QVK.md @@ -1,3 +1,8 @@ # LLM -Large Language Models (LLMs) are AI systems trained on vast text data to understand and generate human-like language. They work as prediction engines, analyzing input and predicting the next most likely token. LLMs perform tasks like text generation, translation, summarization, and Q&A. Understanding token processing is key to effective prompt engineering. \ No newline at end of file +Large Language Models (LLMs) are AI systems trained on vast text data to understand and generate human-like language. They work as prediction engines, analyzing input and predicting the next most likely token. LLMs perform tasks like text generation, translation, summarization, and Q&A. Understanding token processing is key to effective prompt engineering. + +Visit the following resources to learn more: + +- [@official@LLM - Anthropic Glossary](https://platform.claude.com/docs/en/about-claude/glossary) +- [@article@Differences Between Chatbots and LLMs - LearnPrompting](https://learnprompting.org/docs/basics/chatbot_basics) diff --git a/src/data/roadmaps/prompt-engineering/content/llms-and-how-they-work@74JxgfJ_1qmVNZ_QRp9Ne.md b/src/data/roadmaps/prompt-engineering/content/llms-and-how-they-work@74JxgfJ_1qmVNZ_QRp9Ne.md index c1e23d98ea5a..85dd5b02dc1d 100644 --- a/src/data/roadmaps/prompt-engineering/content/llms-and-how-they-work@74JxgfJ_1qmVNZ_QRp9Ne.md +++ b/src/data/roadmaps/prompt-engineering/content/llms-and-how-they-work@74JxgfJ_1qmVNZ_QRp9Ne.md @@ -4,4 +4,7 @@ LLMs function as sophisticated prediction engines that process text sequentially Visit the following resources to learn more: +- [@article@What are large language models (LLMs)? - IBM](https://www.ibm.com/think/topics/large-language-models)
+- [@article@Large language model - Wikipedia](https://en.wikipedia.org/wiki/Large_language_model) +- [@article@How Large Language Models Work: Explained Simply](https://justainews.com/applications/chatbots-and-virtual-assistants/how-large-language-models-work/) - [@video@How Large Language Models Work](https://youtu.be/5sLYAQS9sWQ) diff --git a/src/data/roadmaps/prompt-engineering/content/max-tokens@vK9Gf8dGu2UvvJJhhuHG9.md b/src/data/roadmaps/prompt-engineering/content/max-tokens@vK9Gf8dGu2UvvJJhhuHG9.md index 1fc810c5d265..e419c97eef91 100644 --- a/src/data/roadmaps/prompt-engineering/content/max-tokens@vK9Gf8dGu2UvvJJhhuHG9.md +++ b/src/data/roadmaps/prompt-engineering/content/max-tokens@vK9Gf8dGu2UvvJJhhuHG9.md @@ -1,3 +1,8 @@ # Max Tokens -Max tokens setting controls the maximum number of tokens an LLM can generate in response, directly impacting computation cost, response time, and energy consumption. Setting lower limits doesn't make models more concise—it simply stops generation when the limit is reached. This parameter is crucial for techniques like ReAct where models might generate unnecessary tokens after the desired response. Balancing max tokens involves considering cost efficiency, response completeness, and application requirements while ensuring critical information isn't truncated. \ No newline at end of file +Max tokens setting controls the maximum number of tokens an LLM can generate in response, directly impacting computation cost, response time, and energy consumption. Setting lower limits doesn't make models more concise—it simply stops generation when the limit is reached. This parameter is crucial for techniques like ReAct where models might generate unnecessary tokens after the desired response. Balancing max tokens involves considering cost efficiency, response completeness, and application requirements while ensuring critical information isn't truncated. + +Visit the following resources to learn more: + +- [@official@Token Counting - Anthropic](https://platform.claude.com/docs/en/build-with-claude/token-counting) +- [@article@Max Tokens - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/max-tokens) diff --git a/src/data/roadmaps/prompt-engineering/content/meta@Td2YzDFT4LPGDw8JMmQSQ.md b/src/data/roadmaps/prompt-engineering/content/meta@Td2YzDFT4LPGDw8JMmQSQ.md index 497572fcb7a8..e8e59d077ca2 100644 --- a/src/data/roadmaps/prompt-engineering/content/meta@Td2YzDFT4LPGDw8JMmQSQ.md +++ b/src/data/roadmaps/prompt-engineering/content/meta@Td2YzDFT4LPGDw8JMmQSQ.md @@ -1,3 +1,8 @@ # Meta -Meta (formerly Facebook) develops the Llama family of open-source large language models. Llama models are available for research and commercial use, offering strong performance across various tasks. For prompt engineering, Meta's models provide transparency in training data and architecture, allowing developers to fine-tune and customize prompts for specific applications without vendor lock-in. \ No newline at end of file +Meta develops the Llama family of open-source large language models. The latest release, Llama 4, comes in Maverick and Scout variants with strong multimodal and long-context capabilities. Llama models are freely available for research and commercial use, providing transparency in training data and architecture without vendor lock-in.
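Because the weights are open, a Llama model can be run locally; below is a minimal sketch using Hugging Face `transformers`. The library, pipeline API, and model id are assumptions (the checkpoint is gated and requires accepting Meta's license on the Hub).

```python
# Minimal sketch: running an open-weights Llama chat model locally.
# Assumes `pip install transformers torch` and access to the gated checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative; a Llama 4 id also works
)

messages = [{"role": "user", "content": "Summarize prompt engineering in one sentence."}]
result = generator(messages, max_new_tokens=60)

# With chat-style input, generated_text holds the conversation including the reply.
print(result[0]["generated_text"][-1]["content"])
```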
+ +Visit the following resources to learn more: + +- [@official@Llama](https://www.llama.com/) +- [@opensource@Llama Models (GitHub)](https://github.com/meta-llama/llama-models) diff --git a/src/data/roadmaps/prompt-engineering/content/model-weights--parameters@yfsjW1eze8mWT0iHxv078.md b/src/data/roadmaps/prompt-engineering/content/model-weights--parameters@yfsjW1eze8mWT0iHxv078.md index 4d7b6557c83c..de6e3d0a3e9e 100644 --- a/src/data/roadmaps/prompt-engineering/content/model-weights--parameters@yfsjW1eze8mWT0iHxv078.md +++ b/src/data/roadmaps/prompt-engineering/content/model-weights--parameters@yfsjW1eze8mWT0iHxv078.md @@ -1,3 +1,7 @@ # Model Weights / Parameters -Model weights and parameters are the learned values that define an LLM's behavior and knowledge. Parameters are the trainable variables adjusted during training, while weights represent their final values. Understanding parameter count helps gauge model capabilities - larger models typically have more parameters and better performance but require more computational resources. \ No newline at end of file +Model weights and parameters are the learned values that define an LLM's behavior and knowledge. Parameters are the trainable variables adjusted during training, while weights represent their final values. Understanding parameter count helps gauge model capabilities - larger models typically have more parameters and better performance but require more computational resources. + +Visit the following resources to learn more: + +- [@article@What are LLM parameters? - IBM](https://www.ibm.com/think/topics/llm-parameters) diff --git a/src/data/roadmaps/prompt-engineering/content/one-shot--few-shot-prompting@Iufv_LsgUNls-Alx_Btlh.md b/src/data/roadmaps/prompt-engineering/content/one-shot--few-shot-prompting@Iufv_LsgUNls-Alx_Btlh.md index 0496508f650a..76c5c85c7dfb 100644 --- a/src/data/roadmaps/prompt-engineering/content/one-shot--few-shot-prompting@Iufv_LsgUNls-Alx_Btlh.md +++ b/src/data/roadmaps/prompt-engineering/content/one-shot--few-shot-prompting@Iufv_LsgUNls-Alx_Btlh.md @@ -4,4 +4,7 @@ One-shot provides a single example to guide model behavior, while few-shot inclu Visit the following resources to learn more: +- [@article@Few-Shot Prompting - DAIR.AI](https://www.promptingguide.ai/techniques/fewshot) +- [@article@Few-Shot Prompting - LearnPrompting](https://learnprompting.org/docs/basics/few_shot) +- [@article@Few-Shot Introduction - LearnPrompting](https://learnprompting.org/docs/advanced/few_shot/introduction) - [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=Fi2igdPTBUocqnX7&t=177) diff --git a/src/data/roadmaps/prompt-engineering/content/openai@Yb5cQiV2ETxPbBYCLOpt2.md b/src/data/roadmaps/prompt-engineering/content/openai@Yb5cQiV2ETxPbBYCLOpt2.md index edf9adedf602..11d7d15a8f7d 100644 --- a/src/data/roadmaps/prompt-engineering/content/openai@Yb5cQiV2ETxPbBYCLOpt2.md +++ b/src/data/roadmaps/prompt-engineering/content/openai@Yb5cQiV2ETxPbBYCLOpt2.md @@ -1,3 +1,8 @@ # OpenAI -OpenAI developed influential language models including GPT-3, GPT-4, and o3, setting industry standards for prompt engineering practices. Their API provides access to powerful LLMs with configurable parameters like temperature and max tokens. Many prompt engineering techniques and best practices originated from working with OpenAI systems. 
\ No newline at end of file +OpenAI develops leading language models including the GPT-5 series, o3, and Codex, setting industry standards for prompt engineering. Their API provides access to frontier models with configurable parameters, and their Agents SDK enables building autonomous AI systems. The OpenAI Cookbook and platform documentation are key references for prompt engineering best practices. + +Visit the following resources to learn more: + +- [@official@OpenAI API Documentation](https://developers.openai.com/api/docs) +- [@official@OpenAI Cookbook (GitHub)](https://github.com/openai/openai-cookbook) diff --git a/src/data/roadmaps/prompt-engineering/content/output-control@wSf7Zr8ZYBuKWX0GQX6J3.md b/src/data/roadmaps/prompt-engineering/content/output-control@wSf7Zr8ZYBuKWX0GQX6J3.md index 505d9c454bd1..9f377368f10d 100644 --- a/src/data/roadmaps/prompt-engineering/content/output-control@wSf7Zr8ZYBuKWX0GQX6J3.md +++ b/src/data/roadmaps/prompt-engineering/content/output-control@wSf7Zr8ZYBuKWX0GQX6J3.md @@ -1,3 +1,8 @@ # Output Control -Output control encompasses techniques and parameters for managing LLM response characteristics including length, format, style, and content boundaries. Key methods include max tokens for length limits, stop sequences for precise boundaries, temperature for creativity control, and structured output requirements for format consistency. Effective output control combines prompt engineering techniques with model parameters to ensure responses meet specific requirements. This is crucial for production applications where consistent, appropriately formatted outputs are essential for user experience and system integration. \ No newline at end of file +Output control encompasses techniques and parameters for managing LLM response characteristics including length, format, style, and content boundaries. Key methods include max tokens for length limits, stop sequences for precise boundaries, temperature for creativity control, and structured output requirements for format consistency. Effective output control combines prompt engineering techniques with model parameters to ensure responses meet specific requirements. This is crucial for production applications where consistent, appropriately formatted outputs are essential for user experience and system integration. + +Visit the following resources to learn more: + +- [@official@Increase Output Consistency - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/increase-consistency) +- [@article@General Tips for Designing Prompts - DAIR.AI](https://www.promptingguide.ai/introduction/tips) diff --git a/src/data/roadmaps/prompt-engineering/content/presence-penalty@WpO8V5caudySVehOcuDvK.md b/src/data/roadmaps/prompt-engineering/content/presence-penalty@WpO8V5caudySVehOcuDvK.md index f118cda26816..e2053c8fe0fd 100644 --- a/src/data/roadmaps/prompt-engineering/content/presence-penalty@WpO8V5caudySVehOcuDvK.md +++ b/src/data/roadmaps/prompt-engineering/content/presence-penalty@WpO8V5caudySVehOcuDvK.md @@ -1,3 +1,7 @@ # Presence Penalty -Presence penalty reduces the likelihood of repeating tokens that have already appeared in the text, encouraging diverse vocabulary usage. Unlike frequency penalty which considers how often tokens appear, presence penalty applies the same penalty to any previously used token, promoting varied content and creativity.
\ No newline at end of file +Presence penalty reduces the likelihood of repeating tokens that have already appeared in the text, encouraging diverse vocabulary usage. Unlike frequency penalty which considers how often tokens appear, presence penalty applies the same penalty to any previously used token, promoting varied content and creativity. + +Visit the following resources to learn more: + +- [@article@Presence Penalty - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/presence-penalty) diff --git a/src/data/roadmaps/prompt-engineering/content/prompt-debiasing@0H2keZYD8iTNyBgmNVhto.md b/src/data/roadmaps/prompt-engineering/content/prompt-debiasing@0H2keZYD8iTNyBgmNVhto.md index 9428f02c398d..c0c09f7338c9 100644 --- a/src/data/roadmaps/prompt-engineering/content/prompt-debiasing@0H2keZYD8iTNyBgmNVhto.md +++ b/src/data/roadmaps/prompt-engineering/content/prompt-debiasing@0H2keZYD8iTNyBgmNVhto.md @@ -4,4 +4,4 @@ Prompt debiasing involves techniques to reduce unwanted biases in LLM outputs by Visit the following resources to learn more: -- [@article@Prompt Debiasing](https://learnprompting.org/docs/reliability/debiasing) +- [@article@Prompt Debiasing - LearnPrompting](https://learnprompting.org/docs/reliability/debiasing) diff --git a/src/data/roadmaps/prompt-engineering/content/prompt-ensembling@HOqWHqAkxLX8f2ImSmZE7.md b/src/data/roadmaps/prompt-engineering/content/prompt-ensembling@HOqWHqAkxLX8f2ImSmZE7.md index e9ea14eab0be..308a1bb43a34 100644 --- a/src/data/roadmaps/prompt-engineering/content/prompt-ensembling@HOqWHqAkxLX8f2ImSmZE7.md +++ b/src/data/roadmaps/prompt-engineering/content/prompt-ensembling@HOqWHqAkxLX8f2ImSmZE7.md @@ -1,3 +1,7 @@ # Prompt Ensembling -Prompt ensembling combines multiple different prompts or prompt variations to improve output quality and consistency. This technique involves running the same query with different prompt formulations and aggregating results through voting, averaging, or selection. Ensembling reduces variance and increases reliability by leveraging diverse prompt perspectives. \ No newline at end of file +Prompt ensembling combines multiple different prompts or prompt variations to improve output quality and consistency. This technique involves running the same query with different prompt formulations and aggregating results through voting, averaging, or selection. Ensembling reduces variance and increases reliability by leveraging diverse prompt perspectives. 
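A minimal sketch of the aggregation loop follows; the `openai` package, model name, and prompt wordings are illustrative assumptions, and any chat-capable model would do.

```python
# Minimal prompt-ensembling sketch: ask the same question with several
# prompt formulations, then majority-vote over the normalized answers.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Is 91 a prime number? Answer with exactly one word: yes or no."
variants = [
    question,
    f"You are a careful mathematician. {question}",
    f"Think about divisibility first, then respond. {question}",
]

answers = []
for prompt in variants:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(resp.choices[0].message.content.strip().lower())

# The most common answer across prompt variants wins.
print(Counter(answers).most_common(1)[0][0])
```

Simple majority voting can be swapped for weighted voting or an LLM judge when answers are free-form.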
+ +Visit the following resources to learn more: + +- [@article@Introduction to Ensembling - LearnPrompting](https://learnprompting.org/docs/advanced/ensembling/introduction) diff --git a/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md b/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md index a10e055bf822..456532301ddd 100644 --- a/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md +++ b/src/data/roadmaps/prompt-engineering/content/prompt-injection@6W_ONYREbXHwPigoDx1cW.md @@ -4,4 +4,6 @@ Prompt injection is a security vulnerability where malicious users manipulate LL Visit the following resources to learn more: -- [@video@What Is a Prompt Injection Attack?](https://www.youtube.com/watch?v=jrHRe9lSqqA) \ No newline at end of file +- [@official@Mitigate jailbreaks and prompt injections - Anthropic](https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/mitigate-jailbreaks) +- [@official@LLM01:2025 Prompt Injection - OWASP](https://genai.owasp.org/llmrisk/llm01-prompt-injection/) +- [@video@What Is a Prompt Injection Attack?](https://www.youtube.com/watch?v=jrHRe9lSqqA) diff --git a/src/data/roadmaps/prompt-engineering/content/rag@gxydtFKmnXNY9I5kpTwjP.md b/src/data/roadmaps/prompt-engineering/content/rag@gxydtFKmnXNY9I5kpTwjP.md index 4e0e070abede..58c93604957c 100644 --- a/src/data/roadmaps/prompt-engineering/content/rag@gxydtFKmnXNY9I5kpTwjP.md +++ b/src/data/roadmaps/prompt-engineering/content/rag@gxydtFKmnXNY9I5kpTwjP.md @@ -1,3 +1,8 @@ # RAG -Retrieval-Augmented Generation (RAG) combines LLMs with external knowledge retrieval to ground responses in verified, current information. RAG retrieves relevant documents before generating responses, reducing hallucinations and enabling access to information beyond the model's training cutoff. This approach improves accuracy and provides source attribution. \ No newline at end of file +Retrieval-Augmented Generation (RAG) combines LLMs with external knowledge retrieval to ground responses in verified, current information. RAG retrieves relevant documents before generating responses, reducing hallucinations and enabling access to information beyond the model's training cutoff. This approach improves accuracy and provides source attribution. 
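The retrieval step can be illustrated without any model call; the sketch below substitutes a toy keyword-overlap score for the embeddings and vector store a real RAG system would use, then assembles the grounded prompt.

```python
# Minimal RAG sketch: retrieve the most relevant passage, then ground the
# prompt in it. Real systems use embeddings and a vector store instead of
# this toy overlap score.
docs = [
    "You can return items within 30 days with a receipt.",
    "Standard shipping takes 3 to 5 business days.",
]

def retrieve(query: str) -> str:
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

query = "can i return an item after 20 days"
context = retrieve(query)

# The retrieved passage is prepended so the model answers from verified
# text (and can cite it) rather than from training data alone.
prompt = f"Answer using only the context below.\n\nContext: {context}\n\nQuestion: {query}"
print(prompt)
```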
+ +Visit the following resources to learn more: + +- [@article@Retrieval Augmented Generation (RAG) - DAIR.AI](https://www.promptingguide.ai/techniques/rag) +- [@opensource@Introduction to RAG - LlamaIndex](https://developers.llamaindex.ai/python/framework/understanding/rag/) diff --git a/src/data/roadmaps/prompt-engineering/content/react-prompting@8Ks6txRSUfMK7VotSQ4sC.md b/src/data/roadmaps/prompt-engineering/content/react-prompting@8Ks6txRSUfMK7VotSQ4sC.md index 124728a79fb4..7872040d97f1 100644 --- a/src/data/roadmaps/prompt-engineering/content/react-prompting@8Ks6txRSUfMK7VotSQ4sC.md +++ b/src/data/roadmaps/prompt-engineering/content/react-prompting@8Ks6txRSUfMK7VotSQ4sC.md @@ -4,4 +4,6 @@ ReAct (Reason and Act) prompting enables LLMs to solve complex tasks by combinin Visit the following resources to learn more: +- [@article@ReAct - DAIR.AI](https://www.promptingguide.ai/techniques/react) +- [@article@ReAct: Synergizing Reasoning and Acting - LearnPrompting](https://learnprompting.org/docs/techniques/react) - [@video@4 Methods of Prompt Engineering](https://youtu.be/vD0E3EUb8-8?si=Y6MCLPzjmhMB4jSu&t=203) diff --git a/src/data/roadmaps/prompt-engineering/content/repetition-penalties@g8ylIg4Zh567u-E3yVVY4.md b/src/data/roadmaps/prompt-engineering/content/repetition-penalties@g8ylIg4Zh567u-E3yVVY4.md index b7d46b61345f..0ba5a7fb3c02 100644 --- a/src/data/roadmaps/prompt-engineering/content/repetition-penalties@g8ylIg4Zh567u-E3yVVY4.md +++ b/src/data/roadmaps/prompt-engineering/content/repetition-penalties@g8ylIg4Zh567u-E3yVVY4.md @@ -1,3 +1,7 @@ # Repetition Penalties -Repetition penalties discourage LLMs from repeating words or phrases by reducing the probability of selecting previously used tokens. This includes frequency penalty (scales with usage count) and presence penalty (applies equally to any used token). These parameters improve output quality by promoting vocabulary diversity and preventing redundant phrasing. \ No newline at end of file +Repetition penalties discourage LLMs from repeating words or phrases by reducing the probability of selecting previously used tokens. This includes frequency penalty (scales with usage count) and presence penalty (applies equally to any used token). These parameters improve output quality by promoting vocabulary diversity and preventing redundant phrasing. + +Visit the following resources to learn more: + +- [@article@Tips for Writing Better Prompts - LearnPrompting](https://learnprompting.org/docs/basics/ai_prompt_tips) diff --git a/src/data/roadmaps/prompt-engineering/content/role-prompting@XHWKGaSRBYT4MsCHwV-iR.md b/src/data/roadmaps/prompt-engineering/content/role-prompting@XHWKGaSRBYT4MsCHwV-iR.md index 4dee7974661d..b9251915a07c 100644 --- a/src/data/roadmaps/prompt-engineering/content/role-prompting@XHWKGaSRBYT4MsCHwV-iR.md +++ b/src/data/roadmaps/prompt-engineering/content/role-prompting@XHWKGaSRBYT4MsCHwV-iR.md @@ -4,4 +4,6 @@ Role prompting assigns a specific character, identity, or professional role to t Visit the following resources to learn more: +- [@article@Assigning Roles to Chatbots - LearnPrompting](https://learnprompting.org/docs/basics/roles) +- [@article@Role Prompting - LearnPrompting](https://learnprompting.org/docs/advanced/zero_shot/role_prompting) - [@video@Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents](https://youtu.be/vD0E3EUb8-8?si=9orzEniOGmRD7g-o&t=136)
diff --git a/src/data/roadmaps/prompt-engineering/content/sampling-parameters@JgigM7HvmNOuKnp60v1Ce.md b/src/data/roadmaps/prompt-engineering/content/sampling-parameters@JgigM7HvmNOuKnp60v1Ce.md index 3ac47233af8b..927b62cf659a 100644 --- a/src/data/roadmaps/prompt-engineering/content/sampling-parameters@JgigM7HvmNOuKnp60v1Ce.md +++ b/src/data/roadmaps/prompt-engineering/content/sampling-parameters@JgigM7HvmNOuKnp60v1Ce.md @@ -1,3 +1,7 @@ # Sampling Parameters -Sampling parameters (temperature, top-K, top-P) control how LLMs select tokens from probability distributions, determining output randomness and creativity. These parameters interact: at extreme settings, one can override others (temperature 0 makes top-K/top-P irrelevant). A balanced starting point is temperature 0.2, top-P 0.95, top-K 30 for coherent but creative results. Understanding their interactions is crucial for optimal prompting—use temperature 0 for factual tasks, higher values for creativity, and combine settings strategically based on your specific use case. \ No newline at end of file +Sampling parameters (temperature, top-K, top-P) control how LLMs select tokens from probability distributions, determining output randomness and creativity. These parameters interact: at extreme settings, one can override others (temperature 0 makes top-K/top-P irrelevant). A balanced starting point is temperature 0.2, top-P 0.95, top-K 30 for coherent but creative results. Understanding their interactions is crucial for optimal prompting—use temperature 0 for factual tasks, higher values for creativity, and combine settings strategically based on your specific use case. + +Visit the following resources to learn more: + +- [@article@LLM Settings (Temperature, Top-K, Top-P) - DAIR.AI](https://www.promptingguide.ai/introduction/settings) diff --git a/src/data/roadmaps/prompt-engineering/content/self-consistency-prompting@1EzqCoplXPiHjp9Z-vqn-.md b/src/data/roadmaps/prompt-engineering/content/self-consistency-prompting@1EzqCoplXPiHjp9Z-vqn-.md index f6174efee719..e8d9b9c245e5 100644 --- a/src/data/roadmaps/prompt-engineering/content/self-consistency-prompting@1EzqCoplXPiHjp9Z-vqn-.md +++ b/src/data/roadmaps/prompt-engineering/content/self-consistency-prompting@1EzqCoplXPiHjp9Z-vqn-.md @@ -1,3 +1,8 @@ # Self-Consistency Prompting -Self-consistency prompting generates multiple reasoning paths for the same problem using higher temperature settings, then selects the most commonly occurring answer through majority voting. This technique combines sampling and voting to improve accuracy and provides pseudo-probability of answer correctness. While more expensive due to multiple API calls, it significantly enhances reliability for complex reasoning tasks by reducing the impact of single incorrect reasoning chains and leveraging diverse problem-solving approaches. \ No newline at end of file +Self-consistency prompting generates multiple reasoning paths for the same problem using higher temperature settings, then selects the most commonly occurring answer through majority voting. This technique combines sampling and voting to improve accuracy and provides pseudo-probability of answer correctness. While more expensive due to multiple API calls, it significantly enhances reliability for complex reasoning tasks by reducing the impact of single incorrect reasoning chains and leveraging diverse problem-solving approaches.
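Here is a minimal sketch of that sample-and-vote loop, assuming the `openai` Python package and an illustrative model name; the prompt asks for the final answer on its own last line so the votes are easy to extract.

```python
# Minimal self-consistency sketch: sample several reasoning chains at a
# higher temperature, then majority-vote on the final answers.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = (
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more than "
    "the ball. Reason step by step, then put only the ball's price on the "
    "last line."
)

finals = []
for _ in range(5):  # five independent reasoning paths
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # higher temperature diversifies the chains
    )
    finals.append(resp.choices[0].message.content.strip().splitlines()[-1])

# Majority vote; the vote count doubles as a rough confidence signal.
print(Counter(finals).most_common(1)[0])
```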
+ +Visit the following resources to learn more: + +- [@article@Self-Consistency - DAIR.AI](https://www.promptingguide.ai/techniques/consistency) +- [@article@Self-Consistency - LearnPrompting](https://learnprompting.org/docs/intermediate/self_consistency) diff --git a/src/data/roadmaps/prompt-engineering/content/step-back-prompting@2MboHh8ugkoH8dSd9d4Mk.md b/src/data/roadmaps/prompt-engineering/content/step-back-prompting@2MboHh8ugkoH8dSd9d4Mk.md index 5e92ea3aaf18..84e3bcb55ab3 100644 --- a/src/data/roadmaps/prompt-engineering/content/step-back-prompting@2MboHh8ugkoH8dSd9d4Mk.md +++ b/src/data/roadmaps/prompt-engineering/content/step-back-prompting@2MboHh8ugkoH8dSd9d4Mk.md @@ -1,3 +1,7 @@ # Step-Back Prompting -Step-back prompting improves LLM performance by first asking a general question related to the specific task, then using that answer to inform the final response. This technique activates relevant background knowledge before attempting the specific problem. For example, before writing a video game level storyline, first ask "What are key settings for engaging first-person shooter levels?" then use those insights to create the specific storyline. This approach reduces biases and improves accuracy by grounding responses in broader principles. \ No newline at end of file +Step-back prompting improves LLM performance by first asking a general question related to the specific task, then using that answer to inform the final response. This technique activates relevant background knowledge before attempting the specific problem. For example, before writing a video game level storyline, first ask "What are key settings for engaging first-person shooter levels?" then use those insights to create the specific storyline. This approach reduces biases and improves accuracy by grounding responses in broader principles. + +Visit the following resources to learn more: + +- [@article@Step-Back Prompting - LearnPrompting](https://learnprompting.org/docs/advanced/thought_generation/step_back_prompting) diff --git a/src/data/roadmaps/prompt-engineering/content/stop-sequences@v3CylRlojeltcwnE76j8Q.md b/src/data/roadmaps/prompt-engineering/content/stop-sequences@v3CylRlojeltcwnE76j8Q.md index 160620bf5b8e..3439c0aa9c1a 100644 --- a/src/data/roadmaps/prompt-engineering/content/stop-sequences@v3CylRlojeltcwnE76j8Q.md +++ b/src/data/roadmaps/prompt-engineering/content/stop-sequences@v3CylRlojeltcwnE76j8Q.md @@ -1,3 +1,8 @@ # Stop Sequences -Stop sequences are specific strings that signal the LLM to stop generating text when encountered, providing precise control over output length and format. Common examples include newlines, periods, or custom markers like "###" or "END". This parameter is particularly useful for structured outputs, preventing models from generating beyond intended boundaries. Stop sequences are essential for ReAct prompting and other scenarios where you need clean, precisely bounded responses. They offer more control than max tokens by stopping at logical breakpoints rather than arbitrary token limits. \ No newline at end of file +Stop sequences are specific strings that signal the LLM to stop generating text when encountered, providing precise control over output length and format. Common examples include newlines, periods, or custom markers like "###" or "END". This parameter is particularly useful for structured outputs, preventing models from generating beyond intended boundaries. 
Stop sequences are essential for ReAct prompting and other scenarios where you need clean, precisely bounded responses. They offer more control than max tokens by stopping at logical breakpoints rather than arbitrary token limits. + +Visit the following resources to learn more: + +- [@official@Handling Stop Reasons - Anthropic](https://platform.claude.com/docs/en/build-with-claude/handling-stop-reasons) +- [@article@Stop Sequence - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/stop-sequence) diff --git a/src/data/roadmaps/prompt-engineering/content/structured-outputs@j-PWO-ZmF9Oi9A5bwMRto.md b/src/data/roadmaps/prompt-engineering/content/structured-outputs@j-PWO-ZmF9Oi9A5bwMRto.md index c23a3bf5d045..aa720be17e73 100644 --- a/src/data/roadmaps/prompt-engineering/content/structured-outputs@j-PWO-ZmF9Oi9A5bwMRto.md +++ b/src/data/roadmaps/prompt-engineering/content/structured-outputs@j-PWO-ZmF9Oi9A5bwMRto.md @@ -4,4 +4,7 @@ Structured outputs involve prompting LLMs to return responses in specific format Visit the following resources to learn more: -- [@article@Generating Structured Outputs from LLMs](https://towardsdatascience.com/generating-structured-outputs-from-llms/?utm_source=roadmap&utm_medium=Referral&utm_campaign=TDS+roadmap+integration) \ No newline at end of file +- [@official@Structured Output - Google Gemini API](https://ai.google.dev/gemini-api/docs/structured-output) +- [@official@Structured Outputs - Anthropic](https://platform.claude.com/docs/en/build-with-claude/structured-outputs) +- [@opensource@Instructor - Structured Output Library](https://github.com/jxnl/instructor) +- [@article@Structured Outputs - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/structured-outputs) diff --git a/src/data/roadmaps/prompt-engineering/content/system-prompting@fWo39-hehRgwmx7CF36mM.md b/src/data/roadmaps/prompt-engineering/content/system-prompting@fWo39-hehRgwmx7CF36mM.md index ea6a830462e8..0a09fa43f94f 100644 --- a/src/data/roadmaps/prompt-engineering/content/system-prompting@fWo39-hehRgwmx7CF36mM.md +++ b/src/data/roadmaps/prompt-engineering/content/system-prompting@fWo39-hehRgwmx7CF36mM.md @@ -1,3 +1,8 @@ # System Prompting -System prompting sets the overall context, purpose, and operational guidelines for LLMs. It defines the model's role, behavioral constraints, output format requirements, and safety guardrails. System prompts provide foundational parameters that influence all subsequent interactions, ensuring consistent, controlled, and structured AI responses throughout the session. \ No newline at end of file +System prompting sets the overall context, purpose, and operational guidelines for LLMs. It defines the model's role, behavioral constraints, output format requirements, and safety guardrails. System prompts provide foundational parameters that influence all subsequent interactions, ensuring consistent, controlled, and structured AI responses throughout the session. 
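As a concrete illustration, here is a minimal sketch of setting a system prompt, together with a few related output-control parameters (max tokens, temperature, stop sequences), using the Anthropic Python SDK; the SDK calls and model id are assumptions based on public documentation, not part of the roadmap content.

```python
# Minimal sketch: a system prompt plus output-control parameters with the
# Anthropic Python SDK. Assumes `pip install anthropic` and an
# ANTHROPIC_API_KEY in the environment; the model id is illustrative.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id
    system="You are a terse support agent. Answer in at most two sentences.",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    max_tokens=150,             # hard cap on generated tokens
    temperature=0.2,            # low randomness for consistent answers
    stop_sequences=["###"],     # stop early at a custom boundary marker
)
print(message.content[0].text)
```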
+ +Visit the following resources to learn more: + +- [@official@Prompt Engineering Overview - Anthropic](https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/overview) +- [@article@Instructions - LearnPrompting](https://learnprompting.org/docs/basics/instructions) diff --git a/src/data/roadmaps/prompt-engineering/content/temperature@iMwg-I76-Tg5dhu8DGO6U.md b/src/data/roadmaps/prompt-engineering/content/temperature@iMwg-I76-Tg5dhu8DGO6U.md index e3fe8b88275d..93f27e0023d4 100644 --- a/src/data/roadmaps/prompt-engineering/content/temperature@iMwg-I76-Tg5dhu8DGO6U.md +++ b/src/data/roadmaps/prompt-engineering/content/temperature@iMwg-I76-Tg5dhu8DGO6U.md @@ -1,3 +1,8 @@ # Temperature -Temperature controls the randomness in token selection during text generation. Lower values (0-0.3) produce deterministic, factual outputs. Medium values (0.5-0.7) balance creativity and coherence. Higher values (0.8-1.0) generate creative, diverse outputs but may be less coherent. Use low temperature for math/facts, high for creative writing. \ No newline at end of file +Temperature controls the randomness in token selection during text generation. Lower values (0-0.3) produce deterministic, factual outputs. Medium values (0.5-0.7) balance creativity and coherence. Higher values (0.8-1.0) generate creative, diverse outputs but may be less coherent. Use low temperature for math/facts, high for creative writing. + +Visit the following resources to learn more: + +- [@article@What is LLM Temperature? - IBM](https://www.ibm.com/think/topics/llm-temperature) +- [@article@Temperature - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/temperature) diff --git a/src/data/roadmaps/prompt-engineering/content/tokens@NPcaSEteeEA5g22wQ7nL_.md b/src/data/roadmaps/prompt-engineering/content/tokens@NPcaSEteeEA5g22wQ7nL_.md index acfe2f4255a1..95b72611f6de 100644 --- a/src/data/roadmaps/prompt-engineering/content/tokens@NPcaSEteeEA5g22wQ7nL_.md +++ b/src/data/roadmaps/prompt-engineering/content/tokens@NPcaSEteeEA5g22wQ7nL_.md @@ -1,3 +1,8 @@ # Tokens -Tokens are fundamental units of text that LLMs process, created by breaking down text into smaller components like words, subwords, or characters. Understanding tokens is crucial because models predict the next token in sequences, API costs are based on token count, and models have maximum token limits for input and output. \ No newline at end of file +Tokens are fundamental units of text that LLMs process, created by breaking down text into smaller components like words, subwords, or characters. Understanding tokens is crucial because models predict the next token in sequences, API costs are based on token count, and models have maximum token limits for input and output. + +Visit the following resources to learn more: + +- [@article@Understanding tokens - Microsoft Learn](https://learn.microsoft.com/en-us/dotnet/ai/conceptual/understanding-tokens) +- [@article@What Are Tokens in LLMs and Why They Matter - LLM Guides](https://llmguides.ai/learn/what-are-tokens/) diff --git a/src/data/roadmaps/prompt-engineering/content/top-k@FF8ai1v5GDzxXLQhpwuPj.md b/src/data/roadmaps/prompt-engineering/content/top-k@FF8ai1v5GDzxXLQhpwuPj.md index 4ddde7af3f31..dbfd6699d28f 100644 --- a/src/data/roadmaps/prompt-engineering/content/top-k@FF8ai1v5GDzxXLQhpwuPj.md +++ b/src/data/roadmaps/prompt-engineering/content/top-k@FF8ai1v5GDzxXLQhpwuPj.md @@ -1,3 +1,8 @@ # Top-K -Top-K restricts token selection to the K most likely tokens from the probability distribution. 
Low values (1-10) produce conservative, factual outputs. Medium values (20-50) balance creativity and quality. High values (50+) enable diverse, creative outputs. Use low K for technical tasks, high K for creative writing. \ No newline at end of file +Top-K restricts token selection to the K most likely tokens from the probability distribution. Low values (1-10) produce conservative, factual outputs. Medium values (20-50) balance creativity and quality. High values (50+) enable diverse, creative outputs. Use low K for technical tasks, high K for creative writing. + +Visit the following resources to learn more: + +- [@official@Gemini API Prompting Strategies - Google](https://ai.google.dev/gemini-api/docs/prompting-strategies) +- [@article@Top K - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/top-k) diff --git a/src/data/roadmaps/prompt-engineering/content/top-p@-G1U1jDN5st1fTUtQmMl1.md b/src/data/roadmaps/prompt-engineering/content/top-p@-G1U1jDN5st1fTUtQmMl1.md index 7f0edff4ef0b..4cbf7fce3631 100644 --- a/src/data/roadmaps/prompt-engineering/content/top-p@-G1U1jDN5st1fTUtQmMl1.md +++ b/src/data/roadmaps/prompt-engineering/content/top-p@-G1U1jDN5st1fTUtQmMl1.md @@ -1,3 +1,7 @@ # Top-P -Top-P (nucleus sampling) selects tokens from the smallest set whose cumulative probability exceeds threshold P. Unlike Top-K's fixed number, Top-P dynamically adjusts based on probability distribution. Low values (0.1-0.5) produce focused outputs, medium (0.6-0.9) balance creativity and coherence, high (0.9-0.99) enable creative diversity. \ No newline at end of file +Top-P (nucleus sampling) selects tokens from the smallest set whose cumulative probability exceeds threshold P. Unlike Top-K's fixed number, Top-P dynamically adjusts based on probability distribution. Low values (0.1-0.5) produce focused outputs, medium (0.6-0.9) balance creativity and coherence, high (0.9-0.99) enable creative diversity. + +Visit the following resources to learn more: + +- [@article@Top P - LLM Parameter Guide - Vellum](https://www.vellum.ai/llm-parameters/top-p) diff --git a/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md b/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md index 29b17f86b4a5..24c711dd7bc3 100644 --- a/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md +++ b/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md @@ -1,3 +1,7 @@ # Tree of Thoughts (ToT) Prompting -Tree of Thoughts (ToT) generalizes Chain of Thought by allowing LLMs to explore multiple reasoning paths simultaneously rather than following a single linear chain. This approach maintains a tree structure where each thought represents a coherent step toward solving a problem, enabling the model to branch out and explore different reasoning directions. ToT is particularly effective for complex tasks requiring exploration and is well-suited for problems that benefit from considering multiple solution approaches before converging on the best answer. \ No newline at end of file +Tree of Thoughts (ToT) generalizes Chain of Thought by allowing LLMs to explore multiple reasoning paths simultaneously rather than following a single linear chain. This approach maintains a tree structure where each thought represents a coherent step toward solving a problem, enabling the model to branch out and explore different reasoning directions. 
diff --git a/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md b/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md
index 29b17f86b4a5..24c711dd7bc3 100644
--- a/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md
+++ b/src/data/roadmaps/prompt-engineering/content/tree-of-thoughts-tot-prompting@ob9D0W9B9145Da64nbi1M.md
@@ -1,3 +1,7 @@
 # Tree of Thoughts (ToT) Prompting
 
-Tree of Thoughts (ToT) generalizes Chain of Thought by allowing LLMs to explore multiple reasoning paths simultaneously rather than following a single linear chain. This approach maintains a tree structure where each thought represents a coherent step toward solving a problem, enabling the model to branch out and explore different reasoning directions. 
ToT is particularly effective for complex tasks requiring exploration and is well-suited for problems that benefit from considering multiple solution approaches before converging on the best answer.
\ No newline at end of file
+Tree of Thoughts (ToT) generalizes Chain of Thought by allowing LLMs to explore multiple reasoning paths simultaneously rather than following a single linear chain. This approach maintains a tree structure where each thought represents a coherent step toward solving a problem, enabling the model to branch out and explore different reasoning directions. ToT is particularly effective for complex tasks requiring exploration and is well-suited for problems that benefit from considering multiple solution approaches before converging on the best answer.
+
+Visit the following resources to learn more:
+
+- [@article@Tree of Thoughts - DAIR.AI](https://www.promptingguide.ai/techniques/tot)
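+
+A minimal sketch of the search idea, where `generate_thoughts` and `score_thought` stand in for real LLM calls (the stubs below are illustrative placeholders so the sketch runs without an API key):
+
+```python
+from typing import Callable
+
+def tot_search(
+    problem: str,
+    generate_thoughts: Callable[[str, list[str]], list[str]],  # propose next steps
+    score_thought: Callable[[str, list[str]], float],          # rate a partial path
+    breadth: int = 3,  # paths kept per level
+    depth: int = 3,    # reasoning steps explored
+) -> list[str]:
+    # Breadth-first search over partial reasoning paths: expand every kept
+    # path with candidate thoughts, score the results, prune to the best few.
+    frontier: list[list[str]] = [[]]
+    for _ in range(depth):
+        candidates = [
+            path + [thought]
+            for path in frontier
+            for thought in generate_thoughts(problem, path)
+        ]
+        candidates.sort(key=lambda p: score_thought(problem, p), reverse=True)
+        frontier = candidates[:breadth]
+    return frontier[0]  # highest-scoring reasoning path
+
+# Toy stand-ins for the two LLM calls.
+demo_gen = lambda prob, path: [f"step {len(path) + 1}a", f"step {len(path) + 1}b"]
+demo_score = lambda prob, path: float(len("".join(path)) % 7)
+print(tot_search("Game of 24 with 4 9 10 13", demo_gen, demo_score))
+```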
diff --git a/src/data/roadmaps/prompt-engineering/content/what-is-a-prompt@i4ijY3T5gLgNz0XqRipXe.md b/src/data/roadmaps/prompt-engineering/content/what-is-a-prompt@i4ijY3T5gLgNz0XqRipXe.md
index 78304cb0f55a..9d3c77ceb382 100644
--- a/src/data/roadmaps/prompt-engineering/content/what-is-a-prompt@i4ijY3T5gLgNz0XqRipXe.md
+++ b/src/data/roadmaps/prompt-engineering/content/what-is-a-prompt@i4ijY3T5gLgNz0XqRipXe.md
@@ -1,3 +1,8 @@
 # What is a Prompt?
 
-A prompt is an input provided to a Large Language Model (LLM) to generate a response or prediction. It serves as the instruction or context that guides the AI model's output generation process. Effective prompts are clear, specific, well-structured, and goal-oriented, directly affecting the accuracy and relevance of AI responses.
\ No newline at end of file
+A prompt is an input provided to a Large Language Model (LLM) to generate a response or prediction. It serves as the instruction or context that guides the AI model's output generation process. Effective prompts are clear, specific, well-structured, and goal-oriented, directly affecting the accuracy and relevance of AI responses.
+
+Visit the following resources to learn more:
+
+- [@article@Basics of Prompting - DAIR.AI](https://www.promptingguide.ai/introduction/basics)
+- [@article@Prompt Elements - DAIR.AI](https://www.promptingguide.ai/introduction/elements)
diff --git a/src/data/roadmaps/prompt-engineering/content/what-is-prompt-engineering@43drPbTwPqJQPyzwYUdBT.md b/src/data/roadmaps/prompt-engineering/content/what-is-prompt-engineering@43drPbTwPqJQPyzwYUdBT.md
index ff6a3a053f43..e584388b6511 100644
--- a/src/data/roadmaps/prompt-engineering/content/what-is-prompt-engineering@43drPbTwPqJQPyzwYUdBT.md
+++ b/src/data/roadmaps/prompt-engineering/content/what-is-prompt-engineering@43drPbTwPqJQPyzwYUdBT.md
@@ -1,5 +1,9 @@
 # What is Prompt Engineering?
 
+Prompt engineering is the practice of designing effective inputs for large language models to achieve desired outputs. It covers techniques like few-shot prompting, chain-of-thought, and parameter tuning. No programming background is required, making it a universal skill for anyone working with AI.
+
 Visit the following resources to learn more:
 
-- [@video@RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models](https://youtu.be/zYGDpG-pTho?si=yov4dDrcsHBAkey-&t=522)
\ No newline at end of file
+- [@article@Prompt engineering - Wikipedia](https://en.wikipedia.org/wiki/Prompt_engineering)
+- [@article@Introduction to Prompt Engineering - LearnPrompting](https://learnprompting.org/docs/basics/prompt_engineering)
+- [@video@RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models](https://youtu.be/zYGDpG-pTho?si=yov4dDrcsHBAkey-&t=522)
diff --git a/src/data/roadmaps/prompt-engineering/content/xai@3wshuH7_DXgbhxsLzzI4D.md b/src/data/roadmaps/prompt-engineering/content/xai@3wshuH7_DXgbhxsLzzI4D.md
index dbeadae00924..1e231b8b3ebb 100644
--- a/src/data/roadmaps/prompt-engineering/content/xai@3wshuH7_DXgbhxsLzzI4D.md
+++ b/src/data/roadmaps/prompt-engineering/content/xai@3wshuH7_DXgbhxsLzzI4D.md
@@ -1,3 +1,8 @@
 # xAI
 
-xAI is Elon Musk's AI company that created Grok, a conversational AI model trained on web data with a focus on real-time information and humor. Grok aims to be more truthful and less politically correct than other models. For prompt engineering, xAI offers unique capabilities in accessing current events and generating responses with a distinctive conversational style.
\ No newline at end of file
+xAI develops Grok, a conversational AI model with real-time web access and integration with X (Twitter). The latest model, Grok 4.20, features a 2M token context window, agentic tool calling, and low hallucination rates. Grok focuses on delivering truthful, unfiltered responses with strict prompt adherence.
+
+Visit the following resources to learn more:
+
+- [@official@xAI Documentation](https://docs.x.ai/)
+- [@official@xAI API Console](https://console.x.ai)
diff --git a/src/data/roadmaps/prompt-engineering/content/zero-shot-prompting@GRerL9UXN73TwpCW2eTIE.md b/src/data/roadmaps/prompt-engineering/content/zero-shot-prompting@GRerL9UXN73TwpCW2eTIE.md
index 0d24c32901ba..af9297005326 100644
--- a/src/data/roadmaps/prompt-engineering/content/zero-shot-prompting@GRerL9UXN73TwpCW2eTIE.md
+++ b/src/data/roadmaps/prompt-engineering/content/zero-shot-prompting@GRerL9UXN73TwpCW2eTIE.md
@@ -1,3 +1,8 @@
 # Zero-Shot Prompting
 
-Zero-shot prompting provides only a task description without examples, relying on the model's training patterns. Simply describe the task clearly, provide input data, and optionally specify output format. Works well for simple classification, text generation, and Q&A, but may produce inconsistent results for complex tasks.
\ No newline at end of file
+Zero-shot prompting provides only a task description without examples, relying on the model's training patterns. Simply describe the task clearly, provide input data, and optionally specify the output format. It works well for simple classification, text generation, and Q&A, but may produce inconsistent results for complex tasks.
+
+Visit the following resources to learn more:
+
+- [@article@Zero-Shot Prompting - DAIR.AI](https://www.promptingguide.ai/techniques/zeroshot)
+- [@article@Introduction to Zero-Shot Techniques - LearnPrompting](https://learnprompting.org/docs/advanced/zero_shot/introduction)
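+
+As a concrete illustration, here is a minimal sketch of a zero-shot prompt. The helper and the sentiment task are illustrative; any chat-completion API could consume the resulting string:
+
+```python
+# Zero-shot prompt: a clear task description and output format, no examples.
+def build_zero_shot_prompt(review: str) -> str:
+    return (
+        "Classify the sentiment of the movie review below.\n"
+        "Respond with exactly one word: positive, negative, or neutral.\n\n"
+        f"Review: {review}\n"
+        "Sentiment:"
+    )
+
+print(build_zero_shot_prompt("A slow start, but the finale is unforgettable."))
+```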