LLM Design Patterns

You're reading from LLM Design Patterns: A Practical Guide to Building Robust and Efficient AI Systems

Product type Paperback
Published in May 2025
Publisher Packt
ISBN-13 9781836207030
Length 534 pages
Edition 1st Edition
Author (1): Ken Huang
Table of Contents

Preface

Part 1: Introduction and Data Preparation
Chapter 1: Introduction to LLM Design Patterns
Chapter 2: Data Cleaning for LLM Training
Chapter 3: Data Augmentation
Chapter 4: Handling Large Datasets for LLM Training
Chapter 5: Data Versioning
Chapter 6: Dataset Annotation and Labeling

Part 2: Training and Optimization of Large Language Models
Chapter 7: Training Pipeline
Chapter 8: Hyperparameter Tuning
Chapter 9: Regularization
Chapter 10: Checkpointing and Recovery
Chapter 11: Fine-Tuning
Chapter 12: Model Pruning
Chapter 13: Quantization

Part 3: Evaluation and Interpretation of Large Language Models
Chapter 14: Evaluation Metrics
Chapter 15: Cross-Validation
Chapter 16: Interpretability
Chapter 17: Fairness and Bias Detection
Chapter 18: Adversarial Robustness
Chapter 19: Reinforcement Learning from Human Feedback

Part 4: Advanced Prompt Engineering Techniques
Chapter 20: Chain-of-Thought Prompting
Chapter 21: Tree-of-Thoughts Prompting
Chapter 22: Reasoning and Acting
Chapter 23: Reasoning WithOut Observation
Chapter 24: Reflection Techniques
Chapter 25: Automatic Multi-Step Reasoning and Tool Use

Part 5: Retrieval and Knowledge Integration in Large Language Models
Chapter 26: Retrieval-Augmented Generation
Chapter 27: Graph-Based RAG
Chapter 28: Advanced RAG
Chapter 29: Evaluating RAG Systems
Chapter 30: Agentic Patterns

Index
Other Books You May Enjoy

What this book covers

Chapter 1, Introduction to LLM Design Patterns, provides a foundational understanding of LLMs and introduces the critical role of design patterns in their development.

Chapter 2, Data Cleaning for LLM Training, equips you with practical tools and techniques for cleaning your data effectively for LLM training.
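
As a flavor of what such cleaning involves, here is a minimal, illustrative sketch (not the book's own pipeline) that applies three common steps with only the standard library: whitespace normalization, dropping near-empty fragments, and exact-match deduplication.

```python
import re

def clean_corpus(docs):
    """Normalize whitespace, drop very short documents, and remove exact duplicates."""
    seen = set()
    cleaned = []
    for doc in docs:
        text = re.sub(r"\s+", " ", doc).strip()   # collapse runs of whitespace
        if len(text) < 10:                        # drop near-empty fragments (threshold is arbitrary)
            continue
        if text in seen:                          # exact-match deduplication
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned

corpus = ["Hello   world!\n", "Hello world!", "ok", "A longer, useful training document."]
cleaned = clean_corpus(corpus)
```

Real pipelines add near-duplicate detection (e.g. MinHash) and quality filters on top of steps like these.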

Chapter 3, Data Augmentation, helps you understand the data augmentation pattern in depth, from increasing the diversity of your training dataset to maintaining its integrity.
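
Two of the simplest text-augmentation operations are random token swap and random token deletion. The sketch below is a hypothetical, seeded illustration of that idea, not code from the chapter:

```python
import random

def augment(sentence, seed=0):
    """Create simple token-level variants: one random swap and one random deletion."""
    rng = random.Random(seed)
    tokens = sentence.split()
    # Random swap: exchange two token positions.
    i, j = rng.sample(range(len(tokens)), 2)
    swapped = tokens[:]
    swapped[i], swapped[j] = swapped[j], swapped[i]
    # Random deletion: drop one token.
    k = rng.randrange(len(tokens))
    deleted = tokens[:k] + tokens[k + 1:]
    return [" ".join(swapped), " ".join(deleted)]

variants = augment("large language models learn from data")
```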

Chapter 4, Handling Large Datasets for LLM Training, teaches you advanced techniques for managing and processing the massive datasets needed to train state-of-the-art LLMs.

Chapter 5, Data Versioning, shows you how to implement effective data versioning strategies for LLM development.
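
One common building block for data versioning is deriving a version identifier from the dataset's content, so that any change to the data changes the version. A minimal sketch of that idea (tools such as DVC do this at scale):

```python
import hashlib
import json

def dataset_version(records):
    """Derive a deterministic version ID from dataset content via SHA-256."""
    payload = json.dumps(records, sort_keys=True, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]   # short content hash

v1 = dataset_version([{"text": "hello"}, {"text": "world"}])
v2 = dataset_version([{"text": "hello"}, {"text": "world!"}])  # one character changed
```

Identical content always maps to the same ID, while any edit, however small, produces a new one.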

Chapter 6, Dataset Annotation and Labeling, lets you explore advanced techniques for creating well-annotated datasets that can significantly impact your LLM’s performance across various tasks.
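
A standard way to check annotation quality is inter-annotator agreement. As an illustrative example (not the chapter's code), Cohen's kappa for two annotators can be computed directly:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Chance agreement: probability both annotators pick the same label independently.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance.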

Chapter 7, Training Pipeline, helps you understand the key components of an LLM training pipeline, from data ingestion and preprocessing to model architecture and optimization strategies.

Chapter 8, Hyperparameter Tuning, explains what the hyperparameters of LLMs are and presents strategies for optimizing them efficiently.
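
One of the simplest tuning strategies is random search. The sketch below is purely illustrative: `toy_loss` is a stand-in for a real validation-loss measurement, and the search space is invented for the example.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Randomly sample hyperparameter configurations and keep the best one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = objective(cfg)                     # lower is better (e.g. validation loss)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

def toy_loss(cfg):
    # Hypothetical objective standing in for an actual training run.
    return (cfg["lr"] - 3e-4) ** 2 + 0.01 * cfg["layers"]

space = {"lr": [1e-4, 3e-4, 1e-3], "layers": [2, 4, 8]}
best, loss = random_search(toy_loss, space, n_trials=30)
```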

Chapter 9, Regularization, shows you different regularization techniques that are specifically tailored to LLMs.

Chapter 10, Checkpointing and Recovery, outlines strategies for determining optimal checkpoint frequency, efficient storage formats for large models, and techniques for recovering from various types of failures.
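
The core mechanics of checkpointing can be sketched in a few lines: write the checkpoint atomically (so a crash mid-write never leaves a corrupt file) and resume from it if it exists. This is a simplified illustration using JSON; real LLM checkpoints use binary formats and save optimizer state as well.

```python
import json
import os
import tempfile

def save_checkpoint(path, step, state):
    """Atomically write a training checkpoint (write temp file, then rename)."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)                 # atomic rename: no partial checkpoints

def load_checkpoint(path):
    """Resume from a checkpoint if one exists; otherwise start from scratch."""
    if not os.path.exists(path):
        return 0, {}
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

path = os.path.join(tempfile.gettempdir(), "demo_ckpt.json")
save_checkpoint(path, 100, {"lr": 0.001})
step, state = load_checkpoint(path)
```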

Chapter 11, Fine-Tuning, teaches you effective strategies for fine-tuning pre-trained language models.

Chapter 12, Model Pruning, lets you explore model pruning techniques, designed to reduce model size while maintaining performance.
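
The most common baseline is magnitude pruning: zero out the weights with the smallest absolute values. A minimal pure-Python sketch of the idea (frameworks operate on tensors, but the logic is the same):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else -1.0
    # Note: ties at the threshold may prune slightly more than `sparsity`.
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

w = [[0.9, -0.01], [0.05, -0.8]]
pruned = magnitude_prune(w, 0.5)   # the two smallest weights are zeroed
```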

Chapter 13, Quantization, gives you a look into quantization methods that can optimize LLMs for deployment on resource-constrained devices.
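
As a taste of the underlying arithmetic, here is a hypothetical sketch of affine (asymmetric) 8-bit quantization: map a float range onto the integers 0-255 with a scale and zero point, then map back with a small rounding error.

```python
def quantize_int8(values):
    """Affine quantization of floats to unsigned 8-bit integers."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0        # guard against a constant tensor
    zero_point = lo
    q = [round((v - zero_point) / scale) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate floats."""
    return [qi * scale + zero_point for qi in q]

vals = [-1.0, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(vals)
approx = dequantize(q, s, z)          # each value recovered within half a scale step
```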

Chapter 14, Evaluation Metrics, explores the most recent and commonly used benchmarks for evaluating LLMs across various domains.
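
One widely used generation metric is token-overlap F1 (popularized by QA benchmarks such as SQuAD). A compact, illustrative implementation:

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)     # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```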

Chapter 15, Cross-Validation, introduces cross-validation strategies specifically designed for LLMs.
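
The mechanics of a plain k-fold split, which the LLM-specific strategies build on, can be sketched as follows (indices only; shuffling and stratification are omitted for clarity):

```python
def k_fold_splits(n_examples, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    indices = list(range(n_examples))
    fold_size, remainder = divmod(n_examples, k)
    start = 0
    for fold in range(k):
        # Early folds absorb the remainder so every example lands in exactly one fold.
        size = fold_size + (1 if fold < remainder else 0)
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

splits = list(k_fold_splits(10, 3))
```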

Chapter 16, Interpretability, helps you understand interpretability in LLMs: the ability to explain how a model processes inputs and generates outputs.

Chapter 17, Fairness and Bias Detection, demonstrates that fairness in LLMs involves ensuring that the model’s outputs and decisions do not discriminate against or unfairly treat individuals or groups based on protected attributes.

Chapter 18, Adversarial Robustness, helps you understand that adversarial attacks on LLMs are designed to manipulate the model’s output by making small, often imperceptible changes to the input.

Chapter 19, Reinforcement Learning from Human Feedback, takes you through a powerful technique for aligning LLMs with human preferences.

Chapter 20, Chain-of-Thought Prompting, demonstrates how you can leverage chain-of-thought prompting to improve your LLM’s performance on complex reasoning tasks.
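
In practice, chain-of-thought prompting comes down to prompt construction: show the model a worked example whose answer spells out the intermediate reasoning, then ask it to do the same. A minimal, illustrative template (the exemplar is invented for this sketch):

```python
def chain_of_thought_prompt(question):
    """Build a few-shot prompt whose exemplar spells out intermediate reasoning."""
    exemplar = (
        "Q: A shop sells pens at $2 each. How much do 3 pens cost?\n"
        "A: Each pen costs $2. For 3 pens, that is 3 x 2 = 6. The answer is $6.\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."

prompt = chain_of_thought_prompt("If a train travels 60 km in 1 hour, how far in 2.5 hours?")
```

The trailing "Let's think step by step." nudges the model to emit its reasoning before the final answer.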

Chapter 21, Tree-of-Thoughts Prompting, allows you to implement tree-of-thoughts prompting to tackle complex reasoning tasks with your LLMs.

Chapter 22, Reasoning and Acting, teaches you about the ReAct framework, a powerful technique for prompting your LLMs to not only reason through complex scenarios but also plan and simulate the execution of actions, similar to how humans operate in the real world.

Chapter 23, Reasoning WithOut Observation, teaches you the framework for providing LLMs with the ability to reason about hypothetical situations and leverage external tools effectively.

Chapter 24, Reflection Techniques, demonstrates reflection in LLMs, which refers to a model’s ability to analyze, evaluate, and improve its own outputs.

Chapter 25, Automatic Multi-Step Reasoning and Tool Use, helps you understand how automatic multi-step reasoning and tool use significantly expand the problem-solving capabilities of LLMs, enabling them to tackle complex, real-world tasks.

Chapter 26, Retrieval-Augmented Generation, takes you through a technique that enhances the performance of AI models, particularly in tasks that require knowledge or data not contained within the model's pre-trained parameters.
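
The core loop of RAG is: retrieve the most relevant documents for a query, then prepend them to the prompt as context. Here is a deliberately tiny sketch using bag-of-words cosine similarity in place of the learned embeddings a production system would use; the documents are invented for the example.

```python
import math
import re
from collections import Counter

def _tokens(text):
    """Bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, top_k=1):
    """Rank documents by similarity to the query and return the best matches."""
    q = _tokens(query)
    ranked = sorted(documents, key=lambda d: cosine(q, _tokens(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "The Eiffel Tower is in Paris and is 330 metres tall.",
    "Quantization reduces model size by lowering numeric precision.",
]
question = "How tall is the Eiffel Tower?"
context = retrieve(question, docs)[0]
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
```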

Chapter 27, Graph-Based RAG, shows how to leverage graph-structured knowledge in RAG for LLMs.

Chapter 28, Advanced RAG, demonstrates how you can move beyond basic RAG methods and explore more sophisticated techniques designed to enhance LLM performance across a wide range of tasks.

Chapter 29, Evaluating RAG Systems, equips you with the knowledge necessary to assess the ability of RAG systems to produce accurate, relevant, and factually grounded responses.

Chapter 30, Agentic Patterns, shows you how agentic AI systems built on LLMs can be designed to operate autonomously, make decisions, and take actions to achieve specified goals.
