Building AI Agents with LLMs, RAG, and Knowledge Graphs
A practical guide to autonomous and modern AI agents
Product type: Paperback
Published: Jul 2025
Publisher: Packt
ISBN-13: 9781835087060
Length: 560 pages
Edition: 1st Edition
Authors (2):
Arrow left icon
Salvatore Raieli Salvatore Raieli
Author Profile Icon Salvatore Raieli
Salvatore Raieli
Gabriele Iuculano Gabriele Iuculano
Author Profile Icon Gabriele Iuculano
Gabriele Iuculano
Arrow right icon
View More author details
Table of Contents (17 chapters)

Preface
Part 1: The AI Agent Engine: From Text to Large Language Models
Chapter 1: Analyzing Text Data with Deep Learning
Chapter 2: The Transformer: The Model Behind the Modern AI Revolution
Chapter 3: Exploring LLMs as a Powerful AI Engine
Part 2: AI Agents and Retrieval of Knowledge
Chapter 4: Building a Web Scraping Agent with an LLM
Chapter 5: Extending Your Agent with RAG to Prevent Hallucinations
Chapter 6: Advanced RAG Techniques for Information Retrieval and Augmentation
Chapter 7: Creating and Connecting a Knowledge Graph to an AI Agent
Chapter 8: Reinforcement Learning and AI Agents
Part 3: Creating Sophisticated AI to Solve Complex Scenarios
Chapter 9: Creating Single- and Multi-Agent Systems
Chapter 10: Building an AI Agent Application
Chapter 11: The Future Ahead
Index
Other Books You May Enjoy

Instruction tuning, fine-tuning, and alignment

Fine-tuning such large models is potentially very expensive. In classical fine-tuning, the idea is to adapt the weights of a model to a task or a new domain. Even a slight update of the weights for a few steps means, for a model of more than 100 billion parameters, large hardware infrastructure and significant costs. We therefore need a method that allows efficient, low-cost fine-tuning, preferably keeping the model weights frozen.
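As a rough back-of-the-envelope illustration (the figures here are assumptions, not from the chapter): full fine-tuning with Adam typically needs on the order of 16 bytes per parameter once gradients and optimizer states are counted, so a 100-billion-parameter model requires terabytes of accelerator memory before a single batch is processed.

```python
# Illustrative estimate only; byte counts assume fp16 weights/gradients and
# fp32 Adam states (master weights, momentum, variance), ~16 bytes/parameter.
params = 100e9                  # 100B-parameter model
bytes_per_param = 2 + 2 + 12    # fp16 weights + fp16 grads + fp32 optimizer states
total_tb = params * bytes_per_param / 1e12
print(f"~{total_tb:.1f} TB of accelerator memory for full fine-tuning")  # ~1.6 TB
```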

The intrinsic rank hypothesis suggests that the significant changes that occur in a neural network can be captured with a lower-dimensional representation. In the case of fine-tuning, the model weights after fine-tuning can be defined as follows:

$Y = W'X \quad \text{with} \quad W' = W + \Delta W$

Here, ∆W represents the update of the weights during fine-tuning. According to the intrinsic rank hypothesis, not all of the elements of ∆W are important; instead, we can represent...
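The excerpt is cut off here. As a minimal sketch of where this argument leads (the low-rank decomposition popularized as LoRA; this is an assumption about the continuation, not the authors' exact formulation), ∆W can be approximated as the product of two small matrices B and A of rank r, so only the low-rank factors are trained while W stays frozen:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    Y = W'X with W' = W + (alpha / r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # pretrained W stays frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # low-rank factor A (r x d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))         # B starts at zero, so ∆W = 0 at init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base(x) computes WX; the second term adds the low-rank correction ∆WX = B(AX)
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

# For a 4096x4096 layer, full fine-tuning updates ~16.8M weights; with r=8,
# only 2 * 8 * 4096 = 65,536 parameters are trained (about 0.4%).
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65536
```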
