Summary
This chapter covered poisoning attacks on typical LLM applications where we have no detailed control over the model. We focused on RAG embedding poisoning and fine-tuning poisoning as the two main attack vectors in LLM applications, regardless of where the model is hosted.
In the next chapter, we will look at poisoning as part of the LLM supply chain, along with other advanced adversarial attacks on LLMs.