Creating Single- and Multi-Agent Systems
In previous chapters, we discussed a number of components and tools that can be combined with LLMs to extend their capabilities. In Chapters 5 and 6, we addressed in detail how external memory can be used to enrich the context. This gives the model additional information with which to answer user questions it could not otherwise answer, either because it never saw the relevant document during pre-training or because the information postdates its training cutoff. Similarly, in Chapter 7, we saw that knowledge graphs can be used to extend the model’s knowledge. These components attempt to address one of the most problematic limitations of LLMs, namely hallucinations (outputs produced by the model that are not factually correct). In addition, we saw that the use of graphs enables the model to conduct graph reasoning and thus adds new capabilities.
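To make the idea of context enrichment concrete, the following is a minimal sketch (not the implementation from Chapters 5 and 6) of the pattern: retrieve the passage from external memory that is most relevant to the question and place it in the prompt before querying the model. The document store, the toy word-overlap scoring function, and the prompt template are illustrative assumptions.

```python
def score(question: str, passage: str) -> int:
    """Toy relevance score: count words shared by question and passage."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question: str, memory: list[str]) -> str:
    """Return the passage from external memory most similar to the question."""
    return max(memory, key=lambda passage: score(question, passage))

def build_prompt(question: str, memory: list[str]) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = retrieve(question, memory)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Example with a fact the model could not have seen during pre-training
# (the "Acme toolkit" release note below is entirely hypothetical).
memory = [
    "Release 9.4 of the hypothetical Acme toolkit shipped in 2031.",
    "Knowledge graphs store entities and relations as nodes and edges.",
]
print(build_prompt("When did Acme release version 9.4?", memory))
```

In practice, the keyword-overlap scorer would be replaced by the embedding-based retrieval discussed in Chapters 5 and 6, but the flow, namely retrieve, augment the context, then generate, is the same.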
In Chapter 8, we saw the intersection of RL and LLMs. One of the problems associated...