Large Language Models (LLMs) are remarkable at compressing knowledge about the world into their billions of parameters.
However, LLMs have two major limitations: their knowledge is frozen at the time of their last training run, and they sometimes make up facts (hallucinate) when asked specific questions.
Using Retrieval-Augmented Generation (RAG), we can give a pre-trained LLM access to very specific information as additional context when answering our questions.
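In essence, RAG means retrieving relevant text and inserting it into the prompt before the model generates an answer. Here is a minimal sketch of that idea; the `build_rag_prompt` helper and its prompt wording are my own illustrative choices, not part of any library, and the retrieval step that produces the chunks is what we build in the rest of this article:

```python
def build_rag_prompt(question: str, context_chunks: list[str]) -> str:
    """Prepend retrieved text chunks to the question as extra context."""
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```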
In this article, I will walk through the theory and practice of augmenting Google's Gemma LLM with RAG capabilities using the Hugging Face transformers library, LangChain, and the Faiss vector database.
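To preview how these pieces fit together, here is a compact end-to-end sketch, assuming a classic `langchain` install alongside `transformers`, `sentence-transformers`, and `faiss-cpu`. The model names, source file, and chunk sizes are illustrative assumptions, and each stage is covered in detail below. Note that the Gemma weights are gated on the Hugging Face Hub, so Google's license must be accepted there first.

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from transformers import pipeline

# Index: split a local document into chunks and embed them into Faiss.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_text(open("knowledge.txt").read())  # hypothetical source file
db = FAISS.from_texts(
    chunks,
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
)

# Retrieve: find the chunks most similar to the question.
question = "What does the document say about the topic?"
context = "\n\n".join(
    doc.page_content for doc in db.similarity_search(question, k=3)
)

# Generate: pass the retrieved context plus the question to Gemma.
generator = pipeline("text-generation", model="google/gemma-2b-it")
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```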
The figure below gives an overview of the RAG pipeline, which we will implement step by step.