Do Not Flip a Coin: When to Use RAG or Long Context LLMs

by Atul Dahikar | Jan 2025


Flip a coin. When it’s in the air, you’ll know which side you’re hoping for. — Arnold Rothstein
Large Language Models (LLMs) have shown strong capabilities in zero- and few-shot question answering. On the other hand, they hallucinate and lack both real-time information and domain-specific knowledge. One remedy is to add extra context to the prompt so the model can exploit it to answer correctly. Two approaches exist today:
* Long-context LLMs: models with a long context window, so you can place large amounts of text directly in the prompt.

* Retrieval-Augmented Generation (RAG): keep the knowledge in an external memory and retrieve only the relevant passages at query time (a minimal sketch follows this list).
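To make the distinction concrete, here is a minimal, self-contained sketch of the RAG idea. The bag-of-words `embed` function and the toy corpus are illustrative stand-ins of my own, not the paper's setup: a real pipeline would use a trained embedding model and a vector store, and would send the assembled prompt to an LLM.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: lowercase bag-of-words counts.
    # A real system would use a trained embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank the corpus by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical corpus, for illustration only.
docs = [
    "RAG retrieves passages from an external corpus at query time.",
    "Long-context LLMs accept very long prompts directly.",
    "Coin flips are a poor way to choose a system architecture.",
]
query = "How does retrieval augmented generation work?"
context = "\n".join(retrieve(query, docs))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # In a real pipeline, this prompt would be sent to the LLM.
```

The long-context alternative skips the `retrieve` step entirely and places the whole corpus in the prompt, trading retrieval errors for a much longer (and costlier) input.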
There is no consensus on which of the two approaches is better, especially since most studies compare them with little detail and only along particular dimensions. In this article, we discuss a recent paper that attempts an in-depth comparison.