Naive RAG (NRAG)
Foundational "Retrieve-Read" framework following the traditional indexing, retrieval, and generation process
30-Second Overview
Pattern: Simple retrieve-then-read pipeline: query → retrieve documents → concatenate → generate
Why: Provides external knowledge access with minimal complexity; this foundational approach was established by Lewis et al. (2020)
Key Insight: The top-k retrieved documents are concatenated directly onto the query prompt, with no re-ranking, filtering, or other post-processing
Quick Implementation
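The retrieve-then-read pipeline above can be sketched end to end. This is a minimal, self-contained illustration, not a production implementation: the bag-of-words `embed()` and cosine ranking stand in for a real embedding model and vector index, and the final prompt would normally be sent to an LLM rather than printed.

```python
# Minimal Naive RAG sketch: index -> retrieve top-k -> concatenate -> generate.
# embed() is a toy bag-of-words vectorizer so the example runs offline;
# swap in a real embedding model and vector store in practice.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase word counts (stand-in for a dense encoder).
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query and keep the top k.
    qv = embed(query)
    return sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    # Naive RAG's defining step: direct concatenation of retrieved passages,
    # with no re-ranking, compression, or filtering.
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is located in Paris, France.",
    "Python is a popular programming language.",
    "Paris is the capital of France.",
]
query = "Where is the Eiffel Tower?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # the prompt would be passed to an LLM in a real pipeline
```

The only moving parts are the retriever and the prompt template, which is exactly why this pattern is the usual starting point before adding re-ranking or query rewriting.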
Do's & Don'ts
When to Use
Use When
- Simple Q&A over documents
- Proof-of-concept RAG systems
- Projects where implementation complexity must stay low
- Small to medium knowledge bases
- Straightforward factual queries
Avoid When
- Complex multi-hop reasoning is required
- Accuracy-critical applications
- Large-scale production systems
- Noisy or contradictory knowledge bases
- Real-time performance requirements
Key Metrics
Top Use Cases
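A common way to score the retrieval stage of a Naive RAG system is recall@k: the fraction of queries whose gold (relevant) document appears in the top-k retrieved set. The query and document IDs below are hypothetical, purely to make the calculation concrete.

```python
# recall@k: fraction of queries whose gold document is in the top-k results.
# The retrieved rankings and gold labels here are made-up illustration data.
def recall_at_k(retrieved: dict[str, list[str]], gold: dict[str, str], k: int) -> float:
    hits = sum(1 for q, docs in retrieved.items() if gold[q] in docs[:k])
    return hits / len(retrieved)

retrieved = {
    "q1": ["d3", "d1", "d7"],  # ranked document IDs for query q1
    "q2": ["d2", "d9", "d4"],
}
gold = {"q1": "d1", "q2": "d5"}
print(recall_at_k(retrieved, gold, k=2))  # d1 is in q1's top-2; d5 misses -> 0.5
```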
References & Further Reading
Deepen your understanding with these curated resources
Foundational Papers
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020)
Dense Passage Retrieval for Open-Domain Question Answering (Karpukhin et al., 2020)
RAG vs FiD: Comparing Retrieval-Augmented Generation Models (Izacard & Grave, 2021)
Leveraging Passage Retrieval with Generative Models (Izacard et al., 2022)
Comprehensive Surveys
Retrieval-Augmented Generation for Large Language Models: A Survey (Gao et al., 2023)
A Comprehensive Survey of RAG: Evolution and Future Directions (Gupta et al., 2024)
Retrieval-Augmented Generation for AI-Generated Content: A Survey (Li et al., 2024)
RAG and RAU: A Survey on Retrieval-Augmented Language Model (Zhao et al., 2024)
Contribute to this collection
Know a great resource? Submit a pull request to add it.