🧠 AI / LLM
RAG
▸ EN
Retrieval-Augmented Generation. Fetching relevant docs into the context before the model answers. Reduces hallucinations.
▸ TR
Retrieval-Augmented Generation. Modele cevap vermeden önce ilgili belgeleri çekip context'e koyma tekniği. Hallucination'ı azaltır.
▸ EXAMPLES
- embed docs → store in pgvector → top-k cosine on query → stuff into prompt
- use Cohere reranker on retrieved chunks for higher precision
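The first pipeline above can be sketched end-to-end in a few lines. This is a toy, self-contained version: `embed` is a stand-in bag-of-words vectorizer over a made-up vocabulary (a real system would call an embedding model), and an in-memory list replaces pgvector; only the top-k cosine step is the real technique.

```python
import math

# Toy vocabulary for the stand-in embedder; a real system would use
# an embedding model's API instead of word counts.
VOCAB = ["cat", "dog", "pet", "car", "engine", "wheel"]

def embed(text: str) -> list[float]:
    # Stand-in embedding: word counts over the fixed vocabulary.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank stored docs by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "the cat is a pet",
    "the dog is a pet",
    "the car has an engine and a wheel",
]
context = top_k("is a cat a pet", docs)
# "Stuff into prompt": prepend the retrieved chunks as grounding context.
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The same shape scales up directly: swap `embed` for a real model and the list for a vector store; a reranker would re-score `context` before prompt assembly.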
▸ RELATED TERMS
Embedding
Turning text (or images) into numeric vectors representing meaning. The foundation of similarity search and RAG.
▸ AI / LLM
Context Window
The maximum number of tokens an LLM can "see" at once. When the window fills up, older messages must be truncated or summarized (compacted) to make room, so the model effectively forgets them.
▸ AI / LLM
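One common response to a full context window, sketched under simple assumptions: walk the conversation from newest to oldest and drop whatever no longer fits a token budget. `count_tokens` here is a crude word-count proxy; a real implementation would use the model's own tokenizer.

```python
def count_tokens(text: str) -> int:
    # Rough proxy: one token per whitespace-separated word.
    return len(text.split())

def trim_to_window(messages: list[str], budget: int) -> list[str]:
    kept: list[str] = []
    used = 0
    # Keep the most recent messages that fit; older ones fall off.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    kept.reverse()  # restore chronological order
    return kept

history = ["hello there", "explain RAG to me please", "sure here is a short answer"]
trimmed = trim_to_window(history, budget=11)
```

Compaction (summarizing the dropped prefix instead of discarding it) is the usual refinement of this drop-oldest strategy.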
Hallucination
When a model confidently makes up information — nonexistent libraries, fake APIs, imaginary function signatures.
▸ AI / LLM