🧠 AI / LLM
Hallucination
▸ EN
When a model confidently makes up information — nonexistent libraries, fake APIs, imaginary function signatures.
▸ TR
Modelin kendinden emin bir tonla uydurma bilgi üretmesi. Var olmayan kütüphane, sahte API, hayali fonksiyon imzası.
▸ RELATED TERMS
Slop
Low-quality, generic, soulless content produced by AI. Often loaded with typical LLM tics: "It's important to note...", emoji overload, needless padding.
▸ CULTURE
RAG
Retrieval-Augmented Generation. Fetching relevant docs into the context before the model answers. Reduces hallucinations.
▸ AI / LLM
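The retrieve-then-answer flow can be sketched in a few lines. This is a toy illustration, not any real framework: the corpus, the keyword-overlap scoring, and the function names are all assumptions made up for the example.

```python
# Minimal RAG sketch: rank documents against the query,
# then stuff the top hits into the prompt as context.

CORPUS = [
    "RAG fetches relevant documents into the context window.",
    "Hallucination is when a model invents nonexistent APIs.",
    "Guardrails filter what a model may say.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context so the model answers from it."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")

print(build_prompt("What does RAG fetch?", CORPUS))
```

In a real system the keyword scorer would be replaced by embedding similarity over a vector index; the prompt-assembly step stays essentially the same.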
Guardrails
Safety layer defining what the model can't do or say. Enforced in the prompt, the system prompt, or via pre/post filters on input and output.
▸ SECURITY