
💡 WRAP-UP

Built a complete RAG system with:
✅ Optimized retrieval (k=3, 86.67% precision)
✅ Evaluated prompts (8.0/10 quality)
✅ Real-time monitoring (7 charts)
✅ Full Docker deployment
✅ Hallucination prevention

#LLMZOOMCAMP #BuildInPublic

🔄 REPRODUCIBILITY

Everything needed to run this:
📦 requirements.txt with pinned versions
🐳 Docker Compose for one-command deploy
📚 Complete documentation
🎯 Sample data included

Clone, configure API key, run. That's it!

#LLMZOOMCAMP

⏱️ PERFORMANCE NUMBERS

• Retrieval: < 1 second
• Processing: 1,400 chunks/min
• Batch size: 5,000 docs
• Dataset: 10+ technical books (15,354 chunks)

Fast enough for real-time queries!

#LLMZOOMCAMP #Performance

📥 SMART INGESTION

Auto-detects existing vector DB or creates new one
Handles PDFs + TXT files
Batch processing for large collections
Graceful error handling

Set it and forget it!

#LLMZOOMCAMP #DataEngineering
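A minimal sketch of that ingestion logic in plain Python. Directory names, extensions, and the batch size are illustrative; real PDF/TXT loading and embedding are left out:

```python
from pathlib import Path

def find_documents(data_dir: str, exts=(".pdf", ".txt")) -> list:
    """Collect every PDF and TXT file under data_dir, recursively."""
    root = Path(data_dir)
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in exts)

def needs_ingestion(db_dir: str) -> bool:
    """Auto-detect: reuse an existing vector DB directory, else build a new one."""
    db = Path(db_dir)
    return not (db.exists() and any(db.iterdir()))

def batched(items, batch_size=5000):
    """Yield fixed-size batches so large collections stay under API limits."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

Only if `needs_ingestion("chroma_db")` is true do you pay the embedding cost again; otherwise the app starts against the existing store.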

📈 EVALUATION FRAMEWORK

Retrieval: Precision + keyword relevance
LLM: Quality scoring (accuracy, depth, honesty)

Ran ~50 test queries across both evaluations.

Measure everything. Improve what matters.

#LLMZOOMCAMP #MLOps
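One way to read "precision + keyword relevance" as code; these metric definitions are an assumption about how the evaluation was scored, not the project's exact implementation:

```python
def precision_at_k(retrieved_ids, relevant_ids, k=3):
    """Fraction of the top-k retrieved chunks that are actually relevant."""
    top_k = retrieved_ids[:k]
    if not top_k:
        return 0.0
    relevant = set(relevant_ids)
    hits = sum(1 for doc_id in top_k if doc_id in relevant)
    return hits / len(top_k)

def keyword_relevance(text: str, expected_keywords) -> float:
    """Share of expected keywords that actually appear in the retrieved text."""
    if not expected_keywords:
        return 0.0
    lowered = text.lower()
    found = sum(1 for kw in expected_keywords if kw.lower() in lowered)
    return found / len(expected_keywords)
```

Averaging `precision_at_k` over the ~50 test queries gives the headline precision figure.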

πŸ›‘οΈ PREVENTING HALLUCINATIONS

Tested with out-of-scope questions.

System correctly says "I cannot tell you based on the provided context" instead of making things up.

Honesty > Confidence

#LLMZOOMCAMP #AIEthics
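This refusal behavior is typically enforced in the prompt itself. A sketch (the quoted refusal string comes from the post; the surrounding template wording is hypothetical):

```python
REFUSAL = "I cannot tell you based on the provided context"

PROMPT_TEMPLATE = """You are a technical assistant. Answer ONLY from the context below.
If the context does not contain the answer, reply exactly:
"{refusal}"

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(context: str, question: str) -> str:
    """Fill the template so the model is steered toward grounded answers."""
    return PROMPT_TEMPLATE.format(refusal=REFUSAL, context=context, question=question)
```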

🎨 USER EXPERIENCE

Two-tab Streamlit interface:
1. Q&A System with source previews
2. Analytics Dashboard

Auto-initialization on startup = zero-config for users

Good UX = better adoption!

#LLMZOOMCAMP #UX

SCALING CHALLENGES

Hit API limits at 15,000+ document chunks!

Solution: Batch processing (5,000 chunks/batch)
Result: ~1,400 chunks/min processing speed

Always plan for scale from day one.

#LLMZOOMCAMP #Scaling
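That batch loop might look like this; the retry/backoff handling is an added assumption (a common companion to batching against rate limits), not something the post describes:

```python
import time

def embed_in_batches(chunks, embed_fn, batch_size=5000, max_retries=3):
    """Embed a large chunk list batch-by-batch, retrying a failed batch
    with exponential backoff instead of aborting the whole run."""
    vectors = []
    for start in range(0, len(chunks), batch_size):
        batch = chunks[start:start + batch_size]
        for attempt in range(max_retries):
            try:
                vectors.extend(embed_fn(batch))
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # back off: 1s, 2s, ...
    return vectors
```

With 15,354 chunks and a 5,000-chunk batch size, that is four API-sized batches instead of one oversized request.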

🐳 FULL CONTAINERIZATION

Docker Compose with:
• Named volumes for persistence
• Health checks
• Resource limits (2 CPU, 4GB RAM)
• Non-root user for security
• Auto-restart policies

One command deploy!

#LLMZOOMCAMP #DevOps #Docker
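A sketch of what such a Compose file can look like; the service name, port, volume path, and health endpoint are illustrative, not the project's actual file:

```yaml
services:
  documind:
    build: .
    ports:
      - "8501:8501"                  # Streamlit's default port
    volumes:
      - chroma_data:/app/chroma_db   # named volume: the vector DB survives restarts
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4G
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8501/_stcore/health"]
      interval: 30s
      retries: 3
    restart: unless-stopped
    user: "1000:1000"                # run as a non-root user
volumes:
  chroma_data:
```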

📊 MONITORING MATTERS

Built an integrated dashboard with 7 real-time charts:
- Feedback distribution
- Response times
- Query volume
- Activity patterns

User feedback: 👍/👎 buttons after every answer

#LLMZOOMCAMP #DataViz

🤖 PROMPT ENGINEERING

Tested 4 prompt templates on quality:
• Expert Technical: 8.0/10 ⭐
• Detailed Context: 7.9/10
• Structured: 7.0/10
• Concise: 6.2/10

Comprehensive wins over brevity for technical Q&A!

#LLMZOOMCAMP #PromptEngineering
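For illustration, two of the four styles might differ like this. The template wording is hypothetical; only the style names and scores come from the post:

```python
# Hypothetical reconstructions of two of the four template styles.
PROMPTS = {
    "expert_technical": (
        "You are a senior engineer. Using only the context below, give a "
        "thorough, technically precise answer with reasoning.\n\n"
        "Context:\n{context}\n\nQuestion: {question}"
    ),
    "concise": (
        "Answer in at most two sentences, using only the context below.\n\n"
        "Context:\n{context}\n\nQuestion: {question}"
    ),
}

def render(style: str, context: str, question: str) -> str:
    """Fill the chosen template so the same query can be scored per style."""
    return PROMPTS[style].format(context=context, question=question)
```

Running the same test queries through each style and scoring the answers is what produced the 8.0-vs-6.2 spread.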

πŸ” RETRIEVAL OPTIMIZATION

Evaluated 4 different approaches:
• Semantic (k=3): 86.67% precision ✅
• Semantic (k=5): 84.00%
• Semantic (k=10): 84.00%
• MMR (k=5): 84.00%

Less is more! k=3 won with best relevance.

#LLMZOOMCAMP #MachineLearning
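Under the hood, semantic retrieval at k=3 boils down to cosine similarity over embedding vectors (the project delegates this to ChromaDB; here is a dependency-free sketch of the idea):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k(query_vec, doc_vecs, k=3):
    """Indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

Raising k only helps until the extra neighbors stop being relevant, which is why k=3 beat k=5 and k=10 here.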

πŸ› οΈ TECH STACK

• LLM: Google Gemini 2.5 Pro
• Embeddings: text-embedding-004
• Vector DB: ChromaDB
• Framework: LangChain
• UI: Streamlit
• Container: Docker

All production-ready with monitoring!

#LLMZOOMCAMP #TechStack

📚 THE PROBLEM

Ever spent hours searching through multiple technical PDFs for one piece of info? Me too!

DocuMind solves this with AI-powered semantic search. Ask questions in natural language, get instant answers with sources.

#LLMZOOMCAMP #RAG

🚀 Just completed my #DataTalksClub LLM Zoomcamp project: DocuMind - an end-to-end RAG system for technical documents!

Built with Google Gemini, LangChain, ChromaDB & Streamlit.

Let me share what I learned... 🧵

#LLMZOOMCAMP #BuildInPublic #AI

Just completed my #LLMZoomcamp final project: AA Assistant, a RAG-powered chatbot providing trustworthy Alcoholics Anonymous information to people seeking help.

Tech stack:
• NVIDIA NIM for LLM inference
• Jina embeddings v2 for Spanish/English
• FastAPI + Qdrant vector DB

github.com/marcelonieva...

🤖 Agentic RAG + Function Calling + MCP = AI superpowers.
RAG gives LLMs fresh knowledge, Function Calling lets them trigger actions, and MCP standardizes tool access across platforms. Open standard → smarter, action-driven AI.

#AI #LLM #RAG #MCP #FunctionCalling #LLMZoomcamp

🚀 MCP: The USB Port for AI Tools
MCP (Model Context Protocol) gives LLMs a toolbox: tools are discoverable, callable, and work the same across platforms. No more glue code. MCP = open standard → LLMs + tools = instant collaboration.

#AI #LLM #MCP #ModelContextProtocol #AItools #LLMZoomcamp

πŸ› οΈ Function Calling lets AI run tools like APIs during a conversation.

In Agentic RAG, this means: fetch live data βœ… run computations βœ… trigger services βœ…

πŸ’‘ Your AI doesn’t just talk β€” it gets things done.

#AI #FunctionCalling #AgenticRAG #LLM #LLMZoomcamp
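A toy version of that loop: the model emits a structured tool call, and the app dispatches it to a local function. Tool names and the call shape here are illustrative; real APIs such as Gemini or OpenAI tool calling return an equivalent structured object:

```python
# Registry of callable tools the model is allowed to invoke.
TOOLS = {
    "get_time": lambda city: f"12:00 in {city}",
    "add": lambda a, b: a + b,
}

def execute_tool_call(call: dict):
    """Dispatch a model-produced {'name': ..., 'args': {...}} call
    to the matching local function and return its result."""
    fn = TOOLS[call["name"]]
    return fn(**call["args"])
```

The result is then fed back to the model as a new message, which is what turns a chat into an action loop.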

πŸ” Just explored Agentic RAG β€” Retrieval-Augmented Generation with autonomous agents.
πŸ“š It doesn’t just find info, it decides how to use it.

Like giving AI both a library card and a research assistant.

#AI #RAG #AgenticRAG #MachineLearning #LLMZoomcamp

The future of LLM evaluation involves a greater emphasis on context, ethical considerations, and user-centric metrics. Collaboration across research and industry will be vital. #llmzoomcamp

Challenges in LLM evaluation include data contamination, outdated benchmarks, and the inherent subjectivity of human judgments. Addressing these requires ongoing innovation. #AIChallenges #LLMEvaluation #llmzoomcamp

Key metrics like perplexity, BLEU, and ROUGE are valuable. However, multi-faceted approaches are essential to capture the nuances of LLM performance across different tasks and use cases. #LLMmetrics #NLP #llmzoomcamp

Building robust LLM applications requires continuous evaluation, from pre-production testing to post-production monitoring with real user data. #llmzoomcamp

The use of "LLM-as-a-judge" is promising for evaluating LLMs at scale. However, it's important to remember that they inherit the biases and limitations of LLMs themselves.
#llmzoomcamp

Benchmarking LLMs helps to understand their capabilities, but real-world scenarios reveal their true performance. Avoid relying solely on leaderboards. #llmzoomcamp

Evaluating LLMs goes beyond accuracy. Assessing relevance, coherence, factual correctness, fairness, and safety is necessary to ensure they are truly useful and reliable. #llmzoomcamp

🎓 Built a comprehensive search evaluation system this week! Learned to compare multiple search approaches systematically. Now I can evaluate any search system with confidence! #LLMZOOMCAMP #SearchEvaluation #VectorSearch

⚡ Key learning: Different search methods have different strengths! Learned when to use exact text search vs semantic vector search vs scalable vector databases. Context matters! #LLMZOOMCAMP

🎯 Explored ROUGE evaluation for text generation quality! Learned how to measure how well generated text matches reference text - crucial skill for building better RAG systems! #LLMZOOMCAMP
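ROUGE-1, the simplest variant, can be computed from unigram counts alone; a self-contained sketch (in practice a library such as rouge-score handles stemming and the other variants):

```python
from collections import Counter

def rouge1(reference: str, candidate: str) -> dict:
    """ROUGE-1: unigram overlap between a candidate and a reference text."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())      # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

High recall means the generated answer covers the reference; high precision means it doesn't pad with extra words.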
