Speaker

Tommaso Radicioni

AIKnowYou

Tommaso has a background in experimental physics that led him to research stints at CERN in Geneva and Fermilab in Chicago. In 2021 he earned a PhD in Data Science from the Scuola Normale Superiore. He currently works as a data scientist at AIKnowYou, building software products that integrate conversational AI for customer care.

Boost your LLM-based application with a RAG system

Unlock the true potential of your Large Language Model (LLM) applications with Retrieval-Augmented Generation (RAG), a groundbreaking approach that blends advanced information retrieval with AI text generation. This session dives into how RAG transforms LLM capabilities, enhancing the reliability of generated responses by drawing on a rich base of external knowledge.

Discover why RAG is so compelling: it transcends traditional LLM limitations, minimizing inaccuracies and greatly reducing "hallucinations", cases where LLMs generate plausible yet incorrect information. Through hands-on examples, you will learn to construct your own RAG system using an LLM orchestrator. Understand the inner workings by comparing popular frameworks such as Haystack, LangChain, and LlamaIndex, each offering distinct benefits and features for implementing a robust RAG architecture.
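To make the retrieve-then-generate flow concrete, here is a minimal, framework-free sketch of the RAG pattern the session covers. It is illustrative only: keyword-overlap scoring stands in for a real embedding-based vector store, and a prompt builder stands in for the actual LLM call; the document strings and function names are invented for the example. Orchestrators like Haystack, LangChain, or LlamaIndex provide production-grade versions of each of these steps.

```python
# Minimal RAG sketch (illustrative): retrieval grounds the generator
# in external knowledge, which is what curbs hallucinations.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    A real system would use embeddings and a vector store instead."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved passages so the LLM answers from them,
    not from its parametric memory alone."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Toy knowledge base (invented for the example).
docs = [
    "RAG combines retrieval with generation.",
    "Python is a programming language.",
    "Retrieval reduces hallucinations in LLM answers.",
]

query = "How does retrieval help LLMs?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)  # this augmented prompt is what gets sent to the LLM
```

In a full pipeline, the printed prompt would be passed to an LLM client, and the orchestrator would manage document loading, chunking, embedding, and the final generation call.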

By the end of this session, you'll master the practical skills needed to design and implement a high-performance RAG system tailored to your specific needs, boosting the effectiveness of your LLM-based applications and truly maximizing their potential.