Our next meet-up is scheduled for March 18th... and we promise it will be a blast!
Join us for a no-fluff, practitioner-led deep dive into two critical LLM challenges: quantization and cheat-proof evaluation.
This time we will host a panel discussion with two experts in the field: David Kochanov and Yordan Darakchiev.
Panel 1: Smarter AI at lower cost: How quantization makes LLMs more efficient
What can you expect?
LLMs guzzle RAM and GPU memory like crazy. Quick rundown: inference modes, bottlenecks, and the math behind them.
Hands-on: when to use each method and how to benchmark properly.
The talk will provide a quick review of LLM inference modes, current bottlenecks, and numerics. The main goal is to explain post-training quantization methods that are most commonly supported by modern software and hardware, and give practical recommendations for their use and model benchmarking. We'll also provide a brief summary of the academic literature and discuss quantization-aware training and distillation.
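As a taste of what post-training quantization looks like in practice, here is a minimal sketch of symmetric per-tensor int8 weight quantization in NumPy. This is an illustrative toy, not material from the talk; real toolchains add per-channel scales, calibration, and hardware-specific kernels.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover a float approximation of the original weights."""
    return q.astype(np.float32) * scale

# Quantize a random weight matrix and measure the reconstruction error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())  # bounded by ~scale/2
```

The weights now occupy 1 byte each instead of 4, at the cost of a rounding error no larger than half the quantization step, which is the basic memory/accuracy trade-off the talk examines.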
Who will present?
David Kochanov has over 10 years of experience in computer vision research and development. In his current role at CARIAD (Volkswagen Group), he works on autonomous driving and driver assistance systems, with a focus on model deployment for real-time inference, including quantization and architectural optimizations.
Panel 2: Everybody Lies: How Language Models (and People) Cheat
What can you expect?
Teaching computers and teaching people involve the same temptations: students game grades, and LLMs game benchmarks. We grade LLMs like exams, and bad criteria reward test-taking tricks, not truth. The talk presents a small BI-style benchmark (SQL, data summaries, and insight generation), built specially for this panel, in which multiple LLM "judges" grade multiple models. Just a couple of stress tests on the judges - small changes to the grading scheme - broke all of them, creating pure chaos!
Rankings flip depending on who grades, response length beats correctness, and judges favour familiar model "accents".
We'll end with an "academic integrity" toolkit for LLM evaluation: ensembles, invariance checks, length controls, and lightweight human calibration to make our LLM metrics harder to cheat on, and easier to trust.
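To make the "response length beats correctness" failure concrete, here is a hypothetical sketch (our illustration, not the speaker's method) of one of the simplest length controls: subtract a penalty from the judge's raw score in proportion to how far an answer overshoots a target length, so padding stops paying off.

```python
def length_controlled_score(raw_score: float, n_words: int,
                            target_words: int = 80, alpha: float = 0.005) -> float:
    """Penalise verbosity: deduct alpha points per word beyond the target
    length, so a padded answer can no longer out-rank a concise one on
    length alone. target_words and alpha are illustrative settings."""
    overshoot = max(0, n_words - target_words)
    return raw_score - alpha * overshoot

# Two answers the judge rated equally; only one is padded.
concise = length_controlled_score(8.0, n_words=60)   # 8.0 (no penalty)
padded = length_controlled_score(8.0, n_words=400)   # 6.4 (320 extra words)
print(concise, padded)
```

Ensembling several judges and checking that rankings survive harmless rephrasings (invariance checks) follow the same spirit: make the metric insensitive to the tricks that models learn to exploit.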
Who will present?
Yordan Darakchiev is a data scientist and technical trainer. He has helped various teams turn data and ML into modern, measurable production systems. His specialties are vision and language models, with a sharp focus on evaluation - fairness, privacy, trustworthiness, and robustness. He also teaches math and machine learning and has helped kickstart the careers of more than 1000 data and ML professionals.
Agenda:
18:30-19:00 - Welcome drinks
19:00-20:00 - Panel 1 & Panel 2
20:00-20:30 - Q&A
20:30-21:30 - Networking
Note: The event will be held in English.
The event is free, but registration is mandatory.