2 hours
Free Tickets Available
Tue, 03 Mar • 02:00 PM (GMT-06:00)
John Crerar Library - Kathleen A. Zar Room
5730 South Ellis Avenue, Chicago, United States
Pre-trained large language models demonstrate remarkable general capabilities, but achieving consistent domain-specific behavior, specialized output formats, or deeply embedded knowledge requires moving beyond prompting. Fine-tuning adapts these foundation models to specific tasks, yet the computational demands of updating billions of parameters have historically placed this technique out of reach for most practitioners. This workshop bridges theory and practice, guiding participants from the fundamentals of LLM adaptation through modern parameter-efficient methods that make fine-tuning accessible on consumer hardware. We examine the LIMA hypothesis on data quality, numerical precision trade-offs (FP32, FP16, BF16), and the mechanics of Low-Rank Adaptation (LoRA) and QLoRA. In live-coding sessions, participants will implement a complete fine-tuning pipeline using Hugging Face Transformers and PEFT, covering data preparation, hyperparameter selection, and the critical distinction between training loss and actual model quality. The workshop also addresses when fine-tuning is the right approach versus prompting or retrieval-augmented generation. Attendees will leave with a working fine-tuned model, practical debugging strategies, and a decision framework for production deployment.
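To give a flavor of the hands-on portion, the sketch below shows the general shape of a LoRA fine-tuning pipeline with Hugging Face Transformers and PEFT. It is illustrative only, not the workshop's actual code: the model name (gpt2), the local training file (train.txt), the target modules, and all hyperparameters are assumptions chosen to keep the example small and self-contained.

```python
# Minimal LoRA fine-tuning sketch (illustrative; not the workshop's code).
# Assumptions: model "gpt2", a local "train.txt", and toy hyperparameters.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # assumed placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# LoRA freezes the base weights and trains small low-rank update matrices,
# so only a fraction of a percent of the parameters receive gradients.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection; model-specific
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Data preparation: tokenize a plain-text file into fixed-length examples.
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,  # LoRA typically tolerates higher rates than full fine-tuning
        bf16=True,           # BF16: FP32's dynamic range at half the memory (needs supporting hardware)
        logging_steps=10,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # stores only the small adapter weights
```

Note that the falling training loss reported during training is only a proxy; judging actual model quality against held-out prompts is part of the evaluation discussion mentioned above.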
Learning Objectives
Level: Intermediate
Prerequisites: Basic Python programming, familiarity with PyTorch or TensorFlow, and conceptual understanding of transformer architectures.
Tickets for Fine-Tuning Large Language Models: From Theory to Practice can be booked below.
| Ticket type | Ticket price |
|---|---|
| General Admission | Free |