Fine-Tuning Large Language Models: From Theory to Practice, 3 March | Event in Chicago | AllEvents

Fine-Tuning Large Language Models: From Theory to Practice

Research Computing Center

Highlights

Tue, 03 Mar • 02:00 PM

2 hours

John Crerar Library - Kathleen A. Zar Room

Free Tickets Available


Date & Location

Tue, 03 Mar • 02:00 PM (GMT-06:00)

John Crerar Library - Kathleen A. Zar Room

5730 South Ellis Avenue, Chicago, United States


About the event

Fine-Tuning Large Language Models: From Theory to Practice
Presenter: Youzhi Yu

About this Event

Pre-trained large language models demonstrate remarkable general capabilities, but achieving consistent domain-specific behavior, specialized output formats, or deeply embedded knowledge requires moving beyond prompting. Fine-tuning adapts these foundation models to specific tasks, yet the computational demands of updating billions of parameters have historically placed this technique out of reach for most practitioners.

This workshop bridges theory and practice, guiding participants from the fundamentals of LLM adaptation through modern parameter-efficient methods that make fine-tuning accessible on consumer hardware. We examine the LIMA hypothesis on data quality, numerical precision trade-offs (FP32, FP16, BF16), and the mechanics of Low-Rank Adaptation (LoRA) and QLoRA.

Live-coding sessions will implement a complete fine-tuning pipeline using Hugging Face Transformers and PEFT, covering data preparation, hyperparameter selection, and the critical distinction between training loss and actual model quality. Participants will also learn when fine-tuning is the right approach versus prompting or retrieval-augmented generation. Attendees will leave with a working fine-tuned model, practical debugging strategies, and a decision framework for production deployment.
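The parameter savings that make LoRA feasible on consumer hardware come from replacing a full weight update with a low-rank factorization. A minimal back-of-the-envelope sketch in plain Python (the layer dimensions and rank below are illustrative assumptions, not taken from the workshop materials):

```python
# LoRA replaces the full weight update dW (d_out x d_in) with a
# low-rank product B @ A, where B is (d_out x r) and A is (r x d_in).
# Trainable parameters drop from d_out * d_in to r * (d_out + d_in).

def full_update_params(d_out: int, d_in: int) -> int:
    """Trainable parameters for a full fine-tune of one weight matrix."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on the same matrix."""
    return r * (d_out + d_in)

# Hypothetical 4096 x 4096 attention projection, rank 8:
d = 4096
full = full_update_params(d, d)
lora = lora_params(d, d, r=8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full / lora:.0f}x")
```

At rank 8 this single matrix needs 256x fewer trainable parameters; applied across all targeted layers, this is what shrinks optimizer state and gradient memory enough to fit on a single consumer GPU.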

Learning Objectives

  • Explain when fine-tuning is appropriate versus prompt engineering or retrieval-augmented generation.
  • Analyze numerical precision formats and their impact on memory requirements and training stability.
  • Implement a LoRA-based fine-tuning pipeline with proper data formatting, hyperparameter configuration, and checkpointing.
  • Diagnose common failure modes, including overfitting, catastrophic forgetting, and mode collapse, through validation metrics.
  • Evaluate fine-tuned model quality using validation loss, benchmark performance, and qualitative assessment.
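On the precision objective above, the memory impact is simple arithmetic: each parameter stored in FP32 takes 4 bytes, while FP16 and BF16 take 2. A minimal sketch counting model weights only (optimizer state and activations, which often dominate in practice, are ignored, and the 7B parameter count is an illustrative assumption):

```python
# Bytes per parameter for common training precisions.
BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "BF16": 2}

def weight_memory_gib(n_params: int, precision: str) -> float:
    """Memory needed for the model weights alone, in GiB."""
    return n_params * BYTES_PER_PARAM[precision] / 2**30

# Weights of a hypothetical 7-billion-parameter model:
for precision in ("FP32", "FP16", "BF16"):
    print(f"{precision}: {weight_memory_gib(7_000_000_000, precision):.1f} GiB")
```

Halving the bytes per parameter is why half-precision (and, further, QLoRA's 4-bit quantization of the frozen base weights) is central to fitting fine-tuning on consumer hardware.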

Level: Intermediate

Prerequisites: Basic Python programming, familiarity with PyTorch or TensorFlow, and conceptual understanding of transformer architectures.






Ticket Info

Tickets for Fine-Tuning Large Language Models: From Theory to Practice can be booked here.

General Admission: Free


Host Details

Research Computing Center

