
Events

Hallucination in the Wild: A Field Guide for LLM Users

Large language models (LLMs) can hold remarkably fluent conversations—but they also make things up. These “hallucinations” (or more accurately, confabulations) are one of the biggest challenges to building trustworthy AI systems. In this talk, Ash will explore why these errors happen, how we can spot them, and what can be done to reduce them. Ash will introduce VISTA Score, a new method for checking factual consistency across multi-turn conversations, and show how it outperforms existing tools in identifying misleading claims. She will also share practical strategies—from better prompts and retrieval methods to fine-tuning with both human and synthetic data—that can make smaller models nearly as reliable as their larger counterparts. The goal: to understand not just how these systems go wrong, but how we can make them more transparent, responsible, and aligned with the truth.

Speaker:
Ash Lewis is a computational linguist and Ph.D. candidate at The Ohio State University studying how to make AI systems more reliable and less likely to “make things up.” Her research explores why large language models hallucinate, and how to detect, measure, and reduce those errors in dialogue settings. Ash’s work bridges computational modeling and linguistic analysis, developing lightweight, trustworthy AI tools for applications like virtual assistants, education, and question answering. She is especially interested in how smaller, well-trained models can rival massive ones in both accuracy and transparency.

Monday, February 9, 2026 — 1pm EST | 12pm CST | 10am PST