Generative AI From Multiple Perspectives
About this resource
This forum brings together two UW–Madison presenters exploring how generative AI can serve as a collaborative tool in high-stakes environments—whether surfacing clinical anomalies in health data or supporting deeper engagement in the classroom. Both talks emphasize that LLMs are most effective when paired with interpretable structures, such as statistical models or retrieval pipelines, and used in ways that prioritize transparency, safety, and human oversight.
LLMs as data scientists — Benjamin Lengerich
Benjamin Lengerich (Statistics & Computer Sciences) shares a framework for automating surprise detection in electronic health records (EHR) using generalized additive models (GAMs). These interpretable models decompose a prediction into univariate feature effects, which are then passed to an LLM to flag unexpected trends. Examples include counter-causal mortality patterns linked to biomarkers affected by treatment decisions. His team compares the anomalies flagged by the LLM with those flagged by human clinicians, and proposes a hybrid system to support safer medical modeling. Several examples draw from both proprietary hospital data and the public MIMIC-IV dataset.
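The sketch below illustrates the general pattern described in the talk, not the team's actual pipeline: fit a GAM on tabular outcome data, serialize each univariate shape function as text, and hand that text to an LLM to scan for surprising trends. It assumes the interpret library's Explainable Boosting Machine as the GAM, synthetic stand-in data rather than real EHR records, and a hypothetical ask_llm() helper in place of whatever LLM client is actually used.

```python
# Minimal GAM-to-LLM "surprise detection" sketch (assumptions noted above).
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

# Toy stand-in for an EHR table: two lab values and a binary mortality label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] - 0.8 * np.abs(X[:, 1]) + rng.normal(scale=0.5, size=500) > 0).astype(int)
feature_names = ["creatinine", "bun"]

# Fit an additive model: the prediction decomposes into one shape function per feature.
ebm = ExplainableBoostingClassifier(feature_names=feature_names, interactions=0)
ebm.fit(X, y)

# Serialize each univariate shape function as text an LLM can scan.
explanation = ebm.explain_global()

def fmt(v):
    return f"{v:.2f}" if isinstance(v, (int, float, np.floating)) else str(v)

lines = []
for i, name in enumerate(feature_names):
    data = explanation.data(i)            # bin edges in "names", per-bin log-odds in "scores"
    edges, scores = list(data["names"]), list(data["scores"])
    step = max(1, len(scores) // 6)       # downsample so the prompt stays short
    pairs = ", ".join(
        f"{fmt(edges[j])} -> {scores[j]:+.2f}" for j in range(0, len(scores), step)
    )
    lines.append(f"{name}: {pairs}")

prompt = (
    "Below are univariate risk contributions (log-odds) from a GAM predicting "
    "mortality. Flag any trend that looks clinically surprising or counter-causal, "
    "and explain why:\n" + "\n".join(lines)
)
print(prompt)  # ask_llm(prompt) would return candidate anomalies for clinician review
```

The interpretable model does the statistical work; the LLM only reads the resulting shape functions, which keeps its role auditable and leaves the final judgment to clinicians.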
LLMs in the classroom — Kaiser Pister
Kaiser Pister (Computer Sciences) describes how he built a retrieval-augmented chatbot for his programming languages course using Whisper-generated lecture transcripts and summaries. Students used the tool to ask questions, retrieve course content, and generate examples. He also outlines a new direction in which students “teach” an LLM a concept to reinforce their own learning, turning the act of prompting into a pedagogical exercise.
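The sketch below shows the retrieval-augmented pattern in its simplest form, under stated assumptions rather than as the course chatbot's actual implementation: it uses TF-IDF similarity instead of whatever embedding model the real system uses, a tiny hand-written stand-in corpus in place of the Whisper transcripts, and a hypothetical ask_llm() helper for the generation step.

```python
# Minimal retrieval-augmented QA sketch over lecture-transcript chunks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus: chunks of lecture transcripts / summaries.
chunks = [
    "Lecture 3: closures capture the environment in which a function is defined.",
    "Lecture 5: Hindley-Milner inference assigns the most general type to each expression.",
    "Lecture 7: lazy evaluation defers computation until a value is actually needed.",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most similar transcript chunks and build a grounded prompt."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    context = "\n".join(chunks[i] for i in top)
    return (
        "Answer using only the course material below.\n"
        f"Course material:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("What is a closure?")
print(prompt)  # ask_llm(prompt) would generate the student-facing answer
```

Grounding the prompt in retrieved course material is what keeps the chatbot's answers tied to what was actually taught rather than to the LLM's general knowledge.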