Causal Reasoning with Large Language Models
When and Where
Speakers
Description
Welcome to our casual research seminar, organized in the Department of Statistical Sciences at the University of Toronto. Our aim is to explore the diverse research conducted by our faculty, students, and postdocs. Talks usually last 30 to 45 minutes, followed by discussion. We cover current research, overviews of emerging topics, and more. Pizza and soda will be offered before the seminar, around 12:20pm.
Abstract: Causal reasoning is a cornerstone of human intelligence and a critical capability for artificial systems aiming to achieve advanced understanding and decision-making. While large language models (LLMs) excel at many tasks, a key question remains: How can these models reason better about causality? The causal questions humans pose span a wide range of fields, from Newton’s fundamental question, “Why do apples fall?”—which LLMs can now answer from standard textbook knowledge—to complex inquiries such as, “What are the causal effects of introducing a minimum wage?”—a topic recognized with the 2021 Nobel Prize in Economics. My research focuses on automating causal reasoning across all types of questions. To achieve this, I explore the causal reasoning capabilities that have emerged in state-of-the-art LLMs, and enhance their ability to perform causal inference by guiding them through structured, formal steps. Finally, I will outline a future research agenda for building the next generation of LLMs capable of scientific-level causal reasoning.
Seminar organizers: Austin Brown, Archer Gong Zhang, and Piotr Zwiernik.