Deciphering Causal Reasoning: Human vs. Language Models

Dr. Anita Keshmirian, our Assistant Professor of Psychology and Data Science, was invited to present a talk in the Probabilistic Model Session at the 4th International Conference on the Mathematics of Neuroscience.
Dr. Keshmirian’s presentation, titled ‘Deciphering Causal Reasoning: Human vs. Language Models,’ covered her investigation into deviations from normative criteria in Bayesian Belief Networks (BBNs). The work focuses on two classic BBN structures, Chains and Common Cause networks, and on how human judgments violate the independence assumptions these networks encode (a minimal illustration of the two structures follows at the end of this post). The research also examines the causal reasoning of large language models (LLMs) such as GPT-3.5 Turbo, GPT-4, and Luminous Supreme Control when they are presented with the same queries as human participants. The results reveal that both humans and LLMs perceive Chains as more causally potent, with potential implications for theories of causal representation in human cognition and in LLMs.

This work was a collaboration between LMU Munich, Stanford University (Psychology), the University of Illinois Urbana-Champaign, and the Artificial Intelligence group at TU Darmstadt.
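For readers unfamiliar with the two structures, here is a minimal sketch in Python. It is illustrative only: the variable names, probabilities, and brute-force enumeration are ours, not taken from the talk. A Chain A -> B -> C factorizes the joint distribution as P(A) P(B|A) P(C|B), so A and C are independent given B; a Common Cause network B <- A -> C factorizes it as P(A) P(B|A) P(C|A), so B and C are independent given A.

    # Illustrative sketch only: names, probabilities, and the enumeration
    # below are made up for exposition, not taken from the talk.
    #
    # Chain:        A -> B -> C   P(A, B, C) = P(A) P(B|A) P(C|B)
    # Common Cause: B <- A -> C   P(A, B, C) = P(A) P(B|A) P(C|A)

    P_A = 0.5                            # P(A=1)
    P_B_GIVEN_A = {1: 0.8, 0: 0.2}       # P(B=1 | A=a)
    P_C_GIVEN_PARENT = {1: 0.9, 0: 0.1}  # P(C=1 | parent=p)

    def joint(a, b, c, structure):
        """Joint probability P(A=a, B=b, C=c) under the given structure."""
        pa = P_A if a else 1 - P_A
        pb = P_B_GIVEN_A[a] if b else 1 - P_B_GIVEN_A[a]
        parent = b if structure == "chain" else a  # C's parent differs
        pc = P_C_GIVEN_PARENT[parent] if c else 1 - P_C_GIVEN_PARENT[parent]
        return pa * pb * pc

    def p_c1(cond, structure):
        """P(C=1 | cond) by enumeration, e.g. cond={'a': 1, 'b': 1}."""
        num = den = 0.0
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    assign = {"a": a, "b": b, "c": c}
                    if any(assign[k] != v for k, v in cond.items()):
                        continue
                    p = joint(a, b, c, structure)
                    den += p
                    num += p * c
        return num / den

    # Chain: B screens off A from C, so these two values match (0.9):
    print(p_c1({"a": 1, "b": 1}, "chain"), p_c1({"a": 0, "b": 1}, "chain"))
    # Common Cause: A screens off B from C, so these also match (0.9):
    print(p_c1({"a": 1, "b": 1}, "common_cause"), p_c1({"a": 1, "b": 0}, "common_cause"))
    # Without conditioning on B, A and C in the Chain are dependent (0.74 vs 0.26):
    print(p_c1({"a": 1}, "chain"), p_c1({"a": 0}, "chain"))

A reasoner who respects these networks treats the matching quantities above as identical; judgments that break these screening-off relations are the independence violations the talk refers to.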