The DIMACS Workshop on Foundation Models, Large Language Models (LLMs), and Game Theory, held at DIMACS on October 19–20, 2023, marked the first of many foreseeable steps towards advancing a research initiative at the intersection of these topics. With a notable shift towards generative AI—models trained on extensive data to generate content adaptable to a myriad of downstream tasks—the workshop aimed to delve into the role that game theory, a mathematical subfield of economics, could play in contributing to these developments, and to ponder the reverse impact that generative AI can have on game theory.
The workshop featured a series of research talks by academics and industry professionals and closed each day with sessions designed to facilitate interaction and broad participation. These included a rump session during which any attendee could volunteer to give a short presentation, a contributed poster session, and breakout groups on topics generated during a lively panel discussion. The rump and poster sessions showcased both developed research and preliminary directions, with many topics presented by students. The panel discussion and breakout sessions concluded the workshop by outlining future directions based on research questions that emerged during the workshop.
Game theory models the strategic interaction of agents, historically people or groups of people, with incentives. The multi-agent systems community has long taken the view that game theory is likewise suitable to model interactions among AI agents. To this end, the first day commenced with an engaging keynote by Constantinos Daskalakis of MIT, focusing on the prerequisites for training strategic foundation models, which raises an interesting question of how to endow foundation models with appropriate incentives, a problem closely related to value alignment. This keynote initiated a session on how the tools of game theory might be applied to enhance foundation models.
The afternoon keynote talk, which focused on the possibilities and limitations of large language models, was given by Kathy McKeown of Columbia University. Her talk was followed by a session on how foundation models can potentially be used to solve game theory problems. For example, Ph.D. student Athul Paul Jacob discussed his work on incorporating LLM technology into strategic AI agents capable of playing the board game Diplomacy, a game of strategy that requires negotiation in natural language. These talks improved our understanding of how foundation models can be employed to enhance game solvers, which at present seem to require separate modules for language and reasoning. (There was extensive discussion at the end of the workshop as to whether this separation would remain necessary.)
Subsequent to these two sessions, researchers participated in rump and poster sessions presenting burgeoning findings at the crossroads of foundation models, LLMs, and game theory. Among other results, the posters presented novel neural architectures for building foundation models that solve games and novel LLM-based multiagent systems for tackling critical tasks such as causal reasoning. The day concluded with a dinner, where guest of honor Michael Littman—Division Director at the National Science Foundation—delivered a talk on the limits and possibilities of AI in education.
The momentum from the first day carried into the next, with a keynote by John Horton of MIT, who explored the role of LLMs in behavioral economics by asking whether LLMs might be capable of serving the role of human subjects in some capacity, enabling pilot studies, full transparency, easy replication, and fast iteration. The day continued to uncover the vast potential at the intersection of game theory and foundation models, covering topics ranging from generating reward feedback using foundation models, thus scaling human feedback; to automated test case generation and self-improving code generation; to novel economic models for pricing foundation models. The workshop concluded with a panel discussion on the impact of foundation models and LLMs on society, moderated by Amy Greenwald of Brown University and featuring panelists Fei Fang of Carnegie Mellon, Gabriele Farina of MIT, Kevin Leyton-Brown of the University of British Columbia, and Matthew Stone of Rutgers. The panel led immediately into breakout group discussions on topics generated during the panel and selected by consensus.
The DIMACS Workshop on Foundation Models, Large Language Models, and Game Theory was more than just an academic exchange; it acted as a foundation for future collaborations and a catalyst for innovation at the junction of generative AI and game theory. The discussions, presentations, and interactions among the attendees not only highlighted the current state of the art, but also mapped out a trajectory for future explorations. The application of game-theoretic models and tools in the realm of foundation models is set to unveil new dimensions in AI and significantly advance the theory and applications of foundation models.
The workshop was organized by Denizalp Goktas and Amy Greenwald of Brown University, Tamra Carpenter and David Pennock of DIMACS, and Segev Wasserkrug of IBM Research as part of the DIMACS Special Focus on Mechanisms and Algorithms to Augment Human Decision Making. It grew from discussions following the Workshop on Bridging Game Theory and Machine Learning for Multi-party Decision Making, which was held in October 2022, organized by Carpenter, Pennock, and Wasserkrug and featuring Greenwald as a keynote speaker.