Athul Paul Jacob

I am a Ph.D. student in AI, multi-agent systems, and NLP at MIT CSAIL, advised by Jacob Andreas. I received my master's degree from MIT EECS in 2022, where my thesis was co-advised by Noam Brown and Jacob Andreas.

In 2019, I completed my bachelor's in computer science, combinatorics and optimization at the University of Waterloo, where I was advised by Pascal Poupart. From 2016 until 2018, I was also a visiting student researcher at Mila where I worked under the supervision of Yoshua Bengio.

I have been fortunate to intern at Facebook AI Research as a researcher several times (2018, 2020, and 2021), mentored by Noam Brown, Kyunghyun Cho, and Mike Lewis. I previously worked as a research intern at Microsoft Research in fall 2017 and winter 2018 with Alessandro Sordoni and Adam Trischler. I have also spent time at Google, the MIT-IBM Watson AI Lab, and General Catalyst Venture Partners as an AI Fellow.

Email  /  Google Scholar  /  LinkedIn  /  Twitter


* indicates equal first author contributions

Modeling Boundedly Rational Agents with Latent Inference Budgets
Athul Paul Jacob, Abhishek Gupta, Jacob Andreas
International Conference on Learning Representations (ICLR), 2024
NeurIPS Generalization in Planning Workshop, 2023

We study the problem of modeling a population of agents pursuing unknown goals subject to unknown computational constraints. We introduce the latent inference budget model (L-IBM), which models agents' computational constraints explicitly via a latent variable (inferred jointly with a model of agents' goals) that controls the runtime of an iterative inference algorithm. In three modeling tasks—inferring navigation goals from routes, inferring communicative intents from human utterances, and predicting next moves in human chess games—we show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty.

Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning
Athul Paul Jacob, Gabriele Farina, Jacob Andreas
arXiv, 2023

We present a model of pragmatic language understanding called ReCo, in which utterances are produced and understood by searching for regularized equilibria of a signaling game. Across several datasets capturing real and idealized human judgments about pragmatic implicatures, ReCo matches or improves upon predictions made by best-response and rational speech act models of language understanding.

The Consensus Game: Language Model Generation via Equilibrium Search
Athul Paul Jacob, Yikang Shen, Gabriele Farina, Jacob Andreas
International Conference on Learning Representations (ICLR), 2024   (Spotlight Presentation)
NeurIPS Workshop on Robustness of Zero/Few-Shot Learning in Foundation Models (R0-FoMo), 2023   (Best Paper)

We introduce a training-free, game-theoretic procedure for language model decoding that improves performance across a number of NLP tasks.

Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning
FAIR, Anton Bakhtin*, Noam Brown*, Emily Dinan*, Gabriele Farina, Colin Flaherty*, Daniel Fried, Andrew Goff, Jonathan Gray*, Hengyuan Hu*, Athul Paul Jacob*, Mojtaba Komeili, Karthik Konath, Minae Kwon, Adam Lerer*, Mike Lewis*, Alexander H. Miller*, Sasha Mitts, Adithya Renduchintala*, Stephen Roller, Dirk Rowe, Weiyan Shi*, Joe Spisak, Alexander Wei, David Wu*, Hugh Zhang*, Markus Zijlstra
Science, November 22, 2022

We introduce Cicero, an AI agent that demonstrates human-level performance in the mixed-motive 7-player strategic board game Diplomacy, which involves natural language negotiation, strategic coordination, and persuasion.

Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
Anton Bakhtin*, David Wu*, Adam Lerer*, Jonathan Gray*, Athul Paul Jacob*, Gabriele Farina*, Alexander H Miller, Noam Brown
International Conference on Learning Representations (ICLR), 2023   (Outstanding Paper Honorable Mention)
NeurIPS Deep Reinforcement Learning Workshop, 2022   (Spotlight Talk)

We introduce Diplodocus, an AI that achieves expert human-level performance in No-press Diplomacy, a challenging 7-player strategy game that involves both cooperation and competition.

AutoReply: Detecting Nonsense in Dialogue Introspectively with Discriminative Replies
Weiyan Shi, Emily Dinan, Adi Renduchintala, Daniel Fried, Athul Paul Jacob, Zhou Yu, Mike Lewis
Empirical Methods in Natural Language Processing (EMNLP) - Findings, 2023

A new algorithm for detecting nonsensical responses in complex dialogue applications by using discriminative replies introspectively.

Modeling Strong and Human-Like Gameplay with KL-Regularized Search
Athul Paul Jacob*, David Wu*, Gabriele Farina*, Adam Lerer, Anton Bakhtin, Jacob Andreas, Noam Brown
ICLR Workshop on Gamification and Multiagent Solutions, 2022   (Contributed Talk)
International Conference on Machine Learning (ICML), 2022   (Spotlight Presentation)

We show that by regularizing search towards a human policy, we obtain state-of-the-art human prediction accuracy in chess, Go, and no-press Diplomacy while remaining significantly stronger than imitation learning alone.

Multitasking Inhibits Semantic Drift
Athul Paul Jacob, Mike Lewis, Jacob Andreas
North American Chapter of the Association for Computational Linguistics (NAACL), 2021   (Oral Presentation)

We prove that multitask training eliminates semantic drift in a well-studied family of signaling games, and show that multitask training of neural latent language policies (LLPs) in a complex strategy game reduces drift while improving sample efficiency.

Straight to the Tree: Constituency Parsing with Neural Syntactic Distance
Yikang Shen*, Zhouhan Lin*, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, Yoshua Bengio
Association for Computational Linguistics (ACL), 2018   (Oral Presentation)

A novel constituency parsing scheme free from compounding errors, while being faster and easier to parallelize.

Learning Hierarchical Structures On-The-Fly with a Recurrent-Recursive Model for Sequences
Athul Paul Jacob*, Zhouhan Lin*, Alessandro Sordoni, Yoshua Bengio
Association for Computational Linguistics (ACL), 2018

A hierarchical model for sequential data that learns a tree on-the-fly. The model adapts its structure and reuses recurrent weights in a recursive manner by creating adaptive skip-connections that ease the learning of long-term dependencies.

Boundary-Seeking Generative Adversarial Networks
Devon Hjelm*, Athul Paul Jacob*, Tong Che, Adam Trischler, Kyunghyun Cho, Yoshua Bengio
International Conference on Learning Representations (ICLR), 2018

A principled method for training generative adversarial networks on discrete data: the estimated difference measure from the discriminator is used to compute importance weights for generated samples, yielding a policy gradient for training the generator.

Joint Training in Generative Adversarial Networks
Devon Hjelm, Athul Paul Jacob, Yoshua Bengio
International Conference on Machine Learning (ICML), 2017

A generative adversarial network capable of jointly generating images and their labels.

Mode Regularized Generative Adversarial Networks
Tong Che*, Yanran Li*, Athul Paul Jacob, Yoshua Bengio, Wenjie Li
International Conference on Learning Representations (ICLR), 2017

We introduce several ways of regularizing the GAN training objective, which can dramatically stabilize the training of these models.

Design courtesy of Jon Barron