Athul Paul Jacob

I am a second-year Ph.D. student in AI, NLP, and cognitive science at MIT CSAIL, advised by Jacob Andreas and Joshua Tenenbaum. I am also a part-time student researcher at Facebook AI Research, where I work with Noam Brown and Mike Lewis.

In 2019, I completed my bachelor's degree in computer science and in combinatorics and optimization at the University of Waterloo, where I was advised by Pascal Poupart. From 2016 to 2018, I was also a visiting student researcher at Mila, where I worked under the supervision of Yoshua Bengio.

In the summer of 2020 and the fall of 2018, I was a research intern at Facebook AI Research, mentored by Noam Brown and Kyunghyun Cho, respectively. I also previously worked as a research intern at Microsoft Research, in fall 2017 and winter 2018, with Alessandro Sordoni and Adam Trischler.

Email  /  Google Scholar  /  LinkedIn


Research

Current approaches to artificial intelligence fall short of human abilities in their capacity to learn rapidly and flexibly. My current research focuses on using language in grounded settings to tackle these challenges. My research therefore combines natural language processing, reinforcement learning, and few-shot learning with ideas from cognitive psychology.

Prior to joining MIT, my focus was primarily on generative modelling (generative adversarial networks in particular), constituency parsing, and neural networks.

Multitasking Inhibits Semantic Drift
Athul Paul Jacob, Mike Lewis, Jacob Andreas
North American Chapter of the Association for Computational Linguistics (NAACL), 2021  

We prove that multitask training eliminates semantic drift in a well-studied family of signaling games, and show that multitask training of neural latent language policies (LLPs) in a complex strategy game reduces drift while improving sample efficiency.

Straight to the Tree: Constituency Parsing with Neural Syntactic Distance
Yikang Shen*, Zhouhan Lin*, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, Yoshua Bengio
Association for Computational Linguistics (ACL), 2018   (Oral Presentation)

A novel constituency parsing scheme that avoids compounding errors while being faster and easier to parallelize.

Learning Hierarchical Structures On-The-Fly with a Recurrent-Recursive Model for Sequences
Athul Paul Jacob*, Zhouhan Lin*, Alessandro Sordoni, Yoshua Bengio
Association for Computational Linguistics (ACL), 2018

A hierarchical model for sequential data that learns a tree on-the-fly. The model adapts its structure and reuses recurrent weights in a recursive manner by creating adaptive skip-connections that ease the learning of long-term dependencies.

Boundary-Seeking Generative Adversarial Networks
Devon Hjelm*, Athul Paul Jacob*, Tong Che, Adam Trischler, Kyunghyun Cho, Yoshua Bengio
International Conference on Learning Representations (ICLR), 2018

A principled method for training generative adversarial networks on discrete data. The discriminator's estimated difference measure is used to compute importance weights for generated samples, yielding a policy gradient for training the generator.

Joint Training in Generative Adversarial Networks
Devon Hjelm, Athul Paul Jacob, Yoshua Bengio
International Conference on Machine Learning (ICML), 2017

A generative adversarial network capable of jointly generating images and their labels.

Mode Regularized Generative Adversarial Networks
Tong Che*, Yanran Li*, Athul Paul Jacob, Yoshua Bengio, Wenjie Li
International Conference on Learning Representations (ICLR), 2017

We introduce several ways of regularizing the GAN training objective, which can dramatically stabilize the training of these models.


Design courtesy of Jon Barron