Home
I am a research scientist at Google Research, where I primarily work on conversational AI. Previously, I led the Conversations AI team at Square, Inc. and was the Head of AI at Eloquent Labs. I received my PhD in computer science from Stanford University, where I worked with Percy Liang and the Stanford NLP group. If you’d like to reach out, just shoot me an email at chaganty@cs.stanford.edu.
Resume: online [pdf] · Twitter: @arunchaganty · GitHub: arunchaganty
Research Interests
I believe any functional future for the world will need natural language technology that makes it easier for people to understand and consume information. I care about how we can bring greater trust, transparency, and fairness to these systems, and I think conversational AI will be a key means by which we achieve those goals.
In the past, I have worked on human-in-the-loop evaluation, learning latent variable models with theoretical guarantees, probabilistic programming, statistical relational learning, and hierarchical reinforcement learning.
Publications
- Chaganty, Leszczynski, Zhang, Ganti, Balog, Radlinski; Beyond Single Items: Exploring User Preferences in Item Sets with the Conversational Playlist Curation Dataset; (in submission)
- Leszczynski, Ganti, Zhang, Balog, Radlinski, Pereira, Chaganty; Generating Synthetic Data for Conversational Music Recommendation Using Random Walks and Language Models; arXiv 2023. [arxiv]
- Gao, Dai, Pasupat, Chen, Chaganty, Fan, Zhao, Lao, Lee, Juan, Guu; RARR: Researching and Revising What Language Models Say, Using Language Models; arXiv 2022. [arxiv]
- Dai, Chaganty, Zhao, Amini, Rashid, Green, Guu; Dialog inpainting: Turning documents into dialogs; ICML 2022. [arxiv]
- Dieter, Chaganty; Conformal retrofitting via Riemannian manifolds: distilling task-specific graphs into pretrained embeddings; arXiv 2020. [arxiv]
- Dieter, Wang, Angeli, Chang, Chaganty; Mimic and Rephrase: Reflective listening in open-ended dialogue; CoNLL 2019. [data,code]
- Lamm, Chaganty, Manning, Jurafsky, Liang; Textual Analogy Parsing: Identifying What’s Shared and What’s Compared among Analogous Facts; EMNLP 2018. [pdf][data,code]
- Chaganty*, Mussmann*, Liang; The price of debiasing automatic metrics in natural language evaluation; ACL 2018. [pdf][poster][data,code][arxiv]
- Chaganty*, Paranjape*, Liang, Manning; Importance sampling for unbiased on-demand evaluation of knowledge base population; EMNLP 2017. [pdf][code][website]
- Chaganty, Liang; How Much is 131 Million Dollars? Putting Numbers in Perspective with Compositional Descriptions; ACL 2016. [pdf][data,code]
- Werling, Chaganty, Liang, Manning; On the Job Learning with Bayesian Decision Theory; NIPS 2015. [arxiv][poster]
- Wang, Chaganty, Liang; Estimating Mixture Models via Mixtures of Polynomials; NIPS 2015. [paper][poster]
- Kuleshov*, Chaganty*, Liang; Tensor Factorization via Matrix Factorization; AISTATS 2015. [arxiv][slides]
- Chaganty, Liang; Estimating Latent Variable Graphical Models with Moments and Likelihoods; ICML 2014. [paper][slides]
- Chaganty, Liang; Spectral Experts for Estimating Mixtures of Linear Regressions; ICML 2013. [paper][slides][poster]
- Chaganty, Lal, Nori, Rajamani; Combining Relational Learning with SMT Solvers using CEGAR; CAV 2013. [paper]
- Chaganty, Nori, Rajamani; Efficiently Sampling Probabilistic Programs via Program Analysis; AISTATS 2013. [paper]
- Chaganty, Gaur, Ravindran; Learning in a Small World; AAMAS 2012. [paper]
- Chaganty; Inter-Task Learning with Spatio-Temporal Abstractions; Master’s Thesis (IIT Madras). [thesis]