I’m currently a PhD student in computer science at Stanford University, advised by Percy Liang. I’m passionate about machine learning and optimization.
Email: chaganty [at] cs.stanford.edu.
Resume: [pdf], [template].
I am interested in studying how natural language processing can be used to make it easier for people to understand and consume information. Relatedly, I care about how information summarization can bring greater transparency, accountability, and fairness of voice.
I have also worked on providing guarantees for learning latent variable models.
In the past, I have worked on probabilistic programming, statistical relational learning and hierarchical reinforcement learning.
- Werling, Chaganty, Liang, Manning; On the Job Learning with Bayesian Decision Theory; NIPS 2015. [arxiv][poster]
- Wang, Chaganty, Liang; Estimating Mixture Models via Mixtures of Polynomials; NIPS 2015. [paper][poster]
- Kuleshov*, Chaganty*, Liang; Tensor Factorization via Matrix Factorization; AISTATS 2015. [arxiv][slides]
- Chaganty, Liang; Estimating Latent Variable Graphical Models with Moments and Likelihoods; ICML 2014. [paper][slides]
- Chaganty, Liang; Spectral Experts for Estimating Mixtures of Linear Regressions; ICML 2013. [paper][slides][poster]
- Chaganty, Lal, Nori, Rajamani; Combining Relational Learning with SMT Solvers using CEGAR; CAV 2013. [paper]
- Chaganty, Nori, Rajamani; Efficiently Sampling Probabilistic Programs via Program Analysis; AISTATS 2013. [paper]
- Chaganty, Gaur, Ravindran; Learning in a Small World; AAMAS 2012. [paper]
- Chaganty; Inter-Task Learning with Spatio-Temporal Abstractions; Master’s Thesis (IIT Madras). [thesis]