# Articles

##### Highlights from CoNLL and EMNLP 2019
###### nlp, research

CoNLL and EMNLP, two top-tier natural language processing conferences, were held in Hong Kong last month. A large contingent of the Square AI team, myself included, attended, and our fantastic intern, Justin Dieter, presented our work on a new contextual language generation task: mimic rephrasals. Despite being a regular conference attendee, I was surprised by the sheer quantity and quality of innovative ideas presented there: a true testament to how fast the field is moving. It's impossible to cover everything that happened, but in this post I've tried to capture a sampling of the ideas I found most exciting in the sessions I attended.

##### Efficiently estimating recall
###### applied-ml, evaluation, human-in-the-loop, beta

When you're trying to detect or identify an event with a machine learning system, the metrics you really care about are precision and recall. Measuring precision tells you where your model makes mistakes; measuring recall tells you where it can improve. In this post, we'll look at how to triple the data efficiency of estimating recall in practice using an importance-reweighted estimator.
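To make the idea concrete, here is a minimal sketch of an importance-reweighted recall estimate (this is an illustration of the general technique, not necessarily the exact estimator from the post): rather than labeling uniformly sampled items, which wastes labels when positives are rare, sample items in proportion to the model's score and correct for the bias with weights $1/q(i)$. The function names and the 0.5 decision threshold below are assumptions for the example.

```python
import random

def estimate_recall(items, model_score, is_positive, n_samples=1000, seed=0):
    """Importance-reweighted recall estimate.

    Draw labeled samples in proportion to the model's score (a proposal
    distribution q concentrated where positives likely are), then reweight
    each draw by 1/q(i). Recall is estimated as the reweighted mass of
    true positives the model catches, divided by the reweighted mass of
    all true positives found.
    """
    rng = random.Random(seed)
    scores = [max(model_score(x), 1e-6) for x in items]  # avoid zero mass
    total = sum(scores)
    q = [s / total for s in scores]  # proposal distribution
    sample = rng.choices(range(len(items)), weights=q, k=n_samples)
    caught = missed = 0.0
    for i in sample:
        if not is_positive(items[i]):
            continue
        w = 1.0 / q[i]  # importance weight corrects the sampling bias
        if model_score(items[i]) >= 0.5:  # assumed decision threshold
            caught += w
        else:
            missed += w
    denom = caught + missed
    return caught / denom if denom else 0.0
```

Because the proposal concentrates labeling effort where positives live, the estimate reaches a given variance with far fewer labels than uniform sampling, which is the data-efficiency gain the post refers to.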

##### Why we need human evaluation
###### nlp, evaluation, human-in-the-loop

We're witnessing an exciting boom in the subfield of natural language generation (NLG), with more than 150 related papers published at top conferences in just the last year! Unfortunately, it's incredibly hard to compare these different methods in a meaningful way, as many automatic evaluations have consistently been shown to correlate poorly with human judgment. In this post, we'll ask whether complete human evaluation is really necessary and, if so, what we can do to make human evaluations easier and cheaper to conduct.

##### On Atomic Norms
###### optimization, convex-geometry

How many linear measurements do you need to (efficiently) recover a low rank matrix? What about a sparse vector or an orthogonal matrix? Given that we know our object of interest has some 'structure', can we answer this question in a general manner?

In this article, I will show you one approach to doing so: regression using atomic norms. Most of the material I will cover was presented in the paper "The Convex Geometry of Linear Inverse Problems" by Venkat Chandrasekaran et al.
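As a preview of the central definition from Chandrasekaran et al.: given a set of "atoms" $\mathcal{A}$ (e.g. rank-one matrices, or signed standard basis vectors for sparse recovery), the atomic norm is the gauge of the convex hull of $\mathcal{A}$, and recovery from linear measurements $y = \Phi x^\star$ is posed as atomic-norm minimization:

```latex
\|x\|_{\mathcal{A}} = \inf\{\, t > 0 : x \in t \cdot \mathrm{conv}(\mathcal{A}) \,\},
\qquad
\hat{x} = \arg\min_{x} \ \|x\|_{\mathcal{A}} \ \ \text{s.t.}\ \ \Phi x = y.
```

Choosing $\mathcal{A}$ as rank-one matrices recovers the nuclear norm, and choosing signed basis vectors recovers the $\ell_1$ norm, which is what makes the framework general.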

##### Topic Models, Gaussian Integrals, and Getting Scooped
###### nlp, graphical-models

This post is a brief overview of an approach I took to deriving variational inference for the Correlated Topic Model, only to discover it had already been done.