I am a software engineer on the Language team at Google Research. My research focuses on interpretability and analysis of deep NLP models (a.k.a. “BERTology”). I’m interested in how these models encode linguistic structure, how these structures develop, and what they can tell us about model behavior and robustness. I’m also interested in the practical workflow of interpretability: how does the way we interact with these models inform our own mental models of how they work?
I am one of the tech leads for the Language Interpretability Tool (LIT).
From 2016 to 2018, I taught Data Science W266: Natural Language Processing with Deep Learning at the UC Berkeley School of Information.
"if" + lastname + "@gmail.com"
- The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020.
- What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations, 2019.