Ian Tenney

I am a software engineer on the Language team at Google Research. My research focuses on interpretability and analysis of deep NLP models (a.k.a. “BERTology”). I’m interested in how these models encode linguistic structure, how these structures develop, and what they can tell us about model behavior and robustness. I’m also interested in the practical workflow of interpretability: how does the way we interact with these models inform our own mental models of how they work?
I am one of the tech leads for the Language Interpretability Tool (LIT).
I am based in Seattle but collaborate with researchers across many sites, including Google’s People + AI Research and the LUNAR group at Brown.
In 2018, I was a Senior Researcher on the Sentence Representation Learning Team at the JSALT workshop at Johns Hopkins University. Among other things, I took this terrible group photo.
From 2016 to 2018, I taught Data Science W266: Natural Language Processing with Deep Learning at the UC Berkeley School of Information.
In a past life, I was a physicist, studying ultrafast molecular and optical physics in the lab of Philip H. Bucksbaum at Stanford / SLAC.
Contact: "if" + lastname + "@gmail.com"
projects
selected publications
- The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020.