Ian Tenney

I am a research scientist on the Language team at Google Research. My work focuses on interpretability and analysis of deep NLP models (a.k.a. “BERTology”). I’m interested in how these models encode linguistic structure, how these structures develop, and what they can tell us about model behavior and robustness. I’m also interested in the practical workflow of interpretability: how does the way we interact with these models inform our own mental models of how they work?

I am one of the tech leads for the Language Interpretability Tool (LIT).

I am based in Seattle, but collaborate with researchers across many sites, including Google’s People + AI Research and the LUNAR group at Brown.

In 2018, I was a Senior Researcher on the Sentence Representation Learning team at the JSALT workshop at Johns Hopkins University. Among other things, I took this terrible group photo.

From 2016 to 2018, I taught Data Science W266: Natural Language Processing with Deep Learning at the UC Berkeley School of Information.

In a past life, I was a physicist, studying ultrafast molecular and optical physics in the lab of Philip H. Bucksbaum at Stanford / SLAC.

Contact: "if" + lastname + "@gmail.com"

selected publications

  1. The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
    Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020
  2. BERT Rediscovers the Classical NLP Pipeline
    Ian Tenney, Dipanjan Das, and Ellie Pavlick
    In Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019
  3. What do you learn from context? Probing for sentence structure in contextualized word representations
    Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick
    In International Conference on Learning Representations, 2019