Ian Tenney
I am a Staff Research Scientist on the People + AI Research (PAIR) team in Google Research. My group focuses on interpretability for large language models (LLMs), including visualization tools, attribution methods, and intrinsic analysis (a.k.a. BERTology) of model representations.
I am a co-creator and TL of the Learning Interpretability Tool (LIT).
Previously, I taught an NLP course at the UC Berkeley School of Information. In a past life I was a physicist, studying ultrafast molecular and optical physics in the lab of Philip H. Bucksbaum at Stanford / SLAC.
Contact: "if" + lastname + "@gmail.com"
(or @google.com
)
news
Apr 15, 2024: New preprint! Interactive Prompt Debugging with Sequence Salience goes into more detail on the prompt debugging tool we previously released for Gemma. Sequence Salience now works for Mistral and Llama 2, and features a more in-depth tutorial at goo.gle/sequence-salience.
Mar 1, 2024: New preprint! LLM Comparator, a visualization tool to help LLM developers make sense of side-by-side evaluations, accepted to CHI Late-Breaking Work.
Feb 21, 2024: LIT v1.1 featured in The Keyword as the debugging tool for the new Gemma family of open models from Google. As part of the Responsible Generative AI Toolkit, use the new sequence salience feature to debug complex LLM prompts, such as few-shot, chain-of-thought, or constitutions. Try it in Colab here: Using LIT to Analyze Gemma Models in Keras.
selected publications
- LLM Comparator: Visual Analytics for Side-by-Side Evaluation of Large Language Models. CHI Late-Breaking Work, 2024.
- The MultiBERTs: BERT Reproductions for Robustness Analysis. ICLR (spotlight), 2022.
- The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020.