Been Kim

Research Scientist, Google Brain

I am interested in designing high-performance machine learning methods that make sense to humans. Here is a short writeup about why I care.

My current focus is building interpretability methods for already-trained models (e.g., high-performance neural networks). In particular, I believe the language of explanations should include higher-level, human-friendly concepts.

Previously, I built interpretable latent variable models (featured at Talking Machines and MIT News) and created structured Bayesian models of human decisions. I have applied these ideas to data from various domains: computer programming education, autism spectrum disorder data, recipes, disease data, 15 years of crime data from the city of Cambridge, human dialogue data from the AMI meeting corpus, and text-based chat data during disaster response. I graduated with a PhD from MIT CSAIL.

I gave a tutorial on interpretable machine learning at ICML 2017; the slides are here.

I am an area chair and program chair at NIPS 2017, a steering committee member and area chair at the FAT* conference, and a program committee member at ICML 2017, AAAI 2017, and IJCAI 2016.

I am an executive board member of Women in Machine Learning.

I have co-organized the ICML 2016 Workshop on Human Interpretability in Machine Learning (WHI), the second WHI workshop at ICML 2017, and the NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems.
