Context by Cohere
ML Explainability and Language Model UI — Talking Language AI #5

Should we care about machine learning model interpretability? Is it more relevant for some scenarios than others? And how can we say that we’ve actually achieved model understanding? When we look at the broader context, these questions can have far-reaching implications for the application of large language models in real-world use cases.

In this session, we’re joined by Professor Hima Lakkaraju, who answers these questions, as well as demonstrates TalkToModel, an interactive dialogue system for explaining machine learning models through conversations.

View the full episode, check out the slides, and feel free to post questions or comments on this episode’s thread in the Cohere Discord channel.

Our guest speaker, Hima Lakkaraju, is an Assistant Professor at Harvard University with appointments in the Business School and the Department of Computer Science. She conducts research on trustworthy machine learning with a focus on improving the interpretability, fairness, robustness, and reasoning capabilities of different kinds of ML models, including language models and other pre-trained models. Learn more about Professor Lakkaraju and her work on her GitHub profile or follow her on Twitter.

During this episode, Professor Lakkaraju discusses why model understanding is critical for high-stakes decision-making with ML, and how to achieve it. She then walks us through examples that demonstrate these concepts. Professor Lakkaraju moves on to explaining individual model decisions based on inputs, looking specifically at the LIME explainability method, and touches on what to do when explainability methods disagree.
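LIME's core idea is to explain one prediction at a time: sample perturbed inputs around the instance of interest, label them with the black-box model, weight them by proximity, and fit a simple weighted linear model whose coefficients serve as local feature importances. The sketch below is a minimal from-scratch illustration of that idea, not the actual LIME library; the `black_box` model, kernel width, and sample count are invented for the example.

```python
import math
import random

def black_box(x):
    # Hypothetical black-box classifier for the demo: class 1 iff 2*x1 - x2 > 0.
    return 1.0 if 2.0 * x[0] - x[1] > 0 else 0.0

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system Ax = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lime_explain(predict, instance, n_samples=2000, kernel_width=1.0, seed=0):
    """LIME-style local explanation: weighted linear surrogate around `instance`."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        # Perturb the instance, label with the black box, weight by proximity.
        z = [xi + rng.gauss(0.0, 1.0) for xi in instance]
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(z, instance)))
        X.append([1.0] + z)  # leading 1.0 is the intercept column
        y.append(predict(z))
        w.append(math.exp(-(dist ** 2) / (kernel_width ** 2)))
    # Weighted least squares via the normal equations: (X'WX) beta = X'Wy.
    p = len(instance) + 1
    A = [[sum(w[k] * X[k][i] * X[k][j] for k in range(n_samples)) for j in range(p)]
         for i in range(p)]
    b = [sum(w[k] * X[k][i] * y[k] for k in range(n_samples)) for i in range(p)]
    coefs = solve(A, b)
    return coefs[1:]  # per-feature local importances (intercept dropped)

weights = lime_explain(black_box, [1.0, 1.0])
# Locally, x1 pushes toward class 1 (positive weight) and x2 away (negative weight).
```

Because the surrogate is fit only on points near the instance, the weights describe the model's behavior locally, which is exactly why different explanation methods can disagree on the same prediction.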

Finally, Professor Lakkaraju explores conversational interfaces for model understanding and dives into the TalkToModel dialogue system. Besides demonstrating a compelling conversational explainable AI (XAI) interface, TalkToModel shows us how we can use language models to interact with complex systems and make them more accessible to a wide audience.
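A conversational XAI interface like TalkToModel translates a user's natural-language question into a structured operation that an explainability backend can execute. The toy sketch below illustrates that parse-then-dispatch pattern with simple keyword matching; the operation names, rules, and stubbed responses are invented for illustration and are far simpler than TalkToModel's actual language-model-based parser.

```python
# Hypothetical sketch of a TalkToModel-style parse-then-dispatch loop.
# Ordered (keyword, operation) rules: earlier rules take priority.
RULES = [
    ("what if", "counterfactual"),
    ("why", "explain"),
    ("important", "feature_importance"),
    ("accurate", "evaluate"),
    ("predict", "predict"),
]

def parse_query(utterance: str) -> str:
    """Map a user question to the name of a backend operation."""
    text = utterance.lower()
    for keyword, operation in RULES:
        if keyword in text:
            return operation
    return "unknown"

def run(utterance: str) -> str:
    # Dispatch to a (stubbed) explainability backend.
    handlers = {
        "explain": lambda: "Top features pushing this prediction: ...",
        "counterfactual": lambda: "Changing this feature would flip the prediction.",
        "feature_importance": lambda: "Globally, the most important features are: ...",
        "evaluate": lambda: "Model accuracy on held-out data: ...",
        "predict": lambda: "The model's prediction for this input: ...",
    }
    return handlers.get(parse_query(utterance),
                        lambda: "Sorry, I didn't understand that question.")()

# Example dialogue turns:
# parse_query("Why did the model deny this loan?")          -> "explain"
# parse_query("What if the applicant's income were higher?") -> "counterfactual"
```

The point of the pattern is that the hard part, mapping free-form language onto a fixed grammar of operations, is exactly where a language model earns its keep over keyword rules like these.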

To learn more about ML explainability and language model UIs, watch the video and join the conversation on Discord. Stay tuned for more episodes in our Talking Language AI series!
