Interpretable Machine Learning
About this resource
Interpretable Machine Learning (IML) provides a concise yet substantively deep breakdown of many classic machine learning models that serve as the foundation for most machine learning applications today. In a field where it is easy to be amazed by the performance of "black-box" models, it is important for both beginners and advanced practitioners to understand what makes a given model succeed, and where it is constrained, in a given application. These details not only improve our ability to build great models but also help us extract information that might inform the framing of the problem, the choice of tools, or even the science behind the problem. In this book, you will explore the important mathematical and algorithmic structure of models and how their features influence the outputs, and you will also learn model-agnostic ways to evaluate a model's performance.
Questions?
If you have any lingering questions about this resource, please feel free to post to the Nexus Q&A on GitHub. We will improve materials on this website as additional questions come in.
See also
- Introduction to Statistical Learning: An introduction to the topics explored in IML.
- Trustworthy AI Workshop: A workshop for applying and extending what you've learned in this book while gaining additional perspectives on explainability and trustworthiness in AI.