Image created by Areal Tal using a text-to-image AI generator, with additional modifications, 2023.

Welcome To Leveling Up With XAI

Leveling Up With XAI is evolving to become the home of the AI Alignment Liaison, an open-source initiative dedicated to guiding AI development teams in aligning their product development processes with a set of human values. Our goal is to make it easy to build trustworthy AI systems by integrating the principles that are important to you, such as explainable artificial intelligence (XAI) and ethical AI, into AI product development workflows.

Leveling Up With XAI is now a sub-domain of the AI Alignment Liaison. This platform still contains your XAI guide for tabular models, offering a clear path for understanding explainability techniques and applying them to your work. We provide a step-by-step approach to choosing the right XAI methods for your specific needs, ensuring clarity and transparency in AI decision-making.

Now, as part of the AI Alignment Liaison, we will also focus on developing an LLM system that advises on aligning your AI products with your values and those of your users. Whether you're building your first model or refining your approach to AI Safety, this inclusive resource is designed for all levels of expertise.

Step inside and discover a world where AI's decision-making is clear, transparent, and aligned with your values. Engage with our content to empower yourself in bringing trustworthiness and clarity to the forefront of AI applications.



Leveling Up With XAI serves as a comprehensive educational resource, designed to guide model developers in selecting the appropriate tools for explaining their AI models. It also assists those interested in AI by showcasing the range of available explanation tools. The guide is tailored to fit a variety of use cases and caters to varying levels of technical expertise among both the creators and users of model explanations. The focus is on aligning explanation tools with the specific needs of explanation consumers, ensuring relevance and practical applicability.

 

The guide emphasizes the selection of algorithms and techniques that facilitate transparency and ease of understanding in model decision-making. The explanation approaches it covers can:

- Offer stakeholders a broad picture of how a model functions and how it makes decisions, through methods like Feature Importance and Surrogate Models (a short code sketch follows this list).
- Give a detailed view of how individual features and their interactions influence outcomes.
- Reveal the key influences on each individual prediction, providing clarity for users, especially for high-stakes decisions.
- Illuminate how the training data affects a model's decision-making process, providing additional insight into the model's strengths, limitations, and potential biases.
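To make the first category more concrete, here is a minimal sketch in Python with scikit-learn. The dataset, model choice, and hyperparameters are placeholders for illustration, not recommendations from the guide: it computes permutation feature importance for a "black-box" random forest and then fits a shallow decision tree as a surrogate model.

```python
# Illustrative sketch of two global explanation methods:
# permutation feature importance and a surrogate model.
# Dataset, model, and hyperparameters are assumptions, not guide recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" we want to explain.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 1) Permutation feature importance: how much does shuffling each feature hurt performance?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")

# 2) Surrogate model: a shallow, interpretable tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, model.predict(X_train))
print(export_text(surrogate, feature_names=list(X.columns)))
```

The printed rules of the surrogate tree give stakeholders an approximate, human-readable picture of how the more complex model behaves, which is the kind of broad-picture explanation described above.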


Frequently Asked Questions

What programming language do I need to use to explain my models?

I have tried to make this guide agnostic to any particular programming language. However, I personally use Python for machine learning and for explaining my ML models, and I do not know whether all of these techniques have implementations outside of Python.

I don't have a technical background. Is this guide relevant to me?

Though this guide is tailored for model developers, anyone involved in the development of AI products can use it to learn what explanatory techniques are out there. That way, you can be a part of the process of choosing how to explain your ML models.

Can you only explain machine learning models with these tools or can you explain models not built with machine learning?

This guide was developed for the purpose of explaining machine learning models. However, most of the tools mentioned (aside from certain example-based explanations) can also be used, with a bit of extra work, for other kinds of models that make predictions. A brief sketch of one such approach follows.
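For a rough idea of what that extra work can look like, here is a hedged Python sketch: a hand-written (non-ML) scoring rule is wrapped as a plain prediction function and explained with SHAP's model-agnostic KernelExplainer. The scoring rule, feature names, and numbers are made up purely for illustration and are not part of the guide.

```python
# Illustrative sketch: explaining a model that was NOT built with machine learning.
# The hand-coded scoring rule below is hypothetical; KernelExplainer only needs a
# prediction function, so the rule can be explained like any ML model.
import numpy as np
import shap  # assumes the `shap` package is installed

feature_names = ["income", "debt", "age"]  # made-up features

def rule_based_score(X: np.ndarray) -> np.ndarray:
    """Hypothetical scoring rule: higher income and age raise the score, higher debt lowers it."""
    income, debt, age = X[:, 0], X[:, 1], X[:, 2]
    return 0.5 * income - 0.8 * debt + 0.1 * age

# Background data the explainer uses to simulate "missing" features.
background = np.array([[50_000, 10_000, 35],
                       [30_000,  5_000, 50],
                       [80_000, 20_000, 28]], dtype=float)

explainer = shap.KernelExplainer(rule_based_score, background)

# Explain one "prediction" made by the rule.
applicant = np.array([[60_000, 15_000, 40]], dtype=float)
shap_values = explainer.shap_values(applicant)

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.1f}")
```

Because KernelExplainer only needs a function that maps inputs to outputs, the same pattern works for any deterministic model that makes predictions, whether or not it was learned from data.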

Do you cover large language models (LLMs)?

No. This guide does not cover how to build, use, or explain large language models. For a comprehensive view of what is and what is not covered, please see the introduction of the guide.


Contact Me

Get in touch with me for more information, to discuss the contents of the website, or to discuss future potential collaborations.


About Me

I'm Ari Tal, the person behind this initiative. It might not surprise you to hear that I have a passion for Responsible AI and Explainable AI (XAI). I like to open up an AI system to understand the driving forces behind how its model was formed and what its decision-making process looks like. Relying on a single metric to ensure your model is performing as expected does not feel satisfying to me.

What if your dataset is not sufficiently comprehensive? That could lead to a model that performs well on your data but fails on unseen data or on specific subpopulations, meaning you could have a model that is biased, perpetuates discrimination, or otherwise behaves unfairly.

How do you communicate model behavior with other members of your team? If you have an exceedingly complex model (as we often do when utilizing machine learning), communication about your efforts in AI development could be quite stifled. For example, your subject matter expert might never realize that the model you developed behaves entirely differently than they believe it should.

What do you do when your model has a problem or has unexpected behavior? Auditing a model, tracing how it makes its decisions, and debugging issues can be nearly impossible without tools to inspect model behavior.

These are just a few questions that come to mind when people ask me about my fascination with XAI.

Transparency paves the way for accountability and thus can have downstream impacts on the fairness and equity of an AI system. That means the ability to communicate how your model behaves and how it reaches decisions can be a foundational necessity for responsible AI development and for building trustworthy AI systems. I hope to equip you with the tools for such communication, whether you are working with a teammate to debug your model or explaining a prediction to an end user. This is why the heart of this website is a guide designed to help you select appropriate XAI techniques, so that you can tailor your explanations to meet varied needs.

Join me in this exploration, and let's unravel the mysteries of AI together. Let's level up your AI with XAI!