Welcome to the AI Alignment Liaison

The AI Alignment Liaison (AIAL) is an open-source initiative dedicated to guiding AI development teams in aligning their product development processes with a set of human values. Our goal is to make it easy to build trustworthy AI systems by integrating the principles that are important to you, such as explainable artificial intelligence (XAI) and ethical AI, into AI product development workflows.


Mission Statement

The mission of the AI Alignment Liaison is to empower AI teams to align their AI development processes with human values. By leveraging generative AI and a human-in-the-loop approach, we provide teams with guidance on how to take action on their values during every stage of AI product development. Our focus is on enabling organizations to efficiently develop trustworthy AI through tools that make responsible development accessible. By doing this, we aim to help people create AI systems that genuinely reflect the values of the communities they impact.

 

What problem are we solving?

AI systems are increasingly shaping decision-making processes, yet they often pursue narrow objectives without accounting for their broader effects on people and society. This can result in unintended consequences and ethical dilemmas. The challenge lies in developing AI with processes that are aligned with human values and organizational goals while adhering to responsible practices.

 

Solution: Meet the AI Alignment Liaison

The AI Alignment Liaison addresses this challenge by combining LLMs with a comprehensive database of responsible AI practices and a deep contextual understanding of the product you are developing. We empower you to build AI-powered products that are grounded in your values, comply with relevant regulations, and earn stakeholder trust. By embedding your core values throughout the development process, you can mitigate risks and thoughtfully shape your product's impact.

 


Frequently Asked Questions

What is the AI Alignment Liaison?

The AI Alignment Liaison is a generative AI tool that advises AI teams on how to align their product development with human values. It provides guidance on Responsible AI (RAI) practices through collaborative brainstorming, documentation, and strategic planning. The system’s key functions include:

- Value Identification: Helping define what values matter to your organization.

- Requirement Development: Creating product requirements that reflect those values.

- Action Planning: Developing strategies to meet the requirements.

- Validation Strategies: Defining how to check that the requirements have been successfully met.

 

What's the goal of building this product?

  • Improve access to best practices in RAI
  • Align product development practices with human values
  • Provide a platform for advancing RAI
  • Help teams meet compliance requirements

How does it suggest values & requirements for AI development?

The Liaison combines Project-driven Context Retrieval, Responsible AI (RAI) Document Retrieval, and User Research to recommend an initial set of values. It then generates RAI requirements based on these values, mapping them to well-studied RAI principles.

Can users customize the system recommendations?

Yes, the system is fully human-editable. Users can add, delete, modify, and prioritize values, requirements, validation strategies, and action strategies. The AI Alignment Liaison also supports collaborative brainstorming, allowing users to refine these elements throughout the development process.

What types of documents can the AI Alignment Liaison generate?

The system can generate various high-quality documents, such as:

  • AI Ethics Statements - documenting the ethical principles underlying your AI initiatives and facilitating alignment throughout your organization.
  • Product Requirements Documents (PRDs) - tailored to your responsible AI requirements.

Tell me about traceability!

Traceability in the AI Alignment Liaison involves tracking the process from "value or compliance document → RAI requirements → action strategy & validation strategy." This structured mapping can serve as evidence for meeting compliance requirements, building brand equity, and increasing both accountability & transparency in AI development.

A traceability tool will automatically generate flowcharts that demonstrate your commitment to this process.
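To make the traceability chain concrete, here is a minimal sketch of how such a mapping could be represented as a data structure. This is an illustrative example only: the record fields, the sample fairness value, and the `to_flowchart` helper are assumptions for demonstration, not output from or internals of the actual tool.

```python
from dataclasses import dataclass


@dataclass
class TraceabilityRecord:
    """One link in the value/compliance-document -> requirement -> strategy chain."""
    source: str               # originating value or compliance document
    requirement: str          # RAI requirement derived from the source
    action_strategy: str      # how the team plans to meet the requirement
    validation_strategy: str  # how the team verifies the requirement was met


# Illustrative example: a fairness value traced through to validation.
record = TraceabilityRecord(
    source="Fairness (organizational value)",
    requirement="Model error rates must be comparable across user subgroups",
    action_strategy="Audit training data coverage for each subgroup",
    validation_strategy="Report per-subgroup error rates before each release",
)


def to_flowchart(records):
    """Render records as a simple text flowchart, one chain per line."""
    return "\n".join(
        f"{r.source} -> {r.requirement} -> "
        f"{r.action_strategy} & {r.validation_strategy}"
        for r in records
    )


print(to_flowchart([record]))
```

Because each record keeps an explicit pointer back to its source value or compliance document, the chain can be walked in either direction: from a value down to the tests that validate it, or from a failing validation back up to the value it protects.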

How does the system help with compliance?

The Compliance Navigator uses tools to identify relevant compliance documents based on the organization or the product being developed. It then advises on action strategies to meet compliance requirements and suggests validation strategies to check that these requirements are satisfied.

Is this compatible with Agile development?

Yes. AIAL will break down your action & validation strategies into manageable user stories that can be automatically uploaded into tools like JIRA and ClickUp.

How do I decide what my values are?

When beginning a project, you fill out an intake form, including your website, industry (if applicable), and any relevant project documents. You'll also complete a checklist indicating which groups of values you'd like to explore. The system will then suggest potential values based on:

  • Your documentation (including your website)
  • Automated market research about your users
  • Common values associated with your industry

From there, you can have a conversation with our chatbot to refine and prioritize your values. In the end, you have full control of this process - you decide what is important to you.


Contact Me

Get in touch for more information, to discuss the contents of this website, or to explore potential future collaborations.


About Me

I'm Ari Tal, the person behind this initiative. It might not surprise you to hear that I have a passion for Responsible AI and Explainable AI (XAI). I like to open up an AI system to understand the driving forces behind how its model was formed and what its decision-making processes look like. Relying on a single metric to ensure your model is performing as expected does not feel satisfying to me.

What if your dataset is not sufficiently comprehensive? That could lead to a model that performs well for your data, but fails on unseen data or specific subpopulations - meaning you could have a model that is biased, perpetuates discrimination, or otherwise behaves unfairly.

How do you communicate model behavior to other members of your team? If you have an exceedingly complex model (as we often do when utilizing machine learning), communication about your AI development efforts can be stifled. For example, your subject matter expert might never realize that the model you developed behaves entirely differently from how they believe it should.

What do you do when your model has a problem or has unexpected behavior? Auditing a model, tracing how it makes its decisions, and debugging issues can be nearly impossible without tools to inspect model behavior.

These are just a few questions that come to mind when people ask me about my fascination with XAI.

Transparency paves the way for accountability and can thus have downstream impacts on the fairness & equity of an AI system. That means the ability to communicate how your models behave and how they reach decisions can be a foundational necessity for responsible AI development and for building trustworthy AI systems. I hope to equip you with the tools for such communication, whether you are working with a teammate to debug your model or explaining a prediction to an end user. This is why the heart of this website is a guide designed to help you select appropriate XAI techniques, so that you can tailor your explanations to meet varied needs.