Relying On AI for Ethical Decisions

Got ethics?

How explainable AI helps you trust your AI algorithms

Artificial intelligence has increasingly become a part of our daily lives. And while this provides numerous benefits and conveniences, many people are uncomfortable relying on the technology. In fact, AI is often referred to as a “black box” because we (meaning human users) don’t really know how large AI models do what they do. We know the question or data the AI model starts with (input) and the answer it produces (output), but it can be hard to determine why the model chose a particular answer. This is a problem for ethical AI: if you are relying on AI to make critical decisions, you must have a foundation of trust in the algorithm. Our team at Kitware believes trust can be established through understanding: understanding the “how” behind your algorithm using explainable AI methods.

Enter Explainable AI 

Explainable AI (XAI) is a set of tools and resources that can help users better understand and appropriately trust the output of their AI models. Especially for large AI models, some of which contain hundreds of millions of parameters, XAI aims to explain the rationale, characterize strengths and weaknesses, and convey an understanding of how the technology will behave in the future. Commonly used XAI techniques include visual, text-based, and counterfactual (“what-if”) explanations.
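As a concrete illustration of the counterfactual (“what-if”) style of explanation, consider the minimal sketch below. It is a hypothetical example, not part of any Kitware tool: the toy loan-approval `model` and the `counterfactual` helper are invented for illustration. The sketch searches for the smallest change to one input feature that flips the model’s decision, answering a question like “how much higher would the applicant’s income need to be for approval?”

```python
# Minimal counterfactual ("what-if") explanation sketch.
# Hypothetical example: `model` is a toy stand-in for a black-box
# loan-approval classifier (1 = approve, 0 = deny).
import numpy as np

def model(x: np.ndarray) -> int:
    income, debt = x
    return int(0.6 * income - 0.9 * debt > 10)

def counterfactual(x, feature, step=0.5, max_steps=200):
    """Nudge one feature upward until the model's decision flips."""
    original = model(x)
    probe = x.astype(float)
    for _ in range(max_steps):
        probe[feature] += step
        if model(probe) != original:
            return probe  # first probed input that changes the output
    return None  # no flip found within the search budget

applicant = np.array([20.0, 8.0])          # (income, debt) -> denied
cf = counterfactual(applicant, feature=0)  # vary income only
print(applicant, "->", cf)                 # approved once income reaches ~29
```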

Want to learn more?

Read “DARPA’s explainable AI (XAI) program: A retrospective”

Great! So how does it work, exactly?

Imagine an AI model that has been trained to classify images of cats and dogs. When shown an image of a cat, the model would appropriately classify it as “cat.” Using XAI techniques, you would learn that the model identified a cat because of the presence of fur, whiskers, claws, and pointy ears. This is exemplified in Figure 1, which applies the saliency map technique. (See “Understanding AI with Saliency Maps” for more information.)

Figure 1. A saliency map highlights the image regions that drove the model’s classification of “cat” versus “dog.”
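To make Figure 1 concrete, here is a minimal, from-scratch sketch of occlusion-based saliency, one of the simplest ways to produce such a map. This is an illustrative sketch, not Kitware’s implementation; `predict_cat_prob` is a hypothetical stand-in for a trained cat/dog model, and a production workflow would more likely use a dedicated library such as xaitk-saliency.

```python
# Occlusion-based saliency sketch (illustrative only).
# `predict_cat_prob` is a hypothetical trained model that returns
# the probability that an image contains a cat.
import numpy as np

def occlusion_saliency(image, predict_cat_prob, patch=16):
    """Slide a gray patch over the image; regions whose occlusion
    lowers the "cat" score the most are the most salient."""
    h, w = image.shape[:2]
    baseline = predict_cat_prob(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 127  # mask one region
            saliency[y:y + patch, x:x + patch] = baseline - predict_cat_prob(occluded)
    return saliency  # high values mark fur, whiskers, pointy ears, etc.
```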

The Importance of Explainable AI

Over the past several years, Kitware has been developing XAI technology, primarily on the DARPA XAI program. We believe that XAI is a useful tool that will advance research and, more importantly, strengthen the ethics of applying AI to real-world scenarios. XAI is critical in high-stakes situations where incorrect or inaccurate outputs can negatively impact human lives. Settings such as autonomous driving, criminal justice, and healthcare require reliable and trustworthy AI algorithms. (See “Why Everyone is Thinking About Ethical AI, and You Should Too” for more on ethical AI.)

Kitware’s CVPR 2022 workshop paper, “Doppelganger Saliency: Towards More Ethical Person Re-Identification,” explores this in detail. We used saliency maps generated with the xaitk-saliency package to help users understand how images of visually similar people (doppelgangers) were being matched (Figure 2). The saliency maps highlighted subtle differences between doppelgangers, such as logos on shirts or differences in shoe color. In high-stakes situations, XAI could reduce the risk and negative consequences of false matches during person re-identification. This tool could be especially useful for ensuring surveillance systems operate ethically.

Figure 2. Example of doppelganger saliency. Image regions that differ between the two individuals (e.g. face, shirt logo, pants, and shoes) are highlighted in green. For illustration purposes, colored arrows pointing to corresponding image regions are shown. Note that a region does not have to be highlighted in both images to be considered a difference. In a full person re-identification system, the user can view the highlighted regions to quickly spot visual differences in the doppelganger pair.
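The same occlusion idea extends from classification to matching: instead of probing a class score, we probe the similarity between two images. The sketch below is a simplified, hypothetical illustration of that idea, assuming an `embed` function that maps a person image to a feature vector; the actual doppelganger saliency method from the paper, available in xaitk-saliency, is more sophisticated.

```python
# Simplified similarity-saliency sketch for person re-identification
# (illustrative only; see xaitk-saliency for the real method).
# `embed` is a hypothetical re-id network mapping an image to a vector.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_saliency(query, gallery, embed, patch=16):
    """Occlude regions of the query image and record how much the
    match score against the gallery image drops."""
    h, w = query.shape[:2]
    ref = embed(gallery)
    baseline = cosine(embed(query), ref)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = query.copy()
            occluded[y:y + patch, x:x + patch] = 127  # mask one region
            saliency[y:y + patch, x:x + patch] = baseline - cosine(embed(occluded), ref)
    return saliency  # large drops mark regions driving the match (cf. Figure 2)
```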

How to Apply XAI

XAI can serve multiple purposes, depending on the goals and needs of different users. Potential users might include government entities or agencies such as the DoD; researchers, engineers, and data scientists in the field of XAI; and other policy- and decision-makers (Figure 3).

Figure 3. Potential users of the XAI toolkit. Understanding the intended end user is central to determining the appropriate level of explainability needed.

XAI can be applied to help these users understand and/or trust the outputs of a model, certify the model for deployment, drive progress in AI research, or inform business decisions. As part of the XAI Toolkit (XAITK), Kitware developed a concept map to help new users identify the appropriate XAI tool for their task. Our team of experts can also work with you to integrate XAI into your existing AI tools and workflows. Being at the forefront of XAI, Kitware has led many projects focused on advancing and implementing XAI over the past few years. To learn more, request a meeting with our team.
