What Your Models Are Missing: Approachable Artificial Intelligence

Explanation of Explainable AI in the context of the field of Artificial Intelligence
Photo by Andrea De Santis on Unsplash

Introduction

Artificial Intelligence has evolved immensely and is integrated into all parts of our lives, from Netflix recommendation lists and TikTok filters to chatbots and autonomous cars. We depend more and more on Artificial Intelligence to make decisions in our daily lives, which means we should know how these algorithms arrive at those decisions. Eventually, every business, along with education, healthcare, finance, and law enforcement, will depend on an Artificial Intelligence algorithm in some way. These decisions and predictions are sometimes a matter of life and death, especially in healthcare applications. So we must know how these Artificial Intelligence systems make decisions, how they are deployed, and what that means for us. This leads us to “Approachable Artificial Intelligence.”

Approachable Artificial Intelligence is a cornerstone of modern Artificial Intelligence that provides context for what models do “under the hood.” It is all about taking a deeper look into how Artificial Intelligence models work, whether that is a sales forecasting model, a customer segmentation model, or an image recognition model.

In short, Approachable Artificial Intelligence means the system explains to you how it makes its decisions.

Artificial Intelligence learns patterns from data on its own, and because modern models are so vast, we cannot simply look inside a deep neural network to see how it reaches a decision. Being able to make an Artificial Intelligence system explain how it arrives at its decisions gives us a proper checkpoint on the decisions being made.

Desa Analytics leverages Approachable Artificial Intelligence for its clients. We not only build Artificial Intelligence models, but we also deliver Approachable Artificial Intelligence so that clients gain a proper understanding of how those models operate and make decisions “under the hood.”

What does it mean for businesses?

As Artificial Intelligence becomes more prevalent in business decision-making, companies and organizations need governance over their Artificial Intelligence systems so that their use can be overseen and regulated. With adoption of Approachable AI heading towards explosive growth, this oversight helps prevent incorrect systems driven by biased models. Proper implementation can have a significant total economic impact: increased present value from positive projected cash flows, a return on investment (ROI) through reduced costs, and the flexibility to grow in strategic importance while reducing risk over time as the machine learning algorithms continue to learn. In other words, the models that make business decisions keep learning from new data and get better over time.

How can we better understand machine learning models using different methods from the research field?

Thanks to progress in deep learning, we have been able to create remarkable machine learning models, such as those used in autonomous driving or the healthcare sector. Many of these systems have become so complex that the Artificial Intelligence is referred to as a black box, yet we must still understand what goes on inside the algorithm.

Interpretable machine learning provides techniques to better understand and validate how machine learning models work.
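
To make this concrete, here is a minimal sketch of an inherently interpretable model: a shallow decision tree whose learned rules can be printed and read directly. It assumes Python with scikit-learn and uses the library’s built-in iris dataset purely for illustration.

```python
# A minimal sketch of an inherently interpretable model: a shallow decision
# tree whose learned splits can be rendered as plain if/else rules.
# The iris dataset is an illustrative choice, not part of any fixed recipe.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Keep the tree shallow so every decision path stays human-readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned rules in a form a person can validate.
print(export_text(tree, feature_names=feature_names))
```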

Worldwide interest in Explainable AI over the past five years.
Image by Author, screenshot from Google Trends

The trend of interest over the past five years has been increasing steadily, with most searches coming from Asian countries and the Netherlands.

Understanding the models is relevant for the data scientists and ML engineers who build the models and the users of the algorithms who expect explanations for why certain decisions are made.
Image by Author

Transparency and interpretability are currently seen as having a trade-off with model accuracy.

Comparison of model interpretability and model accuracy across different Machine Learning / Artificial Intelligence models. The visual suggests that more complex models, on average, tend to be less explainable.
Source: Machine Learning for 5G/B5G Mobile and Wireless Communications: Potential, Limitations, and Future Directions

Usually, when building models from data, a trade-off can be observed. We have simple linear models that can be easily interpreted by humans but might not lead to superb predictions for complex problems.

Or we build highly non-linear models that provide better performance on most tasks but are too complex for humans to understand. Neural networks, for instance, often have millions of parameters, far more than any human can inspect.
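
A minimal sketch of that trade-off, assuming Python with scikit-learn and its built-in breast-cancer dataset: a logistic regression whose weights a person can read line by line, next to a gradient-boosted ensemble that typically scores somewhat higher but offers no equally direct explanation. The exact numbers will vary; the contrast is the point, not the benchmark.

```python
# Illustrative comparison of an interpretable linear model and a more
# flexible but opaque ensemble. Dataset and model choices are assumptions
# made only for the sketch.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple, interpretable model: one readable weight per feature.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
linear.fit(X_train, y_train)

# More flexible model: hundreds of trees whose combined logic is opaque.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", linear.score(X_test, y_test))
print("gradient boosting accuracy:  ", boosted.score(X_test, y_test))

# The linear model's coefficients are its explanation; the ensemble has no
# equally direct counterpart.
coefs = linear.named_steps["logisticregression"].coef_[0]
print("number of interpretable weights:", len(coefs))
```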

Two different approaches to designing explainable AI models. Desa Analytics specializes in unboxing black box algorithms for our clients so they have a deep understanding of how models work under the hood.
Image by Author

This is where Desa Analytics can shine a light on black-box algorithms and add value to businesses. We can simplify complex Artificial Intelligence models for clients so they don’t have to worry about the technical complexity underneath.

Therefore, we generally have two options: ensure that the machine learning algorithm itself can be interpreted, or derive human-understandable explanations from a complex, trained model. In the literature, these are usually called model-based and post hoc approaches, respectively. Post hoc methods can be further divided into black-box approaches and white-box approaches:

  • Black-box approaches mean we don’t know anything about the model; we only use the relationship between its inputs and outputs (a minimal sketch of such a technique follows this list).
  • White-box approaches assume access to the model internals. The field of Approachable AI also touches on the psychology of what makes a good explanation and which types of explanations work best for humans.
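
As one concrete example of a post hoc, black-box technique, the sketch below computes permutation importance: it only watches how the model’s held-out score drops when each input feature is shuffled, never looking inside the model. It assumes Python with scikit-learn; the random forest and the built-in breast-cancer dataset are illustrative choices, not requirements of the method.

```python
# A minimal sketch of a post hoc, black-box (model-agnostic) technique:
# permutation importance, which needs only the model's inputs and outputs.
# Model and dataset choices here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
names = load_breast_cancer().feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any fitted estimator works; the explanation never looks inside it.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features whose shuffling hurts the score the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```
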
The traditional model lifecycle tends to exclude the component for building explainable AI. Desa Analytics solves that problem for clients.
Image by Author

Now let’s talk about the terminology that is used in this research field.

The different types of methods can be distinguished according to a few properties. First of all, we can differentiate between model-agnostic and model-specific Approachable AI methods.

Model-agnostic vs. model-specific and global vs. local Approachable AI methods.
Image by Author
  • Model-agnostic: the Approachable AI algorithm can be applied to any model — random forest, neural network, support vector machine.
  • Model-specific: the method was designed for a specific type of machine learning model, for example a technique that only works for neural networks.

Regarding the scope of the provided explanations, we can also categorize methods into global and local approaches: they either aim to explain the whole model or zoom in to explain individual predictions.
Source: “Why should I trust you?”: Explaining the Predictions of Any Classifier, Ribeiro et al.

To explain this a little further, recall that the decision boundary of a complex model can be highly non-linear. In the figure, for instance, we have a classification problem where only the complex function on the left can separate the two classes. It often doesn’t make sense to explain the global model; instead, many approaches zoom into a local area and explain the individual predictions made near that part of the decision boundary. To dig deeper into this, you can look at local, or regional, explainability approaches.

Image by Author
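
The sketch below illustrates the idea of a local explanation in the spirit of LIME (Ribeiro et al.): perturb a single instance, ask the black-box model to label the perturbations, and fit a small weighted linear surrogate around that one prediction. It assumes Python with NumPy and scikit-learn; the dataset, kernel width, and sample count are illustrative assumptions rather than fixed choices.

```python
# A minimal sketch of a local, model-agnostic explanation in the spirit of
# LIME: explain one prediction by fitting a weighted linear surrogate on
# perturbations around that instance. All concrete choices (dataset, kernel
# width, sample count) are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
names = load_breast_cancer().feature_names
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x0 = X[0]                                  # the single prediction to explain

# 1. Sample perturbations in a neighbourhood of x0, scaled per feature.
scale = X.std(axis=0)
samples = x0 + rng.normal(0.0, 0.5, size=(500, X.shape[1])) * scale

# 2. Query the black box: only its inputs and outputs are used.
probs = black_box.predict_proba(samples)[:, 1]   # predicted probability of class 1

# 3. Weight samples by closeness to x0 and fit an interpretable surrogate on
#    standardised offsets, so coefficients are comparable across features.
Z = (samples - x0) / scale
weights = np.exp(-np.linalg.norm(Z, axis=1) ** 2 / 2.0)
surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)

# The surrogate's largest coefficients explain this one local decision.
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
for i in top:
    print(f"{names[i]}: {surrogate.coef_[i]:+.4f}")
```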

Besides model agnosticism and the scope of a method, we can further differentiate techniques according to the data types they can handle; not all explainability algorithms work with all data types.

Make the black box transparent with Approachable AI.

Approachable AI is a cornerstone of modern AI that allows people to work together with AI. Interest in it has grown steadily, with increasing Google searches over the past five years. It has a tremendous impact on businesses because it improves decision-making. As better data becomes available, leveraging AI and understanding how it makes complex decisions is an excellent opportunity for companies. Black-box Artificial Intelligence algorithms can now be explained, and that is where Desa Analytics comes in to provide insights for your business in the simplest way.
