AI explainability and causal inference are necessary for AI adoption

Automated decision-making systems are both producers and consumers of data.

The disruption in machine learning and AI technology, skills, and research has transformed how companies look at their #datascience strategy. The mindset change is significant: companies that were heavily dependent on their present success for a healthy revenue stream and did not move early enough now face existential angst.

This post is not about them. The organisations that did invest in data science early on face their own myriad of challenges, the primary one being the availability of good talent in the space. This post is not about them either. It's about the ones that are successfully 'absorbing' intelligence to make their internal systems smarter and are seeing a significant impact on their profitability #domorewithless.


If we move forward in time, the marked difference in profitability attributed to #AI adoption will lead these companies to make further investments in automated decision making. There are certain challenges they will face before decision making moves entirely to a self-learning, self-managing, and self-evolving AI. These challenges arise primarily from a very simple fact: automated decision-making systems are both producers and consumers of data. Let's dive a bit deeper.

Ideal AI architecture for real-world systems

Let's take the example of a self-driving car trained using reinforcement learning.

The initial model tries to navigate the route by avoiding collisions with the white lines. It does not plan a route, and it does not determine the best path based on the objects ahead of it. It is trained for a fixed, repetitive task; it is inexpensive; and as long as the course remains the same and it has been given enough training, it will perform relatively fast and decently well. This is model-free reinforcement learning. However, it does not perform very well under uncertainty.

Model-free reinforcement learning assumes no understanding of the environment or of the consequences of an action. It simply trains a controller on the rewards its actions receive, accumulated through a tremendous amount of trial and error.
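To make that concrete, here is a minimal sketch of model-free learning: tabular Q-learning on a toy track. The `env` object and its `reset()`/`step()` interface are illustrative assumptions, not a specific library; the point is that the controller only ever sees states, actions, and rewards, never a model of the environment.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Model-free control: learn Q[(state, action)] purely from trial and error."""
    Q = defaultdict(float)

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit what experience has taught so far.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # Move the estimate towards reward plus discounted value of the best next action.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Notice that nothing in the update knows how the track works; if the course changes, the learned values are stale and the agent has to grind through trial and error again.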

The other form, which is a lot more interesting, is model-based reinforcement learning. Here the agent maintains an explicit model of itself and of the environment in which it operates. In the car example above, a model that captured the constraints of the track, how to deal with obstructions of different types, and when to speed up or slow down would significantly improve driving in real-world scenarios. Model-based reinforcement learning can be complex and computationally intensive, but it deals much better with uncertainty.
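A hedged sketch of the model-based flavour follows. Instead of learning only from rewards received, the agent looks ahead by simulating its model of the world. The `transition_model` and `reward_model` functions are illustrative placeholders standing in for whatever dynamics the agent has learned or been given.

```python
def plan_action(state, actions, transition_model, reward_model, depth=3, gamma=0.95):
    """Pick the action whose simulated rollout (under the model) looks best."""

    def rollout_value(s, remaining):
        # Value of state s under greedy lookahead for `remaining` more steps.
        if remaining == 0:
            return 0.0
        return max(
            reward_model(s, a) + gamma * rollout_value(transition_model(s, a), remaining - 1)
            for a in actions
        )

    return max(
        actions,
        key=lambda a: reward_model(state, a)
        + gamma * rollout_value(transition_model(state, a), depth - 1),
    )
```

The cost is visible in the structure: the lookahead grows as |actions| to the power of the depth, which is exactly why running a model-based planner on every decision is expensive.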

There is a debate about which approach is better. My view is that both are important: there are times when you need speed over accuracy, and times when you simply cannot afford to be wrong. A holistic implementation needs both, competing with and complementing each other. When uncertainty is high, model-based systems work better; when habits develop and uncertainty is lower, the model-free system becomes dominant. It also doesn't hurt that you avoid running a computationally intensive model-based system all the time. It is simply efficient.

Wait, but why did I say that such systems are both producers and consumers of data?

Let's talk about what data is produced and consumed. A model-free system generates data about the rewards of the actions it takes, which must be fed back into the model. A model-based system, meanwhile, has to work with partial observability: in the real world, data often does not fully capture the state of the environment, because part of that state is hidden and can only be recovered through causal inference. Consider an example: if a patient in one hospital ward recovers faster than a patient with the same underlying problem in a clone hospital with exactly the same processes, what hidden state explains the difference between the two recovery rates? It could be that the nurse attending the first patient provides better care out of deeper empathy. The hidden state, in this case, is not directly observable from the data but is critical to improving patient recovery rates.
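A small synthetic simulation makes the point. The numbers and the `care_quality` variable are entirely made up for illustration: two wards are identical on every recorded variable, yet an unlogged hidden state still produces different recovery rates.

```python
import random

random.seed(0)

def simulate_ward(care_quality, patients=1000):
    """Recovery depends partly on a hidden care-quality factor that the data never records."""
    recovered = 0
    for _ in range(patients):
        base = 0.60                      # effect of the recorded, identical processes
        hidden = 0.25 * care_quality     # effect of the hidden state (e.g. nurse empathy)
        recovered += random.random() < base + hidden
    return recovered / patients

# Two "clone" wards: same processes on paper, different hidden state.
print(simulate_ward(care_quality=0.9))  # ward with more attentive care
print(simulate_ward(care_quality=0.2))  # ward with less attentive care
```

Looking only at the recorded processes, the gap is inexplicable; the job of causal inference is to surface the hidden variable that explains it.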

An optimal system design is one that:
1. First deals with uncertainty
2. Creates a model-based system to identify hidden states
3. Runs simulations to train a model-free system
4. Operates with diminished uncertainty
5. Continues measuring uncertainty
6. Re-runs the cycle when uncertainty increases

This is called the Dyna architecture.
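Sutton's Dyna-Q is the textbook instantiation of this loop: act in the real environment, update a learned model from what was observed, then replay simulated experience from that model to cheaply refine the model-free values. A compact sketch, using the same hypothetical `env` interface as above:

```python
import random
from collections import defaultdict

def dyna_q(env, actions, episodes=200, planning_steps=20,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)   # model-free value estimates
    model = {}               # learned model: (state, action) -> (reward, next_state)

    def greedy(s):
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = random.choice(actions) if random.random() < epsilon else greedy(state)
            next_state, reward, done = env.step(action)

            # (a) Direct model-free update from real experience.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

            # (b) Update the learned model of the environment.
            model[(state, action)] = (reward, next_state)

            # (c) Planning: replay simulated transitions drawn from the model.
            for _ in range(planning_steps):
                (s, a), (r, s_next) = random.choice(list(model.items()))
                best = max(Q[(s_next, b)] for b in actions)
                Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

            state = next_state
    return Q
```

The expensive part, building and querying the model, runs as background simulation, while the cheap model-free values handle the moment-to-moment decisions.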

Modelling real-world environments is hard; there are many hidden states. This makes causal inference one of the most important parts of the process.

The explainability and inferencing challenge

Okay, so we all understand that causal inference is important. Where is the challenge?

Applied deep learning has significantly improved the performance of algorithms, and there is a huge incentive to use it across different classes of problems. We love these models, and for all practical purposes, let's agree that they will shape the future of AI.

However, they present a challenge: how does one perform causal inference when the very complexity that makes these models useful also makes them opaque? You can't explain the model directly. The end consumers of the model cannot answer questions like: How do I trust the model? Why did it predict X and not Y? When does it succeed or fail? How do I correct an error?

This is where the challenge of causal inference becomes visible. If you can't explain the model, you can't draw inferences from it, and therefore you can't identify the hidden states that make all the difference in real environments.
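One common, if partial, workaround is to probe the black box from the outside. The sketch below uses permutation importance: shuffle one input feature at a time and measure how much the model's score drops. The `model`, `metric`, and data here are placeholders for any scikit-learn-style predictor; this shows the technique, not a full explainability stack.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Estimate each feature's importance by shuffling it and measuring the score drop."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])

    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Destroy feature j's relationship with the target, keep everything else intact.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # large drop => the model leans on this feature
    return importances
```

Techniques like this tell you which inputs the model relies on, but they stop short of the causal story; that gap is exactly the challenge this post is pointing at.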

Conclusion

The key theme to solve for is thus explainability.

This piece has been written by Sankat Mochan Singh Chauhan, Executive Vice President at Goals101.
