by Sanjay Srivastava

The path to explainable AI

Opinion
May 21, 2018
Artificial Intelligence | IT Strategy | Technology Industry

As more and more enterprises use artificial intelligence to make decisions on their behalf, governance is critical, and traceability into AI reasoning paths is key to building trust with customers, employees, regulators, and other key stakeholders.

Artificial intelligence (AI) shifts the computing paradigm from rule-based programming to an outcome-based approach. It allows processes to operate at scale, reduces human processing errors, and invents new ways of solving problems. AlphaGo inspired Go players to try new strategies after experts had relied on the same opening moves for 3,000 years. As adoption increases, AI will enable organizations to unlock the “last mile” that traditional automation could not address. But as more enterprises entrust AI to make decisions on their behalf, governance becomes critical.

In a recent Genpact survey of C-suite and other senior executives, 63 percent say that it is important or critical to be able to trace the reasoning path of an AI-enabled machine, and this number jumps to 88 percent of respondents at companies that are leaders in AI. Genpact also works with Fortune 500 companies in regulated markets, and these clients see traceability becoming a key requirement before they will consider putting AI to use.

Prepare for increasing regulatory scrutiny

The recent spotlight on the misuse of social media data by the UK-based consulting firm Cambridge Analytica has awakened enterprises to the need for stronger regulatory governance. In addition, the European Union’s General Data Protection Regulation (GDPR), which goes live on May 25, is one of many new requirements that will address the governance of data and AI. Among other provisions, Article 22 of the regulation states that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” This could become critical in situations where a person is denied a loan or a job opportunity. Similarly, New York City’s recent algorithmic decision-making transparency legislation shows that regulatory scrutiny is also increasing in the United States. In this climate, enterprises will find it difficult to embrace “black box AI.”

Look to traceable AI technologies

Some technologies are mature enough to deliver traceability today. When dealing with text or numbers, enterprises can look toward computational linguistics, where users can easily follow the reasoning path and pinpoint the word or data point that led the machine to its decision. For instance, if a third-party logistics vendor agreed to charge 15 cents per mile but its invoice shows 18 cents per mile, the machine can use context to extract the price on the invoice and the price agreed upon in the contract, and point to the discrepancy. Users can then view the documents side by side to determine whether the machine is correct. The key is keeping track of where the defining attributes for a decision come from, and surfacing the underlying information in a way that is easy to visualize.
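To make that concrete, here is a minimal sketch in Python of how such traceability can be represented; the document names, fields, and comparison logic are hypothetical and do not reflect any particular product’s implementation.

    # Illustrative sketch only: hypothetical documents and fields, not a vendor API.
    from dataclasses import dataclass

    @dataclass
    class ExtractedValue:
        value: float       # the number the machine extracted
        source_doc: str    # which document it came from
        location: str      # the clause or line item where it was found
        snippet: str       # surrounding text, so a reviewer can see it in context

    def check_rate(contract_rate, invoice_rate):
        """Compare contracted vs. invoiced rate and return the supporting evidence."""
        if invoice_rate.value != contract_rate.value:
            return {"finding": "rate discrepancy",
                    "contracted": contract_rate.value,
                    "invoiced": invoice_rate.value,
                    "evidence": [contract_rate, invoice_rate]}  # both sources, side by side
        return {"finding": "rates match"}

    contract = ExtractedValue(0.15, "contract.pdf", "Section 4.2", "...rate of 15 cents per mile...")
    invoice = ExtractedValue(0.18, "invoice.pdf", "line item 3", "...charged at 18 cents per mile...")
    print(check_rate(contract, invoice))

Because every extracted value carries its source document and location, the discrepancy report points a reviewer straight back to the evidence rather than just to a verdict.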

Surface the reasoning path

Another way to achieve traceability is to explain the drivers and reasoning path that went into the algorithm. The lead scoring function in Salesforce’s Sales Cloud Einstein product provides direct insight into how it determined the score for a sales lead, so a company’s sales team can understand why Einstein predicts that a specific lead will convert into an opportunity.
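As a toy illustration of the idea (not Salesforce’s actual implementation), a simple linear scorer can return the per-feature contributions alongside the score, so a salesperson can see which drivers pushed the prediction up or down. The features and weights below are made up.

    # Toy example: hand-picked weights and hypothetical features, for illustration only.
    WEIGHTS = {"visited_pricing_page": 2.0, "opened_last_email": 1.5, "company_size_fit": 1.0}

    def score_lead(features):
        """Return the lead score plus the per-feature contributions that produced it."""
        contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
        score = sum(contributions.values())
        drivers = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        return score, drivers

    score, drivers = score_lead({"visited_pricing_page": 1, "opened_last_email": 1})
    print(score)    # 3.5
    print(drivers)  # [('visited_pricing_page', 2.0), ('opened_last_email', 1.5), ('company_size_fit', 0.0)]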

Computational linguistics also has embedded reasoning-path logic that can be externalized for the technology’s end users. For instance, a loan approval process involves multiple steps that the system must take to process an application. If an application is denied by an automated, AI-based system, the loan officer should be able to trace the decision back to the specific step where the denial occurred and, more importantly, explain the AI’s decision at that step.
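A minimal sketch of that pattern, with hypothetical screening steps and thresholds rather than any lender’s real rules: each step records its outcome, so a denial points to the exact step, and reason, where it occurred.

    # Hypothetical loan-screening pipeline; steps and thresholds are invented for illustration.
    def check_identity(app):
        return app.get("id_verified", False), "identity could not be verified"

    def check_income(app):
        return app.get("income", 0) >= 3 * app.get("monthly_payment", 0), "income below 3x the monthly payment"

    def check_credit(app):
        return app.get("credit_score", 0) >= 620, "credit score below 620"

    def process_application(app):
        """Run each step in order and keep a trace a loan officer can read back."""
        trace = []
        for name, step in [("identity", check_identity), ("income", check_income), ("credit", check_credit)]:
            passed, reason = step(app)
            trace.append({"step": name, "passed": passed, "reason": None if passed else reason})
            if not passed:
                return {"decision": "denied", "failed_step": name, "trace": trace}
        return {"decision": "approved", "trace": trace}

    print(process_application({"id_verified": True, "income": 4000,
                               "monthly_payment": 1500, "credit_score": 640}))
    # -> denied at the "income" step, with the reason and the full trace attached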

So, rather than simply deny a loan, which can create a poor customer experience and trigger compliance concerns, enterprises can articulate the reasoning path to consumers and explain why a decision was made. With traceability, if an auditor requests documentation, a customer has an inquiry, or another potential problem arises, companies know exactly where and how the system came to its decision, rather than being left in the dark because the decision and reasoning are locked away in a black box.

Engineer the data for best use of AI

The key is to fully understand the data’s behavior. It is not just about implementing AI algorithms; it is about building them with effective data engineering in the first place. Best practices include documenting assumptions about the completeness of the data, addressing data biases, and reviewing new rules identified by the machine before implementing them. If machine learning is being used to identify anomalies, companies can put checks and balances in place to manually test whether the results make sense, as sketched below. When designing and testing AI, it is also important to involve humans with a detailed understanding of the processes and industry domain issues.
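One way to picture such a check is to hold out a share of the machine’s anomaly flags for manual review before any automated action is taken. The sketch below is illustrative only; the scoring function, threshold, and sampling rate are assumptions, not a prescribed design.

    # Illustrative checks and balances: a share of machine-flagged anomalies goes to a
    # human review queue; the threshold and sampling rate are made-up parameters.
    import random

    def triage_anomalies(records, score_fn, threshold=0.9, review_rate=0.2):
        """Split high-scoring records into auto-flagged vs. held for human review."""
        auto_flagged, needs_review = [], []
        for record in records:
            score = score_fn(record)
            if score >= threshold:
                if random.random() < review_rate:
                    needs_review.append((record, score))  # a domain expert sanity-checks these
                else:
                    auto_flagged.append((record, score))
        return auto_flagged, needs_review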

Explainable AI accelerates AI adoption

Traceability also addresses several challenges in AI’s implementation. First, it helps ensure quality in new and emerging applications of this advanced technology. Second, in the evolution of human-machine interaction, traceability makes answers more understandable to humans, which helps drive AI’s adoption and the change management necessary for successful implementations. Third, it helps drive compliance in regulated industries such as life sciences, healthcare, and financial services.

Traceability exists in some more mature AI applications, such as computational linguistics. In less mature, emerging technologies, the so-called black box problem still tends to appear. This mostly occurs with deep neural networks, the machine learning algorithms used for image recognition or natural language processing on massive data sets. Because a deep neural network learns from many correlations across these massive data sets, it is hard, for now, to know why it came to a particular conclusion. Companies need a more comprehensive governance structure, especially for advanced technologies like neural networks that do not yet permit traceability.

Overall, traceability allows companies to better understand the entire reasoning process and builds trust in AI implementations, which can help businesses, the workforce, and customers better embrace AI. As the rest of the AI landscape matures, we would like to see similar reasoning paths become available across other AI-enabled technologies.