Can large language models use digital-twin knowledge to produce better explanations of organizational processes?

by IBM

Imagine that you are an operator on an automated manufacturing line that produces bikes. A few days ago, you started receiving complaint calls from your distributors, pointing to a possible delay in the lead times of the bikes produced. They claim it takes longer than usual for their orders to be fulfilled and that their end-point stocks need to be replenished. You would like to understand what is causing this delay and, ideally, locate the emerging bottleneck in the process. In the past, similar situations typically required interrupting both the manufacturing and distribution lines and physically intervening at several suspect points in the process before the actual source of the delay could be identified. However, having recently upgraded your system to include the “SAX process explainer”, you wonder whether you can use it to automatically produce an explanation that effectively determines and explains the true reasons for the current bottleneck.

In recent years, artificial intelligence (AI) has become pervasive in manufacturing production, automating just about every aspect of it. However, the adoption of automation in manufacturing comes with a certain degree of hesitation from users due to the “black box” nature of AI models. This lack of trust has given rise to a series of eXplainable AI (XAI) methods that are meant to explain the inner workings of AI models. Such methods aim to give users a better understanding of how a particular AI tool works, to ensure that the model makes its decisions adequately and reliably for their purposes with little to no inherent bias, and to reassure users that the model will not fail when it encounters unforeseen circumstances in the future.

When applied to business processes, XAI techniques aim to explain the different factors affecting a certain condition. In our example, an XAI explanation could pinpoint several reasons for the delay in the bikes’ lead times, such as a lack of raw materials, a malfunction in one of the machines, or excessive usage of a certain color in the process. However, contemporary XAI techniques are not adequate for producing faithful and correct explanations when applied to business processes: they generally fail to express the constraints of the business process model; they usually do not capture the rich contextual situations that affect process outcomes; they do not reflect the true causal execution dependencies among the activities in the business process; and their output is rarely easy for process users to interpret. In other words, explanations are usually not given in a form that humans can readily understand.

To this end, IBM Research, a partner in the AutoTwin EU project, introduces Situation-aware eXplainability (SAX), a framework for generating explanations for business processes that addresses these shortcomings. More specifically, an explanation generated with the framework, or a “SAX explanation” for short, is a causally sound explanation that takes into account the process context in which a condition occurred. Causally sound means that the explanation provides an account of why the condition occurred with a faithful and logical entailment that reflects the genuine chain of business process executions yielding the condition. The context includes knowledge elements that were originally implicit, or part of the surrounding system, yet affected the choices made during process execution. The SAX framework provides a set of services in the SAX4BPM library that aid the automatic derivation of SAX explanations by leveraging existing large language models (LLMs). The framework uses knowledge graphs developed by the TUE partner in the project as the single central repository of all the SAX elements. The SAX4BPM library will be released as an open-source solution at the end of the AutoTwin project.
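To make the idea of a causally sound explanation more concrete, here is a minimal sketch in Python of causal process knowledge kept as a small graph and traversed to recover the chains of activities leading to a condition. The data model and all names are illustrative assumptions for this post only; they are not the AutoTwin knowledge-graph schema or the SAX4BPM API.

# Hypothetical causal edges for the bike example: each activity maps to the
# activities it causally influences. Illustrative only, not the project's schema.
from typing import Dict, List

CAUSAL_GRAPH: Dict[str, List[str]] = {
    "custom color ordered": ["extended paint-shop cycle"],
    "extended paint-shop cycle": ["assembly station starvation"],
    "assembly station starvation": ["increased bike lead time"],
}

def causal_chains_to(condition: str, graph: Dict[str, List[str]]) -> List[List[str]]:
    """Return every causal path in the graph that ends at the given condition."""
    paths: List[List[str]] = []

    def walk(node: str, path: List[str]) -> None:
        path = path + [node]
        if node == condition:
            paths.append(path)
            return
        for nxt in graph.get(node, []):
            walk(nxt, path)

    for root in graph:  # start a traversal from every potential cause
        walk(root, [])
    return paths

if __name__ == "__main__":
    for chain in causal_chains_to("increased bike lead time", CAUSAL_GRAPH):
        print(" -> ".join(chain))

Running the sketch prints the candidate causal chains (for example, custom color ordered -> extended paint-shop cycle -> assembly station starvation -> increased bike lead time), which is the kind of genuine execution chain a SAX explanation is meant to reflect.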

LLMs are a type of AI tool that can recognize and generate text, among other tasks. They are trained on vast amounts of text to interpret and generate human-like textual content. While organizations are rapidly adopting LLMs to automate many aspects of their operations, this adoption is accompanied by a certain degree of doubt, owing to the tendency of LLMs to produce “hallucinations,” or incorrect statements, and their lack of an inherent capacity to reason like a human. In a recent paper, researchers from IBM Research and TUE tackle this issue and try to answer the question: how well can LLMs explain business processes?

Our experiments show that the way inputs are presented to the LLM helps guard-rail its performance, yielding SAX explanations with better perceived fidelity. However, this improvement sometimes comes at the cost of the perceived interpretability of the explanation.
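As an illustration of what such guard-railing could look like in practice, the following sketch contrasts a vanilla explanation request with one constrained by causal knowledge drawn from the process model. The prompts and function names are hypothetical examples, not the prompts used in the paper’s experiments.

from typing import List

def vanilla_prompt(condition: str) -> str:
    # The model is free to speculate, which risks hallucinated causes.
    return ("Explain why the following occurred in our bike manufacturing "
            f"process: {condition}")

def guardrailed_prompt(condition: str, causal_facts: List[str]) -> str:
    # The model is constrained to causal facts derived from the process model,
    # which is meant to improve fidelity, possibly at some cost to readability.
    facts = "\n".join(f"- {fact}" for fact in causal_facts)
    return ("Using ONLY the causal dependencies listed below, explain why the "
            "condition occurred.\n"
            f"Causal dependencies:\n{facts}\n"
            f"Condition: {condition}")

if __name__ == "__main__":
    chain = [
        "custom color ordered -> extended paint-shop cycle",
        "extended paint-shop cycle -> assembly station starvation",
        "assembly station starvation -> increased bike lead time",
    ]
    print(vanilla_prompt("increased bike lead time"))
    print()
    print(guardrailed_prompt("increased bike lead time", chain))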

The overall approach to the generation of explanations by an LLM and the resulting user reactions are depicted in the figure below.
