Management guru W. Edwards Deming famously said: “In God we trust. All others must bring data.” But how far can we trust the data?
This is becoming an important question, as the artificial intelligence systems now being built and deployed across the business landscape are only as good as the data being fed into them, along with the algorithms processing that data. AI systems are now making decisions on customer value, courses of action, and operational viability, just to name a few vital functions.
Tellingly, the companies that are struggling with AI are having major trust issues with the insights being delivered by the technology. That’s the major takeaway from a recent survey of 1,000 senior executives released by ESI ThoughtLab and Cognizant.
While 20% of companies are powering ahead in the use of AI for decision-making — a group the survey’s authors call AI leaders — the remaining 80% are struggling with a vicious cycle that holds them back. “In this cycle, the self-reinforcing interplay of three factors is impeding progress: failure to appreciate AI’s full decision-making potential, low levels of trust in AI and limited adoption of these technologies,” they point out.
The use of and trust in AI go hand-in-hand, the survey’s authors find. “The more that companies use AI in decision-making, the more confident they become in these technologies’ ability to deliver.” In their study, 51% of AI leaders trust the decisions made by AI most of the time, far more than the 31% of non-leaders who feel the same. It’s notable that barely half of even the most AI-savvy companies have full confidence in AI decisions.
Limited understanding of AI’s potential fuels uncertainty about what AI can and cannot accomplish, the report states. “This, in turn, undermines trust in it.” More than nine in 10 leaders, 92%, say AI has improved their confidence levels in their decisions. However, only 48% of others have seen such an improvement in confidence levels. In addition, more than half of leaders trust AI-made decisions most of the time, compared to one-third of their lagging counterparts. “While this gap is impressive, the fact that nearly half (47%) of leaders only trust AI decisions some of the time (rather than most of the time or always) indicates that building trust in the use of AI to make superior decisions takes time.”
Lack of trust comes from a variety of places. There may be fear of AI altering or replacing jobs. There may be issues with the quality of the data being fed into AI algorithms. The algorithms themselves may be flawed, biased or outdated, subject to the approaches of the developers, as well as their understanding of user requirements. Plus, the interactions of data and algorithms may deliver outcomes that confound even the data scientists who designed them.
The challenge for all companies, the report’s authors advise, is to “promote widespread understanding of and trust in the use of data and AI in decision-making.” This trust can be built by promoting the benefits AI will deliver to organizations, and “putting humans at the center of AI decision-making by using technology to empower, rather than replace, them.”
Trust in data and associated AI results is something even AI leaders must work at continuously to stay on top, the survey’s authors state. “With the continuous evolution of AI and the ongoing work needed to embed AI decision-making in the company’s DNA, a one-off set of initiatives, even if brilliantly planned and implemented, isn’t sufficient,” the report states. “Through institutionalized processes, businesses can keep abreast of the latest developments in this field, educate workers on how to collaborate with AI systems and establish AI decision-making as a high priority for the company.”
AI proponents can also overcome trust issues by presenting “significant case studies” and highlighting “specific areas of their company where AI can improve decision-making,” the survey’s authors suggest. “Businesses should first define the decisions they want to make with AI support, and the business outcomes they want to achieve and then ensure they have the relevant data.”
Skeptical C-level executives “may need an additional push to embrace wider AI participation in decision-making. Data scientists can help by ensuring the company’s AI is fed with modern data — in the right format, refreshed and available for informing up-to-date algorithmic models — and that the decisions it produces are aligned with corporate strategies. This will fortify trust while making sure AI is an important tool for all executives, including its first proponents, on their daily jobs.”