Lack of senior‑level awareness and expertise is blocking progress on AI at many companies. While almost all executives at big companies say AI is important, just 17% say they’re familiar with the technology, according to a 2017 Deloitte survey of 1,500 C‑level execs at companies with at least 500 employees.
So how is it that U.S. companies are spending $19 billion on AI systems and applications in 2018, and are expected to invest up to $57 billion by 2021, according to IDC? A small but elite group of executive AI black belts appears to be shouldering most of the burden. More than half the execs who told Deloitte they were familiar with AI are juggling between three and 10 AI pilot projects at any given time.
It’s not hard to see why: most AI projects have grown up as grassroots efforts in recent years. As AI applications emerged, companies let a thousand flowers bloom. Experts now say they need to tend the garden by taking a more strategic approach.
That’s why many companies are adopting new management tactics to tackle the biggest challenge in getting started with AI—sorting through proposals, options, vendors and requirements, and deciding which projects to greenlight. Some are deputizing special councils and committees to accelerate decision‑making. Others are developing AI “fact sheets” in an effort to establish quality and ethical standards.
“Companies need to develop more thoughtful approaches to exploit AI,” says Karl Freund, consulting lead for deep learning at Moor Insights & Strategy. “They need to consider how all the pieces of the puzzle fit together, across the entirety of their business.”
Business and IT take control
California‑based Farmers Insurance has invested aggressively in AI in recent years. One project frees up time for claims adjusters by using image recognition to detect anomalies and fraud in auto insurance claims. Another deploys AI chatbots to interact with customers in the company’s contact centers. A third uses robotic process automation (RPA) to automate mundane back‑office processes.
Most of these projects bubbled up from below, often without unified support from senior management, says Tom Davenport, a fellow of the MIT Initiative on the Digital Economy and a senior advisor to Deloitte Analytics. So, Farmers execs formed two committees to start driving AI decision‑making from the top—with dual roles for the business side and IT.
The business‑side AI council focuses on the major problems the company needs to solve, the AI use cases that align with those problems, and business objectives. The sister council, run out of IT, identifies enterprise tools and data sets that can help build out the appropriate AI systems. It either buys insurance‑specific solutions or develops them internally. For example, Farmers has developed chatbots in‑house using open‑source industry frameworks, Davenport says.
This approach helps executives distill order from chaos. “The councils are a response to the feeling that things were getting out of control [with AI], and people were using too many different types of software and paying the same vendor multiple times,” Davenport says. Now the company is moving towards more “centralization and control.”
Health benefits giant Anthem, on the other hand, sought more centralized control of its AI activities from the start. An IT‑led group, dubbed the “Cognitive Capability Center,” began coordinating AI projects in 2011 when it first began exploring the technology, says chief digital officer Rajeev Ronanki.
Initially, the team focused on use cases for two key areas: process automation and data analytics. With rapid advances in machine learning and AI applications, managers soon realized they needed a more distributed approach and launched several specialist AI teams to cover needs across major business units. Anthem then created a chief AI officer position to centralize decision‑making.
The initial effort “morphed into a much broader set of capabilities that’s now embedded into our enterprise,” says Ronanki. “There isn’t just one place where all the AI happens.”
AI councils can also help break down walls to get two key requirements of AI initiatives—data and money—flowing where they need to go.
Davenport cautions that no best practices yet exist for executing AI strategy. Companies take varying approaches to project ownership, technical standards and budgets.
Standardized models need to emerge, says Gayle Sirard, North America lead for Accenture Applied Intelligence. At a minimum, she recommends a governance model, an operating model, prioritized use cases, an implementation roadmap, and a business case to define and capture value.
While AI systems obviously need quality standards and ethical boundaries, governance may be the least mature practice on that list, and it’s the one many companies need the most help with. That’s one reason IBM researchers have started using what they call “fact sheets” to document the performance of AI algorithms for business executives.
“AI services and solutions should come with some statement of performance and quality control,” says IBM Research fellow Aleksandra Mojsilovic. “How do you know that this service is what you need, that it’s appropriate for your problem and that it will behave according to the requirements you put on paper?”
Among other ideas, Mojsilovic suggests companies require a “supplier declaration of conformity” that answers technical questions about an AI algorithm’s quality and behavior. Executives can use these declarations to audit AI projects and ensure they meet basic standards.
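The article describes fact sheets only at a high level, but the idea translates naturally into a structured record that an audit process can check. The sketch below is purely illustrative, not IBM’s actual fact sheet schema: every field name, the example model, and the accuracy threshold are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class FactSheet:
    """Illustrative AI fact sheet record (not IBM's actual schema)."""
    model_name: str
    intended_use: str          # the business problem the model is meant to solve
    training_data: str         # a plain-language description of the data used
    test_accuracy: float       # performance measured on a held-out test set
    fairness_reviewed: bool    # whether a bias/fairness review was completed
    known_limitations: list = field(default_factory=list)

    def meets_minimum_standards(self, min_accuracy: float = 0.90) -> bool:
        """Toy audit check: require documented accuracy above a threshold
        and a completed fairness review before a project is approved."""
        return self.test_accuracy >= min_accuracy and self.fairness_reviewed

# A hypothetical declaration an executive might review before greenlighting:
sheet = FactSheet(
    model_name="claims-fraud-detector",
    intended_use="Flag anomalous auto insurance claims for adjuster review",
    training_data="Historical labeled claims data",
    test_accuracy=0.93,
    fairness_reviewed=True,
    known_limitations=["Not validated on commercial-vehicle claims"],
)
print(sheet.meets_minimum_standards())  # True with these illustrative values
```

The value of such a record is less the code than the discipline: a model without a completed declaration simply fails the audit, which gives the councils described above a concrete gate for greenlighting projects.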
Companies in the Deloitte survey listed “making better decisions” as their second‑most desired benefit of AI. Until everyone agrees on best practices for AI project governance, the wise course is to experiment with a few different approaches to see which ones ultimately prevail.