Pratap Khedkar co-wrote this blog post with Arun Shastri.
At The Wall Street Journal Health Forum, held in Washington, D.C., on April 30, Novartis Chief Executive Vas Narasimhan called artificial intelligence “another tool in the toolbox.” This viewpoint diverges from the recent notion that AI is a critical capability that will change the way pharma companies do business. Indeed, industry experts have hoped that the dwindling return on R&D investment (the current 3% ROI is predicted to hit 0% by 2020) could be reversed by employing AI-based technologies in drug discovery. More than 100 AI startups in this field have received generous venture capital investment. Yet IBM has discontinued sales of its Watson AI system for drug discovery.
A growing leeriness over the promise of AI isn’t limited to drug discovery in pharma. As Andrew Moore, Google’s VP of AI, said at a Google Cloud event last November: “AI is currently very, very stupid. It is really good at doing certain things which our brains can’t handle, but it’s not something we could press to do general-purpose reasoning involving things like analogies or creative thinking or jumping outside the box.” Moreover, AI models that can’t be explained in human terms increasingly aren’t implemented, either because regulators require an explanation or because the humans tasked with implementing these models won’t do so without knowing why a recommendation was made. Are we bumping up against the limits of AI, as these examples seem to suggest? Are more stakeholders beginning to think of AI as “another tool in the toolbox”?
The Gartner Hype Cycle suggests that every new technology reaches a peak of inflated expectations before entering what’s commonly referred to as the “trough of disillusionment.” AI comprises many elements, and each may be at a different stage along this journey. We predict that AI will reach the trough of disillusionment within the next 24 months, owing to the technology’s inflated expectations and a few of its inherent limitations.
For one, industry stakeholders are realizing that very few problems in any organization come with a million data points. For every such problem, there are 10,000 problems with only a thousand data points, so we must get better at solving problems with smaller data sets. Even when bigger data sets exist, the connected, curated and ready-to-use data that AI requires to do its magic is often lacking. Current methods do not perform as well on problems with less data; human expertise must be meaningfully combined with AI to solve them. Issues with such “small data” AI must be tackled head-on. Accordingly, John Mattison, chief medical information officer at Kaiser Permanente, has stressed the importance of combining machine learning with human expertise and judgment, a combination that goes beyond Ray Kurzweil’s prediction of a technological singularity.
Another issue is that, so far, AI has been applied to “boring” problems: the highly visible problems that everyone aspires to solve because they promise the greatest impact. While these problems are important, solving them doesn’t inspire excitement for AI within an organization. They include reducing costs without hurting the top line and increasing revenue by identifying new customers, cross-selling, up-selling and preventing churn. A related realization is that even when these problems have been solved, most organizations have been poor at managing the change and disruption that comes from applying a new approach to existing problems. And finally, there are only so many of these “boring” problems to solve.
As we enter this trough of disillusionment, we should be looking for ways to climb the “slope of enlightenment,” Gartner’s term for the phase in which the capabilities of a new technology become fully understood and its benefits are realized across an organization. To get there, here are a few places to focus:
1. Revamp your data creation and design approach. This is an area where digitally native companies excel, so what can we learn from them? One example is Daphne Koller’s new startup Insitro, which she presented at the WSJ Health Forum. She talked about how a lack of biological data hampers drug discovery, and how she wants to create a process that generates such data by simulating cells in silico. The lesson here is that your data is limited only by your imagination: if you don’t own the data you need, and can’t acquire it, how can you create it?
2. Overhaul organizational behavior. We should also look for creative ways to effect change in organizational behavior. Allergan and UPS have demonstrated that change from the outside in sometimes works better than enlisting the people who will be most resistant to it. To overcome such corporate antibodies, these companies created new, outside organizations that drive innovation and share proofs of concept directly with leadership.
3. Go beyond boring. Develop a broader portfolio of use cases that ranges from leveraging AI for automation to inventing new approaches. Asking new, non-boring questions requires a shift in mindset: organizations need to adopt a hackathon mentality, trying out a dozen or so outlandish ideas in the hope that two work, rather than asking three or four well-known questions with predictable solutions.
In the short term, tech disruption tends to be overestimated; in the longer term, it tends to be underestimated. The hype cycle surrounding the birth of the internet is the most glaring example of this, and all signs indicate that AI is headed in the same direction. We believe that AI is at the end of its beginning (a peak of inflated expectations), not at the beginning of its end, and we therefore expect significant and real progress to come.