This is the first post in a three-part series on artificial intelligence in healthcare.

We encounter artificial intelligence platforms outside of healthcare on a regular basis, like when we ask Siri a question or view Netflix’s list of suggested films. However, within healthcare, and in the pharmaceutical industry specifically, AI has yet to live up to that level of success. In the near future, will AI become embedded in pharma R&D? Will it enable treatment decisions to be more automated—and more accurate? I recently called my colleague John Piccone, a ZS principal and advanced data science expert who previously led IBM Watson Health’s life sciences offerings, to discuss AI’s use both within pharma and throughout the healthcare ecosystem.

In this first part of our conversation, we talk about pharma’s successes and challenges with AI in R&D and commercial applications. In my next two posts, we’ll cover the changes that AI will bring throughout the healthcare ecosystem, and how pharma’s commercial models will have to adapt, as well as the steps that pharma organizations should take now to keep pace.

Q: In order to frame AI in the context of this Q&A, how do you define AI, John?

A: I look at AI systems as cognitive systems with three primary capabilities: understand (digesting medical articles about trials), reason (predicting which doctor will try a drug) and learn (adjusting a “next best action” suggestion when a sales rep disagrees with it). But I’d also include machine learning and predictive modeling methods that use traditional big data sources to provide insights that appear as intelligence in the solutions users interact with. For example, an AI system might predict that a patient is going to be afraid of injections based on patterns it recognizes in data, and then recommend providing education and support when his injections are scheduled.
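To make that last example concrete, here is a minimal sketch of how a prediction could be paired with a recommended action. The data, feature names and recommendation logic are invented for illustration and aren’t drawn from any actual system.

```python
# Minimal sketch (hypothetical data and features): a model flags patients
# likely to be anxious about injections, and a simple rule maps that
# prediction to a supportive "next best action".
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [prior_missed_injections, age, anxiety_survey_score]
X = np.array([[0, 34, 1], [2, 51, 4], [0, 29, 2], [3, 62, 5], [1, 45, 3]])
y = np.array([0, 1, 0, 1, 1])  # 1 = showed injection-related anxiety

model = LogisticRegression().fit(X, y)

def next_best_action(patient_features):
    """Recommend education and support when the model predicts injection anxiety."""
    if model.predict([patient_features])[0] == 1:
        return "Schedule a nurse education call before the next injection"
    return "No additional support flagged"

print(next_best_action([2, 48, 4]))
```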

Q: With that definition in mind, let’s discuss the applications. Can you talk a little bit about what kind of impact AI is having in R&D and the early discovery of molecules?

A: One application of AI in R&D is identifying unmet medical needs. One way of doing that is to identify naturally occurring groups or subpopulations of patients and understand the differences between them. This is especially important in heterogeneous diseases like multiple sclerosis, where different patients experience the disease and its treatments in very different ways. And because the mechanism of disease is not well understood, we don’t know why these patients experience the disease so differently.

For example, you might have patients who respond immediately to therapy and have very few recurrences, while others don’t respond to therapy at all and see their disease progress rapidly. Then you’ve got other patients who respond for a period of time and go into remission, only for the disease to reappear acutely; they respond again, and the cycle repeats. Do these patients have the same disease, or is it one disease that’s simply experienced differently by different patients?

Being able to identify and discover the different cohorts, explain what’s different about them and what’s the same, and then probe the information we have about these patients to find those differences is a key capability in the discovery phase. Machine learning and AI systems allow us to do this by reading unstructured information about these patients, organizing their data into longitudinal data sets, creating clusters, and identifying similar sequences of events or episodes across the different cohorts. That whole activity is a huge application of AI in discovery.
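As a rough illustration of that cohort-discovery step, the sketch below clusters patients on a few summarized longitudinal features and prints each cohort’s average profile. The features and synthetic data are stand-ins chosen for illustration; a real MS analysis would work from far richer longitudinal records.

```python
# Minimal sketch (synthetic data): cluster patients into cohorts based on
# simple longitudinal summaries, then inspect what distinguishes each cohort.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: relapses_per_year, weeks_to_first_response, months_in_remission
patients = np.vstack([
    rng.normal([0.5, 4, 24], [0.2, 1, 4], (30, 3)),   # fast responders
    rng.normal([3.0, 20, 2], [0.5, 4, 1], (30, 3)),   # poor responders
    rng.normal([1.5, 8, 10], [0.4, 2, 3], (30, 3)),   # relapsing/remitting pattern
])

X = StandardScaler().fit_transform(patients)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in range(3):
    print(f"Cohort {k}: mean profile =", patients[labels == k].mean(axis=0).round(1))
```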

Another area—and this relates to the more traditional understanding of AI and its ability to understand, reason and learn—is target selection and candidate identification processes. A number of companies have created tools that read scientific, medical and clinical literature, patents, etc., and create a semantic map or a cognitive map of all of the concepts—including the underlying genes, proteins and molecular pathways related to various diseases—and allow a researcher to explore that concept space.

For example, scientists sequenced the Zika virus and identified the protein that causes problems in the brain formation of fetuses. Uncovering the amino acid sequence leaves the researcher with questions: What does the human race know in our literature about proteins that look like this? Are there any mutations of this protein that we can create in the virus that inactivate its effect on brain formation? Turning to AI, we can get answers.

In fact, there are a number of case studies where this has accelerated the target identification and selection process from years—seven, in some cases—to weeks and days, simply by having familiarity with the sum total of our knowledge and being able to interact with the research in natural ways. You now have a “research assistant” that not only knows the literature and has the search capability to find common terms, but also can construct a knowledge map of all of the interrelationships. I can ask the system questions to gather ideas about candidate selection and target identification.
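As a toy illustration of the concept map John describes, the sketch below builds a tiny graph of disease, protein, pathway and compound relationships and asks how two entities are connected. Real systems construct these maps by reading literature at scale; here the entities and edges are hand-coded purely for illustration.

```python
# Minimal sketch (hand-coded toy relations): a concept graph that a researcher
# could query for indirect connections between entities.
import networkx as nx

g = nx.Graph()
g.add_edge("Disease X", "Protein A", relation="implicated_in")
g.add_edge("Protein A", "Pathway P", relation="participates_in")
g.add_edge("Compound C", "Pathway P", relation="modulates")
g.add_edge("Compound C", "Protein B", relation="binds")

# "What connects Disease X to Compound C?" -- the kind of open question a
# literature-derived concept map can help answer.
path = nx.shortest_path(g, "Disease X", "Compound C")
print(" -> ".join(path))
```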

Q: Let’s shift gears and talk a little bit about the commercial realm, and how pharma can use predictive and prescriptive AI to influence the way that sales and marketing work. One example is taking AI-based methods to figure out what customers want so that companies can market and sell to them more effectively. And, of course, there are numerous other applications like forecasting.

Pharma’s commercial and R&D organizations work differently, and there’s evidence that R&D has been more successful in embracing AI methods to help them be productive. Do you see a different set of challenges on the commercial side?

A: The adoption challenges on the commercial side are similar to those on the clinical and research sides: whoever the intended consumer of the intelligence is, AI-driven information and insights can’t be presented in a way that replaces their judgment or their role. To achieve adoption, the AI-driven information needs to be presented as an aid that still requires the user’s judgment to make decisions based on the options offered or the insights provided.

For example, we had a project with a pharma company where we provided a dynamic call planning tool. When it was presented as, “Here’s your call plan, and the machine has come up with this for you,” there was almost a total lack of adoption. But when it was presented as, “Hey, here are some changes to the call plan we had for you, with reasons why,” there was a lot of interest from the reps. And adoption rose further when they were given the option to accept or reject those changes, with reasons and feedback, and were told that their feedback would be used to train the machine intelligence so that it would provide increasingly better recommendations and insights.

For instance, the machine would say, “We suggest that you call on this doctor today because he just went to a seminar about the underlying mechanism of action for the product that you’re launching, so we believe he’s a better adoption candidate than the person you were going to meet with today.” The reps liked that kind of reasoning. And if the reps could respond to the system with something like, “I have a more important reason for wanting to talk to this other physician,” and could see their feedback incorporated into the recommendations in the next cycle, they were more engaged in providing feedback and much more apt to adopt the technology.
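A bare-bones sketch of that feedback loop might look like the following. The data model, field names and workflow are assumptions made for illustration; they don’t describe the actual tool from the project above.

```python
# Minimal sketch (invented data model): present a call-plan suggestion with a
# reason, record accept/reject feedback from the rep, and queue that feedback
# as training signal for the next planning cycle.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    physician: str
    reason: str

@dataclass
class FeedbackStore:
    records: list = field(default_factory=list)

    def log(self, suggestion: Suggestion, accepted: bool, rep_comment: str = ""):
        # Each record becomes a labeled example for retraining the planner.
        self.records.append({
            "physician": suggestion.physician,
            "reason": suggestion.reason,
            "accepted": accepted,
            "comment": rep_comment,
        })

store = FeedbackStore()
s = Suggestion("Dr. Lee", "Recently attended a seminar on the product's mechanism of action")
print(f"Suggested call: {s.physician} ({s.reason})")
store.log(s, accepted=False, rep_comment="Higher-priority physician on my list today")
print(f"{len(store.records)} feedback record(s) queued for the next training cycle")
```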

This leads to another obstacle: optimizing the machine-to-human interaction. Many business leaders are more accustomed to making decisions by gut instinct and then collecting data to validate that instinct; it’s hypothesis-driven inquiry, because they start with an instinct and want data to confirm or refute it. However, many of the really surprising, high-impact insights that come from machine learning and AI on both the commercial and R&D sides come from open-ended inquiries, where you simply ask a question without any predisposition about what the answer may be. You might ask, “Who are our most valuable customers?” or, “What makes them more valuable than others?” or, “Why should I propose this messaging to them versus another message?”

It’s a huge challenge for an organization to get comfortable with the open-ended dynamic rather than the hypothesis-driven one, but organizations that make that transition can potentially realize much higher value and impact.

It’s also a huge challenge to ensure that AI’s outputs are packaged effectively for the end user. As with many communication-related issues, the machine-to-human communication can be improved if AI systems “consider their audience” and package AI-generated information in a compelling and easily consumable way. It can’t simply be a prescription or a bunch of numbers. It needs to be a suggestion followed by an insight, all presented in a language that a human can easily digest—and act on.
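As a small illustration of that packaging idea, the sketch below wraps a raw model score and its top driver into a plain-language suggestion plus the insight behind it, rather than surfacing the numbers alone. The function, field names and example values are hypothetical.

```python
# Minimal sketch (hypothetical fields): turn a raw score and its top driver
# into a suggestion followed by an insight, in language a human can act on.
def package_recommendation(physician, adoption_score, top_driver):
    suggestion = f"Prioritize a visit to {physician} this week."
    insight = (f"The model scores {physician} at {adoption_score:.0%} likelihood of early adoption, "
               f"driven mainly by: {top_driver}.")
    return f"{suggestion}\n{insight}"

print(package_recommendation("Dr. Patel", 0.82, "recent attendance at a mechanism-of-action seminar"))
```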

Stay tuned for the second post in this three-part series, which will be published next week.

