Healthcare, AI, and the Hype Cycle: Nuance Needed
AI is the hottest topic on Gartner’s hype cycle. People tend to place AI in one spot or another on the cycle, often using the physician-patient treatment interaction as the test case. Evangelists like @VinodKhosla point to the eventual replacement of the doctor in the intimate care setting, while skeptics cite the many reasons they see this as plain crazy.
This polarization obscures both the promise and the limits of AI in healthcare for the next few years, because the truth is nuanced. Healthcare AI is not one thing and should not occupy one spot on the hype cycle.
When applied to huge de-identified data sets, AI is already generating insights useful in population-based work such as accountable care planning and drug discovery. But AI fails badly when "resolving" to the individual care plan for a complex indication, mostly because the full set of data needed to treat a single human being is still too vast, complex, and mysterious for today's computers and algorithms to "automate."
Without a doubt, AI is making amazing progress, and that progress appears to be accelerating. However, most of its critical techniques depend on feeding the algorithms enormous amounts of data. Whether the software is being trained by human experts, or generating its own “insights” by processing reams of data and evaluating its own performance millions of times, the predictions and recommendations AI software creates are generally most useful in characterizing what is typical, likely, or worthy of more focused research.
Extremely valuable in business generally, these population-level insights offer precision that is good enough for the commercial operator of the AI to take a commercial action, such as promoting a specific product to a specific person. Similar insights are equally valuable in healthcare when typical or likely outcomes are good enough to justify clinical or scientific action -- such as the separation of “normal” image scans as part of preliminary diagnosis, or support for a pharma deciding to fund a promising targeted therapy based on AI teasing out subtle patterns in massive genomic data sets.
For such population-level activities in healthcare, AI is moving through the trough of the hype cycle and into early adopter segments of the mainstream. Here, AI works and is getting better, faster.
But if Google or Facebook displays one wrong ad to one person, no one dies. In healthcare, if one wrong drug is prescribed for one person, someone might.
For this reason, the AI-fueled patient-level treatment model is about to plunge into the trough of the hype cycle as people (investors, physicians, entrepreneurs) realize that there is a lot of time, money, and travail ahead when it comes to AI and individual care.
Why is it so hard?
In my view, it is a combination of two factors.
First, the human organism, from a data perspective, is both incredibly vast and shockingly dynamic. It’s possible that in the future (20 years? 50 years? 100 years?) we will be able to observe, measure, store, and process snapshot data on every important cell, every mutation, every protein, every pathway in each patient’s body. How much longer before we can take that astounding feat and extend it from the snapshot to the equivalent of full-motion video, showing (for example) not only the detail of a mutation but how the mutation replicates, morphs, and adapts to ongoing treatments and stresses?
In the real world today, clinical trial patient recruitment offers an excellent example of this problem. Many people think that a full genomic analysis of a patient should suffice to “match a patient to a targeted therapy trial.” It turns out nothing could be further from the truth.
Genetic matching is table stakes when trying to match a real human being to an actual trial that is open and enrolling patients. Once we have the massive NGS data, we need the data in the EMR and imaging systems -- history of present illness, comorbidities, previous therapies, evidence of progression, etc. Then, when the EMR and the NGS data match up, extra-clinical logistical data becomes critical. One example: is the right “arm” of the right trial open and recruiting at a site that is geographically accessible, affordable, and practically manageable for this particular patient? Answering that question requires logistical, demographic, financial, and social/psychological data that are not housed in the EMR. The most likely place to find the answers is in the minds and hearts of an engaged physician/nurse team.
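To make the layering concrete, here is a minimal illustrative sketch of that matching logic. All of the field names (patient keys, trial keys, the 50-mile travel cutoff) are hypothetical stand-ins, not any real EMR, NGS, or trial-registry schema; the point is only that the genomic check is the first of several gates.

```python
# Hypothetical sketch: matching a patient to trials requires passing successive gates,
# not just a genomic match. Field names and thresholds are illustrative assumptions.

def eligible_trials(patient, trials, max_travel_miles=50):
    """Return IDs of trials that pass genomic, clinical, and logistical gates."""
    matches = []
    for trial in trials:
        # Gate 1: genomic "table stakes" -- the targeted mutation must be present.
        if trial["target_mutation"] not in patient["mutations"]:
            continue

        # Gate 2: clinical data from the EMR -- prior therapies, comorbidities, etc.
        if patient["prior_lines_of_therapy"] > trial["max_prior_lines"]:
            continue
        if any(c in trial["excluded_comorbidities"] for c in patient["comorbidities"]):
            continue

        # Gate 3: extra-clinical logistics -- is the right arm open and recruiting
        # at a site this particular patient can actually reach and afford?
        if not trial["arm_open_and_recruiting"]:
            continue
        if trial["nearest_site_miles"] > max_travel_miles:
            continue

        matches.append(trial["id"])
    return matches
```

Even in this toy version, the genomic gate is only the opening move, and much of the data behind the later gates does not live in any single system.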
The second obstacle to the use of AI in individual patient care is the fact that our knowledge of biology is not advanced enough for us to “train” an expert system in every way that is needed. We don’t know enough to trust the findings of an algorithm sufficiently to treat a patient solely based on that algorithm.
Once again, clinical trial patient recruitment offers an excellent example, demonstrating how the phenomenal turbocharging of molecular biology, triggered by the sequencing of the genome, is arguably still in its infancy. Many (if not most) of the dysfunctions in DNA, RNA, proteomics, and pathways are not yet “druggable” -- and scientists think many are yet to be detected, studied, modeled, or understood. New measures such as Tumor Mutational Burden and Microsatellite Instability are undoubtedly not the last factors to be introduced into the measurement toolkit, with critical needs emerging to understand and predict how different cancers will react and adapt to successive therapies, and how the immune system will respond to various threats, including the therapy itself.
None of this is meant to diminish the wonderful value being provided now, and yet to come, by AI in healthcare. But we do a great disservice when we assume that population-level success will translate quickly into complex individual patient care. We need great doctors for that, and our goal should be to assist them with AI-powered insights.
As Dr. Jerome Groopman famously described in his landmark book How Doctors Think, the multifaceted complexity of the unique human patient demands a kind of highly informed and structurally educated intuition on the part of the physician. At the level of the intimate care setting, AI needs to serve the doctor and the patient as an increasingly expert advisor. We should see AI as a powerful tool for helping doctors and nurses to operate “at the top of their license,” and not as “Dr. Jetson” coming to tear down medicine as we know it and need it.