Real-world data, machine learning, and the reemergence of humanism
Publish date: October 15, 2018
By Neil Skolnik, MD, and Christopher Notte, MD

As we relentlessly enter information into our EHRs, we typically assume we are simply recording information about our patients to provide continuity of care and an accurate record of what was done. While that is true, the information we record is now increasingly being examined for many additional purposes. A whole new area of study has emerged over the last few years known as “real-world data,” and innovators are beginning to explore how machine learning (currently employed in other areas by such companies as Amazon and Google) may be used to improve the care of patients. The information we are putting into our EHRs is being translated into discrete data and is then combined with data from labs, pharmacies, and claims databases to examine how medications actually work when used in the wide and wild world of practice.
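
To make that linkage step concrete, here is a minimal sketch in Python of how a discrete EHR extract might be joined with pharmacy claims. The field names, values, and adherence threshold are all hypothetical, chosen only for illustration:

```python
import pandas as pd

# Hypothetical EHR extract: one row per patient, with a discrete lab
# value and the drug class started (field names are illustrative).
ehr = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "hba1c":      [8.2, 7.4, 9.1],            # most recent HbA1c (%)
    "rx_class":   ["GLP-1", "DPP-4", "GLP-1"],
})

# Hypothetical pharmacy-claims extract: refill counts per patient.
claims = pd.DataFrame({
    "patient_id":   [101, 102, 103],
    "fills_per_yr": [8, 12, 6],
})

# Link the two sources on a shared patient identifier; a real study
# would use a more careful adherence measure (e.g., proportion of
# days covered) rather than this crude refill threshold.
linked = ehr.merge(claims, on="patient_id", how="inner")
linked["adherent"] = linked["fills_per_yr"] >= 10
print(linked.groupby("rx_class")["hba1c"].mean())
```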

Let’s first talk about why real-world data are important. Traditionally, the evidence we rely upon in medicine has come from randomized trials, which give us an unbiased assessment of the safety and efficacy of the medications we use. The Achilles’ heel of randomized trials is that, by their nature, they enroll a carefully defined group of patients – with specific inclusion and exclusion criteria – who may not be like the patients in our practices. Randomized trials are also conducted at sites that are different from most of our offices. The clinics where randomized trials are conducted have dedicated personnel to follow up on patients, to make sure that patients take their medications, and to ensure that patients remember their follow-up visits. What this means is that the results of those studies might not reflect the outcomes seen in the real world.

A nice example of this was reported recently in the area of diabetes management. Randomized trials have shown that the glucagonlike peptide–1 (GLP-1) receptor agonist class of medications is about twice as effective in lowering hemoglobin A1c as the dipeptidyl peptidase–4 (DPP-4) inhibitor class, but that difference in efficacy is not seen in practice. When examined in real-world studies, the two classes of medications have about the same glucose-lowering efficacy. Why might that be? In reality, adherence to GLP-1 receptor agonists may be lower than adherence to DPP-4 inhibitors because of side effects such as nausea and GI intolerance. When patients miss more doses of their GLP-1 receptor agonist, they do not achieve the HbA1c lowering seen in trials, in which adherence is far better.1
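
The arithmetic behind that explanation is simple enough to simulate. The toy model below (in Python; the effect sizes and adherence rates are assumed for illustration, not taken from the cited study) shows how a twofold full-adherence efficacy advantage can nearly vanish once real-world adherence is factored in:

```python
import random

random.seed(0)

def simulated_a1c_drop(full_dose_effect, adherence, n=10_000):
    """Average HbA1c lowering when patients take only a fraction of
    prescribed doses; assumes effect scales linearly with doses taken."""
    total = 0.0
    for _ in range(n):
        # Patient-level variation around the average adherence rate.
        doses_taken = min(adherence * random.uniform(0.8, 1.2), 1.0)
        total += full_dose_effect * doses_taken
    return total / n

# Assumed (illustrative) numbers: GLP-1 lowers HbA1c ~1.2% at full
# adherence vs. ~0.6% for DPP-4, but adherence averages ~50% vs. ~95%.
print("GLP-1:", round(simulated_a1c_drop(1.2, 0.50), 2))  # ~0.60
print("DPP-4:", round(simulated_a1c_drop(0.6, 0.95), 2))  # ~0.56
```

Under these assumed numbers, the observed HbA1c reductions come out nearly identical, even though one drug is twice as effective when taken as prescribed.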

This exploration of real-world outcomes is just a first step in using the information documented in our charts. The exciting next step will be machine learning, including its subset known as deep learning.2 In this process, computers examine an enormous number of data points and find relationships that would otherwise go undetected. Imagine a supercomputer analyzing every blood pressure after any medication is changed across thousands, or even millions, of patients, and linking the outcome of that medication choice with the next blood pressure.3 Then imagine the computer meshing millions of data points that include all patients’ weights, ages, sexes, family histories of cardiovascular disease, renal function, etc., and matching those parameters with the specific medication and follow-up blood pressures. While much has been discussed about using genetics to advance personalized medicine, one can imagine these machine-based algorithms discovering connections about which medications work best for individuals with specific characteristics – without the need for additional testing. When the final loop of this cascade is connected, the computer could present recommendations to the clinician about which medication is optimal for the patient and then refine those recommendations, based on outcomes, to optimize safety and efficacy.
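
As a rough sketch of that final loop, the example below (in Python, using entirely synthetic data and an assumed feature set) trains a model to predict follow-up systolic blood pressure from patient characteristics plus drug choice, then scores each candidate drug for a new patient. Real observational data would, of course, also require careful handling of confounding by indication, which this sketch ignores:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 5_000
drugs = ["ACE inhibitor", "thiazide", "CCB"]

# Synthetic training data: two patient features plus the drug started.
age = rng.uniform(40, 80, n)
baseline_sbp = rng.uniform(140, 180, n)
drug_idx = rng.integers(0, len(drugs), n)

# Fabricated outcome: average response differs by drug, and one drug
# works a bit better in older patients (purely for illustration).
response = (np.array([12.0, 15.0, 10.0])[drug_idx]
            + 0.1 * (age - 60) * (drug_idx == 1))
followup_sbp = baseline_sbp - response + rng.normal(0, 8, n)

X = np.column_stack([age, baseline_sbp, drug_idx])
model = GradientBoostingRegressor().fit(X, followup_sbp)

# For a new patient, predict the follow-up pressure under each
# candidate drug and surface the lowest prediction as a suggestion.
age_new, sbp_new = 72, 165
preds = {d: model.predict([[age_new, sbp_new, i]])[0]
         for i, d in enumerate(drugs)}
print(min(preds, key=preds.get), preds)
```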

Some have argued that there is no way a computer will be able to perform as well as an experienced clinician who uses a combination of data and intuition to choose the best medication for his or her patient. This argument is similar to the controversy over self-driving cars. Many have asked how you can be assured that the cars will never have an accident. That is, of course, the wrong question. The correct question, as articulated very nicely by one of the innovators in that field, George Hotz, is how we can make a car that is safer than the way cars are currently being driven – that is, one that causes fewer deaths than the 15,000 that occur annually with humans behind the wheel.4

Our current method of providing care often leaves patients without appropriate guideline-recommended medications, and many don’t reach their HbA1c, blood pressure, cholesterol, and asthma-control goals. The era of machine learning with machine-generated algorithms may be much closer than we think. By taking on some of this work, these tools could allow us to spend more time talking with patients, educating them about their disease, and supporting them in their efforts to remain healthy – an attractive future for both us and our patients.

References
1. Carls GS et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017 Nov;40(11):1469-78.

2. Naylor CD. On the prospects for a (deep) learning health care system. JAMA. 2018 Sep 18;320(11):1099-100.

3. Wang YR et al. Outpatient hypertension treatment, treatment intensification, and control in Western Europe and the United States. Arch Intern Med. 2007 Jan 22;167(2):141-7.

4. Super Hacker George Hotz: “I can make your car drive itself for under $1,000.”