A view from the Center

Deloitte's Life Sciences & Health Care Blog

Direct from HIMSS17: We solved the image recognition problem in health care. Now what?

In health care, it’s all too rare to have a clear-cut victory. Many promising new therapies or diagnostics remain just that – promising – for years, as researchers await the data to prove that they work. New technologies arrive on the scene, take years to become good enough for medical use, and are adopted only slowly while their merits are argued and their impact is analyzed. And so on and on and on.

So when those of us in the industry – many of the people walking the halls and appearing on panels here at HIMSS – encounter a victory we often find it difficult to acknowledge out loud. That’s exactly the scenario we face with image recognition – the ability to identify and classify key features in medical images to inform medical decisions. This is traditionally a manual process that virtually begs to be automated. But automation has remained tantalizingly out of reach for years, even as computing ability and machine learning capabilities have boomed.

Now, due to a cascade of advances, many of which have unfolded in industries far removed from health care, it’s clear that the image feature recognition problem is solved.

Yes, solved.

That still feels weird to say out loud. But it is a fact that today, computers can surpass humans’ ability to identify patterns in images – including medical images. This has enormous implications. Imagine software that can detect a pattern of unexpected tumor growth in X-rays. Or view an image of a retina and detect the early signs of a disease. Or use subtle biomarker features that can reduce false positives in, say, mammogram screenings. All before a human reviews the images.

How did this happen? It’s not the result of the collective brainpower of the world’s leading physicians and technologists, working together to painstakingly catalog every variation of every medical image and then translate them into detailed rules to guide computers – a virtually impossible task. Instead, it’s a direct product of “deep learning” advances that apply to a host of image recognition challenges, regardless of industry. At the risk of oversimplifying: feed a deep learning system a ton of images and – with help from the humans who actually know the differences and label those images accordingly – the system learns to identify and distinguish among them.
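To make that concrete, here is a minimal sketch – in Python with PyTorch, using random placeholder tensors instead of real scans and an invented two-class labeling scheme – of what “learning from labeled examples” means in practice. Nothing here is a production medical system; it only illustrates that the network is given images and labels, not rules.

```python
# Minimal sketch: a tiny convolutional classifier that learns from labeled examples.
# Placeholder data stands in for real, curated medical images and expert labels.
import torch
import torch.nn as nn

# 64 fake grayscale 64x64 "images" with hypothetical labels (0 = no finding, 1 = finding).
images = torch.randn(64, 1, 64, 64)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),  # two output classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# No explicit rules: the weights are adjusted until predictions match the human labels.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```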

Let’s get a little more specific about this breakthrough. Search engine providers, social media companies, hedge funds, and others have been furiously pursuing stronger feature recognition systems for a wide variety of applications – from smarter ways to translate languages, to finding pictures of cats in social media feeds, to driverless cars. What did they all find? That explicit rules make feature recognition systems brittle – but learning from examples makes them strong.

The ability to train neural networks with images is no minor development. It’s like Copernicus discovering that the planets revolve around the sun rather than the Earth. Protocol-driven, evidence-based medicine is likely to be rocked by this advance. And it’s likely to reject it at first.

Why aren’t health systems employing deep learning yet? One main reason is the data challenge. One has to be able to feed the system good data along with well-labeled training examples – and a lot of them. But most organizations struggle to collaborate on and standardize their data; moreover, they may not provide training data in a standard way. So despite the clear, transformative ability to catalog and classify features of interest in medical images in a matter of moments, we cannot take advantage of it because of data issues.

This situation won’t last for long. It can’t – too much is at stake, and the industry has been waiting too long for this transformative capability. At the same time, we all know it’s not as easy as simply rounding up all of the imaging data and plugging it into a database. Here are the three steps providers should consider to benefit from deep learning-enabled image recognition capabilities.

1. Get comfortable with cloud approaches to data and decision support
Used to keeping all your data inside your organization? Capabilities like image recognition are likely only going to be possible through cloud-based aggregation; it likely will not make sense to attempt to do this work internally. Yes, organizations will need to resolve issues around the security of patient data.

2. Link and annotate patient information to feed the learning engines
If your organization has trouble pulling together different threads of information (imaging, medical records, claims, physicians’ notes, etc.) for individual patients, it’s probably time to invest in making that easier. What’s likely needed is a linked longitudinal record – which may require going back and adding features to existing data sets or changing workflows to collect the key data you need. Whether you have to partner with external vendors, crowdsource this work, or handle it through an in-house sprint, be prepared to do what it takes to connect and annotate training data properly. Yes, it may take time for government regulations to catch up with industry change, and you may need to address regulatory issues that arise.
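As a rough illustration – using pandas and entirely hypothetical table and column names, not any specific vendor’s data model – linking imaging studies to labels drawn from other systems might start as simply as joining records on a shared patient identifier; the hard work lies in matching, de-identifying, and reviewing what that join reveals.

```python
# Sketch: joining imaging metadata with diagnosis labels by patient ID (hypothetical data).
import pandas as pd

imaging = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "study_date": ["2017-01-05", "2017-01-12", "2017-02-02"],
    "image_path": ["scan_101.dcm", "scan_102.dcm", "scan_103.dcm"],
})

diagnoses = pd.DataFrame({
    "patient_id": [101, 103],
    "diagnosis_code": ["C34.1", "C50.9"],  # illustrative codes only
})

# A left join produces one annotated view; studies without labels show up as gaps
# that still need annotation work before they can feed a learning engine.
training_table = imaging.merge(diagnoses, on="patient_id", how="left")
print(training_table)
```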

3. Get ready to integrate automated outputs into the diagnostic process
Advanced image recognition systems based on deep learning require a steady stream of data as their lifeblood. The good news is that many providers are generating this type of data every day. The key is to integrate the results and training into the diagnostic process by having clinical teams use new tools and processes – this is a classic change management situation. At first, most results will likely be too uncertain to be used without duplicate work by staff who interpret images. But over time, groups will likely come to trust the productivity, accuracy, and patient experience gains made possible by deep learning.
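One hedged sketch of what that integration could look like in code – with an invented confidence threshold and made-up study identifiers – is a simple triage step that pre-populates a report only when the model is confident, and otherwise routes the study to a clinician’s worklist for a full read.

```python
# Sketch: routing model output into a human-in-the-loop diagnostic workflow.
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cutoff, set with clinical leadership

def triage(predictions: List[Tuple[str, str, float]]) -> None:
    """Each prediction is (study_id, predicted_finding, model_confidence)."""
    for study_id, finding, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            print(f"{study_id}: pre-populate report with '{finding}' for clinician sign-off")
        else:
            print(f"{study_id}: send to radiologist worklist for a full read")

triage([
    ("study-001", "no acute finding", 0.97),
    ("study-002", "suspicious mass", 0.62),
])
```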

These are just the starting points, of course – a lot more work needs to be done. But the payoff is worth the investment, given this technology’s ability to fast-forward the process of diagnosis and treatment, allowing physicians to make more informed decisions faster. Pushing forward to integrate deep learning into medical care is some of the most important work the industry can be doing today.

Thanks to work done largely by other industries, a great gift has practically dropped in our laps: The ability to recognize features in medical images at scale, with tremendous accuracy. Now it’s our turn to act.

 

Author bio

Dan Housman is a software veteran with a demonstrated track record of providing valuable and innovative decision support systems to large, complex organizations. Dan leads ConvergeHEALTH’s product innovation efforts with a focus on translational research, bioinformatics and innovative approaches to data capture, analysis, and reporting for clinical quality and performance improvement. Dan earned a BS in Chemistry and Biology from MIT in 1995.