Helping to improve medical image analysis with deep learning

An example visualization using the proposed neural network architecture, with an axial view (top) and a 3D view (bottom). Cerebral grey matter, cerebral white matter, and cerebellar grey matter are hidden for better illustration. Credit: IBM

Medical imaging creates tremendous amounts of data: many emergency room radiologists must examine as many as 200 cases each day, and some medical studies contain up to 3,000 images. Each patient's image collection can contain 250GB of data, ultimately creating collections across organizations that are petabytes in size. Within IBM Research, we see potential in applying AI to help radiologists sift through this information, including imaging analysis from breast, liver, and lung exams.

IBM researchers are applying deep learning to discover ways to overcome some of the technical challenges that AI can face when analyzing X-rays and other medical images. Their latest findings will be presented at the 21st International Conference on Medical Image Computing & Computer Assisted Intervention (MICCAI) in Granada, Spain, from September 16 to 20.

Artificial neural networks can often struggle to learn when presented with an insufficient amount of training data. These networks also face the challenge of identifying very small regions in images depicting anomalies, such as nodules and masses, that might represent cancers.

At MICCAI 2018, researchers from IBM Research-Almaden and IBM Research-Haifa will present papers describing novel approaches that may help address some of these challenges.

Learning from incomplete data

IBM Research-Almaden Fellow Tanveer Syeda-Mahmood will present a novel AI network design that was shown in a study to be capable of analyzing twice as many potential disease markers in 3-D images, and of accurately segmenting small structures in those images, in half the time of previously studied AI-based network architectures.

Sample results from a new network architecture show the estimated quadrilateral in red and the one marked by a radiologist in blue. The performance is a significant improvement over a previous architecture. Credit: IBM

Deep neural networks used to train AI systems can sometimes have difficulty breaking medical images down into their constituent structures, a process called segmentation. This can present challenges to accurately identifying small disease markers, limiting the use of these networks in clinical settings. The project is our first effort directly targeting this challenge.

Training AI with minimal data

Mehdi Moradi, IBM Research-Almaden's Manager of Image Analysis and Machine Learning Research, and colleagues will discuss their study of neural network architectures that were trained using images and text to automatically mark regions of new medical images that doctors can examine closely for signs of disease.

The researchers trained one network using combined image and text data and a second network using separated text and images, because there are different ways an AI-based imaging system might receive input to analyze. In the study, both networks autonomously located potential health threats in chest X-rays with a level of accuracy comparable to that of experienced radiologists analyzing and annotating the same images.
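To give a concrete picture of this kind of dual-input design, the sketch below shows one way an image-and-text network could be assembled in PyTorch. The layer sizes, module names, and the simple concatenation-based fusion are illustrative assumptions for this article, not details of the IBM architecture described in the paper.

```python
# Minimal sketch (assumed, not IBM's implementation): a network that combines a
# chest X-ray with tokens from its text report to predict findings.
import torch
import torch.nn as nn

class ImageTextNet(nn.Module):
    def __init__(self, vocab_size=5000, num_findings=14):
        super().__init__()
        # Image branch: a small convolutional encoder for a 1-channel X-ray.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Text branch: embed report tokens and average them into one vector.
        self.text_embedding = nn.EmbeddingBag(vocab_size, 32)
        # Fusion head: concatenate both representations and predict findings.
        self.classifier = nn.Linear(32 + 32, num_findings)

    def forward(self, image, report_tokens, offsets):
        img_feat = self.image_encoder(image)                    # (B, 32)
        txt_feat = self.text_embedding(report_tokens, offsets)  # (B, 32)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))

# Example forward pass with dummy data.
model = ImageTextNet()
images = torch.randn(2, 1, 224, 224)        # two chest X-rays
tokens = torch.randint(0, 5000, (12,))      # concatenated report tokens
offsets = torch.tensor([0, 6])              # start index of each report
logits = model(images, tokens, offsets)     # (2, num_findings)
```

An image-only variant of the same model, for cases where no report text is available, would simply drop the text branch and classify the image features directly.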

In these examples of lesion detection, red contours denote automatically detected pairs that correspond to ground truth; cyan contours are false positive automatic detections that were reduced by the dual-view algorithm. Credit: IBM

Recognizing obscure abnormalities

Scientists from IBM Research-Haifa in Israel developed a specialized neural network designed for mass detection and localization in breast mammography and will present their findings at MICCAI's 4th Breast Image Analysis Workshop.

Standard breast cancer screening involves taking two mammography X-ray projections of each breast and comparing the views to pinpoint areas of interest. The new network's design included identical "Siamese" subnetworks, whose analyses were compared to produce the image evaluations. The study suggested an effective way of training AI to flag areas of abnormal and potentially cancerous breast tissue.
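The general Siamese pattern can be illustrated with a short sketch: one shared encoder processes a candidate patch from each of the two views, and a small comparison head scores whether the detections correspond. This is a simplified illustration in PyTorch with assumed layer sizes, not the network presented at the workshop.

```python
# Minimal sketch (assumed, not the IBM model): a Siamese design in which two
# identical subnetworks encode patches from the two mammography views of the
# same breast, and a small head compares the encodings.
import torch
import torch.nn as nn

class SiameseDualView(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: the same weights are applied to both view patches.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Comparison head: decides whether the two patches show the same mass.
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, view_a, view_b):
        feat_a = self.encoder(view_a)   # identical subnetwork, view A
        feat_b = self.encoder(view_b)   # identical subnetwork, view B
        return self.head(torch.cat([feat_a, feat_b], dim=1))  # match logit

# Example: score a candidate patch pair from the two standard projections.
model = SiameseDualView()
patch_a = torch.randn(1, 1, 128, 128)
patch_b = torch.randn(1, 1, 128, 128)
score = torch.sigmoid(model(patch_a, patch_b))  # probability the detections correspond
```

Scoring candidate pairs across views in this way is one approach to suppressing false positives: a detection with no plausible counterpart in the other projection can be down-weighted or discarded.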

As the number of mammograms taken in the U.S. reaches tens of millions annually, healthcare organizations are increasingly turning to AI to help them accurately and efficiently analyze vital information contained in patient MRIs, CT scans, and other visual diagnostic aids. A 2015 Consumer Reports investigation found 80 million CT scans alone are performed annually in the U.S. AI-infused imaging systems hold promise to help doctors sift through large numbers of images, plan treatment options, and perform clinical studies.

More information: Mammography Dual View Mass Correspondence. arxiv.org/pdf/1807.00637.pdf

Provided by IBM

This story is republished courtesy of IBM Research. Read the original story here.

