Experts at Australia’s University of Adelaide have developed a system that uses deep-learning image analysis to predict patient longevity from computed tomography (CT) scans.
University researchers used the technology to analyze chest CT images from 48 patients. By analyzing images of the patients’ internal organs, the deep-learning system predicted which patients would die within five years with an accuracy of 69 percent. This level of precision is similar to manual prognoses conducted by human clinicians, according to the university.
“Predicting the future of a patient is useful because it may enable doctors to tailor treatments to the individual,” said Dr. Luke Oakden-Rayner, a radiologist and PhD student with the University of Adelaide’s School of Public Health. “The accurate assessment of biological age and the prediction of a patient’s longevity has so far been limited by doctors’ inability to look inside the body and measure the health of each organ. Our research has investigated the use of ‘deep learning’, a technique where computer systems can learn how to understand and analyze images. Although for this study only a small sample of patients was used, our research suggests that the computer has learnt to recognize the complex imaging appearances of diseases, something that requires extensive training for human experts.”
The image-analysis technology combines a convolutional neural network with a radiomics framework. The University of Adelaide said its research could pave the way toward an effective and efficient approach to measuring tissue changes that predict chronic diseases.
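The article does not publish the network’s architecture, but the core idea of a convolutional classifier over CT imagery can be illustrated in miniature. The sketch below, using numpy and entirely hypothetical untrained filters and weights, shows the basic forward pass such a model performs: convolve learned filters over an image, pool the responses into features, and map the features to a survival probability. It is a toy illustration of the technique, not the University of Adelaide’s model (which operated on full 3-D CT volumes within a radiomics framework).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_five_year_survival(ct_slice, kernels, weights, bias):
    """Forward pass: convolution -> ReLU -> global average pooling -> logistic
    output. Returns a probability that the patient survives five years."""
    features = np.array([relu(conv2d(ct_slice, k)).mean() for k in kernels])
    return sigmoid(features @ weights + bias)

# Toy demonstration: random data stands in for a CT slice, and the filters
# and weights are random (in a real system they would be learned by training
# on labeled scans).
rng = np.random.default_rng(0)
ct_slice = rng.standard_normal((64, 64))        # stand-in for one CT slice
kernels = rng.standard_normal((4, 3, 3)) * 0.1  # four untrained 3x3 filters
weights = rng.standard_normal(4)
bias = 0.0
p = predict_five_year_survival(ct_slice, kernels, weights, bias)
print(f"predicted 5-year survival probability: {p:.3f}")
```

In practice the filters and output weights are learned from thousands of labeled scans, and production systems use deep stacks of 3-D convolutions rather than a single 2-D layer, but the flow from pixels to features to a probability is the same.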
CT scans represent a rich source of information that can be mined with deep-learning technologies. More than 80 million CT scans are conducted annually in the United States alone, according to Consumer Reports.
The extensive use of deep learning for medical image analysis could lead to unexpected discoveries. Features detected by deep-learning models can be rendered in a visual format, allowing humans to understand the insights the technology generates, according to the university.
Deep learning is also useful for diagnosis with other types of medical imaging. For example, scientists have employed Google machine-learning technologies to detect signs of diabetes-related blindness in retinal photographs. Another group is using deep convolutional neural networks to classify skin lesions from images.
The University of Adelaide says it will attempt to use its deep-learning technology to predict other medical conditions. For example, the techniques could be used to anticipate the onset of heart attacks.
Nirel Marofsky is project analyst for the cognitive engine and application ecosystem at Veritone. She acts as a liaison to strategic partners, integrating developers and their capabilities into the Veritone Platform.