An AI Eye on Chest X-Rays
New Stevens-developed system accurately spots respiratory diseases in X-ray images in just a fraction of a second
Respiratory illness outbreaks remain a global challenge: COVID-19 continues to linger in new variants, while avian flu, normally an animal disease, has been detected in at least one American this year.
More than 2 billion chest X-rays are captured worldwide each year in an effort to diagnose respiratory diseases like these early, a number that accelerated after COVID’s arrival on the global health scene in 2020.
But radiologists properly trained to read all those images carefully are in short supply.
Intelligent systems that can automatically spot diseases in X-ray imagery are evolving, and some now even achieve diagnostic results as good as those of human clinicians. Still, those systems require loads of computing (and electric) power, as well as large quantities of annotated data samples.
“A compact system in your doctor’s office that inspects X-ray images of the chest and immediately detects a high likelihood of a specific disease would be tremendously useful during outbreaks,” says Stevens systems professor Ying Wang.
Now, working with doctoral advisee Ishan Aryendu, Wang has created a new AI system that does just that.
In early tests, the duo’s new AI-powered prediction system (known as RAIDER, for Rapid AI Diagnosis at Edge using Ensemble Models for Radiology) has proven highly accurate at flagging both COVID and pneumonia cases from X-rays. It’s also compact, produces rapid diagnoses and doesn’t use much power.
“We believe this model could help physicians quickly identify and diagnose both known and newly emerging respiratory diseases from chest radiographs,” says Wang, “providing high precision with minimal system requirements.”
High accuracy, early warnings, societal benefit
Wang and Aryendu started with the idea of creating a lightweight AI that doesn’t require bulky hardware or burdensome computing power to make strong predictions.
The duo also knew they wanted several models to work together as an ensemble, examining X-ray images for multiple diseases such as COVID and pneumonia at the same time, with separately developed learning models for each disease if necessary, combined to run simultaneously.
“This concept, a single screen, speeds the diagnosis and is more efficient for doctors,” explains Wang.
The duo began by randomly clipping nearly 600 X-ray images of confirmed COVID-19-infected lungs, plus nearly 20,000 images of lungs diagnosed with viral pneumonia, from image banks.
An additional 625 random images of healthy lungs were also clipped and included, to provide a control group to “teach” the AI the differences between health and illness in X-ray images.
Then Wang and Aryendu resized all images to the same dimensions, converted each to grayscale, enhanced image contrast and filtered out visual noise. Finally, they rotated, scaled, flipped or cropped the images to obtain close uniformity of size and frame-fill.
“Researchers have found this ‘sameness’ of images’ shape and size really helps machine-learning models be more effective learners as we are training them,” notes Wang.
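For readers curious what that preparation step might look like in code, here is a minimal Python sketch using OpenCV. The target size, contrast method and filter settings are illustrative assumptions, not the published RAIDER parameters.

```python
# A minimal sketch of the preprocessing and augmentation described above.
# Specific choices (224x224 size, CLAHE contrast, median filter) are assumptions.
import random

import cv2
import numpy as np

TARGET_SIZE = (224, 224)  # assumed input size for the lightweight classifiers


def preprocess(path: str) -> np.ndarray:
    """Load an X-ray, resize it, convert to grayscale, boost contrast and denoise."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)      # grayscale conversion
    img = cv2.resize(img, TARGET_SIZE)                # common dimensions
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                            # contrast enhancement
    img = cv2.medianBlur(img, 3)                      # filter out visual noise
    return img


def augment(img: np.ndarray) -> np.ndarray:
    """Randomly rotate, scale, flip and crop so images fill the frame similarly."""
    h, w = img.shape
    angle = random.uniform(-10, 10)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, random.uniform(0.9, 1.1))
    img = cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REFLECT)
    if random.random() < 0.5:
        img = cv2.flip(img, 1)                        # horizontal flip
    crop = int(0.9 * min(h, w))
    y = random.randint(0, h - crop)
    x = random.randint(0, w - crop)
    return cv2.resize(img[y:y + crop, x:x + crop], TARGET_SIZE)
```

Augmentation of this kind is typically applied only during training; the cleanup steps would run on every incoming X-ray.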
With the data prepped and properly processed, Wang and Aryendu then fed more than 360 chest X-rays into both MobileNetV2 and SqueezeNet, two neural networks that powerfully classify images using lean-computing techniques, to see if they could correctly detect confirmed diagnoses.
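The article doesn’t include RAIDER’s code, but a two-network ensemble along these lines can be sketched with torchvision’s off-the-shelf MobileNetV2 and SqueezeNet. The three-class setup and the simple averaging of softmax outputs below are assumptions made for illustration, not the authors’ exact design.

```python
# A sketch of a lightweight two-network ensemble built on torchvision models.
# The three-class head and probability averaging are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed: COVID-19, viral pneumonia, healthy


def build_models(num_classes: int = NUM_CLASSES):
    """Load MobileNetV2 and SqueezeNet and swap in small classification heads."""
    mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    mobilenet.classifier[1] = nn.Linear(mobilenet.last_channel, num_classes)

    squeezenet = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
    squeezenet.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    squeezenet.num_classes = num_classes
    return mobilenet, squeezenet


@torch.no_grad()
def ensemble_predict(nets, batch: torch.Tensor) -> torch.Tensor:
    """Average the softmax outputs of both networks for each chest X-ray.

    `batch` is (N, 3, 224, 224); grayscale X-rays are replicated to 3 channels.
    """
    probs = [torch.softmax(net.eval()(batch), dim=1) for net in nets]
    return torch.stack(probs).mean(dim=0)  # (N, NUM_CLASSES)
```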
The results were remarkable.
“We were surprised by the high accuracy and low latency. The two learning networks in this small pilot study proved to be 97% to 98% accurate at diagnosing COVID-19 and viral pneumonia,” explains Wang. “That’s far better than, for example, the 60% to 70% accuracy of PCR testing for COVID.”
There’s also an important societal benefit to the proposed system’s leanness: it could help expand healthcare to lesser-developed nations and regions.
Since the entire system is powered by a single Raspberry Pi 5 board — an inexpensive, portable and relatively low-power mobile computer — it's ideal for use in the field and in communities lacking infrastructure or robust funding.
“We specifically designed this architecture and model to be easily deployable in regions with limited access to quality diagnostic tools and lacking powerful computing resources for training large models,” notes Aryendu.
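The article doesn’t detail the software stack on the board, but one plausible way to run such an ensemble on a Raspberry Pi 5 is to export the trained networks to ONNX and evaluate them with ONNX Runtime’s CPU provider, as in this hypothetical sketch.

```python
# A hypothetical deployment path: export the trained networks to ONNX, then run
# them on the Raspberry Pi's CPU with ONNX Runtime. Shapes and names are assumptions.
import numpy as np
import onnxruntime as ort
import torch


def export_onnx(model: torch.nn.Module, path: str) -> None:
    """Save a trained network as an ONNX file for on-device inference."""
    dummy = torch.randn(1, 3, 224, 224)  # assumed input shape
    torch.onnx.export(model.eval(), dummy, path,
                      input_names=["xray"], output_names=["logits"])


def pi_predict(model_paths: list, batch: np.ndarray) -> np.ndarray:
    """Run each exported network on CPU and average their class probabilities.

    `batch` is float32 of shape (N, 3, 224, 224); in a real deployment the
    sessions would be created once at startup, not per call.
    """
    probs = []
    for path in model_paths:
        session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
        logits = session.run(None, {"xray": batch})[0]
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs.append(e / e.sum(axis=1, keepdims=True))  # softmax
    return np.mean(probs, axis=0)
```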
Importantly, RAIDER can also be tuned to identify newly emerging diseases presenting worrying imagery that doesn’t quite match what’s known about the damage patterns of existing respiratory diseases.
“New diseases and strains are continually emerging,” concludes Wang. “If our system detects a new class of disease that was not already in the database, it could send instant notification alerts to local physicians or national health agencies about a potential outbreak of a novel disease.
“This could help public health officials get an early start on response and public warnings.”
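How RAIDER decides that an image matches none of its known disease classes isn’t spelled out here; a simple, commonly used stand-in is to treat low ensemble confidence as an “unrecognized” case and trigger an alert. The threshold and notification hook in this sketch are assumptions, not the authors’ mechanism.

```python
# A hypothetical screening step that flags images outside the known classes.
# The label set, confidence floor and notify() hook are all assumptions.
import numpy as np

KNOWN_CLASSES = ["covid-19", "viral pneumonia", "healthy"]  # assumed label set
CONFIDENCE_FLOOR = 0.6                                      # assumed threshold


def screen(probs: np.ndarray, notify=print) -> str:
    """Return the predicted label, or flag a possible novel pattern and notify."""
    top = int(np.argmax(probs))
    if probs[top] < CONFIDENCE_FLOOR:
        notify("ALERT: X-ray does not match known disease patterns; review needed.")
        return "unrecognized"
    return KNOWN_CLASSES[top]
```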
The research was reported in IEEE Access [Vol. 12] in August.