Security Should Be a Consideration When Designing Medical AI Programs

Credit: John Lund/Getty Images

Artificial intelligence (AI) is becoming increasingly important in medicine for uses such as improving the accuracy of imaging-based diagnoses for cancer patients, but it can be fooled by adversarial attacks, according to research led by the University of Pittsburgh.

The researchers first created a deep learning algorithm to help diagnose breast cancer based on mammogram images. Such algorithms are already helping pathologists and oncologists improve patient diagnoses. While many are still in early development, some are in extensive clinical use, such as Paige Prostate, the first AI-based pathology tool for cancer diagnostics, which the FDA approved in September.

The team’s next step was to use something known as a ‘generative adversarial network’ (GAN) to bombard the model with fake images: images known to be positive were altered to look negative and vice versa. Concerningly, the algorithm was fooled by almost 69% of the false images.

Some of the false images that slipped past the AI program were picked up by a group of radiologists asked to judge the images by eye, based on their medical experience, though not all of them were caught.

GANs were introduced in 2014 as a way to improve machine learning outcomes by setting up a two-part system that trains by competing with itself: one network generates examples, such as images, while a second network judges whether those examples look real, and the back-and-forth continues until the generated examples become highly convincing.
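
For readers curious about the mechanics, the sketch below illustrates that generator-versus-discriminator competition on toy one-dimensional data using PyTorch. It is only a minimal illustration of the general GAN idea; the network sizes, the Gaussian stand-in for "real" data, and the training settings are assumptions for this sketch and have nothing to do with the mammogram models used in the study.

```python
# Minimal GAN sketch: a generator learns to mimic a simple 1-D Gaussian,
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: a Gaussian centred at 4.0, standing in for genuine images.
    real = torch.randn(64, 1) * 0.5 + 4.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator to separate real from generated samples.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```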

While they do have positive uses, there is concern that GANs can also be used maliciously. “The advancement of computational techniques, such as GANs, can generate adversarial data that may be intentionally used to attack AI models,” write the researchers in the journal Nature Communications.

“Under adversarial attacks, if a medical AI software makes a false diagnosis or prediction, it will lead to harmful consequences to patients, healthcare providers, and health insurances.”

In this study, Shandong Wu, associate professor of radiology, biomedical informatics and bioengineering at the University of Pittsburgh, and colleagues used 1,284 mammogram images, 366 confirmed as positive for breast cancer and 918 as negative, to build their initial algorithm. After further training on additional images, the model detected breast cancer cases correctly with an accuracy of more than 80%.
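
For a sense of what building and scoring such a classifier involves, here is a minimal PyTorch sketch. It is not the team's model: the small CNN architecture, the 64x64 input size, and the random placeholder tensors standing in for labelled mammograms are all assumptions made purely for illustration.

```python
# Sketch of a binary image classifier trained on labelled images, then scored
# for accuracy, mirroring the general workflow described above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 1),  # assumes 64x64 single-channel inputs
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder data: 128 random "images" with binary labels (positive/negative).
images = torch.randn(128, 1, 64, 64)
labels = torch.randint(0, 2, (128, 1)).float()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Accuracy would normally be measured on held-out images, as in the study.
with torch.no_grad():
    preds = (torch.sigmoid(model(images)) > 0.5).float()
    accuracy = (preds == labels).float().mean().item()
print(f"accuracy: {accuracy:.2%}")
```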

Using the GAN, Wu and colleagues altered 44 positive images to make them look negative and 319 negative images to make them look positive. The diagnostic AI algorithm incorrectly classified 42 of the 44 altered positive images and 209 of the 319 altered negative images.
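
Taken together, those per-class counts are where the roughly 69% overall figure quoted earlier comes from:

```python
# Quick arithmetic behind the "almost 69%" figure, using the study's reported counts.
fooled_pos, total_pos = 42, 44     # altered positive images misclassified
fooled_neg, total_neg = 209, 319   # altered negative images misclassified

overall = (fooled_pos + fooled_neg) / (total_pos + total_neg)
print(f"positives fooled: {fooled_pos / total_pos:.1%}")  # ~95.5%
print(f"negatives fooled: {fooled_neg / total_neg:.1%}")  # ~65.5%
print(f"overall fooled:   {overall:.1%}")                 # ~69.1%
```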

As an additional arm of the study, the researchers asked five radiologists to look at the images and state whether they thought each was positive or negative. Accuracy varied considerably between individuals, ranging from 29% to 71%, and some of the false images were classified incorrectly by the radiologists.

“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” said Wu, in a press statement.

The research team now want to use these ‘adversarial’ models to improve the accuracy of medical AI systems and make them more resistant to possible future attacks. They stress the importance of making such AI tools as safe as possible.

“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care,” said Wu.