Protecting Patient Privacy in the Era of Artificial Intelligence

AI tools create new challenges in protecting patients’ clinical data


Advances in artificial intelligence (AI) are revolutionizing not only radiology, but the entire field of medicine. However, these advances create new challenges, particularly regarding patient privacy.

“Many AI tools are developed using clinical data, which raises a whole host of questions about data privacy,” said David Larson, MD, MBA, a radiologist at Stanford University School of Medicine, who moderated and presented an RSNA 2020 session on the ethics of patient privacy in medical imaging.

According to Dr. Larson, the primary purpose of acquiring patient data is fulfilled once the data have been used to provide care. When it comes to secondary uses, such as training machine learning models, he argues that clinical data should be treated as a public good and used for the benefit of future patients.

“This means patients have an obligation to contribute to the common purpose of improving the quality and value of clinical care and the health care system by allowing others to learn from their data,” Dr. Larson said.

This is not to say that a patient’s expectation of privacy ends once the data’s primary use is fulfilled.

“Data should only be shared or widely released when the additional uses adhere to the data use agreement and when the patient’s privacy can be safeguarded,” Dr. Larson said.

But because data, including medical images, is so easily transferable, this is easier said than done.

The Fallacy of Consent

Take, for example, a smartphone app that tracks vital signs. To download the app, a user must first accept its terms and conditions, i.e., provide consent. However, as Yvonne Lui, MD, a radiologist at NYU Langone Health, pointed out during her RSNA 2020 session, these agreements are so complex that almost nobody takes the time to read them, resulting in what she calls “a fallacy of consent.”

“Considering the length of these agreements and the number of transactions we go through every year, it would take the average person 76 straight days of around-the-clock reading to get through these consent agreements,” Dr. Lui said. “Instead, we do what everyone does ... we scroll down and click ‘agree’ in less than a second.”

The problem with this consent-based process is that, ultimately, less data ends up being included in the dataset: patients who decline, or who are never asked, are simply left out.

“For all of science, and particularly for machine learning, this can introduce unintended biases,” Dr. Lui said.

The Myth of Anonymization

Another challenge that medical imaging faces is what Dr. Lui refers to as the “myth of anonymization.”

“A Mayo Clinic study showed that in 85% of cases, standard facial recognition software was able to identify the research volunteers based on their MRI reconstruction,” Dr. Lui said. “Not only is this possible, it’s also incredibly easy to do. You can do a surface rendering via a free app on your phone.”
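
As a rough illustration of how little effort such a reconstruction takes, consider the short sketch below, which uses only open-source Python libraries. It is a minimal example, not the software referenced in the study; the filename, orientation and intensity threshold are placeholder assumptions.

    # Minimal sketch: reconstruct a face surface from a head MRI volume.
    # "head_mri.nii.gz" is a hypothetical filename; the iso-level is a
    # crude placeholder that would be tuned per scan in practice.
    import nibabel as nib
    import numpy as np
    from skimage import measure

    img = nib.load("head_mri.nii.gz")   # load the MRI as a 3D array
    volume = img.get_fdata()

    # Marching cubes extracts the skin/air boundary as a triangle mesh.
    level = np.percentile(volume, 50)   # stand-in for a real threshold
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)

    print(f"Surface mesh: {len(verts)} vertices, {len(faces)} triangles")
    # Loaded into any 3D viewer, a mesh like this can render a
    # recognizable face.
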

Although many deidentification tools are available, Dr. Lui warned that some can create new challenges.

“Even though using skull-stripped images removes identifiable facial features from the image, it can also negatively impact the generalizability of the models developed using this data,” she added.
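
One alternative to full skull-stripping is “defacing,” which blanks only the face region and leaves the rest of the anatomy intact. The sketch below is a deliberately naive illustration of the idea, not a production tool: the slab boundaries are arbitrary assumptions, and real defacing tools such as pydeface first align the scan to a template to locate the face reliably.

    # Naive "defacing" sketch: zero out voxels in an anterior-inferior
    # slab where the face typically sits. Boundaries are arbitrary
    # assumptions, and they presume +y = anterior and +z = superior,
    # which varies by scan orientation.
    import nibabel as nib
    import numpy as np

    img = nib.load("head_mri.nii.gz")   # hypothetical input file
    data = img.get_fdata()
    nx, ny, nz = data.shape

    mask = np.ones_like(data, dtype=bool)
    mask[:, int(0.6 * ny):, :int(0.5 * nz)] = False   # blank the face slab
    defaced = np.where(mask, data, 0)

    nib.save(nib.Nifti1Image(defaced, img.affine, img.header),
             "head_mri_defaced.nii.gz")

Because an approach like this preserves the skull and scalp outside the face region, it avoids some of the generalizability concerns that Dr. Lui raised about skull-stripped training data.
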

Dr. Larson agreed. “Deidentification is not 100% reliable, especially as identification technology continues to advance,” he said. “But privacy remains of paramount importance, and we need to continue to develop new mechanisms for protecting it.”

For More Information

View the RSNA 2020 session, “Ethical Issues in Medical Imaging AI for Radiologists and Industry,” at RSNA2020.RSNA.org.
