How can New AI Technology be Used to Protect Privacy in Healthcare Settings?

Introduction

Digital medicine has opened up an array of new possibilities. For example, AI can identify and detect tumors at a very early stage. However, the performance of new AI algorithms depends on the quality and quantity of the data used to train them. To exploit the available data pool to its full potential, it is common to exchange copies of databases between the clinics where an algorithm is trained. This is where privacy in healthcare settings is compromised.

To protect the data, the material generally undergoes pseudonymization and anonymization, processes that have drawn criticism. “Such processes have often been considered as inadequate when it comes to the health data of patients,” states Daniel Rueckert, Alexander von Humboldt Professor of Artificial Intelligence in Healthcare and Medicine at the Technical University of Munich (TUM).

AI Algorithms Can Support Doctors

To resolve this issue, a multidisciplinary team at TUM has collaborated with researchers from the non-profit OpenMined and Imperial College London to create a unique combination of AI-based diagnostic processes for radiological image data that safeguards data privacy. In a paper published in Nature Machine Intelligence, the team has now demonstrated a successful application: a deep learning algorithm that assists in classifying pneumonia conditions in X-rays of children.

Privacy Protection

One way to protect patients’ records is to retain them at the collection site instead of exchanging them with other clinics. Presently, clinics share patient data by sending copies of databases to the clinics where algorithms are trained. The newly developed algorithm instead relies on federated learning, in which the deep learning algorithm, not the data, is shared. “Our models were trained in the different hospitals with the local data and then returned,” states first author and project leader Georgios Kaissis of the TUM Institute of Medical Informatics, Epidemiology, and Statistics. Thus, the data owners never have to hand over their records and retain complete control over them, which in turn helps maintain privacy in healthcare.
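As an illustration only (not the team’s actual implementation), the core loop of federated averaging can be sketched in a few lines of Python with NumPy. The hospital datasets, model, and hyperparameters below are hypothetical toy choices; the point is simply that each site trains on its own data and only model weights travel.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """One hospital trains a linear model on its local data; the data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(local_weights):
    """The server combines the returned models, never the raw records."""
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Simulated private datasets at two hospitals (toy data).
hospitals = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=100)
    hospitals.append((X, y))

global_w = np.zeros(2)
for _ in range(5):  # several communication rounds
    updates = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = federated_average(updates)

print(np.round(global_w, 2))  # approximately recovers [2. -1.] without pooling the data
```

In a real deployment the local update would be a deep learning training step and the exchange would happen over a secure channel, but the division of labor is the same: data stays put, models move.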

Data Cannot Be Tracked

To protect the identity of the institutions where the algorithm was trained, the team applied another technique, termed secure aggregation. “Using this technique, the algorithms have been combined in an encrypted form and can only be decrypted once the algorithms are trained with the data of all participating institutions,” states Kaissis. “Eventually, one can access statistical correlations from the data records; however, one cannot track the contributions of individual persons.”
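The paper uses a cryptographic protocol; as a loose illustration of the underlying idea only, additive secure aggregation can be sketched with pairwise random masks that hide each contribution but cancel in the sum. The institutions, “model updates,” and modulus below are hypothetical toy values, not the team’s actual scheme.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def masked_shares(values):
    """Each institution masks its update so an aggregator sees only noise;
    the pairwise masks cancel exactly when all contributions are summed."""
    n = len(values)
    # A random mask agreed between each pair of institutions (i adds it, j subtracts it).
    masks = {(i, j): random.randrange(PRIME)
             for i in range(n) for j in range(i + 1, n)}
    shares = []
    for i, v in enumerate(values):
        s = v % PRIME
        for j in range(n):
            if i < j:
                s = (s + masks[(i, j)]) % PRIME
            elif j < i:
                s = (s - masks[(j, i)]) % PRIME
        shares.append(s)
    return shares

updates = [5, 11, 7]  # toy stand-ins for three institutions' model updates
shares = masked_shares(updates)
total = sum(shares) % PRIME
print(total)  # 23: the aggregate is recovered, individual contributions stay hidden
```

Each mask is added by one party and subtracted by its partner, so the aggregator learns the sum (here 23) while any single share looks like a uniformly random number.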

Charting the Road Ahead for Digital Medicine

This combination of the latest data protection processes will also enable cooperation between institutions, as the team demonstrated in a paper published in 2020. Their privacy-preserving AI method can overcome legal, ethical, and political obstacles, thus charting the road for the extensive use of AI in healthcare, which can significantly aid research on rare diseases.

Wrapping Up

Scientists know that by protecting patients’ privacy, technology can significantly contribute to the progress of digital medicine. Training good algorithms requires good data, which can only be obtained by ensuring data privacy in healthcare. With robust data protection in place, far more can be achieved in the field of AI than many people imagine.