Sean Lane: Olive Artificial Intelligence


Healthcare AI faces a fundamental challenge: it uses data to find insights and make crucial discoveries, but healthcare data is heavily protected due to patient privacy rights. So, how are AI technologists working to address this dilemma?

Healthcare generates trillions of data points every day. It’s the kind of rich, detailed information that AI researchers dream of – if only the data could actually be accessed. At both an ethical and a regulatory level, protecting patient privacy and PHI (protected health information) is a must. HIPAA, the Health Insurance Portability and Accountability Act, was passed in 1996 as the foundation of patient data security, but technology has come a long way since then. In the current digital era of IoT and deep learning, patient privacy may require some rethinking.

To realize the enormous benefits and potential of AI and machine learning – from reduced costs to advanced cancer treatments – patient data needs to be secure not only at the point of capture (such as the hospital), but also in transit to the AI algorithm, and from the other participating data sources. Researchers and third-party companies need to be able to use patient data to feed their AI algorithms without compromising the data itself. That’s a much bigger ask than the traditional way of securing PHI through de-identifying patient records. And as the amount of private and public data grows, so does the ability of AI algorithms to re-identify those records.

Privacy-Preserving AI is How Healthcare Will Capitalize on Data without Compromising Compliance

Given the current capabilities of AI and the sheer amount of data available, de-identification is no longer enough to keep PHI safe. A more sophisticated approach is needed. These newer techniques have been termed “privacy-preserving AI.”

New mathematical models use artificial intelligence to protect data, and in some cases can even guarantee anonymity, making them more secure methods of protecting PHI. Currently, there are many different methods of privacy-preserving AI, each with its own pros, cons, and use cases. Here are some of the frontrunners in this developing field:

Methods of Safeguarding Patient Data

In federated machine learning, copies of a machine learning algorithm are distributed to individual systems. Each system runs the algorithm on its own local data, and only the results are reported back to a central “mother” node (sketched below). While this process reduces security risk because the raw data stays at each independent hospital (reducing the risk of a breach in transfer, or of access by other hospitals), additional security measures are still needed to protect the data at each site.
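
To make this concrete, here is a minimal sketch of federated averaging in Python, assuming NumPy and a toy linear model. The hospital datasets, learning rate, and round count are all hypothetical – this illustrates the pattern, not any production system:

```python
# Minimal federated averaging sketch (illustration only).
# Each "hospital" trains on its own data; only model weights -- never
# patient records -- travel back to the central "mother" node.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One hospital's training pass: gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three hypothetical hospitals with private datasets from the same true model.
true_w = np.array([1.5, -2.0])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    hospitals.append((X, y))

# The central node averages the locally trained weights each round.
global_w = np.zeros(2)
for _ in range(5):
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches [1.5, -2.0] without pooling any raw data
```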

With differential privacy (DP), the idea is that the effect any one individual’s data has on the algorithm’s output is limited. In other words, it is difficult for an attacker to tell whether any one individual is present in the dataset or not. This is typically accomplished by adding statistical “noise” to the dataset or to the answers computed from it, as in the sketch below. Challenges with DP include maintaining the integrity of the data and explaining the technique to patients in order to gain their consent.
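
As a concrete illustration, here is a minimal sketch of the Laplace mechanism, a textbook way of adding calibrated noise to a query answer. The dataset, query, and epsilon value are hypothetical:

```python
# Minimal differential-privacy sketch (Laplace mechanism; illustration only).
# We release a count query with noise calibrated to its sensitivity, so the
# output barely changes whether or not any one patient is in the dataset.
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon=0.5):
    """Noisy count of records matching `predicate`.

    Adding or removing one patient changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = sum(predicate(r) for r in records)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: ages of patients with a given diagnosis.
ages = [34, 51, 47, 62, 29, 55, 71, 44]
print(private_count(ages, lambda a: a >= 50))  # true answer is 4, plus noise
```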

Homomorphic encryption is another method of privacy-preserving AI – and is easier to explain than differential privacy. Homomorphic encryption is a form of cryptography that enables an algorithm to compute on encrypted data as if it were unencrypted: the results, once decrypted, match what would have been computed on the raw data. The toy example below shows the idea.
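
The sketch below illustrates this with a toy version of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny fixed primes are for readability only and would be wildly insecure in practice (the modular inverse via pow assumes Python 3.8+):

```python
# Toy Paillier cryptosystem (illustration only -- tiny primes, insecure).
import random
from math import gcd

p, q = 47, 59                                  # hypothetical toy primes
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:                      # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
total = (c1 * c2) % n2      # "addition" performed entirely on encrypted data
print(decrypt(total))       # 42, without ever decrypting c1 or c2 separately
```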

In secure multi-party computation (SMPC), multiple parties permit their data to be secret-shared with the other participating parties. The data is split into randomized shares, processed, and the results announced without any party – even a third party – ever having access to the complete dataset. Interest in SMPC has risen dramatically because of this “secret sharing” ability. For a conceptual look at how it works, see the sketch below.
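
Here is a conceptual sketch of additive secret sharing, one common SMPC building block. The hospitals, case counts, and modulus are hypothetical:

```python
# Additive secret-sharing sketch (illustration only). Three hospitals want
# the total number of cases without revealing their individual counts. Each
# splits its count into random shares that sum to it modulo a public prime;
# no single party (or third party) ever sees a complete value.
import random

PRIME = 2_147_483_647  # public modulus for the shares

def make_shares(secret, n_parties):
    """Split `secret` into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Hypothetical private case counts at three hospitals.
counts = [120, 305, 87]

# Each hospital deals one share to every party (including itself).
dealt = [make_shares(c, 3) for c in counts]

# Party i locally adds the shares it received -- each sum still looks random.
partial_sums = [sum(dealt[h][i] for h in range(3)) % PRIME for i in range(3)]

# Only the combined partial sums reveal the answer: 512.
print(sum(partial_sums) % PRIME)
```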

Patient Privacy Methods are a Cornerstone of AI Development

In today’s world, relying solely on de-identifying patient records is like bringing a knife to a gunfight. But it remains standard practice because of its advantages: it is easy to do, requiring little computing power, and it is easy to explain, which makes gaining patient consent straightforward.

On the other hand, the more advanced privacy-preserving techniques described above require significantly more computing power and a deeper technical understanding. And adding complexity to any process adds room for error. Because of these limitations, significant room for improvement in security measures remains. Even the most sophisticated algorithm we can dream of is limited by the data it has access to, so improving patient data security is as important as improving the AI algorithms themselves.

At Olive, we understand how AI and patient privacy technologies go hand-in-hand. That’s why we are healthcare first, healthcare only, so our customers know that our security measures stand up to the rigorous needs of patient privacy.


