
WHO Guidelines on AI in Health: Impacts on privacy and regulatory considerations

The integration of artificial intelligence (AI) into healthcare offers great potential for advancing medical research, improving patient outcomes, and optimizing healthcare delivery. However, this integration also presents significant ethical, legal, and privacy challenges. The World Health Organization (WHO) has recently issued three guiding documents addressing the challenges of AI use for health. This contribution provides an overview of these guidelines, focusing on their data protection and privacy aspects.

Regulatory Considerations on AI for Health

The WHO’s 2023 document “Regulatory Considerations on AI for Health” highlights critical aspects of privacy and data protection in the regulatory landscape. Because the development and use of this technology involve vast amounts of data, anonymization techniques are often less effective, and the multiplication of processing activities increases security risks. This is particularly true for genetic data. The current regulatory environment is diverse: international, national, and regional laws overlap, and specialized regulations impose specific requirements, such as the GDPR, which introduces further challenges related to consent and transborder data exchanges.
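To make the point about anonymization concrete, the following minimal sketch computes the k-anonymity of a dataset (the records, attribute choices, and group sizes are invented for illustration, not drawn from the WHO document). The more attributes a dataset combines, the more likely some combination of “quasi-identifiers” is unique and therefore re-identifiable:

```python
from collections import Counter

# Tiny invented dataset: each row is (age band, postcode prefix, diagnosis code).
records = [
    ("50-59", "101", "I21"),
    ("50-59", "101", "I21"),
    ("50-59", "102", "E11"),
    ("60-69", "101", "I21"),  # unique combination -> re-identifiable
]

def k_anonymity(rows, quasi_identifier_indices=(0, 1)):
    """Smallest group size over the chosen quasi-identifier columns.

    A result of 1 means at least one person is uniquely identifiable
    from those attributes alone.
    """
    groups = Counter(
        tuple(row[i] for i in quasi_identifier_indices) for row in rows
    )
    return min(groups.values())

print(k_anonymity(records))  # rich data keeps this number small
```

Even this toy dataset yields a k of 1 over just two attributes, which illustrates why large, attribute-rich health datasets resist effective anonymization.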

The main aspect developed in this guidance regarding privacy and data protection is the need to document and be transparent. Institutions can build trust by providing detailed documentation of their data protection practices. These policies should clearly outline the types of data collected, the roles of the parties involved (data controllers or data processors), the applicable legal bases, and the collection methods. Additionally, to demonstrate compliance where the processing relies on consent as its legal basis, the method used to collect consent should also be documented.
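As an illustration only, such documentation can be kept in machine-readable form. The field names and values in this sketch are assumptions chosen for the example, not a schema prescribed by the WHO guidance or the GDPR; the point is simply that each element the guidance mentions (data types, roles, legal basis, collection method) gets a recorded slot:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ProcessingRecord:
    """Minimal record of one data-processing activity (illustrative fields)."""
    activity: str
    data_types: list        # categories of data collected
    controller: str         # party deciding the purposes and means
    processors: list        # parties processing on the controller's behalf
    legal_basis: str        # e.g. "consent"
    collection_method: str
    consent_method: str = ""  # documented only when the basis is consent

record = ProcessingRecord(
    activity="AI model training",
    data_types=["genetic data", "imaging data"],
    controller="Example Hospital",
    processors=["Example AI Vendor"],
    legal_basis="consent",
    collection_method="patient intake forms",
    consent_method="written opt-in at registration",
)

# Serializing the record makes the documentation easy to disclose and audit.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records in a structured form also makes the disclosure discussed below straightforward to produce on request.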

The disclosure of these policies allows regulators to determine a “standard of practice” for the companies making them available, against which compliance can be examined. Companies must disclose significant uses of personal information for algorithmic decisions, detailing data types, sources, and the technical and organisational measures implemented to mitigate risks.

Still on the topic of documentation and transparency, the guidance puts forward the principle of data accuracy. Aimed at AI developers, it stresses the necessity of ensuring a “quality continuum”, a concept reflected in Art. 5 GDPR (the principles of “accuracy” and “accountability”) as well as in the field of Quality Control, for instance in the ICH E6 (R2) Guidelines for Good Clinical Practice (see Section “5.5 Trial Management, Data Handling, and Record Keeping”). The idea is to ensure the accuracy of information throughout its lifecycle. Applied to AI, this means that privacy must be taken into account from the design to the deployment of a system.

Apart from privacy impact assessments, required by Art. 35 GDPR and recommended by the NIST Privacy Framework, documentation is a central means of ensuring transparency, and developers should take a central role in providing it. Similarly, creating audit trails and annotating AI models makes it possible to describe the decision-making process and contributes to the “explainability” of a model’s outputs, further enhancing its transparency.
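A very simple way to picture such an audit trail (the function, entry fields, and example values here are assumptions for illustration, not part of the WHO guidance) is to log, for each model decision, the inputs used, the model version, and the output, so that the decision-making process can later be reconstructed and explained:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

def log_decision(model_version: str, inputs: dict, output: str) -> dict:
    """Append one entry to the audit trail for a model decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # features the decision was based on
        "output": output,                # the model's prediction or recommendation
    }
    audit_log.info(json.dumps(entry))
    return entry

entry = log_decision(
    model_version="triage-model-1.2",
    inputs={"age": 54, "symptom": "chest pain"},
    output="refer to cardiology",
)
```

Because every entry records the inputs alongside the output, an auditor can later trace which data a given decision was based on, which is precisely what makes the trail useful for explainability.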
