AI and Health Apps

// Legal x-rays

Evangelia Manika, Associate, LL.B., MSc.

Nowadays, AI is increasingly used in healthcare around the world, as it can improve prevention, diagnostic accuracy, treatment methods and the prediction of disease onset and spread, and generally lead to better outcomes for patients. In this context, healthcare application developers are using AI to improve the efficiency of various processes, both at the administrative level and at the level of patient care. It should also be noted that AI systems are already being tested on humans in several clinical trials.

The main challenge for the use of AI in healthcare is not whether the technology is powerful enough to be useful and effective, but how to ensure its adoption in everyday clinical practice while respecting a basic ethical framework. In particular, the development of AI applications in healthcare faces serious challenges on two levels: managing the critical volume of data needed to develop algorithms suitable for supporting clinical applications, and securing the applications themselves so that they can be used safely by healthcare professionals, end-users and recipients of healthcare services.

A further major ethical issue is the transparency of AI applications in the healthcare sector. Such systems have often been described as “black boxes” because of their technological complexity, which can lead to opacity and bias. These problems are particularly critical in healthcare, where questions arise about the responsibility of the user (e.g. the healthcare professional or healthcare staff), who remains ultimately responsible for each medical act. This underlines the importance of proper training of healthcare professionals, so that they are able to practise medicine correctly. Moreover, healthcare professionals should not be fully replaced by AI systems in their final treatment decisions; AI systems should play a mainly supporting role.

There is also the issue of providing adequate information to patients when they are asked to consent to medical procedures involving the use of AI: the information given is often found to be incomplete, even though sensitive personal data is involved. Such data must be protected by the necessary procedures to ensure its confidentiality and guard against unlawful access. The information provided to patients must therefore be at least adequate, complete, clear and comprehensible, and must be given by a healthcare professional who has been properly and adequately trained beforehand to ensure the correct use and understanding of the functioning of AI in each medical application.

Indeed, according to the accompanying report of the National Commission for Bioethics & Technoethics of July 2023, it is also very important, in the case of health apps using AI, to take into account the possibility of secondary use of personal data. A distinction must first be made between personal data already collected and stored for other (e.g. medical) purposes and new data needed to feed the algorithms, for which the informed consent of the data subject is always required. For existing databases, secondary use for public health and medical purposes is permitted without new consent from the data subject, provided that technical and organisational measures, such as pseudonymisation, are in place.
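To illustrate what such a technical measure might look like in practice, the following is a minimal sketch of pseudonymisation using a salted hash. All names and data here are hypothetical, and a real deployment would involve further safeguards (separate, access-controlled storage of the salt, key management, and organisational measures); this is an illustration of the concept, not a complete GDPR-compliant implementation.

```python
import hashlib
import secrets

# Hypothetical example: replace a patient identifier with a stable pseudonym
# so records can still be linked internally without exposing the identity.
# The salt must be kept separately from the pseudonymised data; whoever holds
# it can re-link pseudonyms, so access to it must be strictly controlled.
SALT = secrets.token_hex(16)  # in practice, a securely stored secret

def pseudonymise(identifier: str, salt: str = SALT) -> str:
    """Return a stable pseudonym for the given identifier."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

# A hypothetical record before and after pseudonymisation.
record = {"patient_id": "GR-12345", "diagnosis": "hypertension"}
pseudonymised = {**record, "patient_id": pseudonymise(record["patient_id"])}
# The stored record no longer contains the original identifier.
```

Because the same identifier always maps to the same pseudonym (for a given salt), records from different sources can still be joined for public health research, while the direct identity of the data subject is removed from the working dataset.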

In addition to protecting personal data, the developers of health apps using AI should also respect intellectual property rights (e.g. by obtaining permission from the rights holder) if their application, and therefore its algorithms, rely on scientific data that may be covered by such rights. Similarly, to ensure the safety of the health app, its producers must comply with consumer protection legislation, since they are responsible for the programming and proper functioning of the software.
An important factor in the ethical context of AI is also equal access, without any form of discrimination (e.g. financial), to medical procedures in which AI is used, in line with the equal treatment of patients.

In conclusion, in view both of the further integration and use of AI in the healthcare sector and of the adoption of the final text of the AI Act, it is very important that developers of health apps using AI, as well as healthcare professionals, take the ethical aspects of AI into account. In other words, it is their duty to implement proper procedures for the ethical and responsible use of AI, as well as procedures for the proper use and processing of the special categories of personal data of recipients of healthcare services. Adherence to such procedures, together with proper training of AI users, will, among other things, ensure the safe use of AI systems in healthcare by mitigating potential risks relative to the expected benefits. Finally, providers of AI models and systems may cooperate with the European AI Office to share best practices and contribute to the development of codes of conduct and codes of practice that could facilitate the proper development of health apps.