AI applications in use

We attach great importance to the safe and responsible use of AI. Therefore, we work with a quality management system to identify risks and ensure that we comply with laws and regulations surrounding AI. In addition, we protect patient privacy. Before we deploy AI, we always conduct a Data Privacy Impact Assessment (DPIA), in collaboration with an Information Security Officer (ISO). This assessment helps us to ensure privacy is well protected.

No-show model
Since the end of 2024, we have been using an AI model to estimate the risk of a missed appointment (no-show). The model uses characteristics of the appointment (time and day of the appointment, how many appointments a patient has scheduled, how long ago the appointment was scheduled, etc.), the patient (age and distance to UMC Utrecht) and previous no-shows.

Based on this estimate, we call patients three business days in advance to remind them of their appointment. Healthcare providers and switchboard staff have no insight into the predicted no-show risks. The model's output is used only for this telephone reminder and never to treat patients differently.
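The flow described above — score an appointment from its features, then place patients above a threshold on the reminder call list — can be sketched as follows. All weights, the bias, and the call threshold below are invented for illustration; they are not UMC Utrecht's actual model parameters (the real implementation is in the GitHub repository mentioned below).

```python
from dataclasses import dataclass
from math import exp

# Hypothetical coefficients -- illustrative only, NOT the real model's values.
WEIGHTS = {
    "days_since_booking": 0.02,     # longer lead time -> higher risk
    "prior_no_shows": 0.9,          # strongest assumed predictor
    "scheduled_appointments": -0.1, # more upcoming visits -> lower risk
    "age": -0.01,
    "distance_km": 0.01,            # distance to UMC Utrecht
}
BIAS = -2.0
CALL_THRESHOLD = 0.3  # assumed cut-off for the reminder call list


@dataclass
class Appointment:
    days_since_booking: int
    prior_no_shows: int
    scheduled_appointments: int
    age: int
    distance_km: float


def no_show_risk(appt: Appointment) -> float:
    """Logistic-style risk score from appointment and patient features."""
    z = BIAS + sum(w * getattr(appt, name) for name, w in WEIGHTS.items())
    return 1 / (1 + exp(-z))


def needs_reminder_call(appt: Appointment) -> bool:
    """Patients above the threshold are called three business days ahead."""
    return no_show_risk(appt) >= CALL_THRESHOLD
```

The key design point is visible even in this toy version: the score feeds a call list and nothing else, so care itself is unaffected by the prediction.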

For more information on the technical operation, check out the source code on GitHub.

AI-generated discharge documentation
To support physicians in writing clinical discharge letters, we use a large language model (LLM) to draft the course of an admission. The LLM runs in a secure environment on a European server managed by UMC Utrecht. OpenAI and Microsoft have no access to patient data, and the data is not used to further train or improve the model.

The AI only writes a draft, which is always reviewed and completed by a physician (assistant) and checked by a supervisor. Currently, this application is used in the Intensive Care Unit (ICU) and the Neonatal Intensive Care Unit (NICU).
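The review chain described above — an AI draft that must be completed by a physician (assistant) and then checked by a supervisor before it counts as final — can be modelled as a small state machine. The class and status names below are our own illustration, not the actual UMC Utrecht software.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    DRAFT = auto()               # AI-generated text, not yet clinically valid
    PHYSICIAN_REVIEWED = auto()  # edited and completed by a physician (assistant)
    APPROVED = auto()            # checked and signed off by a supervisor


@dataclass
class DischargeLetter:
    text: str
    status: Status = Status.DRAFT

    def physician_review(self, revised_text: str) -> None:
        """The physician (assistant) reviews and completes the AI draft."""
        self.text = revised_text
        self.status = Status.PHYSICIAN_REVIEWED

    def supervisor_approve(self) -> None:
        """A supervisor may only approve a letter a physician has reviewed."""
        if self.status is not Status.PHYSICIAN_REVIEWED:
            raise ValueError("supervisor check requires a physician-reviewed letter")
        self.status = Status.APPROVED
```

Encoding the workflow this way makes the safeguard explicit: an unreviewed AI draft can never skip straight to an approved letter.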