German physician Dr Eva Weicken, who is involved in the new WHO-ITU-WIPO Global Initiative on AI for Health (GI-AI4H), will deliver AI.Care 2023’s opening plenary on setting global standards for health AI innovation.
GI-AI4H is an international initiative to promote the global adoption of standardised artificial intelligence solutions in healthcare, facilitate safe and appropriate use of AI, and ensure it fulfils its potential to support diagnosis and treatment.
“The initiative will ensure that AI solutions for health meet certain requirements, which can assist policy makers and regulators,” she said. “It is our hope that these standards will also contribute to the democratisation of healthcare, providing new resources to regions that lack access to healthcare.”
The GI-AI4H is spearheaded by three United Nations organisations, the International Telecommunication Union (ITU), the World Health Organization (WHO) and the World Intellectual Property Organization (WIPO), and was announced at the AI for Good Global Summit in July 2023. It builds on the momentum created by the ITU-WHO Focus Group on AI for Health (FG-AI4H).
Experts from academia and research, AI developers, ethicists, regulators, policy makers and clinicians are involved in the global initiative. They include Australian Professor Sandeep Reddy FAIDH, who has written a textbook on AI in healthcare.
Dr Weicken is co-chair of operations and co-chair of the initiative’s Clinical Evaluation of AI for Health working group. The group developed a clinical evaluation framework that gives guidance on best-practice evaluation of AI technologies in health, including a checklist for the clinical evaluation of AI systems.
Clinical evaluation fundamental
“Clinical evaluation is fundamental to the safe and effective use of AI health technologies,” Dr Weicken said. “It enables clinicians, patients, regulators, and other stakeholders to have the evidence they need to assess the safety, effectiveness and likely value of the technology and its performance in their setting.
“From a clinical perspective, it’s very important these tools are safe, effective, fair and useful. If you want to use AI, it must be certain that it will cause no harm. AI is like any other medical intervention, and it is crucial to weigh its benefits against its risks. Clinical evaluation is the way to effectively demonstrate the intended benefits.”
Dr Weicken studied medicine at Ludwig-Maximilians-University in Munich and did her residency in neurology, including intensive care and psychiatry rotations. After years of clinical practice, and with the growing presence of AI in medicine, she wanted to dive deeper into the field.
Dr Weicken is Chief Medical Officer in the Department of Artificial Intelligence at the Fraunhofer Heinrich Hertz Institute for Telecommunications in Berlin, where her research focuses on solutions for the safe, fair and effective use of AI in health, work that requires an interdisciplinary approach.
The AI department specialises in explainable AI and applied machine learning, efficient methods for AI, standardisation, and quality assessment for AI in health and other domains.
Progress hampered by lack of standards
She said digital health technologies, in particular AI, were progressing rapidly and playing a transformative role in healthcare, for example through applications in diagnostics, therapy, and clinical workflows. AI can assist in addressing the shortage of healthcare professionals, especially in remote regions.
“However, progress in data-driven health solutions is hampered by the lack of internationally accepted standards for quality assessment to ensure their safe, effective, and equitable application,” Dr Weicken said.
The Global Initiative on AI for Health is a continuation of the ITU/WHO Focus Group on AI for Health (FG-AI4H), which was established in 2018 and ended in September 2023. FG-AI4H established working groups on ethics, regulation, clinical evaluation and data specification, which documented best practices and made open-source software available for independent assessment of medical AI solutions in line with the UN’s Sustainable Development Goals.
2000 pages of free guides
To produce best practices and explore the potential for benchmarking, 24 use cases were examined. One, on point-of-care diagnostics, studies cervical cancer screening by AI-digitised microscopy in a hospital in Kenya.
FG-AI4H produced over 2000 pages of free guidance documentation. In addition, the Open Code Initiative, a software platform for end-to-end assessment of AI solutions, is in beta testing. The WHO has published guides on the ethics and governance of AI for health and, just recently, on regulatory considerations on AI for health.
The Australasian Institute of Digital Health presents AI.Care 2023 at Crown Melbourne on November 22-23.
More reading:
ITU/WHO Focus Group on AI for Health (precursor to the Global Initiative on AI for Health)
Website https://www.itu.int/en/ITU-T/focusgroups/ai4h/Pages/default.aspx
Main publications by:
- Working Group “Clinical Evaluation of AI for Health” https://www.itu.int/pub/T-FG-AI4H-2023-.3
- Working Group “Regulatory considerations of AI for Health” https://iris.who.int/bitstream/handle/10665/373421/9789240078871-eng.pdf
- Working Group “Ethics and Governance of AI for Health” https://www.who.int/publications/i/item/9789240029200
- Technical and all other outputs https://www.itu.int/en/ITU-T/focusgroups/ai4h/Pages/deliverables.aspx