AIDH Facebook

We are offering access to recorded sessions for those who were unable to attend.

Several presentations recorded at AI.CARE are now available to view online. If you attended in person and haven’t received a link to view, please contact the AIDH team.

If you were unable to attend and would like to purchase the recordings, you can register here.

On-demand program

Welcome by AIDH Vice-Chair

Dr David Hansen, CEO and Research Director, Australian e-Health Research Centre at CSIRO


Welcome and Introduction from Co-Chairs

Dr Tanya Kelly CHIA and Mr Neville Board FAIDH


Setting Global Standards for Health AI Innovation

Dr Eva Weicken, Chief Medical Officer, Department of Artificial Intelligence at Fraunhofer Heinrich Hertz Institute for Telecommunications in Berlin, Germany

Overview

Digital health technologies, in particular AI, are progressing rapidly and are playing a transformative role in healthcare, as demonstrated through applications in diagnostics, therapy and clinical workflows. However, progress in data-driven health solutions is hampered by the lack of internationally accepted standards for quality assessment to ensure their safe, effective, and equitable application.

The Focus Group “AI for Health” (FG-AI4H) was established in 2018 as an international standardization initiative by two UN agencies, the International Telecommunication Union (ITU) and the World Health Organization (WHO). In line with WHO’s Global Digital Health Strategy, the aim of FG-AI4H is to document best practices and standards, and to make open-source software available for the independent assessment of medical AI solutions. More than 2000 pages of guidance documentation have been produced, and the Open Code Initiative, a software platform for end-to-end AI solution assessment, is currently in beta testing. The group is dedicated to identifying, promoting, co-developing, and independently assessing and evaluating innovations in close collaboration with local partners, ensuring AI solutions are conceived with a focus on implementation so that they create sustainable impact and benefit. FG-AI4H has recently transformed into a Global Initiative, striving to maintain its momentum and promote the global adoption of standardized AI solutions in healthcare.


A National Policy Roadmap for AI in Healthcare

Professor Enrico Coiera FAIDH, Professor of Medical Informatics and Director, Centre for Health Informatics at Macquarie University

Overview

AI has benefits too big to ignore. Taking advantage of these benefits for healthcare will require a sophisticated and coordinated national approach. Join leading Australian authorities in healthcare, AI research and AI application as we delve into the National Policy Roadmap for AI in Healthcare. This roadmap serves as the blueprint for a highly coordinated national approach, positioning Australia in alignment with peer nations that have already made significant investments in the adoption of AI. During this panel discussion, you will hear perspectives on the path forward for the secure and ethically sound integration of AI in healthcare.


Panel Discussion: How should artificial intelligence be used in healthcare? Recommendations from an Australian citizens’ jury

Prof Stacy Carter, Professor of Empirical Ethics in Health and Founding Director of the Australian Centre for Health Engagement, Evidence and Values (ACHEEV) in the School of Health and Society at the University of Wollongong

Overview

As Australia moves towards broader implementation of healthcare AI, have we forgotten to ask consumers and communities if this is what they want?

In March-April 2023, the first national citizens’ jury on using AI in healthcare was run by the Australian Centre for Health Engagement, Evidence and Values (ACHEEV – University of Wollongong). Diverse jurors from every state and territory worked together for three weeks, producing 15 recommendations to shape the future of healthcare AI in Australia.

In this session you will learn about the benefits of this high-quality approach to community engagement, hear about the jury process and outcomes, and meet some of the jurors. You will be inspired to engage productively and meaningfully with the communities you serve, and take away key insights about what informed Australians want from healthcare AI.

Biography

Stacy’s training is in public health, and her expertise is in applied ethics and social research methods. Her research program sits at the intersection of three crucial issues for health systems: using artificial intelligence, detecting disease in populations and individuals, and high-quality consumer and community involvement.


Clinical Perspectives | The potential of AI in health – AI Implementation within the Virtual Hospital

Dr Emily Kirkpatrick CHIA, Executive Medical Director, Calvary-Amplar Health Joint Venture (CAHJV)

Overview

The Calvary-Amplar Health Joint Venture (CAHJV) Virtual Hospital is a nation-leading, NSQHS-accredited virtual hospital that has delivered care across five jurisdictions, caring for more than 200,000 patients across Australia.

To create efficiencies, the CAHJV implemented AI-based patient assistant technology to support non-clinical patient triage into the service, enabling patients to be streamed quickly and accurately based on their responses when receiving care. The team also partnered with the Southern Adelaide LHN to deploy an algorithm that identifies patients suitable for the Virtual Hospital, using a ‘pull’ approach to patients occupying beds within the bricks-and-mortar hospital.

Implementation has been focused on consumer and safety outcomes, highlighting opportunities to expand the virtual hospital service. A key lesson learnt has been the challenge of implementing within health services, given the complexities of completing medical device and technology sign-off. However, through the accredited virtual hospital and robust medical-led clinical governance, the CAHJV has been able to partner with organisations to grow the AI capability within the service.


Implications of AI for the health workforce

Mark Nevin FAIDH, Interim CEO, AIDH

Overview

Numerous implications for the health workforce stem from the introduction of artificial intelligence (AI) into clinical care. This session will outline duties for health professions to understand new technologies, expectations from professional bodies for their members to upskill, and emerging desires amongst practitioners to do so. While earlier predictions of health workers being replaced were premature, this session explores how the roles of health professionals will change and who amongst them can lead the safe adoption of AI into their clinical domains.


AI-based Triaging Adverse Events of Special Interest (TRAESI) for Surveillance of Adverse Events Following Vaccination in the Community (SAEFVIC)

Christopher Palmer

Overview

The TRAESI project aims to enhance Adverse Events of Special Interest (AESI) detection by using large language model-based natural language processing (NLP) techniques. These are intended to overcome the limitations of the current pattern-matching system, including the need to constantly adjust the pattern algorithm and its inability to capture linguistic and structural context, which leads to incorrect identification of AESI.


The current state of multimodal medical language modelling

Dr Aaron Nicolson

Overview

The recent rise of ChatGPT and large language models (LLMs) in the public’s eye has resulted in an AI arms race between big tech companies. The effects of this have begun to trickle into the medical world, resulting in a wave of medical LLMs that can perform complex tasks, such as answering medical questions about medical images. This will be a high-level talk discussing the aims and functionality of recent medical LLMs and the data used to develop them, providing a snapshot of the current state of multimodal medical language modelling.


Positioning generative artificial intelligence for the nursing profession in Australia – A position paper

A/Prof Naomi Dobroff FAIDH

Overview

The Chief Nursing Informatics Officers Faculty of the ACN has undertaken an evidence-based process to develop a position paper to ensure nurses are kept informed of their professional requirements, particularly when using clinical informatics systems. The process included engaging with leading universities, liaising with subject matter experts, undertaking a literature review, and facilitating focus groups.


VaxPulse: Machine learning health system to address public vaccine concerns in Australia

Dr Gerardo Luis Dimaguila FAIDH

Overview

Vaccines save lives, prevent morbidity and protect economies. Public perceptions that the risks of adverse events following immunisation (AEFI) outweigh the risks of the diseases themselves are amplified through online social networks and media. For instance, exposure to exaggerated online concerns about the HPV vaccine, which prevents six types of cancer, makes parents less likely to consent to their children’s immunisation. We are developing VaxPulse, a machine learning (ML)-based learning health system (LHS) to monitor and respond to online public vaccine-related concerns in Australia.


Revolution in practice: A quick dip into generative AI in Australian primary care and beyond

Dr Roy Mariathas

Overview

Considering Australia’s distinct healthcare attributes, this session discusses generative AI’s role in primary care, informed by international practices and national data.


AI Revolution: Unleashing Potential, Navigating Change – The Workforce Frontier

Overview

As technology advances, the integration of AI in various fields has become inevitable. One area where AI can make a significant impact is healthcare. This conference talk aims to explore the practical applications and potential roles of AI in the future, and how AI is reshaping the clinical workflow to revolutionise patient care. AI technologies, such as machine learning and predictive analytics, can analyse vast amounts of data and identify patterns to optimise inventory management, demand forecasting, and logistics planning. AI’s impact goes beyond traditional supply chain management; one example is the crucial role AI can play in genomics, a field experiencing remarkable growth and generating massive amounts of genetic data. AI can be employed to analyse genomic data efficiently, identify disease markers, and aid in personalised medicine. The potential applications of AI in genomics research can revolutionise healthcare by providing tailored treatments based on an individual’s genetic makeup.

Facilitated by Dr Paul Cooper FAIDH CHIA, Deakin University

Presentations include:

User Experience

Prof Chris Bain, Monash University
Dr Catherine Jones, Radiologist, I-MED Radiology Network

General Workforce

Neville Board, Chief Digital Health and Information Officer, Justice Health and Forensic Mental Health Network NSW

Regulatory

Dr Sarah Anderson, National Manager, Research, Evaluation and Insights, Australian Health Practitioner Regulation Agency (AHPRA)

Is my software AI regulated?

Ms Tracey Duffy, First Assistant Secretary – Medical Devices and Product Quality Division, Therapeutic Goods Administration (TGA)

Overview

A presentation by the Therapeutic Goods Administration (TGA) on what software as a medical device (including AI) is, and how it is regulated in Australia. The presentation will include work underway globally by comparable regulators, along with examples and observations about the type of evidence and documentation required when seeking regulatory approval.


Ethics and Standards for AI in Medicine on behalf of Royal Australian and New Zealand College of Radiologists

Prof Liz Kenny AO, Professor, The Royal Brisbane and Women’s Hospital

Overview

The Royal Australian and New Zealand College of Radiologists created its Artificial Intelligence Committee in 2018.

Its first task was to develop ethical principles on the use of artificial intelligence in medicine.

The ethical principles were focused primarily on patient safety, taking into account the teams that care for patients. They included safety; privacy and protection of data; avoidance of bias; transparency and explainability; the application of human values; decision-making on diagnosis and treatment; teamwork; responsibility for decisions made; and governance.

The standards of practice for AI in medicine cover algorithm development, data management, algorithm deployment, professional standards, audit, and governance.

These ethical principles and standards have application across the whole of healthcare.


Medico Legal Considerations

Dr Sandra Johnson, Developmental Paediatrician and Clinical Academic, University of Sydney

Overview

Artificial intelligence will play a greater role in medicine into the future. Computer image recognition, clinical decision support systems and AI tools for diagnosis and treatment will increasingly assist us. Robotics in surgery is also growing in application.

Concerns about privacy, accuracy, reliability and safety require medical professionals to be mindful about the use of AI, particularly in view of their duty of care.

Medicolegal obligations demand that doctors obtain informed consent, which requires some understanding of AI systems when they are used. Issues such as how a system was trained, awareness of bias, and the fact that algorithms can make predictions confidently even when they are wrong all need consideration. When harm occurs, accountability may need to be shared between agents, but doctors retain the key responsibility of acting in the best interests of their patients. Regulation and governance of AI in healthcare will also be addressed in the presentation.


AI Transparency and Explainability in Healthcare

A/Prof Klaus Veil – Western Sydney University

Overview

Participants will gain a good understanding of the threats that AI can pose to ethical treatment and care decision-making and to consent-giving in healthcare, as well as approaches to mitigate these risks. In this workshop we will explore the importance of making AI algorithms and the resulting decision-making processes transparent and understandable to healthcare professionals, patients, caregivers, and the public. This is important not only to build trust and accountability but also to enable meaningful informed consent from patients when AI technologies are involved in their care and treatment decisions. We will also look at what measures and processes can be put in place to ensure and verify that AI algorithms and systems in healthcare meet the ethical standards expected by patients, their families and caregivers.


Converting messy healthcare data to usable clinical assets

Sonika Tyagi, PhD | A/Prof, Digital Health and Bioinformatics, School of Computational Technologies (Data Science), STEM College, RMIT

Overview

The integration of electronic health records (EHRs) has opened new avenues for leveraging historical data in predicting clinical outcomes and enhancing patient care. Nonetheless, non-standardized data formats and anomalies pose significant hurdles in utilizing EHRs for digital health research. Additionally, developing robust and reproducible predictive models requires data from multiple healthcare sites to account for population-wide variations. However, institution-specific data formats and the inherent heterogeneity of EHR data hinder seamless data harmonization. To tackle these issues head-on, this session will introduce EHR-QC, a comprehensive tool comprising two core modules: the Data Standardization Module and the Pre-processing Module.


Digital Health Safety: Let’s get the basics right

Melissa Andison, Delivery Lead, Office of the Chief Clinical Information Officer, eHealth Queensland

Overview

The advancements in data-driven healthcare are fuelling the desire to deploy AI rapidly. Often, rapid deployments of technology do not assess patient safety risks, resulting in harm, which has ethical and legal considerations (Health Education England, 2019). Therefore, healthcare decision makers must be cognisant of the potential liability of digital health safety incidents (Ash et al., 2020). Understanding the barriers impacting the adoption of digital health safety guidelines has never been more critical.

Conducted for a Master’s Dissertation in Digital Health Leadership with the Institute of Global Health Innovation, Imperial College, this study investigated what factors impact the scaled adoption and implementation of digital health safety guidelines as a professional practice in Australia. The data, collected via an online survey, semi-structured interviews, and focus groups, was analysed alongside data mined from Australian and English safety guidelines and artefacts from the Australian Commission on Safety and Quality in Health Care.

The findings confirmed that overcoming the barriers to adopting guidelines will be achieved by investing in the workforce, improving governance, and securing adequate funding. The findings highlighted that digital health safety requires a new professional identity with recognised skills to support adoption. A notable finding was that a safety culture that is ‘just’, learns, and looks after the psychological wellbeing of the digital health safety workforce is vital. Findings also showed that employing design thinking will humanise digital health safety and the adoption of guidelines because it puts the person at the centre of the practice.

Digital health safety matters. The pace of AI development and adoption needs a professional patient safety practice that evolves and embraces technological advances. This original research is at the heart of investing in the practice, process, and professional skills to ensure digital and data technologies can improve the quality and safety of care. The scaled adoption of digital health safety will positively impact the ability to realise the transformative benefits of digital health and the use of AI.


Balancing Bytes and Blame: Navigating Medical Liability in the Age of AI

Dr Anuj Saraogi, A/Prof James Cook University and Experienced Digital Health Leader


Debate | AI will do more harm than good

Moderator: Mr Sam Peascod, Assistant Secretary of Digital Health and Services, Australian Department of Health
Affirmative: Prof Trish Williams, Prof Vince McCauley, Prof Liz Kenny AO
Negative: Emma Hossack, Dr Shainal Nathoo, Prof Jeannie Paterson


Fireside Chat: Next Steps

Facilitated by Dr Paul Cooper, Affiliate A/Prof and Unit Chair, Faculty of Health, School of Medicine
Panel Members: Dr Tanya Kelly CHIA and Mr Neville Board FAIDH