Recommendations in a national AI in healthcare roadmap show the way for Australia’s digital health community, industry and governments to design and implement artificial intelligence so it benefits, rather than harms, patients.
Professor Enrico Coiera FAIDH launched the National Policy Roadmap for Artificial Intelligence in Healthcare at the Australasian Institute of Digital Health’s AI.Care 2023 conference in Melbourne today, November 22, 2023. Professor Coiera, the Director of the Australian Institute of Health Innovation and Centre for Health Informatics at Macquarie University, is the founder of the Australian Alliance for AI in Healthcare (AAAiH), which produced the roadmap.
Key recommendations include establishing a National AI in Healthcare Council, led by government but with broad membership, to coordinate and harmonise responsibilities and activities of those responsible for oversight of AI safety, effectiveness, and ethical and security risks.
“While Australia has several regulatory and government agencies responsible for some aspects of AI, a coordinated system-wide approach is the only way to ensure protection of patients, optimisation of our health workforce and the growth of a healthcare-specific AI industry,” Professor Coiera said.
Other recommendations include profession-specific codes of practice, safety frameworks, a shared code of conduct for safe, responsible and effective use of AI by health professionals and organisations, and a requirement that, for accreditation, healthcare organisations using AI demonstrate they meet minimum AI safety and quality practice standards.
Also recommended are a consumer digital health literacy program, incentives and access to clinical data for local industry, and a National AI Capability Centre in Healthcare to help small to medium-sized enterprises in particular bring products to market. Digital health was US-dominated, making it hard for start-ups, Professor Coiera said.
“The journey to get tech signed off is complex and expensive and start-ups need help,” he said. “Australia is at the back of the pack in supporting research in AI in healthcare.”
Urging government, industry and health organisations to act
Professor Coiera told delegates: “The AI opportunity is too big to ignore and too important not to get right. It’s the first time in digital health there has been a consensus on what needs to happen. It’s very clear this is your roadmap, so use it as a tool to do what the community wants and bring it to life.
“I’m really proud to be here launching this today. I hope it paves the way forward for the next two to three years. It represents hundreds of people’s time and effort.”
Professor Coiera said the roadmap would be given to government, industry and healthcare organisations urging them to implement recommendations.
“These recommendations are the first step; we now need policy at state and federal levels and adoption by health systems, and hope the Federal Budget will help fund this,” he said. “We’ve seen a strong appetite to do something, a willingness to listen, and the political will is probably there, so we are optimistic that a good chunk of what we have recommended will happen within three years.”
The roadmap identifies gaps in Australia’s capability to translate AI into effective and safe clinical services, across industry capability, implementation, regulation and AI safety, and provides guidance on how to close these gaps.
Professor Coiera said the national plan would safeguard Australia and was a critically needed approach to bring us into line with comparable nations such as the US and UK, which have invested billions in their healthcare AI sectors.
“Artificial Intelligence has benefits too big to ignore,” he said. “Taking advantage of these benefits will require a mature and co-ordinated national approach. AI offers us profound new opportunities to improve clinical diagnosis, treatment and workflows.”
The roadmap’s vision and mission
The roadmap’s vision is for an AI-enabled healthcare system delivering personalised and effective healthcare safely, ethically and sustainably. Its mission is a fully funded national plan by 2025, designed to create an AI-enabled Australian healthcare system capable of delivering personalised healthcare safely, ethically and sustainably, supported by a vibrant AI industry sector that creates jobs and exports to the world, alongside an AI-aware workforce and AI-savvy consumers.
The plan lays the foundation for Australia to embrace opportunities that AI brings to healthcare, and is designed to assist all levels of government, industry and civil society. It provides guidance on key issues such as workforce, industry capability, implementation, regulation, and AI safety. It acknowledges and builds on extensive work undertaken nationally and internationally.
“AI promises a pathway to creating a smarter, more adaptive health system. From interpretation of imaging and pathology, triage and resource allocation, and clinical documentation through to personalisation of therapy, AI will be used by consumers and clinicians and will touch most aspects of the healthcare system,” the roadmap states.
A lively panel discussion on the roadmap followed the launch. Panellists spoke about collaboration and connectedness, saying the plan spelt out what needs to be done and the resources that need to be allocated, but some questioned the value proposition for consumers, who might ask why AI should be used to do something that humans can already do.
“It’s a great day for AI in healthcare in Australia – with the roadmap launched we just need to get onto doing it,” said panellist Prof Farah Magrabi FAIDH, Professor of Biomedical and Health Informatics, Australian Institute of Health Innovation.
The roadmap’s 16 recommendations focus on five priority areas to progress over the next 2-5 years:
Priority area 1 – AI safety, quality, ethics and security
This will ensure patients receive safe, effective, and ethical care from AI healthcare services developed in accordance with ethical principles, a safety framework and appropriate post-implementation monitoring. Recommendations are:
- To establish a National AI in Healthcare Council to better coordinate and harmonise responsibilities and activities of those entities responsible for oversight of AI safety, effectiveness, and ethical and security risks. It would bring together existing agencies and ensure there were no gaps.
- To develop and deploy AI in healthcare within a robust risk-based safety framework to ensure AI is safe, effective and does not harm patients, including requiring evidence that algorithms perform in real-world settings and, vitally, improving national post-market safety monitoring.
- For accreditation, healthcare organisations using AI should demonstrate that they meet minimum AI safety and quality practice standards.
- Urgently communicate the need for caution in the clinical use of generative AI when it is untested or unregulated for clinical settings, including preparation of clinical documentation.
- Ensure the national AI ethical framework from the Department of Industry, Science and Resources supports deployment of value-based clinical and consumer AI in routine practice. The framework would integrate many pre-existing ethics frameworks across jurisdictions, regulators and other bodies.
Priority area 2 – Workforce
This urges understanding of knowledge gaps in the workforce then training current and future healthcare workforce in use and implementation of AI-enabled healthcare services. Recommendations are to:
- Support the development of a shared code of conduct for the safe, responsible and effective use of AI by health professionals and organisations.
- Assist professional bodies in accessing expertise and prior models to support development of profession-specific codes of practice for the responsible use of AI.
Priority area 3 – Consumers
This will help all Australians, including vulnerable consumers, safely use AI to navigate the complex healthcare system and be active participants in management of their own care and wellbeing. Recommendations are to:
- Co-design and collaboratively implement a nationally accessible digital health literacy program to inform the public of AI’s benefits, risks and safe use, and to increase trust and confidence in AI.
- Work together with Aboriginal and Torres Strait Islander communities to develop a mechanism that collates health-related data for use in AI in a culturally safe and trusted manner within their control, in line with principles of Indigenous Data Sovereignty.
- Ensure professional codes of conduct and training emphasise the role of clinicians in educating patients about responsible use of AI, as part of shared decision making.
Priority area 4 – Industry
This aims to support development of the local healthcare AI industry to become globally competitive and able to deliver significant clinical and economic benefits to Australia. Recommendations are to:
- Develop national clinical AI procurement guidelines, replacing the differing existing state guidelines, in partnership with the jurisdictions, health services and industry.
- Provide support and incentives for local industry, especially small to medium-sized enterprises (SMEs); consider expanding the R&D Tax Incentive scheme to cover regulatory compliance costs (which account for more than half of product costs) and offering targeted grants to help the most promising and innovative products reach market.
- Develop mechanisms to provide industry with ethical and consent-based access to clinical data to support AI development and leverage existing national biomedical data repositories.
- Support development of a National AI Capability Centre in Healthcare (NAICCH) to assist industry (SMEs in particular) to bring products to market.
- Assist future policy by identifying emerging AI markets and opportunities, quantifying the economic costs and benefits of AI in healthcare (including climate risks and benefits), and developing indicators of effective use of AI in national health priority areas (e.g. ageing, disability, mental health, Indigenous health, rural and remote health).
Priority area 5 – Research
This is to ensure the development and deployment of AI in healthcare is based on the most up-to-date evidence, and that Australia retains world-class sovereign capability to use AI and support industry in the national interest. Of 84 randomised clinical trial studies of AI published internationally between 2018 and 2023, none were conducted in Australia, and less than 1% of NHMRC and MRFF grant funding in Australia goes to AI. The scale of the AI opportunity, and the significant head start other nations have made, indicate there is a need for bold rather than incremental thinking. The recommendation is:
- To provide significant targeted support for healthcare AI research that builds sovereign capability and can translate to improved priority health services and support for industry.
Development of the roadmap
The alliance (AAAiH) developed the roadmap after extensive consultation with representatives from federal, state and territory government departments, research, regulatory and professional bodies, and consumer and industry representatives. The alliance has more than 100 member organisations from industry, health service providers, academia and consumer organisations.
It was supported in development of the plan by Macquarie University, the Australasian Institute of Digital Health, the CSIRO Australian eHealth Research Centre, RMIT University and the Digital Health Cooperative Research Centre.
Professor Coiera formed the alliance five years ago because, he said, despite evidence that AI was a tremendous opportunity for healthcare to improve care, Australia was nowhere near ready to seize the opportunity for safe and ethical use. The alliance brought together key players and stakeholders to support each other and avoid duplication.
A previous roadmap in 2021 had limited cut-through due to the pandemic, and this updated version was needed because of the rapid acceleration of AI innovation over the past two years, particularly with generative AI.