Petra is an embryologist with over 23 years of clinical experience in the IVF laboratory. Petra is currently Principal Scientist and the Head of Health Informatics at Virtus Health where she leads a dynamic team dedicated to harnessing data to inform clinical decisions and drive substantial enhancements in IVF clinical outcomes.
🗣️ Digital health frontiers: GPT-4’s role in medical assessment and discharge summaries
In the contemporary healthcare environment, professionals are faced with the challenge of efficiently gathering patient information, ensuring its accuracy, and refining the discharge process. The central issue: outdated processes and systems that slow operations and can lead to miscommunication, preventing optimal patient care.
Enter GPT-4, a cutting-edge language model with capabilities beyond simple text generation. With the ability to understand context, interpret vast datasets, and produce comprehensive outputs, GPT-4 offers a means to revolutionise processes in healthcare, particularly in the areas of medical history assessment and discharge summary drafting.
Two flagship projects were initiated:
1. Medical History Assessment Tool: GPT-4 is trained with anonymised patient records to recognise patterns, symptoms, and potential conditions. Medical professionals can input raw patient details, and the tool outlines a structured medical history, flagging potential areas of concern.
2. Discharge Summary Drafting System: GPT-4 is utilised to take treatment details, patient responses, and medical notes to draft preliminary discharge summaries. These drafts require final verification by a healthcare professional, but the majority of the work – the detailed, time-consuming compilation of data – is managed by the model.
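How a drafting request to a model such as GPT-4 might be assembled can be sketched as follows. This is a minimal, hypothetical illustration: the field names, template, and helper function are assumptions for demonstration, not the project's actual implementation, and the model call itself is omitted.

```python
# Hypothetical sketch: structuring treatment details, patient responses and
# medical notes into a single prompt for an LLM-based drafting system.
# All field names and the template are illustrative assumptions.

def build_discharge_prompt(patient: dict) -> str:
    """Assemble structured clinical input into one drafting prompt."""
    sections = [
        "Draft a discharge summary from the following structured input.",
        f"Admission diagnosis: {patient['diagnosis']}",
        f"Treatment details: {patient['treatment']}",
        f"Patient response: {patient['response']}",
        f"Medical notes: {patient['notes']}",
        "Flag any items that require clinician verification.",
    ]
    return "\n".join(sections)

example = {
    "diagnosis": "community-acquired pneumonia",
    "treatment": "IV antibiotics for 5 days, stepped down to oral",
    "response": "afebrile from day 3, oxygen weaned",
    "notes": "follow-up chest X-ray in 6 weeks",
}
prompt = build_discharge_prompt(example)
# The prompt would then be sent to the model; as the abstract stresses, the
# returned draft must be verified by a healthcare professional before release.
```

The design point the sketch makes is that the model receives structured, reviewable input and produces a draft, never a final document.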
The introduction of GPT-4 into these processes not only streamlines operations but also heightens the accuracy of records and facilitates better communication between patients and healthcare providers. Healthcare professionals report significant time savings, and the resulting summaries are more intelligible.
However, the journey was not without its challenges. Key lessons included:
– Training is Vital: For optimal results, the GPT-4 model requires extensive training with region-specific healthcare data.
– Human Oversight is Crucial: Whilst GPT-4 can handle the majority of the drafting process, human expertise remains indispensable for final checks and confirmations.
– Continuous Updates: Healthcare is a constantly evolving field. Regularly updating the model with new findings and guidelines ensures it remains a pertinent tool.
These endeavours with GPT-4 exemplify the potential of AI in healthcare, underscoring the importance of innovation, adaptability, and the harmonious melding of human expertise with technology.
Sandra L J Johnson is a paediatrician in child development and a clinical academic in the Faculty of Medicine & Health Sciences at the University of Sydney. She is a Fellow of the Royal Society of Medicine UK, a Fellow of the Royal College of Paediatrics and Child Health UK (RCPCH), a Fellow of the Royal Australasian College of Physicians (RACP), a Fellow of the Australasian College of Legal Medicine (ACLM) and an Honorary Fellow of the American College of Legal Medicine (CLM). She was the President of the Australasian College of Legal Medicine in 2018 and 2019.
Sandra is the main author of the textbook: “A clinical handbook on child development paediatrics” published by Elsevier in 2012 and she self-published a book in 2013 for parents: “Your child’s development.”
Sandra represented the RACP for the Australian National Children’s Digital Health Collaborative work on eHealth records, she is member of the RACP Digital Health Advisory Group, and she is the RACP representative to the Australian Institute of Digital Health for the Clinical Informatics Fellowship Program.
Her personal research in AI technology spans 8 years. She delivered a keynote address at the American College of Legal Medicine conference in February 2019 on “AI, Machine Learning and Ethics in Healthcare” and has a published article on this topic in the Journal of Legal Medicine (JLM). In January 2023 her article “AI in Healthcare: the challenge of Regulation” was published in JLM.
Sandra has focussed on AI in Healthcare with respect to ethics, regulation and governance. She is passionate about sharing what she has learned. She believes that the education of doctors and health professionals in the field of AI is essential in view of the rapid expansion of this technology and its increasing importance in Medicine.
🗣️ Medical legal considerations
Prof Dwyer has 15 years clinical experience in the diagnosis and management of renal-related disease, including acute and chronic kidney injury, refractory hypertension and renal transplantation.
Her research interests focus on purinergic signalling, specifically the role of CD39 and adenosine signalling in health and disease. She has developed small animal models to examine acute and chronic ischaemic induced kidney injury, and renal, liver and islet transplant related injury.
Prof Dwyer also has an interest in clinical research, specifically surrounding diabetes arising post-transplantation.
🗣️ Using AI to predict progression of chronic kidney disease
Chronic kidney disease (CKD) is a growing public health concern, with ~2 million Australians affected. However, predicting which patients will progress to kidney failure requiring dialysis or transplantation remains challenging.
This project aimed to develop a machine learning model using laboratory data to accurately predict CKD progression. The model was developed using a retrospective cohort of adult CKD patients from a single health service.
Key variables including estimated glomerular filtration rate, urine albumin-to-creatinine ratio, age, and diabetes status will be used to train a neural network to identify patients at high risk of progression and likely to benefit from renal replacement therapy. Model performance will be evaluated using AUC-ROC, F1 scores, and internal validation.
The optimised model will be applied prospectively to a separate cohort of patients referred to nephrology under current guidelines. It is anticipated this data-driven approach will improve risk prediction compared to current referral practices reliant on eGFR alone. The model has potential to assist primary care physicians in more targeted nephrology referrals and personalized CKD management. If validated, the model could be scaled nationally to optimize resource allocation and care for people with progressive CKD.
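The modelling and evaluation workflow described above can be sketched in outline. This is an illustrative toy only: the synthetic data, network size, and risk relationship below are assumptions for demonstration, not the study's actual cohort, model, or results; it simply shows a small neural network trained on the four named predictors and scored with AUC-ROC and F1.

```python
# Illustrative-only sketch: a small neural network on synthetic records with
# the four predictors named in the abstract (eGFR, uACR, age, diabetes).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
egfr = rng.uniform(15, 90, n)        # estimated glomerular filtration rate
uacr = rng.lognormal(3, 1, n)        # urine albumin-to-creatinine ratio
age = rng.uniform(30, 85, n)
diabetes = rng.integers(0, 2, n)

# Synthetic outcome: lower eGFR and higher uACR raise progression risk.
# These coefficients are invented for the demo, not estimated from data.
logit = -0.08 * egfr + 0.01 * uacr + 0.02 * age + 0.8 * diabetes + 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([egfr, uacr, age, diabetes])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# Evaluate with the metrics named in the abstract.
proba = model.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, proba)
f1 = f1_score(y_te, model.predict(X_te))
print(f"AUC-ROC: {auc:.2f}, F1: {f1:.2f}")
```

In practice the held-out evaluation would be replaced by the internal validation and prospective cohort described in the abstract.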
Jeannie Marie Paterson is a Professor of Law and Co-Director of the Centre for AI and Digital Ethics (CAIDE) at the University of Melbourne. Her research covers consumer protection, consumer credit, product liability law, and data protection law, with a focus on the ethics, law and regulation of new digital technologies. She teaches subjects on these topics at the undergraduate, Juris Doctor and postgraduate level.
Jeannie holds a current legal practising certificate and regularly consults to not-for-profits, government and industry.
🗣️ Debate | AI will do more harm than good
As the Co-Founder & CTO of Heidi Health, previously known as Oscer.ai, Yu leads the organisation’s mission to advance the world’s transition to AI-supported healthcare.
Dr Emily Kirkpatrick is the Executive Medical Director of the Calvary-Amplar Health Joint Venture (CAHJV), which has provided care through a nation-leading NSQHS-accredited virtual hospital to more than 200,000 patients across Australia. The CAHJV Virtual Hospital has delivered care across five jurisdictions, leveraging AI capabilities to support clinical decision making and identifying patients in bricks and mortar facilities suitable for virtual home-based acute care.
🗣️ Clinical Perspectives | The potential of AI in health – AI Implementation within the Virtual Hospital
The Calvary-Amplar Health Joint Venture (CAHJV) Virtual Hospital has delivered care across five jurisdictions, as a nation-leading NSQHS-accredited virtual hospital caring for more than 200,000 patients across Australia.
To create efficiencies, the CAHJV implemented an AI-based patient assistant technology to support non-clinical patient triage into the service, enabling patients to be streamed quickly and accurately based on their responses. The team also partnered with the Southern Adelaide LHN to deploy an algorithm that worked on a ‘pull’ approach, identifying patients occupying beds within the bricks-and-mortar hospital who were suitable for the Virtual Hospital.
Implementation has been focused on consumer and safety outcomes, highlighting opportunities to expand the virtual hospital service. A key lesson learnt has been the challenge of implementing within health services, given the complexities of completing medical device and technology sign-off. However, through the accredited virtual hospital and robust medical-led clinical governance, the CAHJV has been able to partner with organisations to grow the AI capability within the service.
Dr Jarrod Marks is the current Chief Medical Information Officer (CMIO) at Northern Adelaide Local Health Network (NALHN). Transitioning from a career in systems engineering to medicine, he brings a rich expertise in automation and computation to healthcare. Fascinated by complex adaptive systems, Jarrod is keen on leveraging his engineering background to enhance healthcare processes to improve patient outcomes. In his role, he continues to utilise his technical background to improve healthcare delivery by collaborating with clinicians and technologists to improve the system’s ability to deliver quality, efficient, resilient, and sustainable healthcare.
Jon is a researcher of nearly 40 years’ standing with over 100 publications. He has proven expertise in software engineering, language technology and psychology. He formerly held both the Chairs of Language Technology and Information Systems at The University of Sydney. He has worked in health applications of IT and language processing for the past 18 years but has a career of IT innovations stretching back many years. In 2005 he won Australia’s national Eureka Science Prize for Scamseek, which identified financial scams on the Internet. He has conducted extensive research on the use of language technology in Intensive Care, Pathology and Radiology departments, and in information systems research in emergency medicine and oncology. In 2012 he left the University of Sydney to pursue his interests in R&D consulting in Health IT and NLP and is the CEO of the companies Health Language Analytics (HLA) and Innovative Clinical Information Management Systems (iCIMS).
Bio coming soon
🗣️ Balancing Bytes and Blame: Navigating Medical Liability in the Age of AI
Michelle’s career spans over 30 years in the healthcare industry, piloting cross-border enterprise-wide transformations and understanding first-hand the nuances of leading successful businesses, while knowing what works to keep ahead of industry trends. Ranked in the top 10% of high-level strategists in Australia, Michelle is a strategic healthcare executive with a passion for the patient experience, which has seen her ensure consistent and successful business growth.
Dr Sanjeev Khurana is a Senior Paediatric Surgeon based at the Women’s and Children’s Hospital, North Adelaide. Sanjeev is also a certified ‘CHIA’ with a major interest in the learning healthcare system approach to clinical quality improvement. Along with his colleagues, Sanjeev has co-founded an Adelaide-based start-up specialising in the development and deployment of learning healthcare systems, in hospital settings as well as clinical research organisations.
Data scientist with a solid background in research. Over four years of expertise in image processing, object detection, predictive modelling, human action and activity monitoring, data processing, machine learning, and data mining algorithms, as well as working with challenging data sets to solve real-world business problems.
Dr. Siegfried Perez is an emergency physician, clinical researcher, and fencing coach. With a strong academic foundation, he earned an MD degree and embarked on a career in emergency medicine, where his calm under pressure and quick decision-making skills have saved lives. Dr. Perez is equally committed to advancing emergency medicine research. Beyond medicine, he shares his passion for fencing as a coach, imparting discipline and teamwork to aspiring athletes. Dr. Siegfried Perez’s multifaceted contributions leave a lasting impact, inspiring others to excel in medicine, research, and sports.
Jessica Rahman is a postdoctoral research fellow in the CSIRO Health Intelligence team, under the Health & Biosecurity business unit based in Westmead, Sydney. Her research involves developing, validating and implementing machine learning algorithms and workflows for applications in healthcare such as improving productivity and efficiency, risk stratification and clinical decision support. She completed her PhD at the Research School of Computer Science at the Australian National University (ANU). Her doctoral research involved looking at the effects of auditory and visual stimuli on human physiological signals to analyse how human affective (emotional) reasoning is influenced by sensory input, particularly different types of music. Jessica worked as a lecturer, academic tutor and supervisor for bachelor’s and master’s student projects at ANU. She is an Associate Fellow of the Higher Education Academy (AFHEA) and an accredited Mental Health First Aider (MHFA).
Dr Gluck trained in Cambridge before undertaking Anaesthetic training on the South Coast of the UK. He emigrated to Australia in 2013, to pursue a career in ICU. He now finds himself in a medical Admin role, having completed a PhD in using smartphone data (step and GPS data) to generate patient outcomes following critical illness. He has a passion for quality improvement and digital innovation. He recently purchased some land in the Adelaide hills and is trying (and failing) to build a Passive House.
Saadia Danish is a dedicated PhD candidate whose research centres on workforce capabilities cultivated throughout and following the digital transformation within the healthcare sector. Her work delves into the nuanced shifts in strategies, workflows, business models, and behaviours that accompany this transformation. With a profound commitment to driving positive change in healthcare, Saadia is passionate about researching and pioneering innovative solutions to address the workforce challenges within the industry.
As a dentist with an interest in healthcare innovation, Nida saw how technology could transform healthcare during her residency, and has been working on making that vision a reality ever since. Nida earned an M.S. in Health Informatics to learn more about how technology can improve healthcare delivery. Her doctoral project at Macquarie University explored how smart home technologies can help elderly people live independently, especially in the wake of COVID-19. She worked with an industry partner to gather feedback from older people and their families. Nida is particularly drawn to qualitative methods because they provide deeper insights into people’s experiences. Nida is excited to continue pioneering innovative applications of technology to improve the healthcare experience. She believes that technology has the potential to transform healthcare, making it more accessible, efficient, and effective for everyone.
Dr Deveny is an experienced and well-respected senior executive with a strong commitment to providing sustainable health outcomes for all Australians, and a demonstrated ability to build and maintain positive, productive partnerships with key stakeholders and the broader community.
Dr Deveny is currently the Chief Executive Officer of the Consumers Health Forum. She is the immediate past Chair of the Australian Digital Health Agency, and her recent roles include CEO of South Eastern Melbourne PHN, CEO of Bayside Medical Local and Chair of the Southern Metropolitan Partnership.
She holds a master’s in vocational health education and a PhD in Medicine (clinical decision support development), both from the University of Melbourne.
🗣️ Panel Discussion
Raymond is a General Practitioner, and Chief Growth Officer at Telecare Australia. At Telecare his portfolio includes clinical governance and clinical process design. He is also involved in multiple advisory and working group roles within the Primary Care Sector. Due to his varied portfolio of roles, Raymond has both a system-wide and coal-face perspective of the challenges and solutions to healthcare problems.
Vikram Palit is an Associate Professor at the Australian National University’s College of Health and Medicine. He is also a paediatric respiratory physician and the Founder and CEO of Consultmed, an Australian health-technology venture. Vikram has a background in healthcare consulting, academia and clinical medicine. He has led multiple digital transformation and service improvement programs across public and private healthcare settings in Australia and the United Kingdom.
Dawid, an AI and innovation leader at Accenture, has immersed himself in cloud and innovation throughout his career, spanning industry, start-ups, and enterprise. With deep expertise in Generative AI and cloud solutions, he’s set to present real-world examples of AI in healthcare, detailing their outcomes. Emphasising “practical innovation”, Dawid demystifies complex AI concepts, turning them into tangible insights. Leveraging both his and Accenture’s extensive experience, attendees will leave his sessions with a clear understanding of AI’s real-world impact on healthcare, inspired to tap into its transformative power.
Dr. Sonika Tyagi is an Associate Professor and Machine Learning lead scientist at RMIT University. Her expertise lies in the development and application of AI tools to address critical biological and clinical research questions. Her research primarily revolves around the analysis of large-scale genetic and healthcare data to predict personalized risk profiles and care plans for individuals affected by various diseases. In addition to her role at RMIT University, Dr. Tyagi serves as a co-investigator in the SuperbugAI project at Monash University. Her outstanding contributions to the field have garnered recognition in the form of various awards and funding. She has been a recipient of NHMRC grants and was honored with an Early Mid-Career Research Fellowship from the Australian Academy of Science for her exceptional work on AI models for the diagnosis of preterm birth. Dr. Tyagi’s accomplishments were further acknowledged when she became a finalist in the Australia/New Zealand Women in AI awards 2022, specifically in the “AI in Health” category. Recently, Sonika received a 2023 Brilliant Women in Digital Health award.
🗣️ Converting messy healthcare data to usable clinical assets
The integration of electronic health records (EHRs) has opened new avenues for leveraging historical data in predicting clinical outcomes and enhancing patient care. Nonetheless, the existence of non-standardized data formats and anomalies poses significant hurdles in utilizing EHRs for digital health research. Additionally, to develop robust and reproducible predictive models, one needs to use data from multiple healthcare sites to account for population-wide variations in their modelling approaches. However, institution-specific data formats and the inherent heterogeneity of EHR data hinder seamless data harmonization. To tackle these issues head-on, this talk will introduce EHR-QC, a comprehensive tool comprising two core modules: the Data Standardization Module and the Pre-processing Module.
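The kind of harmonisation task a standardization module automates can be illustrated in miniature: renaming site-specific fields onto a common schema and normalising units. The mappings, field names, and conversion below are illustrative assumptions, not EHR-QC's actual configuration or target schema.

```python
# Hypothetical sketch of site-to-common-schema harmonisation. Two sites
# record the same measurements under different names and units; harmonise()
# maps both onto one schema. All names here are invented for illustration.

SITE_A_MAP = {"pt_age": "age_years", "creat_umol_l": "creatinine_umol_l"}
SITE_B_MAP = {"age": "age_years", "creatinine_mg_dl": "creatinine_umol_l"}

def harmonise(record: dict, column_map: dict) -> dict:
    """Rename site-specific columns and convert units where needed."""
    out = {}
    for key, value in record.items():
        target = column_map.get(key)
        if target is None:
            continue  # drop fields with no mapping in the common schema
        if key == "creatinine_mg_dl":
            value = round(value * 88.42, 1)  # convert mg/dL to umol/L
        out[target] = value
    return out

a = harmonise({"pt_age": 63, "creat_umol_l": 110.0}, SITE_A_MAP)
b = harmonise({"age": 63, "creatinine_mg_dl": 1.24}, SITE_B_MAP)
# Both records now share one schema and one unit system.
```

Once records from every site share one schema, downstream pre-processing and modelling can treat the pooled data uniformly, which is the point of the two-module design.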
With over 30 years of leadership and significant contributions, Klaus is internationally recognised as an expert in Health Informatics, Healthcare Systems Interoperability and Digital Health Standards Development. An Adj. Associate Professor at Western Sydney University, Klaus has also taught at Sydney and LaTrobe universities as well as to industry and governments in the USA, the UK, Singapore, China, Malaysia, Germany and Bulgaria. Klaus currently serves as president of the Australian Council of Professions and led his professional association, the Australasian College of Health Informatics from 2009 to 2017.
🗣️ AI transparency and explainability in healthcare
Participants will gain a good understanding of the threats that AI can pose to ethical treatment/care decision-making and consent-giving in healthcare, as well as approaches to mitigate these risks. In this workshop we will explore the importance of making AI algorithms and resulting decision-making processes transparent and understandable to healthcare professionals, patients, caregivers, and the public. This is important not only to build trust and accountability but also to enable the obtaining of meaningful informed consent from patients when AI technologies are involved in their care and treatment decisions. We will also look at what measures and processes can be put into place to ensure and verify that the AI algorithms and systems in healthcare are meeting the ethical standards expected by patients as well as their families and caregivers.
Kevin Ross is SVP of Product Enablement for Orion Health, a healthcare platform managing over 100 million patients worldwide. He is also a Fellow of Health Informatics New Zealand, and Chair of the Advisory Board for the Institute for Natural, Artificial and Organisational Intelligence. Kevin was CEO of Precision Driven Health, New Zealand’s partnership supporting over 100 collaborations in health data science, from 2016 to 2023. Kevin was part of New Zealand’s COVID-19 response modelling team Te Pūnaha Matatini, which received the 2020 Prime Minister’s Science Prize, and was a finalist for the 2021 IT Professional of the Year. Kevin is passionate about data science and analytics, especially the safe use of data and technology for good. He founded the New Zealand Data Science & Analytics Forum and was a member of the Digital Council of New Zealand. He holds a Ph.D. from Stanford University, and a BSc(Hons) from the University of Canterbury.
🗣️ Building trust in AI with a robust and secure data platform
Healthcare workers are exploring ways to leverage AI in healthcare delivery, as a potential solution to major resource constraints. But the expected benefits are often weighed against perceived risks, recognising the potential for misuse of data or an over-reliance on unproven technologies.
How should we navigate our way through evaluating and adopting AI in an industry that is built on trust, consent, and high-touch human involvement?
In New Zealand, healthcare workers, academics, and software providers came together through a partnership which has supported over 100 data science collaborations in the past seven years. The result is a platform of technology as well as a robust set of guidelines and processes designed to help ensure that data is protected, while also being ready and available for AI enhancements. The platform is deliberately open, allowing users to customize according to their own data requirements, and enabling applications from any party to access the data that is needed for healthcare.
One place this platform has been deployed is for the New Zealand Algorithm Hub (the “Hub”). The Hub originally delivered a series of COVID-19-related models and algorithms to provide immediate value in support of New Zealand’s COVID-19 pandemic response. This has since become a central, shared knowledge base of New Zealand-validated algorithms and models. The Hub provides a path to adoption of models without the burden of support falling on the creator.
Protection of privacy is critical to trusted data science, including granular consent (allowing users to specify how data may be shared), and de-identification (allowing the creation of valuable insights without compromising privacy). AI needs to build rather than erode trust, mitigating the risks of privacy, bias or compounding inequities. This can be achieved through a clear data strategy, a transparent technology, and a robust governance process.
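The two mechanisms named above, granular consent and de-identification, can be sketched in a few lines. This is a toy illustration of the pattern, not the platform's implementation: the field names, the consent model, and the salted-hash pseudonymisation scheme are all assumptions made for the example.

```python
# Minimal sketch of granular consent plus de-identification. A record is
# released only for purposes the patient has consented to, and the direct
# identifier is replaced with a stable, non-reversible pseudonym.
import hashlib

SALT = b"platform-secret-salt"  # in practice, a securely managed secret

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash token."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

def release(record: dict, purpose: str):
    """Share a de-identified record only for consented purposes."""
    if purpose not in record.get("consented_purposes", []):
        return None  # granular consent: this use was not agreed to
    out = {k: v for k, v in record.items()
           if k not in ("patient_id", "consented_purposes")}
    out["pseudonym"] = pseudonymise(record["patient_id"])
    return out

rec = {"patient_id": "NHI-123", "egfr": 54,
       "consented_purposes": ["research"]}
shared = release(rec, "research")    # released, with identifier removed
blocked = release(rec, "marketing")  # withheld: no consent for this purpose
```

Real de-identification is considerably harder than dropping one field (quasi-identifiers, linkage risk), which is why the abstract pairs it with governance and a clear data strategy.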
🗣️ The current state of multimodal medical language modelling
Professor Denis King, MB, BS, FRACS, OAM, is a colorectal surgeon with 40 years’ experience. He was Executive Clinical Director, SESIAHS and the Director of the Division of Surgery at St George Hospital. Professor King is Clinical Professor, Graduate School of Medicine, and Fellow, University of Wollongong, and Associate Adjunct Professor, University of NSW. He sits on the Boards of the St George and Sutherland Medical Research Foundation and the Illawarra Health and Medical Research Foundation, chairs the Justice Health and Forensic Mental Health Network NSW Board, and chaired the NSW Health Ministerial Advisory Committee.
🗣️ An AI-based personal assessment of the risk of colorectal cancer
Colorectal cancer (CRC) is the third most common cancer globally, and the second most common cause of cancer deaths. We know the personal and environmental conditions that predispose to it. It has a precursor condition, the adenomatous polyp, that remains non-malignant for at least 10-20 years. This means that the disease is preventable, and there are well-established government-funded screening programs designed to prevent it. Despite that, the incidence has decreased only marginally in the last 30 years, and in some segments of the population, particularly young men in the USA, it is increasing. Why is that?

Screening is offered to the community through faecal testing, and to high-risk groups through colonoscopic screening. The burden on the individuals concerned, and on health systems more broadly, is becoming a major public health issue. For the high-risk group, screening is less well targeted than it might be, due to the lack of a method of stratifying risk. Such a method would allow us to separate those within the high-risk group who have a high personal risk from those who do not, and to concentrate our efforts on the former.

This is a report of a 40+ year study of 5048 patients believed to be at high risk of CRC, from which such a stratification method has been identified. With our knowledge of the quantifiable environmental risks, together with this stratification method, and supported by Artificial Intelligence methods, a personalized CRC risk assessment is now feasible. This will allow more cost-effective screening and improve the known poor compliance with such screening.
David Hansen is CEO and Research Director of the Australian e-Health Research Centre at CSIRO – Australia’s national science agency. The AEHRC is CSIRO’s digital health research program and a joint venture between the CSIRO and Queensland Health. With over 150 scientists and engineers the AEHRC is Australia’s largest digital health research centre. The AEHRC undertakes research in data semantics and interoperability, genomics, medical imaging, artificial intelligence and machine learning across healthcare. The technology developed by AEHRC scientists is aimed at enabling digitally enabled services to improve the safety, quality and efficiency of healthcare. David is involved in leadership positions in many national research initiatives including the NHMRC Centre for Research Excellence in Digital Health and the Australian Alliance for Artificial Intelligence in Healthcare. David is on the board of the Australian Institute of Digital Health and is a member of the Connected Care Council for the Australian Digital Health Agency. Prior to joining CSIRO, David held senior positions leading technology research and development for SRS with LION Bioscience Ltd and before that the European Bioinformatics Institute. David is passionate about the role of information and communication technologies in health care and the role of digital health professionals in developing a safe, high-quality, efficient and sustainable healthcare system in Australia.
Farah Magrabi is a Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation, Macquarie University. She has a background in Electrical and Biomedical Engineering and is an expert in the design and evaluation of digital health and artificial Intelligence (AI) technologies for clinicians and consumers.
Farah’s research seeks to investigate the clinical safety and effectiveness of digital health and AI technologies. She is internationally recognised as a leader in this area, and has made major contributions to documenting the patient safety risks of digital health by examining safety events in Australia, the USA and England. Her work has shaped policy and practice including a new specification by ISO, the International Organization for Standardization (ISO/TS 20405) for the surveillance and analysis of safety events.
Shaun has more than 30 years’ experience of successfully delivering IT solutions in Australia and internationally. He is an expert in data and analytics solutions with experience covering data strategy, big data, data governance, analytics, AI and IoT. He has a wide range of industry expertise with a current focus on public sector and utility organisations.
🗣️ Translating AI ethics into breakthrough innovation
Jim Warren is Professor of Health Informatics at the University of Auckland, based in the School of Computer Science. He specialises in design and evaluation of information systems for long-term conditions, e.g. cardiovascular risk management. In recent years he has increasingly focused on screening and e-therapy to support mental health of New Zealand youth. He is collaborating with a UK-based programme, “Adolescent Mental Health and Development in the Digital World,” on methods of personalization to promote adherence to mental health apps. He is a Foundation Fellow of the Australasian College of Health Informatics (now the Australasian Institute of Digital Health). His degrees are in Computer Science and Information Systems from the University of Maryland.
🗣️ Enabling AI for digital mental health
There are many ways in which IT can support mental health. Such computer-based tools often cross the threshold into AI, and can also be seen as part of ‘consumer-health informatics’, where IT acts to empower individuals in their own care. Applications include patient-operated screening tools, and chatbots that teach self-management skills and promote well-being.
A key engineering question is how tightly to script app dialog, and where deep-learning-based methods can safely be allowed to make decisions. Most existing tools rely chiefly or wholly on expert-written dialog and pre-programmed pathways. These are safe, but can fail to be engaging. Conversely, packaging a generative AI to engage in counselling-like dialog has many risks. Some of the best near-term opportunities lie in leveraging the classification power of deep learning.
This talk presents a range of mental health applications focused on the author’s experience in creating tools for New Zealand youth, notably the YouthCHAT mental health and lifestyle screening and help-assessment tool, used in over 100 practices and schools; and the Headstrong chatbot platform for youth resilience, available nationwide under Ministry of Health contract. Further experimental results are presented showing potential for natural language classification to provide oversight of relatively unrestricted generative dialog, and for deep-learning-based sequencing of chatbot agendas.
Computationally simple approaches are already safe and effective in screening and in delivering mental health/wellbeing support, particularly with cognitive-behavioural therapy (CBT). Opportunities exist to make more engaging products that leverage deep learning and elements of generative AI to allow greater engagement and dynamism.
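The oversight pattern the abstract describes, a classifier screening each exchange and falling back to a scripted pathway when risk is detected, can be sketched as below. A real system would use a trained deep-learning classifier rather than this keyword stand-in, and every name and message here is invented for illustration.

```python
# Toy illustration of classifier oversight of generative dialog: screen each
# user message and route to a clinician-written pathway when risk is found.
# The keyword check is a stand-in for a deep-learning risk classifier.

RISK_TERMS = {"self-harm", "suicide", "hopeless"}

def classify(message: str) -> str:
    """Stand-in for a trained risk classifier: 'risk' or 'ok'."""
    words = set(message.lower().replace(",", " ").split())
    return "risk" if words & RISK_TERMS else "ok"

def respond(message: str) -> str:
    if classify(message) == "risk":
        # Scripted, expert-written escalation pathway: safe but inflexible.
        return ("It sounds like things are really hard right now. "
                "Let's connect you with someone who can support you.")
    # Otherwise a generative model could reply, under continued oversight.
    return "[generative reply]"
```

The design trade-off mirrors the talk's framing: scripted pathways are safe but less engaging, so the classifier decides where the generative component is allowed to operate.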
Chris is a machine learning and data engineer in the Informatics Department at Murdoch Children’s Research Institute (MCRI). He uses natural language processing to classify and extract vaccine adverse events and syndromic surveillance information from clinical and social media texts. His work enhances MCRI’s vaccine surveillance capacity. He is also involved in authoring and managing informatics and business intelligence requirements at MCRI.
🗣️ AI-based Triaging Adverse Events of Special Interest (TRAESI) for Surveillance of Adverse Events Following Vaccination in the Community (SAEFVIC)
The TRAESI project aims to enhance Adverse Events of Special Interest (AESI) detection by using large language model-based natural language processing (NLP) techniques to overcome the limitations of the current pattern-matching system, including the need to constantly adjust the pattern-matching rules and an inability to capture linguistic and structural context, which leads to incorrect identification of AESI.
The project aim is to prioritize AESI for clinical attention, rather than to identify adverse events exactly. Therefore, the NLP model only needs to differentiate between the major classes of AESI and a larger number of common and expected adverse event classes. A major hurdle was the limited availability of clinical help to annotate data. As processed SAEFVIC reports are assigned a limited number of reaction codes and many more ad hoc reaction descriptions, these became an alternative to formal labelling. With clinical guidance they were assigned to 21 categories, which were then allocated to the individual reports. Consequently, a potentially unlimited pool of labelled records was available, and the large number of records minimized the “noise” inherent in the approach. Nevertheless, some AESI were underrepresented and difficult for the model to learn, so the freely available and massive US VAERS system was used to supply extra data examples, increasing the model’s capability. An alpha score of 0.82 and an F1 score of 0.89 were obtained on test data.
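The weak-labelling step, deriving training labels from each report’s assigned reaction codes, can be illustrated with a small sketch. The codes and category names here are invented for illustration and are not SAEFVIC’s actual 21-category mapping:

```python
# Hypothetical reaction-code -> category mapping (illustrative only;
# the real project mapped codes to 21 categories under clinical guidance).
CODE_TO_CATEGORY = {
    "myocarditis": "AESI:myopericarditis",
    "pericarditis": "AESI:myopericarditis",
    "gbs": "AESI:guillain_barre",
    "injection_site_pain": "common:local_reaction",
    "fever": "common:systemic_reaction",
}

def label_report(reaction_codes, default="unmapped"):
    """Derive a single training label from a report's reaction codes,
    preferring AESI categories so they are prioritised for review."""
    categories = [CODE_TO_CATEGORY.get(c, default) for c in reaction_codes]
    aesi = [c for c in categories if c.startswith("AESI:")]
    if aesi:
        return aesi[0]
    return categories[0] if categories else default

label_report(["fever", "myocarditis"])  # the AESI category wins
```

Because every processed report already carries such codes, this approach yields a large labelled pool without manual annotation, at the cost of some label noise.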
The model is currently being assessed and refined based on feedback from SAEFVIC clinical staff, who are very pleased with the model’s performance and with the ease of the annotation approach. It is deployed in a Databricks environment, regularly fetching data from the underlying SQL database and saving its predictions back to SQL. The aim is to integrate the model into the SAEFVIC application. Further models are planned to identify specific AESI exactly.
Mohd Saberi is a Professor of Artificial Intelligence and Health Data Science. He is currently the Director of the Health Data Science Lab in the Department of Genetics and Genomics, College of Medicine and Health Science, UAE University (CMSH-UAEU). His research areas include Artificial Intelligence, Data Science, Bioinformatics, and Computational Biology, which he applies in fields such as Genomics, Rare Diseases, Cancer, Public Health, Metabolic Engineering, and Drug Repurposing. He is a principal investigator (PI) for 21 research grants and the co-PI for 20 research grants, and has published more than 300 scientific articles and 17 books. Furthermore, he has been an advisor, leader, and member of committees developing academic programs at the Bachelor, Master, and PhD levels for Biomedical Engineering, Computer Science, IT, AI, Data Science, and Bioinformatics. Previously, he was the Director of the Institute for Artificial Intelligence and Big Data, Head of the Artificial Intelligence and Bioinformatics Research Group, and founder of the Department of Data Science. In appreciation of his leadership, the university appointed him a member of the University Senate. Additionally, he is a member of the advisory board for the Artificial Intelligence Research Institute and IoT Digital Innovation Hub in Europe.
🗣️ A health data science ecosystem for School of Medicine and Health Sciences
Health Data Science (HDS) is an advanced interdisciplinary field combining Computer Science/IT, Mathematics, and Healthcare to study health data in order to gain insights and knowledge from it. HDS plays a crucial role in medicine and healthcare, and is one of the major current and future areas of healthcare research and industry. However, several challenges in medicine and healthcare highlight the need for innovative HDS-based solutions. These challenges are often related to the vast amounts of data generated, the complexity of healthcare systems, and the desire for improved patient care and outcomes.
Therefore, HDS plays a pivotal role in transforming healthcare by providing advanced, innovative solutions to overcome these challenges. Recently, medical schools at Harvard University, the University of Cambridge, the University of Oxford, Imperial College London, and others have established HDS in their research and education. It is essential for a school of medicine and health sciences to have an HDS ecosystem in order to support students, academics, and staff with advanced knowledge and technology to fulfil current and future needs. Thus, this presentation proposes an HDS ecosystem for the School of Medicine and Health Sciences. It covers an HDS overview, core technologies (Artificial Intelligence, Machine Learning, Big Data, and the Internet of Things), infrastructure, infostructure, applications, software, education (academic programs, courses, training), and strategic partnerships. It also presents some of our ongoing HDS projects. In summary, the HDS ecosystem has significantly impacted medicine and healthcare, providing numerous outcomes and benefits, especially for schools of medicine and health sciences. Moreover, it has the potential to transform medicine and healthcare by enhancing diagnostic accuracy, personalizing treatment, accelerating research, improving patient care, reducing costs, and making healthcare services more accessible and efficient.
Aarti is an experienced clinical leader in Radiology with a strong commitment to driving continuous improvement in patient experience through the design and implementation of best practice healthcare information systems and processes, and active development of workforce capability and capacity. Aarti has proven business acumen and entrepreneurial spirit. Aarti’s areas of expertise are health management, clinical leadership, customer experience, digital transformation, project management, and human-centered design. She is also a skilled leader who is known for her relationship management, communication, research, and analysis abilities and is a trusted and passionate clinician.
🗣️ Leveraging AI in imaging: A Path to Cost-Efficient Healthcare in Australia
Professor Karin Verspoor is Dean of the School of Computing Technologies at RMIT University in Melbourne, Australia. Karin’s research primarily focuses on the use of artificial intelligence methods to enable biological discovery and clinical decision support, through extraction of information from clinical texts and the biomedical literature and machine learning-based modelling. Karin held previous posts as Director of Health Technologies and Deputy Head of the School of Computing and Information Systems at the University of Melbourne, as the Scientific Director of Health and Life Sciences at NICTA Victoria Research Laboratory, at the University of Colorado School of Medicine, and at Los Alamos National Laboratory. She is also the Victorian Node lead and co-founder of the Australian Alliance for Artificial Intelligence in Health.
Neville Board RN, BA, MPH, FAIDH is Chief Digital Health and Information Officer at the Justice Health and Forensic Mental Health Network in NSW. Prior to this appointment in 2023, he headed Victoria’s Digital Health Branch and operations. He previously established and operated the Health Information, eHealth and Medication Safety programs at the Australian Commission on Safety and Quality in Health Care, and convened the WHO clinical practice working group for the Global Patient Safety Challenge, Medication Without Harm. Mr Board is a registered nurse and has worked in clinical, management, and informatics roles, including six years working in primary health care partnerships in low-income regions. He has published on hospital in the home, use of data in health care, short-stay surgery and post-acute care.
🗣️ Safe clinical use of generative AI
Dalibor is an experienced leader at Accenture. He leads Accenture’s Data and Artificial Intelligence practice in Australia and New Zealand, giving him a vantage point to see the latest developments, including in GenAI, across a range of industries. He specialises in advanced analytics, artificial intelligence, cloud data computing and automation. He has delivered complex data and AI programs across Health and the Public Sector in Australia and overseas. Currently he specialises in developing industry solutions with data and AI that solve tough business problems, with a special focus on healthcare.
🗣️ A story from the frontier of AI in Healthcare: Transforming clinical coding with an AI co-pilot
We set out to solve the critical shortage of Clinical Coders in the Australian market, which is contributing to acute coding backlogs, variability in the accuracy and quality of coding, and delays in revenue collection and payments.
The technology brief was to prove that an AI/ML approach could raise coding productivity and accuracy, standardise processes, and improve user experience. The solution had to be hosted in Australia for data sovereignty and privacy reasons, be open for integration with the EMR and other hospital applications, and provide a good user experience. Other features were:
The implementation process was a 12-week Pilot with a single, multi-disciplinary team drawn from the hospital and the technology partner. We followed a rigorous methodology which can be summarised into the following steps:
We successfully proved that the AI Co-pilot for Clinical Coders
Sue is a Managing Director with Accenture and brings an extensive executive leadership career spanning clinical practice, health service delivery, design, and service-line development, underpinned by data and digital transformation.
Professor Chris Bain is an experienced clinician (former) and health IMT practitioner with a unique set of qualifications, and a unique exposure to broad aspects of the healthcare system in Australia. Chris has extensive experience in designing, leading and running operational IMT functions in healthcare organizations. His chief interests are in the usability of technology in healthcare, data and analytics, software and system evaluation, technology ecosystems and the governance of IT and data.
🗣️ Workshop: User Experience
An accomplished digital health and health information management expert with a diverse background in medical research, digital health, organisational change, bioinformatics, and tertiary education, Paul is a passionate advocate for diversity and inclusion and excels in engaging effectively with stakeholders from the government, education, and private sectors. He currently contributes to several areas of Australia’s digital health sector, including cyber awareness, AI ethics and governance, and reducing the emerging digital divide in society. He serves on the St. Vincent’s Healthcare Human Research Ethics Committee as well as boards and advisory committees on health, cybersecurity, AI governance, education and biotechnology. He is currently serving as a sessional Unit Chair (School of Medicine) and a Senior Research Fellow with the Deakin University Institute for Health Transformation, where his current research focuses on reducing the impact of the digital divide on Australia’s ageing society.
🗣️ Next steps
Sarah is the National Manager of Research, Evaluation and Insights at AHPRA. Sarah has worked in the research and academic space for over 18 years and is an Adjunct Associate Professor at La Trobe University. Sarah leads the Research, Evaluation and Insights team at AHPRA. The team’s core functions support AHPRA as a risk based regulator through undertaking research and evaluation and informing regulatory policy and practice. Sarah started her career as a prosthetist/orthotist, has a Master of Public Health, a PhD in Ergonomics and Human Factors, and is a graduate of the Australian Institute of Company Directors.
🗣️ Workshop: Regulatory
Professor Catherine Jones is a consultant cardiothoracic radiologist, specialising in lung cancer and occupational lung disease. She has practised in Australia since 2011, after completing radiological training in the UK and thoracic radiology fellowship in Canada. She is adjunct professor at the University of Sydney Faculty of Medicine and Health, and associate professor at the School of Public and Preventive Health at Monash University, enjoying a long career in medical research. She has a Bachelor Degree in Mathematics from the University of Queensland, majoring in statistical methodology. Catherine is an executive member of the Australia and New Zealand Society of Thoracic Radiologists (ANZSTR) and works with medical AI and technology companies to develop clinically useful AI tools for medical imaging interpretation, workflow, and education of doctors and other healthcare professionals. She is currently the chair of AI innovation at Australia’s largest radiology company.
🗣️ AI Revolution: Unleashing Potential, Navigating Change – The Workforce
A career data professional with over 20 years of senior leadership experience, Nigel has a passion for building highly motivated, talented teams that create brilliant systems for business people, delivering great outcomes for customers and executive management. During his extensive career across Australia, the UK, Canada, and the US, Nigel has always worked in the business analytics and data platform field. He has worked across many types of businesses and sectors, including large-scale, complex organisations, and most recently worked for Epworth Healthcare as Delivery Manager, Business Analytics, focused on modernisation and improved service delivery. Nigel’s passion for data, and for how it can benefit every part of an organisation, is evidenced by the positive impact of the data engineering strategies and projects he has led and delivered by nurturing high-performing teams.
Nasim Salehi is an Assoc/Prof in “health promotion” and “healthcare leadership”, at the School of Business and Law, Edith Cowan University, and an adjunct Assoc/Prof at Southern Cross University (Faculty of Health). She has 19 years of experience in both academic and healthcare industries (since 2004). She has held leadership and management positions across health, social, and community care settings, resulting in implementation frameworks to enhance integrated, effective, equitable, and empathetic care services. As a “health promotion specialist”, she advocates for integrating health promotion approaches into health care for more holistic, preventive, and functional approaches to health that can be sustainable.
🗣️ “Positive Connections”: A health prevention program, empowering adolescents with the bright side of social media
With the rapid expansion of social media, adolescents have limitless online opportunities to explore and question who they think they are or want to be. Adolescents with poorly developed self-identities (uncertain about their true selves and goals) may approach social media as an escape, getting trapped in its dark side (e.g., passive use, addiction, excessive self-promotion, echo chambers, and radicalisation). Hence, they become more susceptible to cyberbullying and harmful comparisons, fostering negative self-perception, mis/disinformation, lack of empathy, isolation, loneliness, and even suicidal thoughts over time. While social media has faced criticism for its negative effects, it remains significant due to technological advances and accessibility. Social media can be a valuable tool for self-discovery, connecting adolescents with support, depending on why and how it is used.
Rather than the traditional deficit focus assumption that social media is a problem, we have designed “Positive Connections” through a personalised pedagogical framework using AI, gamification, and simulation to empower adolescents with the bright side of social media. “Positive Connections” assists with both the prevention of getting trapped in the dark side of social media as well as the promotion of the bright side based on the specific context of each adolescent, and their environment.
“Positive Connections” is implemented through 5 phases:
“Positive Connections” uses a strengths-based approach to empower adolescents to 1) Create positive self-identity through the development of vision/goals related to who they are and what they want in life; and 2) build and maintain a positive social identity/sense of belonging, facilitating access/provision of various supports.
Emarson Victoria leads product innovation at care technology company Rauland Australia. With a background in product management, user experience and innovative business models, he has over 25 years’ experience in digital transformation across various technology industries, building high-performing, successful product teams. Emarson enjoys the challenges and complexities of multi-channel, multi-device product development and cherishes the excitement of commercially successful go-to-market introductions in an agile environment. Highly entrepreneurial, Emarson is interested in all things digital, Health Tech, AI, IoT and Data.
🗣️ Beyond the buzz: What AI in healthcare is really all about – a practical example
Despite popular belief, artificial intelligence (AI) is not a new concept. It has been with us since the 1950s, quietly transforming industries and lives. It is only recently, especially with the buzz around advancements like ChatGPT, that AI has truly entered the spotlight. Nowhere is its potential more exciting or impactful than in healthcare. Like anything cutting edge, AI brings with it a slew of challenges around reliability, scalability, cost effectiveness, ethics and user adoption.
Healthcare’s AI challenges are complex, varying greatly based on the type and purpose of the application.
Clinical applications come with issues around accuracy, data governance, and privacy, while monitoring applications require the highest levels of reliability, precision, and confidentiality. The concerns increase when healthcare applications directly impact individuals such as patients, residents, and staff. But that does not mean we should stop pursuing AI’s transformative power for the healthcare industry.
In this presentation we show how to successfully introduce and implement AI technology in a real healthcare setting in a way that instils confidence and drives adoption.
Many talk about AI; few can actually demonstrate how to implement it well. We will take you on a journey through our experiences in operationalising a specific AI-based application.
More importantly, we will share which implementation strategies succeeded, which did not, and how we navigated these challenges to meet customer and user expectations and find solutions.
Ian is a Health Industry executive and Board advisor with 30 years’ experience in clinical, business consulting and general management roles. He cultivates executive relationships to deliver Healthcare technology results, drive digital transformation in health systems, enhance human+machine productivity and improve health system outcomes. Ian leads a specialised Health business and technology team at Fujitsu to resolve customer challenges and co-design innovative solutions. He assembles transformational teams to deliver sophisticated, end-to-end digital platforms that put the workforce and consumers in the driving seat to achieve their healthcare goals, using digital integration, cloud and data to improve population health outcomes.
🗣️ Implementation of conversational and text-based generative AI in a customer service system
Our managed service desk receives a high volume of calls from system users facing technical and business challenges. The model of hiring experienced technicians to resolve issues was becoming more expensive for customers, and as business challenges grew more complicated, issues took longer to resolve because customer service staff needed to consult the knowledge repository or message colleagues before proposing solutions.
We needed an AI system that could learn from both voice and text, then interact with users in both. It needed to ingest the knowledge repository so that it could learn from past and future use cases. It needed to be fast and secure to install, and to provide measurable improvements in performance above the baseline so that we could understand whether it was more or less effective than a human customer service agent. It needed to handle many simultaneous lines of enquiry and response. It needed to provide an economic return on investment, costing less than hiring more staff. And it needed high availability and resilience so that it could operate safely and consistently every day.
The system was technically simple to install in the Proof of Concept (POC) stage. Initially we needed to allocate dedicated staff to supervise the responses and validate that they were correct. This workforce was additional to the customer service agents dealing with enquiries, and it diminished as response accuracy improved, to the point that the POC was complete and we implemented the system in a live customer setting.
The generative AI system seemed to learn more from the POC than from the knowledge repository, so expectations needed to be tempered, as it took longer than initially planned to become customer-ready. Post-POC, the system delivered on the brief and now works very well. It has reduced the cost to serve and the time taken to resolve service calls. The system could have other healthcare uses, such as providing a Medicines Information service.
Doctor Dominika Kwasnicka (MA, MSc, PhD) is a Senior Research Fellow in Digital Health at the University of Melbourne, affiliated with the NHMRC Centre for Research Excellence in Digital Technology to Transform Chronic Disease Outcomes, Australia. She is also Chief Executive Officer at Health Redesigned Pty Ltd, a Perth-based research and evaluation consultancy, and a Director and Co-Founder of Open Digital Health, an international not-for-profit organisation encouraging the reuse and wide implementation of evidence-based digital health solutions. She is a behavioural scientist with diverse interests in health behaviour change, digital health, and research methods focusing on individuals. She is also passionate about science translation and dissemination. She will talk about Retmarker, a system that uses Artificial Intelligence to screen patients with diabetes in order to automatically detect Diabetic Retinopathy. Retmarker is a certified Class IIa medical device that has already been used to screen more than half a million patients in Europe and is soon coming to Australia.
🗣️ Preventing vision loss using the Retmarker Eye Screening System – validated AI technology for autonomous detection of diabetic retinopathy
People with diabetes (types 1 and 2) are at increased risk of eye problems and vision loss. Diabetic retinopathy is the main form of diabetic eye disease. Almost 1.9 million Australians have diabetes. On average, one in three of these people have some level of diabetic retinopathy. Regular eye checks are key to detecting the early stages of diabetic retinopathy. But about half of all Australians with diabetes do not get the eye checks they need.
The Retmarker Eye Screening System is an extensively validated AI technology for autonomous detection of diabetic retinopathy, tested on over half a million images collected in real-world clinical environments. Retmarker uses deep learning to automatically detect diabetic retinopathy quickly, accurately, and consistently, with higher sensitivity than an eye care expert trained in image grading. This technology is certified as a Class IIa medical device in Europe and is also TGA-approved in Australia.
The Retmarker system is easy to use and, with minimal training, can be operated by technicians or nurses, enabling diabetic retinopathy screening at the point of care and eliminating the need for an eye care specialist appointment just for screening.
AI Eye Screening with Retmarker system makes it possible for any physician to identify patients with vision-threatening retinopathy in-clinic, in real-time, so they can be immediately referred to an eyecare specialist for treatment to save their sight.
Most vision loss from diabetic eye disease can be prevented, as long as it’s caught early enough. If you work with diabetes patients as a general practitioner, endocrinologist, diabetologist, ophthalmologist, or an optometrist, Retmarker can enable fast, reliable, and accurate diabetic retinopathy screening for your patients.
Mark has held several executive positions in international peak membership bodies overseeing policy, strategy, government affairs, digital health and workforce. He has wide ranging experience of governance, implementation of strategy, workforce upskilling and the application of new technologies in health. His qualifications include a BSc Economics, MSc in European Politics and Governance, certificates in leadership and the AICD company director’s course. He also became a Fellow of the Australasian Institute of Digital Health (AIDH) in 2020 in recognition of his inaugural work on telehealth and artificial intelligence.
Mark is interim CEO of AIDH, working towards healthier lives, digitally enabled. Prior to leading AIDH, Mark worked as a consultant to governments on health policy, digital health and system-wide reforms. Mark was also interim CEO at The Royal Australian and New Zealand College of Radiologists (RANZCR) from 2021 to 2022. Before taking on that role, he oversaw the Faculties of Clinical Radiology and Radiation Oncology and led RANZCR’s policy and advocacy work for six years. Prior to moving to Australia, Mark headed up policy and strategy for the optometry sector in the UK, Republic of Ireland and the European Union. Mark’s interest and expertise in health stems from 10 years delivering front line care as an optometrist in the UK and Ireland.
Mark has a range of digital health publications, including on the implications of AI for the health workforce. He has also developed frameworks for the safe deployment of telehealth and AI in clinical care, including standards of practice to guide providers.
🗣️ Implications of AI for the health workforce
Professor Kenny has an extensive and distinguished clinical career in cancer care as a specialist in Radiation Oncology. Her main areas of clinical, scientific and research interest are in complex skin cancer, head and neck cancer and breast cancer. She is a principal or co-investigator of several national and international clinical trials and has published over 150 scientific papers. She is a Professor in the School of Medicine at The University of Queensland.
Professor Kenny has served on the Board of Cancer Australia and held the Presidencies of the Clinical Oncological Society of Australia, and the Royal Australian and New Zealand College of Radiologists.
Liz was the inaugural chair of the Queensland Clinical Network Executive, has recently stepped down as chair of the Artificial Intelligence Committee of RANZCR, led the development of Practice Standards for Interventional Oncology for CIRSE, and was involved in developing the International Accreditation System for Interventional Oncology Services.
Professor Kenny’s work has been recognised with honours by international radiological organisations and she was appointed an Officer of the Order of Australia in 2017.
🗣️ Ethics of AI
Sedigh’s interests and expertise are in clinical informatics, natural language processing (NLP), machine learning, and text mining. Her research primarily focuses on developing and applying state of the art NLP algorithms to extract insight from large free text data. Sedigh’s research has led to publications in leading academic journals and prestigious conferences, as well as collaborations with healthcare organizations and industry partners. Sedigh has extensive experience in the IT industry, working with large IT and industrial organizations.
She holds two master’s degrees and a PhD from Monash University. Her PhD used a novel natural language processing (NLP) approach to detect personal health mentions in social media, enhancing vaccine safety surveillance. Since then, working with the Murdoch Children’s Research Institute and the Victorian Department of Health, she has applied her skills to improving Adverse Events Following Immunization (AEFI) detection in social media and electronic medical records, and to syndromic surveillance.
🗣️ Active learning for improving patient care and outcomes in AI-driven healthcare
Emergency department (ED) triage notes are a valuable source of information for syndromic surveillance of emerging public health diseases. They can be processed with Natural Language Processing (NLP) techniques to identify cases in real time, which can support improved public health planning and responsiveness. However, the development of NLP models for ED triage notes is challenging due to the lack of labelled data. Active learning (AL) offers a solution by selecting the most valuable data for labelling, reducing the burden on annotators. AL works iteratively, using an NLP model to identify the most informative and representative data points, which are then labelled by a domain expert. This enables the model to learn from a smaller set of data while still achieving high performance.
We investigated the effect of different query strategies with pool-based AL for labelling ED notes, to train models to classify acute asthma presentations. We evaluated three active learning approaches: uncertainty sampling, diversity sampling, and random sampling. We employed sentence embedding models to map triage notes to a 384-dimensional dense vector space. The vector embeddings were used by UMAP and HDBSCAN to cluster the data into distinct topics, which formed the basis of the representative sampling strategy. These vector representations were also stored within a ChromaDB vector database, which facilitated the calculation of document similarity. A RoBERTa-based Transformer model was used for classification of the ED notes.
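The query step of pool-based uncertainty sampling can be sketched in a few lines. This is a generic illustration assuming any probabilistic classifier, not the study’s actual code:

```python
def uncertainty_sample(pool_probs, k):
    """Pool-based least-confidence sampling: choose the k pool items
    whose top predicted class probability is lowest, i.e. the notes
    the current model is least sure about."""
    confidence = [(max(p), i) for i, p in enumerate(pool_probs)]
    confidence.sort()                      # least confident first
    return [i for _, i in confidence[:k]]

# One illustrative AL round: six pooled triage notes, with the current
# classifier's (asthma, not-asthma) probabilities for each note.
pool = [
    (0.95, 0.05),   # confident negative
    (0.51, 0.49),   # near the decision boundary -> informative
    (0.30, 0.70),
    (0.48, 0.52),   # near the boundary -> informative
    (0.90, 0.10),
    (0.60, 0.40),
]
to_label = uncertainty_sample(pool, k=2)   # sent to the domain expert
```

After the expert labels the selected notes, the classifier is retrained and the cycle repeats, which is how AL reaches high performance from a small labelled set.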
Our results showed that uncertainty sampling outperformed the other query strategies, achieving an F1 score of 0.91, which was 7.6 percentage points higher than the baseline score.
Our work demonstrated the effectiveness of active learning for clinical text classification, specifically for asthma detection in ED triage notes. Active learning holds promise for enhancing syndromic surveillance, allowing rapid response to public health events, and ultimately improving patient care and outcomes through AI-driven healthcare solutions.
Rashina Hoda is an associate professor in software engineering in the Faculty of Information Technology at Monash University, Melbourne. She is the group lead of software engineering and the deputy director of the HumaniSE Lab. She is a leading international expert on agile methods and has introduced socio-technical grounded theory for qualitative research based on over 15 years of practice. Her research focuses on human and socio-technical aspects of software engineering applied to a variety of domains including Digital Health, Education, and IT.
Rashina leads a Digital Health CRC project on enhancing telehealth in collaboration with the Victoria Department of Health, Health Direct Australia, Monash Health, and the University of Melbourne. She is passionate about girls and women in STEM and was selected a Superstar of STEM (2021-22 cohort) by Science and Technology Australia. She enjoys presenting to public, industry, and academic audiences alike, for example, at TEDx Auckland, Digital Health Week 2023, Agile Australia, Agile New Zealand, Agile India, among others. To access her talks and research, please visit www.rashina.com. You can also find her on Twitter (@agileRashina) and LinkedIn.
🗣️ Beyond the shiny new tech: Co-designing with consumers in digital health
It is tempting to be driven by the latest new tech when designing software prototypes for healthcare. However, there are many critical considerations that need to be aligned and tradeoffs to be made, when designing with health outcomes improvement and possible research translation in mind. In this session, A/Prof Hoda will share a socio-technical framework for designing digital health systems based on her experiences of leading an interdisciplinary digital health project with multiple government, industry, and research stakeholders, and working closely with consumers to understand and embed lived experience into software design.
Dr Brankovic is a Research Scientist with the CSIRO Australian e-Health Research Centre. In her role, Dr Brankovic applies her expertise in complex algorithms and artificial intelligence to the development of new medical devices and decision support tools, improving the provision of healthcare and changing the lives of patients. In September 2022 she was selected as a finalist for the Global Australian Global Talent Award, supported by the Australian Government, for her contribution to Australia’s reputation as a world leader in medical innovation, and in 2020 she was awarded an Early-Career Advance Queensland Research Fellowship (AQRF). Her recent work on the reliability of XAI has been acknowledged with the Best Paper Award at Medinfo 2023.
🗣️ Predict, explain, engage
With 67 per cent of Australian adults overweight or obese, and 8 per cent of the burden of disease in Australia attributable to obesity, supporting health intervention and lifestyle change is more important than ever. Recently, there has been an increase in digital behavioural intervention programs to help reduce modifiable health risk factors such as obesity and lack of exercise. Engagement is key to interventions that achieve successful behaviour change and improvements in health. There is limited literature on the application of predictive machine learning (ML) models to data from commercially available weight loss programs to predict disengagement; such predictions could help participants achieve their goals. We used explainable ML to predict disengagement from the program on a weekly basis, based on the user’s total activity on the platform, including weight entries from the weeks prior. This study showed the potential of applying ML predictive algorithms to help predict and understand participants’ disengagement with a web-based weight loss program. With this information, digital health interventions can become more tailored and supportive, and offer users a greater chance of making long-term lifestyle changes. To that end, the potential of applying generative AI for this purpose is discussed.
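The weekly prediction setup can be illustrated with a minimal sketch. This is hypothetical: the lag window, the choice of features, and the zero-activity proxy label are assumptions for illustration, not the study's actual design; an explainable classifier (e.g. gradient-boosted trees with feature attributions) would then be trained on rows like these.

```python
def weekly_features(activity, lags=2):
    """Build supervised-learning rows from one user's weekly
    activity counts on the platform.

    For each week t, the predictors are the counts from the
    previous `lags` weeks, and the label marks whether the user
    logged no activity in week t (a simple proxy for
    disengagement used here purely for illustration).
    """
    rows = []
    for t in range(lags, len(activity)):
        features = activity[t - lags:t]  # lagged activity counts
        label = 1 if activity[t] == 0 else 0  # 1 = disengaged that week
        rows.append((features, label))
    return rows

# Toy user: active for three weeks, then drops off the program.
print(weekly_features([5, 3, 4, 0, 0]))
# → [([5, 3], 0), ([3, 4], 1), ([4, 0], 1)]
```

Framing the problem this way yields one prediction per user per week, so the intervention can be tailored as soon as the disengagement risk rises.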
Dr Tina Campbell is a health promotion specialist with a career spanning two decades. Tina has led the production of Australia’s largest video library of patient experiences and developed the GoShare Healthcare content distribution platform and suite of products, including GoShare Plus and GoShare Voice. Tina is the Managing Director and co-founder of Healthily, an Australian health technology company specialising in patient education. Healthily works with organisations across primary and tertiary health care to improve health literacy and support patients to play a more active role in their health.
🗣️ The use of conversational voice AI to deliver an accessible, multilingual long COVID survey and educational resources to 45,000 people in Western Sydney
At the peak of the COVID pandemic, WSLHD had one of the highest rates of COVID-19 in Australia. More than 45,000 of the affected population were supported by WSLHD’s InTouch service.
WSLHD is developing the District’s Long COVID model of care and is screening people supported by the InTouch service to identify those with Long COVID symptoms and provide support. A challenge for the project team is the requirement to screen such a large culturally and linguistically diverse group of people with an accessible survey approach, and provide timely health literate education resources.
Digital patient education specialist, Healthily, has developed GoShare Voice, a conversational Voice AI solution to enhance patient education and support. Accessed through Healthily’s GoShare patient education platform, GoShare Voice enables outbound phone calls to patients in multiple languages to provide relevant, safe, and accessible information, surveys, and support.
Western Sydney LHD and Healthily have developed an innovative approach to survey delivery by combining online and conversational voice AI surveys using GoShare and GoShare Voice.
The aim was to co-design an evidence-based approach, including the delivery of an online and voice AI survey and health-literate patient education resources, for a diverse population of ~45,000 people.
A combination of online surveys and automated phone calls was utilised to efficiently reach the impacted population.
The implementation process involved:
• Use of a validated screening tool
• Application of health literacy and digital literacy principles
• Co-design with clinicians and consumers
• Testing with diverse communities
The use of GoShare Voice brings significant cost savings and workload efficiencies that would be difficult to achieve through human resources. Early results indicate high survey completion rates and patient satisfaction.
Dr Rosie Dobson is a Registered Psychologist and Senior Research Fellow based at the School of Population Health at the University of Auckland, and in Service Improvement and Innovation at Te Whatu Ora Waitematā. Her research investigates the potential of digital tools to make health services more accessible. Alongside her research, she leads teaching and supervision in digital health and evaluation at the University and Te Whatu Ora. She has been involved in the co-design, development and trial of digital health tools in a range of areas, including diabetes, maternal and child health, smoking cessation, mental health, and pulmonary rehabilitation, and has been an invited expert to the WHO’s ‘Be Healthy Be Mobile’ global mHealth initiative for non-communicable diseases. More recently, her research has explored patient and consumer perspectives on the secondary use of their data to inform the development of digital tools, including AI and algorithms. She is a member of the National AI and Algorithm Advisory Group for Te Whatu Ora and the Northern Region AI Governance Group.
🗣️ What do health service users think about the use of their data for AI?
AI tools are being introduced within health services around the globe. It is important that these tools are developed and validated using the available health information of the population in which they are intended to be used. We set out to determine what patients and clinicians thought about the use of their health information for this purpose. Through interview studies using AI use case scenarios, we have found that patients of health services in Aotearoa New Zealand are generally comfortable with their health information being used for these purposes, but with conditions (around public good, governance, privacy, security, transparency, and restrictions on commercial gain) and with careful consideration of their perspectives. We suggest that health services should take the time to have these conversations with their communities and to provide open and clear communication around these developments in their services.
Karen is a corporate and privacy lawyer qualified in Australia, New York and England and Wales. She has been a legal business partner for medical devices, pharmaceutical, digital health and fintech clients in Australia and Hong Kong. Karen has advised on AI, ethics and governance framework development, training and rollout and been the legal lead on machine learning projects in predictive medicine and clinical studies. Karen has led regulatory change projects for ASX-listed and international organisations. Karen is the chair of the Association of Corporate Counsel Australia’s Legal Technology and Innovation Committee and a judge for the 2023 Asia-Pacific Legal innovation and Technology Association awards. Karen was a member of the Australasian Institute of Digital Health Cybersecurity Community of Practice in 2021-2022 and completed the inaugural AIDH Women in Digital Health Leadership Program in 2022.
🗣️ Navigating legal waters: Your questions toolkit for AI healthcare project success
The problem this practical session seeks to solve is to arm non-lawyers with smart questions to ask to set their AI projects up for success in the uncertain and changing regulatory environment globally and locally. The presentation is intended to apply generally to AI projects in healthcare. The intersection of AI and healthcare gives rise to new and unexpected legal challenges that require innovative solutions and collaborative working, both of which are supported by curiosity and intelligent questions.
The presentation will be made more engaging by using AI-generated images and limited words on slides so that attendees can be invited to explore and apply the questions in their minds to their own circumstances and projects. Attendees will be taken through 5 topics through 5 different images and there will be a countdown from topic 5 to topic 1 to assist with recall.
The flow for each of the topics will be: 1) legal issue and why it is important to the attendee including what could go wrong; 2) the toolkit of smart questions to ask; 3) why those questions will help overcome or limit risks.
The presentation will cover five legal topics, closing with a quick summary of the areas covered.
In the last minute of the presentation, attendees will be asked to turn to the person next to them and share the most useful or applicable question that they will use in their own AI healthcare project moving forward.
The session will close with an individual and group commitment to be curious and ask powerful questions moving forward, to help run better AI healthcare projects that proactively address local and international regulatory risks and challenges.
After graduating MB BS with Honours, Dr Porter was awarded a PhD at Flinders University studying the neurophysiology of the human intestine, then spent another year at the Child Health Research Institute, where he started a project exploring the cell and molecular biology of craniosynostosis. He has spent over twenty years practising as a plastic surgeon and managing director in public and private hospitals. He re-entered academic research eighteen months ago to bridge the gap between machine learning engineers and the medical profession. He is interested in reducing Emergency Department and hospital overcrowding and is studying for a Graduate Diploma in Artificial Intelligence and Machine Learning at the University of Adelaide.
🗣️ Do androids dream of electric patients?: AI and informed consent
At a time when Artificial Intelligence (AI) is about to permeate the clinical domain, its alignment with foundational medical principles has become a pressing concern. Among the most prominent of these principles is informed consent, whose essence might be at odds with the often opaque nature of advanced AI models, particularly deep learning.
As potential mitigative measures, the enhancement of AI explainability and the thorough clinical validation of AI tools have been put forth. However, even with these measures in place, the question remains: Can the inherent obscurity of certain AI models ever be reconciled with duty of care and informed consent?
This presentation attempts to untangle these complex issues, offering a future perspective on the interaction between AI, ethics, law, and clinical practice.
— PhD, Philosophy, UNSW — HCISPP — CHIA — Member of AIDH Exam Committee — Continuous service at Nehta/ADHA since 2008
🗣️ Is there a philosopher in the house? Reflections on AI and the future of work
After what seemed like endless AI winters of disappointment and unfulfilled promises, the relatively recent emergence of AI systems with startling new capabilities sees us blinking in the unaccustomed sunlight of progress and the apparent likelihood of rapid and perhaps accelerating improvements to come.
All this tumult brings with it both promise and uncertainty about the future, one which few of us feel well prepared for. But those of us with substantial philosophical training are perhaps in a better position than most to navigate this new world wherein Chinese Rooms and other familiar thought experiments become fodder for LinkedIn training videos.
Philosophers have long had to endure widespread perceptions that their work is at best a genteel pastime with little or no relevance to people’s day-to-day lives. But that perception seems to be changing as the emergence of (what appear to be) thinking machines forces us to confront deeper issues on a number of different fronts.
Stefan joined IBM Research at the IBM T. J. Watson Research Center, New York, in 2008. In 2015 he founded the Brain-Inspired Computing Research Program of IBM Research. As IBM Senior Technical Staff Member, Global Lead of Epilepsy Research, and Member of the Neuroethics Working Group of the Director of IBM Research, he co-developed IBM’s research and innovation strategy for AI in Health and the Life Sciences. In 2021 Stefan became Chief Innovation Officer of Digital Health CRC Ltd., a $200M incubator and funder of digital health innovation. Stefan holds 71 granted patents in AI, bionanotech and MedTech and has authored over 70 peer-reviewed scientific articles, books, and book chapters. He holds a PhD in Electrical Engineering and Computer Science and an Honours Master’s Degree in Technology Management, and is a member of the New York Academy of Sciences, Forbes Technology Council, and a Senior Member of the IEEE.
🗣️ Generative AI transforms health and medicine: Potential, risks, examples. What to expect and how to prepare
Large Language Models (LLMs) are a key component of generative artificial intelligence (AI) applications such as ChatGPT, Bard, or Med-PaLM. LLMs make it possible to create new content, including text, imagery, audio, code, and video, in response to textual instructions. Without human oversight, guidance, and responsible design and operation, such generative AI applications will remain a party trick with substantial potential for creating and spreading misinformation or harmful and inaccurate content at unprecedented scale. However, if positioned and developed responsibly as companions to humans, augmenting but not replacing their role in decision making, knowledge retrieval, and other cognitive processes, they could evolve into highly efficient, trustworthy, assistive tools for information management. This talk will describe how such tools could transform data management workflows in healthcare and medicine, explain how the underlying technology works, provide guidance on how to assess and mitigate risks and limitations, and illustrate insights through practical examples and use cases. The talk incentivises users, developers, providers, and regulators of generative AI to pragmatically embrace the transformational role this technology could play in evidence-based sectors and provides useful guidance on how to prepare.
Murray is a ‘prejudice aware, digitally disruptive Ai copiloted proceduralist psychiatrist’, with formal training in Addictions and Forensic Psychiatry in the UK and Australia. Murray is currently practicing in Oqea Cares – Incorporating Salvado, based in Subiaco and online. Murray left the Kimberley 7 years ago, where he practiced as Director of Services and as a Regional Psychiatrist for 14 years. He had the privilege of learning from Indigenous patients, families, staff and community Elders, for whom he retains the greatest respect. Around 7-8 years ago Murray started on his journey of discovery and skill acquisition in Digital Disruption, helping to found Oqea.com, which is attempting to re-think how mental wellbeing and psychiatry might better serve in supporting people to support themselves and thrive.
🗣️ AI in mental wellbeing: A trust revolution
In the rapidly evolving world of healthcare, OQEA is at the forefront, harnessing the power of Artificial Intelligence (AI) to revolutionize mental wellbeing. However, this journey is not without its challenges. The paramount issue at hand is navigating the complex landscape of AI in a sector where trust is everything and stakes are high.
AI, a broad domain spanning from machine learning to natural language processing and beyond, offers immense potential. It enhances diagnostic accuracy, personalizes treatments, optimizes resources, and increases access to care. However, it also presents challenges such as data security concerns, ethical dilemmas, regulatory hurdles, data and algorithmic bias.
To navigate this complex landscape, OQEA has developed an AI Governance Framework. This framework serves as a compass, guiding the organization in embracing AI responsibly, ethically, and transparently. It emphasizes principles such as human privacy, data security, trustworthy AI, and collective responsibility for AI governance.
The journey of integrating AI in mental wellbeing is thrilling yet daunting. The implementation of the AI Governance Framework has been instrumental in addressing challenges and leveraging the benefits of AI. The key lesson learnt is that while AI offers immense potential, its adoption must be guided by strong ethical principles and robust governance structures. The pursuit of AI should not compromise our commitment to these values. As we continue to refine this framework in response to the evolving field of AI, we invite you to join us in this exciting journey. Are you ready to step into the future?
Victoria specializes in harnessing AI to facilitate healthcare discovery through scientific evidence. She is the architect of the HUSKI (Harnessing Untapped Scientifically-based Knowledge Intelligently) design methodology, a groundbreaking approach aimed at accelerating biomedical research and discovery. This platform has the potential to transform healthcare by bridging the gap between innovative research and practical applications, offering a new paradigm for evidence-based medicine. Recognized for her contributions, Victoria has received accolades from industry associations for her innovative work. Her pioneering role in this uncharted field and commitment to ethical AI practices have made her a key influencer, dedicated to shaping the future of healthcare and improving patient outcomes.
🗣️ AI safety in healthcare: Bridging the gap between innovation and trust
The integration of Artificial Intelligence (AI) into healthcare is transforming the way we diagnose, treat, and manage diseases. However, this transformation comes with its own set of challenges, particularly in ensuring the safety and reliability of AI applications. The question we aim to address is: How can we safely adopt AI in healthcare without stifling innovation?
Our approach involves a conceptual framework that balances innovation with safety. This framework is designed to be adaptable across various healthcare settings and aims to establish a set of best practices for AI safety. While the specifics are proprietary, the framework incorporates real-time monitoring, explainability, and fail-safe mechanisms.
The implementation follows a phased approach.
Our framework has been successfully implemented in various settings, demonstrating its adaptability and effectiveness. The key takeaway is that it is possible to innovate safely in the realm of healthcare AI. Lessons learned highlight the importance of a multi-disciplinary approach, involving clinicians, technologists, ethicists, and policymakers, to ensure both the safety and efficacy of AI applications.
Dr Melanie Tan is a medical practitioner who was in intermittent clinical practice for over 20 years, primarily as a locum doctor in various public hospital emergency departments – working across different systems, both paper and electronic. Melanie is also a legal practitioner and has worked as a medical negligence lawyer (and claims executive with medical indemnity insurers), medico-legal adviser with medical defence organisations, and most recently an aged care lawyer. Melanie is also a Certified Health Informatician of Australasia (CHIA). As an independent consultant, Melanie now harnesses her experiences and perspectives to support clients in clinical governance, drawing upon contemporary principles and best practice. She has a keen interest in the intersection of digital health and clinical governance.
🗣️ Proposing a framework for informed consent in AI: Considering the impact of explainability
As part of the health and care ecosystem, AI must be subject to clinical governance. This means it should be person-centred, in alignment with the National Model Clinical Governance Framework. Further, Australia’s AI Ethics Principles include ‘human-centred values’ (which encompass respect for autonomy) and ‘transparency and explainability’ (so people can understand when they are impacted by AI, and when an AI system is engaging with them). These principles effectively support person-centred care in clinical governance – foundational to which is the legal concept of informed consent.
At common law, informed consent is pivotal to autonomy; and transparency is pivotal to informed consent. Informed consent is also part of our duty of care, and failure to ensure informed consent can amount to negligence. Importantly, it can undermine trust and person-centred care.
To what extent does AI need to be explainable to ensure informed consent? This will depend on multiple factors such as individual values and preferences, purpose of the AI and how it is used, potential benefit versus potential harm, considerations relating to datasets and degree of potential bias (or extent to which such bias is understood), and issues around privacy. It will also turn on how we communicate.
In support of clinical governance, it is proposed that informed consent frameworks be developed alongside individual AI applications, aligning with the ethical principles of human-centred values, transparency, and accountability.
A/Prof Michael Franco FAIDH CHIA (MBBS, FRACP, FAChPM) is a medical practitioner specialising in Medical Oncology and Palliative Medicine. He is Monash Health’s Chief Medical Information Officer and Program Director of EMR & Informatics. Through this work as well as research endeavours, Michael is an Adjunct Associate Professor at Monash University. Michael also works in medical accreditation and education and holds positions of national standing with the Australian Medical Council and Postgraduate Medical Council of Victoria.
🗣️ Guiding the growth: Gearing up for AI governance in healthcare
With the sudden explosion of accessible AI tools, there is a critical need to closely govern this technology, whilst not smothering its potential for system disruption and benefits in the current healthcare landscape.
Content: Policy, Key Stakeholders, Governance Structure and Process, Enablers and Pitfalls
Tracey Duffy is the First Assistant Secretary of the Medical Devices and Product Quality Division at the Therapeutic Goods Administration (TGA). The TGA assesses the quality, safety and performance of therapeutic goods and is part of the Australian Government Department of Health and Aged Care. Her division is responsible for good manufacturing practice of medicines, laboratory testing and medical device regulation, including the regulation of software as a medical device. Tracey is Australia’s representative on the International Medical Devices Regulators Forum (IMDRF) and is chair of the IMDRF Personalised Medical Devices Working Group.
🗣️ Regulatory control
Professor Coiera possesses a diverse educational background, having received training in medicine and earning a Ph.D. in Artificial Intelligence (AI) from the field of computer science. He boasts a robust research portfolio that spans both industry and academia, and he has earned a distinguished international reputation for his contributions to decision support systems and communication processes within the realm of biomedicine.
During his tenure, Professor Coiera spent a decade at the esteemed Hewlett-Packard Research Laboratories in Bristol, UK, where he assumed leadership roles in numerous health technology initiatives. His responsibilities included overseeing the development and implementation of various eHealth interventions, including the groundbreaking Healthy.me consumer system and clinical decision support systems. Notably, the technological foundations of this work have been instrumental in the establishment of a promising U.S. healthcare startup called Healthbanc. Additionally, Professor Coiera’s widely used textbook, “Guide to Health Informatics”, now in its third edition, enjoys global recognition and has been translated into several languages.
Professor Coiera’s distinguished career is further punctuated by his receipt of prestigious accolades, such as the 2015 International Medical Informatics Association (IMIA) François Grémy Award for Excellence and the 2011 UNSW Inventor of the Year (Information and Communication Technology) for his pioneering work in a literature-based computational discovery system.
His professional affiliations include being elected as a Foundation Fellow and the inaugural President of the Australasian College of Health Informatics, serving as a founding member of the International Academy of Health Sciences Informatics, and being designated as an International Fellow of the American College of Medical Informatics.
Professor Coiera’s influence extends to key appointments on various boards, councils, and editorial roles for international journals, including his role as Associate Editor of the journal “Artificial Intelligence in Medicine”.
🗣️ A national policy agenda on AI in healthcare
Stacy Carter is Professor of Empirical Ethics in Health and Founding Director of the Australian Centre for Health Engagement, Evidence and Values (ACHEEV) in the School of Health and Society at the University of Wollongong.
Her training is in public health, and her expertise is in applied ethics and social research methods. Her research program sits at the intersection of three crucial issues for health systems: Using artificial intelligence, detecting disease in populations and individuals, and high-quality consumer and community involvement.
🗣️ The Consumer Jury
Melissa is an Occupational Therapist and the first Allied Health Professional to be elected to the National Chief Clinical Information Officer (CCIO) Advisory Panel in the UK. She was awarded a Fellowship with the British Computer Society in 2022 for her work over the last decade digitally transforming healthcare. After graduating from the NHS Digital Academy, Melissa continued her studies with Imperial College London to complete an MSc in digital health leadership. Her research area is digital health safety. Melissa returned home to Australia in 2023 and is currently working with eHealth Queensland and the Office of the Chief Clinical Information Officer.
🗣️ Digital health safety matters: Factors impacting the adoption of safety guidelines
The advancements in data-driven healthcare are fuelling the desire to deploy AI rapidly. Often, rapid deployments of technology do not assess patient safety risks, resulting in harm that has ethical and legal implications (Health Education England, 2019). Therefore, healthcare decision makers must be cognisant of the potential liability of digital health safety incidents (Ash et al., 2020). Understanding the barriers impacting the adoption of digital health safety guidelines has never been more critical.
Conducted for a Master’s Dissertation in Digital Health Leadership with The Institute of Global Health Innovation Imperial College, this study investigated what factors impact the scaled adoption and implementation of digital health safety guidelines as a professional practice in Australia. The data collected via an online survey, semi-structured interviews, and focus groups was analysed alongside data mined from Australian and English safety guidelines and artefacts from the Australian Commission on Safety and Quality in Health Care.
The findings confirmed that overcoming the barriers to adopting guidelines will be achieved by investing in the workforce, improving governance, and securing adequate funding. The findings highlighted that digital health safety requires a new professional identity with recognised skills to support adoption. A notable finding was that a safety culture that is ‘just,’ learns, and looks after the psychological wellbeing of the digital health safety workforce is vital. Findings also showed that employing design thinking will humanise digital health safety and the adoption of guidelines because it puts the person at the centre of the practice.
Digital health safety matters. The pace of AI development and adoption needs a professional patient safety practice that evolves and embraces technological advances. This original research is at the heart of investing in the practice, process, and professional skills to ensure digital and data technologies can improve the quality and safety of care. The scaled adoption of digital health safety will positively impact the ability to realise the transformative benefits of digital health and the use of AI.
Lisa New has over 20 years’ research experience of the potential of ethical and safe AI to support ‘good’ collaboration to solve global problems. She proposes a novel generative AI LLM model for holistic decision-support, expert consensus, and real-world risk management, and introduces and overviews her approach for global collaborators to continuously improve collective intelligence, shared wisdom, and its fine-grained, contextual real-world impact. The collaboration proposal is self-sustainable, with high incentives and returns. Incentives include whole-of-supply-chain, end-user-centric application in Massive Multiplayer Online Gaming Simulations with constructive competition, fair IP reward, and a user-friendly no-code UI, guided by Upper Level AI tools for timely, sustainable, and safe real-world solutions; and comprehensive, dynamic augmented immersion to learn, do, evaluate, decide, and continuously improve together, across boundaries of person and technology, human and machine, human to human, science and spirituality, and intent to outcome, across vertical and horizontal end-user-centric supply chains.
🗣️ A methodology to collaboratively solve global problems, supported by safe, ethical AI
Time is running out for humanity to mitigate our worst global risks. Advanced communication technology is becoming more accessible but remains under-utilised to help solve our greatest problems in an interconnected, safe, trusted and sustainable manner. A shared language across cultures and belief systems for shared comprehension and building of trust relationships to enable such global collaboration has historically been a major obstacle for such advancement; as have political, commercial, privacy, cybersecurity, and technological factors.
The proposed solution involves the establishment of a global collaboration platform to address both continuous improvement of ethical and safe AI support, and its real-world application to ‘best’ solve global problems. A continuously peer-reviewed, dynamic and interactive generative LLM model guides holistic decision-support with expert consensus and real-world risk management, transparently and accountably. The platform and its use are continuously improved with AI and expert guidance regarding fine-grained, interconnected, real-world problem-solutions. A startup bootstrapping process curating and integrating existing knowledge and technology is aided by the novel LLM. The startup process is aided by the potential for exponential returns at macro-, meso- and micro-levels.
History demonstrates the urgent necessity of whole-of-supply-chain, end-user-centric solutions to global problems. It has also demonstrated the current and dangerous trend for humanity to lag behind technological advancement, to the detriment of all and everything. This is a real opportunity for collaborators for ‘good’ to join Massive Multiplayer Online Gaming Simulations with constructive competition, fair IP reward, and a user-friendly no-code UI, guided by Upper Level AI tools for timely, sustainable and safe real-world solutions. Incentives include dynamic augmented immersion to learn, do, evaluate, decide, and continuously improve together, across boundaries of person and technology, human and machine, human to human, science and spirituality, and intent to outcome, to best achieve ‘good’ outcomes for all and everything.
Soraya is a multi-disciplinary technology and media executive with a focus on management, strategy and business development. She is currently Strategic Initiatives Manager and the Critical and Emerging Technologies Lead at Standards Australia, focusing on AI, cybersecurity, 5G, blockchain, the Internet of Things and smart cities.
🗣️ Why using AI standards is a pathway to safe and responsible AI
The risks and opportunities within artificial intelligence are awe-inspiring and overwhelming. AI will revolutionise our lives for the better; at some point it will also create new threats, and through the power of people those threats will be mitigated. We don’t ban things that have risks; we adopt safety controls, standards, regulations and policy to reduce the risk. Standards are ‘soft law’: voluntary, consensus-driven documents which lay the foundations for the road ahead with AI. But first, what is a standard? Standards Australia is a founding partner of the Responsible AI Network in partnership with CSIRO and is the leading not-for-profit, non-government standards body in Australia. This talk will explain what standards are and what role they are playing within AI, and discuss the ISO AI Management System Standard released in Q4 2023.
Peter Williams is Oracle’s Executive Director, Healthcare Industry, for Asia-Pacific, continually researching the field, leveraging Oracle’s global expertise and knowledge to work with healthcare organisations and guide them on their journey of business transformation and digital evolution. Prior to joining Oracle, Peter held senior executive roles in digital health in both State and Federal governments. He is a Fellow of the Australasian Institute of Digital Health and a longstanding member of Australia’s IT-014 Health Informatics Committee. Peter represents Standards Australia on several ISO TC215 Health Informatics Working Groups and is the Support Convenor of ISO Task Force 5 AI Technologies in Healthcare and the ISO Project Leader of ISO/IEC SC42 Artificial Intelligence Joint Working Group 3 AI-enabled Health Informatics.
🗣️ Standards for AI in healthcare: Seatbelts or airbags
The use of AI in healthcare is not new but recent global interest in large language models and generative AI has raised community awareness of both the opportunities and risks of all types of AI. The Australian Government has recognised these concerns, publishing a discussion paper on Responsible AI and supporting the CSIRO in establishing the National Responsible AI Network.
Key to responsible AI is having standards to assure that AI is developed and implemented appropriately. ISO/IEC SC42 Artificial Intelligence leads that work internationally. ISO TC215, Health Informatics, has also established Task Force 5 AI Technologies in Health Informatics. The Australian mirror committees are IT-010 Artificial Intelligence and IT-014 Health Informatics.
Should standards for the healthcare vertical vary from those across all industries? Is the risk for healthcare any more important than for an autonomous vehicle? Healthcare also operates within a wide digital ecosystem and we do not want to create unnecessary barriers to innovation.
The consensus is that healthcare’s risk profile requires some specialised adaptation, but this should ideally be kept to a minimum. A joint ISO/IEC working group on AI-enabled health informatics has been established to help identify priorities for action (the first industry vertical of its kind).
The standards will apply across the full AI life cycle from manufacture to deployment. There will be challenges with how they are being implemented and compliance managed (seatbelts or airbags?). As all of this occurs, Australia must ensure its voice is heard internationally so we are aligned and our needs are understood, but recognise international processes can move slowly. A more effective process is often recommending international adoption of a pre-existing national standard. The health informatics community should take the initiative and opportunity to engage in both these endeavours to keep Australia at the forefront of developments in healthcare AI.
Dr Stephen Bacchi is a Neurology Advanced Trainee with interests in clinical research and medical education. Stephen’s PhD focussed on clinical applications of machine learning in general medicine and stroke. He has published more than 140 peer-reviewed articles across a range of medical and surgical specialties. He has been fortunate to have been awarded several prizes for his research, including the award for the SA Health Young Professional of the Year, the Nimmo Prize, and the Royal Australasian College of Physicians Trainee Research Award (twice). He is currently the principal investigator on multiple clinical artificial intelligence implementation studies.
🗣️ Study of artificial intelligence-enabled penicillin allergy delabelling
Many penicillin allergy labels in electronic medical records (EMR) are inaccurate. This inaccuracy is significant, as EMR penicillin allergy and intolerance labels have been shown to be associated with different antibiotic prescribing practices and worsened patient outcomes. The implementation of artificial intelligence (AI) may be able to assist with this issue.
An artificial neural network derived and validated in prior studies was employed. Following pre-processing involving negation detection and word stemming, this algorithm uses six fully-connected layers to classify a described adverse reaction as (1) consistent with intolerance or allergy, and (2) if an allergy, as high or low risk.
The described algorithm was employed once per week for a period of 20 weeks to identify potentially suitable penicillin allergy delabelling candidates in a tertiary hospital. On a weekly basis, the three individuals ranked as most likely suitable for delabelling had a notification sent via email to their inpatient doctors. Of the intervention group (n = 59), 3 (5.1%) individuals had their penicillin allergies delabelled, significantly more than in the control group (0%, P = 0.002).
AI-facilitated notifications may be an effective means by which to facilitate penicillin allergy delabelling. Further investigation in this area is indicated.
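The two-stage classification above depends heavily on the pre-processing step. The following is a minimal sketch of that step only, using a toy negation-cue list and suffix stemmer; the study’s actual cue list, stemmer and network weights are not described in the abstract, so everything here is an illustrative assumption:

```python
import re

# Hypothetical negation cues; the study's actual list is not published here.
NEGATION_CUES = {"no", "not", "denies", "without"}

def preprocess(reaction_text: str) -> list[str]:
    """Crude negation detection plus suffix stemming, as a stand-in for the
    abstract's pre-processing stage. Tokens after a negation cue are marked
    with a NEG_ prefix so the downstream classifier can distinguish
    'rash' from 'no rash'."""
    tokens = re.findall(r"[a-z']+", reaction_text.lower())
    out, negated = [], False
    for tok in tokens:
        if tok in NEGATION_CUES:
            negated = True          # everything after the cue is negated
            continue
        stem = re.sub(r"(ing|ed|s)$", "", tok)  # toy stemmer
        out.append("NEG_" + stem if negated else stem)
    return out

print(preprocess("Rash and swelling, no anaphylaxis"))
```

The six-layer network would then consume a vectorised form of these tokens to produce the two classifications described above.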
Toby is a General Physician by trade, specialising in acute medicine. His love of quality improvement has led him to meander into the world of machine learning algorithms to improve care and reduce administrative burden on hospital clinicians. Toby is the current Divisional Director for Medicine at the Northern Adelaide Local Health Network.
🗣️ Implementation of a discharge prediction algorithm into a general medical service, the possibilities and lessons learned
We have derived and validated a Natural Language Processing (NLP) algorithm that is able to predict discharge within 48 hours of a ward round note. However, implementation of such algorithms remains challenging.
Our algorithm aims to generate a list of patients predicted for discharge in a production environment. We explored several options for its application in practice.
Currently, the NLP algorithm is run on Thursdays, providing the details of patients predicted to be discharge-ready over the weekend to the clinical teams and prompting them to consider discharge. The algorithm is run every other week and the number of weekend discharges recorded.
In the first week of implementation, the algorithm predicted a very sick patient would be discharged within the next two days. Realising our mistake, the algorithm was retrained, this time omitting discharges that resulted from patient death.
We have demonstrated an appropriate use-case for an NLP algorithm to assist clinicians. Being able to rederive and validate the algorithm is vital to enable successful implementation.
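The weekly workflow described above might be sketched roughly as follows. Here `toy_score` is a hypothetical stand-in for the trained NLP model (which is not described in this abstract); only the shortlisting logic around it is the point of the sketch:

```python
from typing import Callable

def weekly_discharge_shortlist(
    notes: dict[str, str],           # patient_id -> latest ward round note
    score: Callable[[str], float],   # model's predicted discharge likelihood
    top_n: int = 3,
) -> list[str]:
    """Rank patients by predicted likelihood of discharge within 48 hours
    and return the top-n candidates to flag to clinical teams."""
    ranked = sorted(notes, key=lambda pid: score(notes[pid]), reverse=True)
    return ranked[:top_n]

# Toy scorer: counts discharge-suggestive phrases (purely illustrative).
def toy_score(note: str) -> float:
    cues = ("home today", "for discharge", "medically stable")
    return sum(cue in note.lower() for cue in cues)

notes = {
    "A": "Medically stable, plan for discharge home today.",
    "B": "Ongoing fevers, continue IV antibiotics.",
    "C": "For discharge once physio review complete.",
}
print(weekly_discharge_shortlist(notes, toy_score, top_n=2))
```

In production the scorer would be the validated NLP model, and the retraining step described above (excluding deaths from the discharge label) happens upstream of this loop.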
Adjunct Associate Professor Naomi Dobroff is the Chief Nursing and Midwifery Information Officer (CNMIO) and General Manager of the EMR and Informatics Program at Monash Health. Naomi has a Master of Public Health and is an Adjunct Associate Professor at Deakin University. She is a Fellow with the Australian College of Nursing and the Australasian Institute of Digital Health.
Naomi is the longest tenured CNMIO in Australia with a national and international profile. She is the CNMIO representative on the Victorian Department of Health, Clinical Informatics Council and Chairs the Victorian CxCIO forum. Naomi also Chairs both the ACN Nursing Informatics and Digital Health Faculty and of the CNMIO Faculty. She represents the Australian College of Nursing on many national advisory groups, including on the National Nursing and Midwifery Digital Health Capability Framework which was published in October 2020 and the Australia Alliance for Artificial Intelligence in Healthcare in 2023.
🗣️ Positioning generative artificial intelligence for the nursing profession in Australia: A position paper
The Australian College of Nursing (ACN) is the national voice of the nursing profession. The ACN is undertaking, through their Chief Nursing Informatics Officers Faculty, the development of a position paper to support nurses in their use of generative Artificial Intelligence.
Ensuring nurses are aware of the issues, advantages and impact generative Artificial Intelligence may have on their practice, decision making and the communities they care for is of utmost importance. Nurses’ roles as patient and community advocates, and as often the last point of contact with a patient when delivering care, make this position paper’s specific focus on nursing relevant and timely.
The Chief Nursing Informatics Officers Faculty of the ACN have undertaken an evidence-based process to develop a position paper to ensure nurses are kept informed of their professional requirements particularly when using clinical informatics systems. The process included engaging with leading universities, liaising with subject matter experts, undertaking literature review, and facilitating focus groups.
The ACN CNIO Faculty is developing a position paper and broadly communicating it to all nurses and nurse leaders across Australia. Presenting the core elements of the position paper at the AIDH IA + Care event is a key step in its implementation across the broader health and informatics community.
Ensuring nurses have a voice in the generative AI conversation can only improve the diversity of discourse in this important arena. The ACN through the CNIO Faculty will continue to support nurses by developing this position paper and advancing nursing into the future.
Gerardo leads a team of data analysts, machine learning experts, EMR analysts, and statisticians within the Health Informatics Group and the Centre for Health Analytics. The team translates health information and machine learning solutions for public health and clinical outcomes. He drives the vision of the work towards a tighter problem-innovation cycle and research translation: problems of varying size and duration often drive research and data innovation needs, and research can be translated relatively quickly.
As a qualified specialist in digital health/informatics, Gerardo critically analyses challenges in implementing innovative health information technologies; develops efficient, evidence-based solutions; and integrates them into organisational business processes.
🗣️ VaxPulse: Machine learning health system to address public vaccine concerns in Australia
Vaccines save lives, prevent morbidity and protect economies. Public perceptions that the risks of adverse events following immunisation (AEFI) outweigh the risks of the diseases themselves are amplified through online social networks and media. For instance, exposure to exaggerated online concerns about the HPV vaccine, which prevents six types of cancer, makes parents less likely to consent to their children’s immunisation.
We are developing VaxPulse, a machine learning (ML)-based learning health system (LHS) to monitor and respond to online public vaccine-related concerns in Australia.
For data collection and analyses, we conducted experiments fine-tuning various BERT models, progressing with CT-BERT V2 for sentiment analysis and BERTopic for topic classification. AEFI is a significant cause of public concern and hesitancy, so the list of topics generated was analysed by a research clinician. For public vaccine hesitancy, we trained a Gradient Boosting Classifier using the NPS Chat corpus and labelled posts to classify sentences across a hesitancy spectrum.
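As a rough illustration of the hesitancy-classification step, the sketch below trains scikit-learn’s `GradientBoostingClassifier` on TF-IDF features over a handful of invented, labelled posts. The actual VaxPulse features, labels and training data (drawn from the NPS Chat corpus and labelled posts) are not specified in this abstract, so the data and label names here are purely illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

# Toy labelled posts across a hesitancy spectrum (labels are invented).
posts = [
    "Booked my booster today, feeling good about it",
    "Vaccines are safe and effective",
    "Not sure about side effects, still thinking it over",
    "I need more information before deciding",
    "I will never get this vaccine, it is dangerous",
    "Refusing all shots for my kids",
]
labels = ["accepting", "accepting", "hesitant", "hesitant", "refusing", "refusing"]

# TF-IDF features feeding a gradient-boosted tree ensemble.
clf = make_pipeline(TfidfVectorizer(), GradientBoostingClassifier(random_state=0))
clf.fit(posts, labels)

print(clf.predict(["thinking it over, unsure about side effects"]))
```

In the real system this classifier sits alongside the CT-BERT V2 sentiment model and BERTopic topic model, with the resulting topic list reviewed by a research clinician.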
As a LHS, VaxPulse incorporates interdisciplinary expertise and consumer partnership. It will provide real-time insights for government agencies, policy makers, and health professionals to respond quickly to new concerns.
Specifically, VaxPulse allows SAEFVIC, Victoria’s vaccine safety service, to continuously adapt its public-facing information channels to current public concerns, as we have successfully trialled for identified COVID-19 vaccine concerns. This enhances our ability to provide timely, accurate, reliable, and relevant information to the public and parents, which is important for helping them make informed decisions and reducing anxiety about providing immunisation consent for their children.
VaxPulse allowed us to understand how best to respond to concerns about appendicitis and menstrual health changes as AEFI, and to identify opportunities to improve our website through insights from public concerns about COVID-19 vaccine AEFI and an interdisciplinary expert group. We are now processing other online data sources, such as YouTube, in multiple languages.
Roy is a GP, clinical innovator, alumnus of the Australian Clinical Entrepreneur Program (AUSCEP) and proud first-generation immigrant raised in Western Sydney. After a transformative journey with AUSCEP, Roy founded reggie health, a startup leveraging AI to improve health outcomes for patients with chronic diseases. Passionate about the union of healthcare and cutting-edge technology, Roy has immersed himself in global and Australian explorations of AI in primary care.
🗣️ Revolution in practice: A quick dip into generative AI in Australian primary care and beyond
Primary care worldwide is seeing increasing integration of generative AI, with Australia actively participating in this evolution. This presentation aims to objectively evaluate generative AI’s place in primary care.
Generative AI employs data-synthesising algorithms for potential advancements in primary care diagnostics and treatment. It simulates cognitive tasks, aiding clinical decision-making and personalised care.
An extensive literature review was conducted to assess generative AI’s global impact. Additionally, its current application in Australia’s primary care was analysed, capturing both its achievements and areas for growth.
Considering Australia’s distinct healthcare attributes, the session will discuss generative AI’s role in primary care, informed by international practices and national data.
Dimitry Tran was named one of Australia’s top 100 innovators by The Australian in 2021. With his brother Aengus, he co-founded Harrison.ai, the largest healthcare AI company in Australia. The company’s radiology AI technology is approved as medical devices in Australia (TGA), Europe (CE Mark), and the US (FDA) and can detect hundreds of findings in chest X-Ray and CT brain scans. It is currently being used across hundreds of hospitals and clinics in Australia, the UK, and Asia, benefiting over a million patients each year.
🗣️ From concept to care: Australia’s journey in implementing AI for a million patients
Associate Professor Ronnie Ptasznik is a fellow of the AIDH and the Program Director of Radiology at Monash Health. Prior to 2020, he was the CMIO of Monash Health for seven years and the clinical lead for the implementation of its EMR. Prior to that, he was the clinical lead of Monash Health’s RISPACS implementation project. He has held advisory positions related to health informatics for the Australian Digital Health Agency and the Victorian Department of Health. He is currently the chair of the Clinical Design Advisory Group for the implementation of the Victorian Health Information Exchange.
🗣️ Implementing artificial intelligence in radiology: It’s more than just detecting lung nodules
Professor Wickramasinghe is the Professor and Optus Chair of Digital Health at La Trobe University within the School of Engineering. She also holds honorary research professor positions at the Peter MacCallum Cancer Centre, MCRI, Epworth HealthCare and Northern Health. After completing five degrees at the University of Melbourne, she completed PhD studies at Case Western Reserve University, and later executive education at Harvard Business School, USA, in Value-based HealthCare. For over 20 years, she has been actively researching and teaching within the health informatics/digital health domain, with over 350 scholarly publications, a patent, 25 books, numerous posters and book chapters, and a very successful grant funding portfolio. In 2020, she was awarded the prestigious Alexander von Humboldt award for her outstanding contribution to digital health.
🗣️ The design, development and deployment of digital twins for precise and personalised clinical decision support
Sankalp Khanna is a Principal Research Scientist and leads the Health Intelligence team at the CSIRO Australian e-Health Research Centre. His research is focused on applying Artificial Intelligence and Machine Learning techniques to improve patient flow and develop and deploy solutions for operational and clinical decision support in healthcare. Sankalp is leading CSIRO’s collaboration with the Westmead Neonatal Intensive Care Unit to develop algorithms to predict adverse outcomes in premature infants. Solutions developed by Sankalp have helped reshape workflow and policy in hospitals in Australia and overseas, and have been deployed in commercial health software. Sankalp is also Adjunct Associate Professor at the Queensland University of Technology and Griffith University, the Secretary of the Pacific Rim International Conference on Artificial Intelligence (PRICAI) Steering Committee, and a founding Fellow of the Australasian Institute of Digital Health.
🗣️ Improving healthcare planning and care delivery: AI-powered operational and clinical decision support
Eva Weicken is Chief Medical Officer in the Department of Artificial Intelligence at Fraunhofer Heinrich Hertz Institute for Telecommunications in Berlin. She studied medicine at the Ludwig-Maximilians-University in Munich and did her residency in neurology including intensive care and psychiatry rotations. After many years of clinical practice and with the growing presence of AI in medicine she wanted to dive deeper into this field.
In her research, she is particularly interested in finding solutions for the safe, fair, and effective use and implementation of AI in health which requires an interdisciplinary approach. She is taking an active role in the international standardization initiative WHO/ITU/WIPO Global Initiative of AI for Health (previously ITU/WHO FG-AI4H) as co-chair of the working group “Clinical Evaluation of AI for Health” and in the overall management. Further engagements at a national level within this field include the German Standardization Roadmap for Artificial Intelligence and other projects focused on validation of AI in health.