
A leading Sydney doctor with medico-legal training believes Australia may need laws that require doctors to advise patients if they are using artificial intelligence for their care and to seek informed consent.

“There is no current legislation saying that doctors should tell patients if they are using AI in their treatment or care; but legislation is likely to be considered in the future and will be a work in progress,” developmental paediatrician Dr Sandra Johnson said.

Dr Johnson said it was her opinion that doctors should already be telling patients if they are using AI when providing medical care and explaining, as far as they are able, how specific systems work.

“My belief is that if you are using an AI system you should be informing your patients because there are legal obligations that require you to act in your patients’ best interests,” she said.

Interim CEO of the Australasian Institute of Digital Health, Mark Nevin FAIDH, added that while some regulations would capture high-risk adoption of AI (for example, software as a medical device is regulated by the TGA), there is currently no overarching legislation or law for AI in Australia.

Hear more at AI.Care 2023

Dr Johnson will discuss medico-legal considerations in artificial intelligence during a keynote address at the Australasian Institute of Digital Health’s AI.Care 2023 conference at Crown Melbourne from November 22-23.

A Clinical Academic in Child and Adolescent Health in the Faculty of Medicine and Health Sciences at the University of Sydney, she is also director and consultant developmental paediatrician at her private practice, Child Development Paediatrics, and a former president of the Australasian College of Legal Medicine. She conducts medico-legal assessments for the courts, working with both defence and plaintiff lawyers. Sandra has also undertaken studies in ethics.

Dr Johnson said written informed consent was required from patients undergoing surgical or invasive medical procedures. Verbal consent is implied when a doctor prescribes treatment and explains the use of medication to a patient. The doctor may in some circumstances record in their notes that they have explained the treatment to the patient.

Computer image recognition, clinical decision support systems and AI tools for diagnosis and treatment will increasingly assist health professionals, she said.

“Robotics in surgery is growing in application and AI will play a greater role in medicine into the future,” she said.

Concerns about privacy, accuracy, reliability and safety require medical professionals to be mindful about the use of AI, particularly in view of their duty of care.

“It’s my personal belief that doctors need to inform patients when they are using AI so that patients can give signed informed consent when AI is being used in their management,” Dr Johnson said.

The EU AI Act

She added: “Legislation is being addressed in the European Union currently – The EU Artificial Intelligence (AI) Act – and it may be enacted next year. It will be the world’s first legislation on AI by a major regulator.

“I believe that if a health professional is going to use AI, they should aim to understand it, how it works, the biases and the data on which the system was trained. If they don’t understand it, then maybe they should consider not using it or obtain a second opinion.”

How do they do that? “Through education, learning what AI is about, attending meetings and education sessions whenever they can to learn more and gain information so they have the best knowledge possible to ensure safe and responsible use of AI. They would also need clear explanations from the provider of such AI devices or systems.”

Educate yourself and talk to manufacturers

“Doctors might say they don’t have time to learn this new technology. I’d say that if you don’t have time to learn and understand, then don’t use the system in direct patient care. They should also talk to the manufacturers and those selling the products to ensure they understand what it does and how it works so that they can explain it to the patient.”

Dr Johnson said medico-legal obligations require doctors to obtain informed consent, which means they need at least a basic understanding of any AI systems they use. Issues such as how the system was trained, understanding bias, and the fact that algorithms can make confident predictions even when they are wrong all need consideration.

“When harm occurs, accountability may need to be shared between agents, but doctors retain the key responsibility of acting in the best interests of their patients,” she said.

Dr Johnson also said that more clinicians – doctors, paramedics, nurses – now realise that they need to become informed and educated about AI systems.

“The presentation will be around my personal perspective as someone with medical, legal and ethics training. I aim to give a big picture view, although I don’t claim to have all the answers. My key take home message will be to educate yourself to understand more about AI, keep an open mind and don’t use a system that you don’t understand.” The presentation will also briefly discuss regulation and governance of AI in healthcare to ensure safety in the community.

While there is currently no overarching legislation for AI in Australia, some guidelines have been developed by the Royal Australian and New Zealand College of Radiologists (RANZCR), specifically The Ethical Principles of Artificial Intelligence in Medicine, and the Standards of Practice for Artificial Intelligence in Clinical Radiology.

RANZCR led the medical community in Australia and New Zealand in considering the impact of machine learning and AI in health care and developing this advice. The AIDH’s Interim CEO Mark Nevin helped develop those ethical principles and standards while he worked for RANZCR.
