
Having an ethical framework in place before implementing artificial intelligence in healthcare is vital to mitigate risk and eliminate bias, accelerating innovation and enabling better patient outcomes, our upcoming AI.Care 2023 conference will hear.

Nigel Thornton, Principal Consultant, Digital Delivery, at Fujitsu Australia, will discuss the importance of ethical frameworks at an AI and ethics workshop at the November 22-23 conference at Crown Melbourne. He predicts Australia might introduce legislation mandating such frameworks in certain cases.

“AI ethical frameworks can assure the opportunity for innovation,” he said. “AI needs to be understood and trusted before organisations implement it. We don’t want it to cause an issue or deliver an incorrect or biased outcome; we want to know it is safe.

“We will be discussing what people can do to address those problems.

“Many people think a governance framework is unnecessary bureaucracy, but in artificial intelligence it is important to have a governance and ethical framework so trust is built, bias is eliminated and what is delivered helps the community and society.

“Risk is mitigated, and innovation encouraged, if there is a good ethical AI framework in place.”

Nigel said the workshop was for anyone contemplating adopting AI in the workplace.

He said organisations were becoming more aware of the importance of ethical AI frameworks, but those without one would sometimes struggle to drive innovation with surety. A human-centred approach was critical, he said, and collaboration was needed from stakeholders including consultants and internal staff working with others such as technology providers, patients and the wider community.

Once a framework was established, it needed updating and improving to ensure continued relevance.

In the European Union, legislation was coming with serious repercussions for failing to have fair and ethical AI models, Nigel said. All countries were working on this, he added, and laws would vary by jurisdiction. It was important to know that technology partners had AI ethical frameworks aligned with the OECD AI Principles that Australia has committed to. Some applications of “high-risk” AI might in future be banned altogether, he said.

“The marketplace is demanding this,” he said.

Using AI to allocate resources

The workshop will discuss challenges including ethical dilemmas around AI systems that use health data to predict medical decisions and patient care, including resource allocation. One example to be studied is an AI system that forecasts patient outcomes by analysing individual patient data and medical records to estimate risks of admission and potential complications.

“Using health data to predict medical decisions about patient care can improve resource allocation, such as assigning ventilators to help patients recover faster, can target interventions to high-risk patients before an adverse event happens, and can give those at risk of readmission an extra day in hospital to prevent it,” Nigel said.

“But there are also ethical dilemmas – is the AI model fair, is it treating patients the same across the board, and have patients consented to their data being used in this way?

“One size fits all does not account for patient uniqueness and some may not want to receive a certain treatment, so how can we mitigate these problems?”

Applied properly, ethical frameworks can help ensure the AI model is not biased against race, sex, age or certain patient cohorts, that informed consent is obtained, and that human oversight is accounted for.
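
To make the fairness question concrete, one common screening heuristic is to compare a model’s positive-prediction rates across patient cohorts. The Python sketch below is a minimal illustration of that kind of check, not Fujitsu’s framework or specific workshop material; the data, column names and the 0.8 threshold (the widely cited “four-fifths rule”) are assumptions for demonstration only.

```python
# Minimal sketch of a demographic-parity screen an ethical AI framework
# might require before deployment. Data, column names and the 0.8
# threshold are illustrative assumptions, not any specific standard.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Ratio of the lowest to the highest positive-prediction rate across cohorts."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

# Hypothetical model outputs: 1 = model recommends an intervention.
predictions = pd.DataFrame({
    "cohort":  ["A", "A", "A", "B", "B", "B"],
    "flagged": [1, 0, 1, 0, 0, 1],
})

ratio = disparate_impact(predictions, "cohort", "flagged")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # below the "four-fifths" screening line
    print("Cohort prediction rates diverge - flag for human review.")
```

A check like this screens for only one narrow notion of fairness; a framework would pair it with consent, human oversight and context-specific review.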

The workshop will also discuss the importance of AI culture in an organisation – that leadership must ensure AI is based around ethical frameworks, so the culture is ingrained from the top down.

He suggested healthcare organisations considering implementing an AI model investigate who built the system and whether they did so using an ethical framework.

“Is the model fair and accurate, and does it have a human-centric design?” Nigel asked. “AI models don’t replace the person doing the rosters or the doctor’s diagnosis, but they should complement them, providing information to help them make an unbiased, accurate decision – recommending, not deciding.

“Most organisations are already looking at ethical frameworks. Attendees will be able to take information back to their workplaces to help explain frameworks and their importance. If you can explain a model to people, and the framework can determine whether it is fair, they are more likely to trust it and can then implement and monitor an innovative AI pilot project.”

The Translating Ethics into Breakthrough Innovation: Sponsored by Fujitsu workshop will emphasise the connection between AI ethics and innovation. It will discuss the challenges, the impact of getting it wrong, and an early-adoption approach. It will present an AI healthcare development case study based on a real-world scenario, with participants working in groups to identify potential AI ethical issues, possible solutions and their impact on innovation.

It will also present fundamental AI ethical framework concepts and develop an example AI ethical framework. After the AI development case study is presented, groups will consider which frameworks could be applied, ideate ways to ensure ethical development, and explore how that connects with innovation.

AI ethics in action will be showcased through real-world projects where an AI ethical framework helped integrate ethics into development, leading to commercial innovation. Discussion will explore what has been done to prioritise AI ethics inside and outside attendees’ organisations, and how much further we need to go. The session will focus on building an ethical AI culture, sharing strategies for promoting ethical AI practices and how they connect to culture and innovation.

Fujitsu is a proud partner of AI.Care 2023.