Artificial intelligence and precision healthcare
By Peter Williams
Part of the Precision Medicine Community of Practice Steering Committee
The HISA Precision Medicine Community of Practice sees precision medicine as not constrained to developments associated with the advance of genomics, important as they are, but rather considers a broader spectrum in which the data driving individualised care can include clinical, social and environmental factors.
To take account of that complexity, a key capability is artificial intelligence (AI), which has moved beyond the hype to implemented and proven solutions. It is being used to gain new insights from data and is being embedded in software to improve service quality and efficiency.
I presented at the recent HISA Health Data Analytics conference on the challenges arising from the rapid uptake of AI and the need to ensure that AI is being put to best use to support clinical practice and drive innovation, and that adoption is happening in an ethical way. Similar themes were presented by several other speakers. AI was clearly the topic du jour!
There are as many definitions of AI as there are articles written about it. For the purpose here, its scope includes machine learning, neural networks, natural language processing (NLP) and image processing.
In healthcare, machine learning and image processing currently have the greatest penetration. There is, however, a lot of research and investment focused on NLP. This has huge potential benefit but is complex semantically and operationally in clinical settings (still lots of PhDs to be had here!). From a precision medicine perspective, the areas of most immediate impact appear to be in pharmacogenomics and diagnostics.
A key distinction for AI is whether it is acting in a deterministic or non-deterministic way: in other words, is it providing ‘augmented intelligence’ against which a human being may exercise judgement, or is it being relied on for a decision (for example, ‘this is the diagnosis’ or ‘this treatment is now required’)? These distinctions are particularly important when looking at regulatory and ethical issues. Augmented intelligence can be expected largely to rely on existing ethical frameworks, but deterministic uses will require a new level of regulatory oversight.
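To make that distinction concrete, here is a minimal sketch in Python of how the same model output might be routed differently in the two modes. Everything in it – the names, the threshold, the workflow – is hypothetical and illustrative, not drawn from any real clinical product:

```python
# Illustrative sketch only: hypothetical names and thresholds, not a real
# clinical product. Shows how the same model output might be routed in an
# 'augmented' (human-in-the-loop) versus a 'deterministic' (automated) mode.
from dataclasses import dataclass

@dataclass
class Prediction:
    condition: str
    probability: float  # model's estimated probability the condition is present

def route_prediction(pred: Prediction, deterministic: bool,
                     threshold: float = 0.9) -> str:
    """Decide how a model output enters the clinical workflow."""
    if not deterministic:
        # Augmented intelligence: always a suggestion; the clinician decides.
        return f"Suggestion for clinician review: {pred.condition} (p={pred.probability:.2f})"
    if pred.probability >= threshold:
        # Deterministic use: the system itself commits to a decision -
        # the mode that attracts the heavier regulatory oversight.
        return f"Automated finding: {pred.condition} (p={pred.probability:.2f})"
    return f"Below decision threshold; escalating to clinician (p={pred.probability:.2f})"

print(route_prediction(Prediction("diabetic retinopathy", 0.94), deterministic=False))
print(route_prediction(Prediction("diabetic retinopathy", 0.94), deterministic=True))
```

The point of the sketch is that the regulatory weight attaches to the mode of use, not to the model itself.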
The US and European authorities have both recently put forward slightly different recommended approaches to regulation. These developments are being watched closely by the Therapeutic Goods Administration, which has been undertaking wide consultation to help it determine and propose the best model for adoption in Australia.
Achieving ‘precision’ in AI depends on the quality of the data it is based on and also on confidence in how the algorithms are actually working:
- What is the provenance of the data? Caution is needed when using un-curated data (i.e. data not from a managed system such as an EMR or laboratory) – for example, using IoT to assemble ‘real world evidence’ from consumer devices;
- Access to sufficient data volumes for analytical rigour often requires agreements between organisations which can be inhibited by proprietary motivations; and
- The ‘black boxing’ of algorithms, hiding the underlying data science, diminishes trust and increases uncertainty about the value of the result (see the sketch after this list).
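On that last point, one practical antidote to black boxing is to favour, or at least report alongside, models whose reasoning can be inspected. The sketch below is illustrative only: the data and feature names are synthetic placeholders, and coefficient inspection is just one of several transparency techniques (permutation importance and SHAP values are others):

```python
# Illustrative sketch only: synthetic data and hypothetical feature names.
# An interpretable model lets clinicians inspect which inputs drive a
# prediction, rather than trusting an opaque score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["hba1c", "systolic_bp", "bmi", "age"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
# Synthetic outcome, loosely driven by the first two features.
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Reporting the learned weights (or permutation importance, SHAP values,
# etc.) exposes the data science behind the result instead of hiding it.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:12s} weight = {coef:+.2f}")
```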
It should also be recognised that, as several presenters observed at the conference, algorithms are designed by fallible humans with inherent biases which may skew the results. Even a well-designed analysis may exhibit bias simply because of limitations in the population studied. For example, the FDA in the US has approved an AI-enabled application for early detection of diabetic retinopathy, but would we be confident using that application in an Australian Indigenous community without first undertaking some local validation? That work is currently happening, and a sketch of what such validation involves follows.
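A minimal sketch, again with entirely synthetic data, of the kind of check a local validation would run: apply the already-trained model to a local cohort and confirm that headline performance measures such as sensitivity and specificity hold up against those reported in the original approval study:

```python
# Illustrative sketch only: all data here is synthetic. Local validation
# re-checks an already-approved model's headline performance on a local
# cohort before relying on it in that population.
import numpy as np
from sklearn.metrics import confusion_matrix

def local_validation(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Sensitivity and specificity of a pre-trained classifier on a local cohort."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),  # proportion of true cases detected
        "specificity": tn / (tn + fp),  # proportion of non-cases correctly cleared
    }

# Synthetic stand-in for the model's outputs on a local cohort; in practice
# these figures would be compared with those from the original approval study.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)
print(local_validation(y_true, y_pred))
```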
Adoption of AI will depend critically on how well it integrates with existing clinical systems and on the level of clinical acceptance. The cultural aspect is the most important. Do clinicians accept, for example, that AI can enhance their support of, and interaction with, their patients? To achieve acceptance, clinicians will need training on the effective use of AI – including how not to over-rely on it. They will also need access to the right tools – so they can look at the patient and not at the screen. This is one reason for such strong interest in the potential role of NLP in improving the doctor-patient experience.
All of this is not without its risks. Might AI erode patients’ trust in doctors because their special clinical expertise appears diminished? Particularly if patients are able to access the same information directly? A similar debate is already occurring with respect to the impact on clinical services and consumer behaviour of heavily marketed direct-to-consumer genomic testing.
I finished my conference presentation with a reference to The ‘inconvenient truth’ about AI in healthcare (npj Digital Medicine, August 2019). The paper concluded that the algorithms in the research literature are, for the most part, not executable in clinical practice for two fundamental reasons: first, AI innovations do not re-engineer the incentives that support existing ways of working; and second, most healthcare organisations lack the data infrastructure needed to assemble data and train algorithms to meet local population and practice needs.
The authors suggested that the way forward was to follow the experience of the “rest of the economy” and implement high-performance cloud computing infrastructure, supported by government action to resolve issues of data access and governance.
While I agree with the authors’ suggestions, I am not sure it is as “tall an order” (their words) as they think. Government is already looking to get ahead of the curve on AI. That is difficult given the pace of change, but there is recognition of the need and a willingness to act.
Access to high-performance computing (HPC) has been a cost barrier in the past, but with the availability of highly secure, scalable and flexible HPC in the cloud (some of it itself enabled by AI with autonomous capability), that barrier has significantly diminished.
Artificial intelligence offers promise for more personalised, better targeted care, insights into population health and innovative models of care. Achieving those outcomes will require the trust of consumers, clinicians and regulators that adoption is safe and reliable, with clear evidence of value.
Uptake of AI into clinical systems will be driven by the expectation of benefit, but how adoption will occur at scale remains a challenge. Who pays, and how will they pay? Government has a role in setting policy and providing direction – does AI get an MBS item or a share of the medical service fee? The aim should be for the benefits to flow to all sectors as AI becomes embedded in the tools used by clinicians and consumers to support the health and wellbeing of the community.
Peter Williams
VIC
Precision Health Community of Practice Steering Committee Member, and Healthcare Innovation Advisor, Oracle