AI adoption in healthcare is ‘lagging’

5-minute read


Here’s how to bring healthcare services up to speed.


The use of AI in healthcare services in Australia is lagging, experts say, and adopting the technology more widely will depend on more education for clinicians, more clinical trials showing the benefits of AI tools, greater investment in AI by health services and clarity on the regulatory and medicolegal implications.

“Despite hundreds of regulator-approved AI-enabled tools internationally, relatively few feature in routine clinical care, in part due to inattention to how AI tools integrate into sociotechnical healthcare environments,” wrote Australian and UK researchers.

Increasing demands on resource-constrained healthcare systems mean there’s an urgency to adopt AI, in a similar way to the large-scale adoption of previously shunned digital technologies during the COVID-19 pandemic, the researchers said.

The adoption of AI “depends not only on its intrinsic capacity to assist but also on the willingness of clinicians and organisations to use it and restructure processes and workflows to fully leverage its capabilities”, they said.

Lead author Professor Ian Scott, professorial research fellow at the University of Queensland’s Digital Health Centre, said while multiple AI tools had been approved by regulatory authorities in Australia and internationally, Australian health services were “lagging”.

“We’re really not using them to any great extent,” Professor Scott told HSD.

“Although ambient AI scribes have really grabbed attention, and there’s now widespread use of these, other tools – for example, risk prediction models and other clinical decision support models – are much slower in being implemented because there are a number of barriers.”

One of those barriers is a lack of training for clinicians, who need better understanding of AI and guidance around its proper use and limitations, Professor Scott said.

“There is a need for practising clinicians to have more AI literacy.

“I think there’s still a perception that they don’t trust these tools, they don’t necessarily know when the tools are reliable or safe.

“They worry that many of these models are opaque and don’t explain how the model works or how it has produced the outputs that it has.”

Professor Scott said it was essential that AI tools were co-designed, developed and evaluated with input from clinicians.

Another issue is the lack of clinical trials providing robust evidence that AI tools will benefit clinicians in real-world situations, he said.

“And that’s a legitimate concern. The evidence base has to grow to show that in a clinical environment, these tools really do provide value.”

A lack of investment in AI infrastructure was also a problem, Professor Scott said.

“Health services, as we all know, are under the pump. Most of them are in the red in relation to their budgets, so their concern is how much money can they realistically invest in AI before they start getting a return on investment.”

But adopting AI is a long-term strategy that sees AI as a value-adding asset, Professor Scott said, and not something that should be expected to give a return on investment within six or 12 months.

“The benefits will come if the tools have been proven in clinical trials to be of value,” he said.

“Health services have to invest in the data infrastructure to be able to provide the tools, the models, with the data they need to generate their outputs.

“There also needs to be more focus on what happens over the longer term in terms of monitoring the performance of the tool, updating it and recalibrating it.”

But one major concern of health services was that most AI models were being developed by proprietary companies, Professor Scott said.

“Health services aren’t going to have the resources to develop all the models they may want to use in-house.

“They’re going to have to negotiate with proprietary companies, and deal with the concerns about being locked into this or that vendor.

“How much money are they going to have to pay in licence fees over the long term? If this company goes broke, for example, who’s going to take responsibility for keeping the tool working?

“You need to have a long-term investment strategy, and you’ve got to be prepared to spend a certain amount of money in-house to develop, or co-develop with a vendor, your own tools, but also take ownership of the proprietary tools over time.

“There are a lot of vendors out there who want to sell their product. Obviously, healthcare is big money.”

That gives healthcare services the bargaining power to negotiate licence fees and tool upgrades at a sustainable and scalable level, Professor Scott said.

Other obstacles to implementing AI more widely were the regulatory and medicolegal implications, Professor Scott said.

“For clinicians, a real concern is that if they act on the recommendation of an AI tool, and it turns out that recommendation wasn’t appropriate and the patient was harmed, who then carries liability?

“Is it the clinician, is it the vendor or the developer of the tool who didn’t design it properly and it therefore generated erroneous outputs, or is it the health service who deployed the tool and employs the clinician?

“No one has a practical answer to that question at this stage, but we need to develop a medicolegal approach that makes everyone feel confident about using AI.”

Read the full paper here.
