Know your AI tool before using it, clinicians told

The ACSQHC has released pragmatic guides for AI use in clinical settings.


The Australian Commission on Safety and Quality in Health Care wants clinicians to understand AI tools better before they start using them, according to the Commission’s project manager Toby Mathieson.

“At the end of the day, [clinicians] are going to be responsible for their decisions, whatever the tool they’re using,” Mr Mathieson told delegates at the AIDH’s HIC2025 conference in Melbourne.

“We would like people to understand those tools better before they start using them.”

Last week the ACSQHC released its Pragmatic Artificial Intelligence (AI) guidance for clinicians resources: an AI Clinical Use Guide and two AI safety scenarios, Interpretation of Medical Images and Ambient Scribe.

Topics covered by the guides include:

  • Reviewing the evidence on the efficacy of the AI tool;
  • Common limitations and risks of AI;
  • Transparency in the use of AI and informed consent;
  • Understanding the implications for patient information;
  • Understanding AI and automation bias; and
  • Ongoing evaluation and monitoring of AI tools.

“We’re all using AI in various ways,” said Mr Mathieson.

“We know that it can provide some benefits to us, but we probably know there’s some gaps in what’s being put out there at a high level.

“We’re really keen on balancing the implementation risks of AI with the opportunities to make it support a sustainable health system.”

Mr Mathieson cited research published earlier this year showing that while half of us are using AI, a quarter have concerns about its negative impacts and a third are worried about whether the benefits will outweigh the risks.

“And there’s a desire for some greater regulation,” he said.

“On top of that, not a lot of us have had training or education around how to use these tools.

“When you put that in the context of a clinician, a health service or a patient, we’ve got trust as a big issue, and that’s going to manifest in a multi-relationship way.”

Mr Mathieson said that clinicians were interested in delivering quality care but were, in some ways, having the rapid adoption or deployment of AI tools “thrust upon them”.

“We wanted to emphasise the need to make sure you know what problem you’re trying to solve,” he said.

“Look at a tool that you think is going to solve that problem, and then use the evidence that is available on how effective that tool will be.

“Given that AI is running ahead in leaps and bounds, and regulation and evidence aren’t keeping up – what do you do then?

“We’re reinforcing the need to treat it as an innovation where you don’t have that rigour.”

Mr Mathieson said the Commission was keen to emphasise issues of concern with clinicians’ use of AI.

“There’s a requirement for some informed consent, and not just, ‘do you mind if I record the conversation’,” he said.

“We need to talk about the impact or risk that that tool is going to introduce to the patient’s care, and look at consent from the lens of what’s proportional and appropriate for that level of risk.

“These tools are coming thick and fast in an unregulated environment. In some cases, we’re recommending a privacy impact assessment.

“The third point around this is … the data the tool is trained on. Is it applicable to your circumstances and your patients?

“The last point is to keep an eye on how the tools are performing. They need to be evaluated and monitored over time.”

Mr Mathieson said it was “unreasonable” to expect all these things to be done by clinicians themselves.

“There’s clearly a partnership [needed] here between the clinician and the organisation that’s managing that tool.”

HIC2025 is on in Melbourne on 18, 19 and 20 August.
