The Productivity Commission is currently weighing up how large a role the government has in ‘setting the rules of the game’ for AI.
As the role of AI in daily life grows, the Royal Australian College of GPs has raised the ethical question of whether GPs should have the clinical autonomy to override AI decisions or if AI recommendations should take precedence.
At the request of the Commonwealth, the Productivity Commission is currently investigating the best approach to regulating AI as part of its Harnessing data and digital technology inquiry.
While it doesn’t necessarily cover health in great detail, the final report from the inquiry will set the tone for the broader rules around how AI is used.
One component of this is whether the government, industry or the end user should be responsible when things go pear-shaped.
In its interim report, the Productivity Commission recommended taking an approach to regulation “that limits the risks that AI presents without stifling its growth potential”.
As part of this, the commission said AI-specific regulation should only be considered “as a last resort”.
In its response to the interim report, the RACGP said it only “somewhat” supported this approach.
“Healthcare data is inherently sensitive and demands robust safeguards,” the submission, which was released on Monday, read.
“The RACGP believes stronger regulatory oversight is essential where AI supports healthcare delivery and decision making.”
Regulation is made particularly thorny, the college pointed out, because AI systems learn and adapt over time, meaning a device that is compliant one day may not be compliant the next.
There is also the potential for AI to re-identify anonymised data, leaving it susceptible to breaches.
While acknowledging the role of the Therapeutic Goods Administration in pre-market approval, the college called for more emphasis on post-market surveillance given the ability of AI tech to learn and adapt.
“Liability is closely tied to regulation,” the RACGP said.
“Determining fault becomes complex if a clinician ignores AI advice or follows advice which results in a negative outcome.
“The use of AI raises ethical questions, such as whether a GP should have the clinical autonomy to override AI decisions, or if AI recommendations should take precedence, and to what extent.”
Ultimately, the RACGP said, “the greatest burden of compliance must lie with the developer”.
This was the only area where the RACGP expressed hesitancy; it supported or generally supported all other Productivity Commission recommendations.
These included gap analyses of current rules and regulations (the college said industry codes of ethics are “insufficient”), pausing steps to implement mandatory guardrails on high-risk AI (this would buy time to “better understand how to effectively balance innovation with regulation”) and expanding basic data access for individuals (patient ownership of health data, for instance).
The final report will be handed down in December.