The National AI Plan is exactly what we asked for

All this handwringing does is put an unwarranted handbrake on committed investment and proven confidence in products in the market today.


Over a year ago, the Medical Software Industry Association responded to consultation papers from the Department of Health and Aged Care (as it was then), the TGA and the Department of Industry, Science and Resources on the regulation and use of AI in healthcare.

Our responses to all three consultations had a similar theme, which was dubbed the “Goldilocks” approach – not too much, not too little, just right.

While that sounds over-simplified, it distilled the three key objectives for the use of AI in healthcare – improved clinical care, efficiency leading to better productivity and, of course, safety.

To achieve this trifecta, transparency is key.

The TGA already addresses transparency in its Essential Principles for Software as a Medical Device, which incidentally map exactly to the DISR’s proposed voluntary and mandatory guidelines.

In other words, there was no need for more regulation and more regulators, which would only create confusion and doubt for health software companies and their clients, as well as uncertainty for investors in Australian AI.

We made this point in each of the three submissions. Happily, the point got through.

The Australian government has succeeded where others have failed with its National AI Plan, released this week. Commentators had been urging the government to regulate so that Australia would not be left behind, but we may now be in a position to leap ahead.

Either by design or good luck – with an election in between – the consultation was well sequenced, so Australians became familiar with AI and its adoption, particularly in health through AI scribes. More than 75% of GPs, for instance, use AI scribes such as Lyrebird and Heidi.

AI encompasses the most complex issues in health – privacy, security, safety, equity, access, bias, trust, ownership and sharing of data, and commercialisation – with technology layered over the top. The Privacy Act has been the subject of less complex consultations for decades and still doesn’t achieve its promise, although the long-overdue statutory tort of privacy is a step in the right direction.

The government’s National Plan for AI should be lauded for what is almost a Hemingway-esque piece of work – what you read is the tip of the iceberg, founded on deep knowledge and understanding.

The stated aims of capturing opportunity, spreading benefits, and keeping Australians safe are spot on. The words are backed up by examples.

Like that of MSIA member InfoXchange – “Technology for Social Justice” – which is cited in the report for its Digital Transformation Hub and AI Learning Community.

The DISR minister, Tim Ayres, says he wants to see “technology work for people, not the other way around”.

This is of course what all technology should do in every arena, but it is a good point to reinforce. Commitment to it can be seen in the transparent processes for measuring the success and uptake of AI through the ABS, the Jobs and Skills tracker, the National AI Centre Adoption Tracker and the National AI Ecosystem report.

So, where are the gaps in the Plan?

Because it is principles-based, it is hard to find any. It is easy to suggest more standards, engagement, committees and oversight, some of it with the smell of rent-seeking organisations.

AI has been in use for years now. All this handwringing does is put an unwarranted handbrake on committed investment and proven confidence in products in the market today. In some cases, it betrays a lack of deep understanding of existing regulations and market implementations.

Australia has more than enough existing regulatory frameworks, as the MSIA emphasised in its detailed responses in 2024.

The TGA’s Software as a Medical Device rules cover the high-risk applications. Voluntary codes for unregulated products can be managed elegantly and safely through mechanisms like the MSIA and MTAA’s jointly produced Voluntary Code for the Use of AI in health software, finalised and launched on 1 December.

It is a cost-effective, safe and proactive approach that complements the National AI Plan and will increase innovation, investment, safety and productivity.

Never before have we seen such a well-considered response – it is most welcome after such a regulation-heavy year for tech.

Emma Hossack is the CEO of the Medical Software Industry Association.
