Consent in health is broken, but how much does it matter?



Experts in health data and privacy argue that consent in healthcare is broken, which should mean privacy is broken too. Welcome to Alice in Health Data Privacy Land.


If you’re interested in melting your brain, just put data privacy lawyers and digital health informaticians on the same panel and ask them to slug it out over whether we are winning or losing as far as privacy and patient wellbeing are concerned.

The Australasian Institute of Digital Health did just that last week at its HIC 2025 conference in Melbourne.

The audience was taken deep into a privacy and data rabbit hole to try to gauge whether – as vast and complex as privacy and consent law and process are – we are doing harm or good to patients with our current consent and privacy regime, and how we might do better.

When you consider that the Department of Health, Disability and Ageing is in the midst of a comprehensive plan to rapidly build on the national platform of consent and privacy that the My Health Record legislation has so far provided – and to introduce a series of new laws around the data-sharing ability of healthcare providers, to significantly grease the wheels of its “sharing by default” program – some of the points made by this HIC panel might prove seminal.

To significantly oversimplify what was a very complex discussion, the HIC panel seemed to land on four themes:  

  • Whatever we think about adjusting consent and privacy law in Australia to share health data more seamlessly, we will probably never get away from an ecosystem of over 1200 different but mostly connected privacy laws, many of which are changing as well. This is more complex than people realise and will continue to be;
  • Consent is broken. If it is, surely by implication privacy is too (the panel didn’t formally state this, but it follows). What does that mean for patients?
  • The above makes the whole issue of privacy and consent a risk management (or maybe risk tolerance) equation. How we engage patients in that equation – through new sharing laws and transparency settings, for instance – could make or break our good intentions of delivering better patient outcomes by rapidly building out more seamless data sharing;
  • Technology – legal informatics and AI – might help us manage the complexity and risk of consent and privacy data, and with that, patient engagement and expectations around this delicate issue.

1200 privacy laws and the rest

According to Sinead Lynch, a partner in digital, data and cyber at Gadens, we have at least 1200 laws on the go at any one time that are relevant to assessing rights to privacy, before we even get to the overlay of data protection laws, standards, and a matrix of internal policies and procedures across individual organisations.

“We’re swimming in a sea of rules, regulations and requirements and that’s not even the moral standards that we might want to impose on ourselves and [how] we turn up for business and align with the mission that they have,” she said.

Despite this, Ms Lynch isn’t pessimistic about our ability to manage the complexity for patient good, if we develop reasonable measures of risk tolerance and adjust that tolerance as we go to suit patient and system outcomes.

Marina Yastreboff (far left) speaks at HIC 2025. Picture: Jeremy Knibbs

Apparently, there’s a fair bit of science, AI and maths now going into risk management when law becomes this inordinately complex. Cochlear’s R&D, IP and marketing legal counsel, Marina Yastreboff, believes that work can be brought to bear on managing the issue in health.

“We have to be mission driven,” she said.

“There is a risk tolerance, and we should be able to adjust our systems to be within that risk tolerance.”

Consent is broken, even coercive

An interesting aspect of the problem facing health system managers is that, according to all the panel members, what most people think is the beating heart of transparency and meaningful patient engagement in allowing their data to be shared – consent – is actually quite broken.

And in all sorts of ways.

Consent is hardly ever real consent, according to Ms Lynch.

“[Consent] is binary. All of the things that give you the right to revoke your consent … to be honest, are just more documentation, more policies, more Ts and Cs to read, and fundamentally, people are not using it.”

Danielle Bancroft, immediate past product manager for our largest GP patient management system, Best Practice, points to a fundamental disconnect between what our consent and privacy frameworks were developed for and the pace of technology.

“Privacy laws were written for static databases and paper workflows [not] for continuous data streams, AI algorithms, and all the use cases we’re trying to use now, like interoperability and sharing of data,” she told the conference.

“The problem with that is that we’re trying to take 21st century tech and put it through 20th century rules.

“It’s like putting a rotary dial on an iPhone.

“Consent is broken because we’re trying to retrofit paper workflows and what we think informed consent is at the time, but we end up with sloppy workflows, uninformed patients, and a technical system that makes it too hard for people to actually use.”

The panel also outlined how most consent on big tech platforms these days is largely coercive – if you don’t tick the box and/or read 200 pages of Ts and Cs you don’t get the service – which is just another aspect of how broken the idea of consent has become.

Trust and cognitive dissonance

If consent is broken, then surely privacy is as well in many respects. The panel discussed this problem in the context of trust.

When do patients actually trust a healthcare platform with their data?

In most cases they probably don’t, according to Ms Lynch.

In healthcare, government unfortunately has a particularly poor track record with certain projects. The Australia Card and, more recently, the My Health Record opt-out program are cases in point.

But as with broken consent, Ms Lynch and Ms Yastreboff don’t really see this track record as the core problem in health.

It’s a matter of context and risk, they suggest, a dynamic which is probably as much in play for a patient assessing whether to sign off their data rights on a healthcare platform as it is for a global tech platform like Amazon, Facebook or Google.

In other words, it’s about the return you get for your privacy trade-off – or, as Ms Yastreboff put it, about tolerance and the context in which a consumer becomes tolerant.

Several panellists pointed to an apparent cognitive dissonance in how patients assess privacy risk between the private and the public sector: they tend not to trust the government and to be more careful around their healthcare data (which is often associated with a government system such as the My Health Record), yet when it comes to being mined deeply for personal data, they are happy to let the big global tech platforms have nearly everything about them.

Tim Blake of Semantic Consulting put it this way to the panel:

“On the one hand, we trust health tech companies [he’s referring more to companies like Fitbit and Apple with their watches] with almost all of our data and our most intimate of thoughts, and yet we struggle with the thought that we might give that data to government.

“We give data to the people who are very explicit that they will sell it for profit, and we don’t want to give it to the people who are very explicit that they won’t necessarily use it for secondary use, even.”

How does the government – which is planning new laws designed, in essence, to hand patients power over their data by sharing it more seamlessly, over and above what is already in place for the My Health Record – approach this problem?

Complexity and risk managed by algorithms and AI

The solution might be at hand, according to Ms Yastreboff, in the form of the rapidly evolving field of legal informatics.

According to Ms Yastreboff, data models for all the elements of law and risk are already having a big impact in the field of financial services privacy and data risk.

She points to the work of the Australian Law Reform Commission’s DataHub project, an open-access repository of Australia’s Commonwealth legislation with an increasingly sophisticated analytical engine overlaid to help policymakers and lawmakers develop frameworks to manage the scale, complexity and evolution of Australian law.
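For a flavour of what that kind of analytical overlay makes possible, here is a minimal hypothetical sketch. The schema and every figure in it are invented for illustration, and are not drawn from the DataHub itself:

```python
# Hypothetical sketch of the kind of analysis a legislation repository
# enables. The schema and figures below are invented for illustration;
# the ALRC DataHub publishes its own datasets and formats.
import pandas as pd

# One row per act: how often it has been amended, and how many other
# acts it cross-references (illustrative numbers only).
acts = pd.DataFrame({
    "act": ["Privacy Act 1988", "My Health Records Act 2012",
            "Corporations Act 2001"],
    "amendments": [70, 25, 300],
    "cross_references": [40, 15, 120],
})

# A crude "volatility" score: heavily amended, heavily cross-referenced
# acts are the ones most likely to shift compliance obligations.
acts["volatility"] = 0.6 * acts["amendments"] + 0.4 * acts["cross_references"]
print(acts.sort_values("volatility", ascending=False))
```

Even an analysis this crude hints at where a compliance team should concentrate its monitoring effort as the law keeps moving.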

Ms Yastreboff thinks that healthcare privacy law is a market made for this sort of data analysis and risk stratification approach.

She points out that the Privacy Act is just the beginning of complexity.

“We have to deal with various laws in the commercial sphere, such as the Corporations Act. You have references to the acts, interpretations of the acts, and because interpretation is such a big issue in the law generally, we need a law that helps us interpret other laws, and it keeps going on.”

Pointing to the dynamic that fast-moving software platforms introduce to the problem, Ms Yastreboff says the way rules are actually executed can be a major stumbling block to customising how we respond, or should respond, to particular patient needs.

“The problem is complexity … but there are solutions to this, and the solution is a legal informatics approach, similar to what you’re achieving here with digital health and [an] informatics approach to health.

“We need [this approach] precisely for law and the rules that apply to your practice.”

CSIRO’s Data61 group is currently mapping laws and the requirements within them, including those aspects of the law that can be vague, like the concept of “fair and reasonable”.

“Rather than relying on a broken consent system you can map out the requirements, work out where there are inconsistencies, overlaps, inefficiencies, and then set your own tolerance for risk on your record,” says Ms Yastreboff.

In essence, what Ms Yastreboff is saying is that it doesn’t matter so much how broken consent or privacy is, so long as you can assess risk on behalf of a patient and their tolerance of that risk.

Using an informatics (and AI) approach, healthcare law and policy makers can start to map risk and complexity within models and set risk where they think it is most appropriate for both the system and the patient.
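To make that idea concrete, here is a minimal sketch of what a requirements-mapping and risk-tolerance check might look like in code. Everything in it – the requirement names, the weights, the tolerance threshold – is hypothetical; in a real legal informatics engine these would be derived from the kind of legislation mapping Data61 and the ALRC are doing.

```python
# A minimal sketch of "map the requirements, then set your tolerance".
# All names, weights and thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Requirement:
    source: str       # e.g. "Privacy Act 1988 (Cth), APP 6"
    obligation: str   # plain-language summary of the rule
    weight: float     # assumed cost of breaching it, 0..1
    satisfied: bool   # does the proposed sharing action comply?

def residual_risk(requirements: list[Requirement]) -> float:
    """Sum the weights of every mapped requirement the action fails."""
    return sum(r.weight for r in requirements if not r.satisfied)

def within_tolerance(requirements: list[Requirement], tolerance: float) -> bool:
    """True if the residual risk sits inside the tolerance we have set."""
    return residual_risk(requirements) <= tolerance

# Score one proposed data-sharing action against three mapped rules.
mapped = [
    Requirement("Privacy Act 1988 (Cth), APP 6", "limits on secondary use", 0.5, True),
    Requirement("My Health Records Act 2012", "consumer access controls", 0.3, True),
    Requirement("A state health records law", "cross-border disclosure", 0.2, False),
]

print(residual_risk(mapped))            # 0.2
print(within_tolerance(mapped, 0.25))   # True: proceed, but log the gap
```

The point of the sketch is the adjustability Ms Yastreboff describes: the tolerance becomes an explicit, tunable number rather than something buried in consent paperwork.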
