AI's rapid transformation of business demands proactive board oversight to harness opportunities in efficiency and innovation while addressing risk.
There is considerable hype around AI in business – and for good reason.
AI is no longer a futuristic concept; it’s rapidly transforming businesses across every sector. From automating tasks and improving efficiency to unlocking new insights and creating personalised experiences, AI presents both immense opportunities and significant challenges.
Boards (and executive leadership teams) must proactively manage the increasing risks associated with AI, even as they pursue the efficiency, productivity and innovation gains it enables.
In my humble opinion, the Australian government has dropped the ball on providing an AI-specific regulatory framework for the safe use of AI. This leaves organisations that don’t fully understand AI and all its complexities, including its potential interaction with other laws, exposed to increasing AI risk.
Boards and executive leadership teams are navigating the AI revolution as best they can in the absence of a regulatory framework.
Decisions of the Australian government
The Australian government this month published a National AI Plan that focuses on AI’s enablement of economic growth at the expense of adequately addressing the potential risks of AI to our communities.
Previously, the Australian government had signalled a move towards AI regulation when it published the Voluntary AI Safety Standard, which consists of 10 voluntary guardrails.
The government, however, has suspended plans for legislated controls and has instead published its Guidance for AI Adoption, which sets out six essential practices for responsible AI governance and adoption.
While the new National AI Plan establishes an AI Safety Institute within government, this entity has no statutory powers, with the government relying on existing peripheral legislation.
I believe this lack of a targeted regulatory framework leaves organisations exposed to the risks of using AI as the technology continues its march towards human cognitive capability.
There needs to be a better balance between the benefits of an unregulated market and keeping our communities safe. That balance is the essence of the implicit social contract between governments and citizens, whereby individuals forego certain freedoms in exchange for protection against threats.
If the EU, China and Canada can adopt specific AI legislation, then why not Australia?
Given this lack of a regulatory environment, particularly for high-risk AI, boards and executive leadership teams need to step up to the challenge of fully managing AI risks in their organisations.
Board responsibilities for the use of AI include:
- Developing AI literacy: gain a basic understanding of AI technologies, applications, and limitations;
- Overseeing AI strategy: actively develop and oversee the organisation’s AI strategy, aligning it with overall business objectives;
- Ensuring ethical development and deployment: establish clear ethical guidelines for AI development and use, ensuring fairness, transparency, and accountability;
- Managing AI risks: implement robust risk management frameworks to identify and mitigate AI-related risks;
- Fostering a culture of innovation: encourage experimentation and responsible innovation.
It is the need to manage AI risks that I want to focus on.
Understanding AI risks
There’s no shortage of resources for organisations to consider when using AI. Many have established AI policies addressing key principles from available guidelines such as the Australian Institute of Company Directors’ Directors Guide to AI Governance and various international standards such as ISO/IEC 42001:2023 Information technology – Artificial Intelligence – Management system.
However, most organisations—and particularly their boards and executive teams—struggle to fully understand the AI being used and what constitutes high-risk AI. Improving AI risk understanding requires grasping two factors:
- How the AI is used in the organisation;
- The type of machine learning / algorithmic basis of the AI.
How the AI system is being used within the organisation
Organisations should consider adopting an AI use classification scheme to help risk-assess the different types of AI used within the organisation. This classification should consider the following dimensions (a simple sketch of how such a scheme might be recorded follows the list):
- Purpose: full or partial automation, analytics, decision support, generative, or interactive;
- Technology complexity: from basic AI (e.g., simple rule-based algorithms) to general AI with broad, cognitive human-like capabilities;
- Human interaction level: humans in the loop (oversight and decision-making), on the loop (monitoring of autonomous AI), or out of the loop (no human intervention);
- Deployment mode: standalone or embedded (e.g., agentic AI), or cloud-based with limited organisational oversight;
- Business function: consumer-facing or in high-risk sectors like healthcare. Other groupings include business operations, sales/marketing/communications, and product development;
- Data dependency: data-driven and continuously learning, trained on static datasets, or hybrid;
- Ethical and regulatory impact.
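To make this concrete, below is a minimal sketch of how such a usage classification might be captured as a structured record. The field names, enumerated categories, and example values are illustrative assumptions drawn from the list above, not a prescribed standard; any real scheme should be defined by the organisation itself.

```python
from dataclasses import dataclass
from enum import Enum


class Purpose(Enum):
    FULL_AUTOMATION = "full automation"
    PARTIAL_AUTOMATION = "partial automation"
    ANALYTICS = "analytics"
    DECISION_SUPPORT = "decision support"
    GENERATIVE = "generative"
    INTERACTIVE = "interactive"


class HumanInteraction(Enum):
    IN_THE_LOOP = "human in the loop"          # human oversight and decision-making
    ON_THE_LOOP = "human on the loop"          # human monitoring of autonomous AI
    OUT_OF_THE_LOOP = "human out of the loop"  # no human intervention


@dataclass
class AIUsageClassification:
    """One record in an AI use classification scheme (illustrative fields only)."""
    system_name: str
    purpose: Purpose
    technology_complexity: str       # e.g. "rule-based" through to "general-purpose"
    human_interaction: HumanInteraction
    deployment_mode: str             # e.g. "standalone", "embedded/agentic", "cloud-based"
    business_function: str           # e.g. "consumer-facing", "healthcare", "operations"
    data_dependency: str             # e.g. "continuously learning", "static dataset", "hybrid"
    ethical_regulatory_impact: str   # e.g. "low", "medium", "high"


# Hypothetical example: a customer-facing chatbot recorded against the scheme
chatbot = AIUsageClassification(
    system_name="Customer service chatbot",
    purpose=Purpose.INTERACTIVE,
    technology_complexity="large language model",
    human_interaction=HumanInteraction.ON_THE_LOOP,
    deployment_mode="cloud-based",
    business_function="consumer-facing",
    data_dependency="static dataset",
    ethical_regulatory_impact="medium",
)
```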
Machine learning classification
In addition to how the AI is used, a classification scheme for machine learning algorithms is needed to help assess AI technology risks. Key factors include the following (a companion sketch follows this list):
- Learning paradigm: supervised, semi-supervised, or unsupervised;
- Algorithmic approach: mathematical models used—lower risk (e.g., regression) vs. higher risk (e.g., ensemble methods combining multiple algorithms);
- Type of data handled: structured (corporate databases) to unstructured (images, audio).
Other factors include model complexity, training approach, and mathematical properties; each of these machine learning characteristics carries a different risk profile.
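A companion scheme for the machine learning characteristics could be sketched in the same way. The categories and the simple risk roll-up below are assumptions for illustration only; the thresholds and markers of higher risk would need to be set by the organisation’s technology leads.

```python
from dataclasses import dataclass
from enum import Enum


class LearningParadigm(Enum):
    SUPERVISED = "supervised"
    SEMI_SUPERVISED = "semi-supervised"
    UNSUPERVISED = "unsupervised"


@dataclass
class MLClassification:
    """Technology-side classification of an AI system (illustrative fields only)."""
    learning_paradigm: LearningParadigm
    algorithmic_approach: str    # e.g. "regression" (lower risk), "ensemble" (higher risk)
    data_type: str               # e.g. "structured", "unstructured", "mixed"
    model_complexity: str        # e.g. "linear", "tree-based", "deep neural network"
    training_approach: str       # e.g. "trained once", "periodically retrained", "online"


def technology_risk(ml: MLClassification) -> str:
    """Very coarse, illustrative mapping from ML characteristics to a risk tier."""
    higher_risk_markers = [
        ml.learning_paradigm is LearningParadigm.UNSUPERVISED,
        "ensemble" in ml.algorithmic_approach.lower(),
        ml.data_type.lower() != "structured",
        "online" in ml.training_approach.lower(),
    ]
    score = sum(higher_risk_markers)
    return "high" if score >= 3 else "medium" if score >= 1 else "low"
```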
How can boards approach these areas of AI understanding?
While there are technical complexities in these two areas of AI understanding, that should not be an excuse for organisations to ignore these risks.
An optimal (but possibly less pragmatic) approach for organisations would be to establish an AI register that risk-classifies usage and risk-assesses the underpinning technology based on these schemes.
A more pragmatic first step is to create a policy requiring risk assessments based on these two classification schemes (AI usage and ML/algorithmic basis). Let technology leads build the schemes, then apply them during AI procurement.
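Putting the two schemes together, the AI register itself can be a simple structure populated during procurement. The fields and the roll-up logic below are hypothetical; in practice the risk ratings would come from assessments against the two classification schemes above.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIRegisterEntry:
    """One entry in an organisational AI register (hypothetical structure)."""
    system_name: str
    usage_risk: str        # assessed against the AI use classification scheme
    technology_risk: str   # assessed against the ML/algorithmic classification scheme
    business_owner: str
    mitigations: List[str] = field(default_factory=list)

    def overall_risk(self) -> str:
        """Take the higher of the two assessed risk ratings."""
        order = {"low": 0, "medium": 1, "high": 2}
        return max(self.usage_risk, self.technology_risk, key=lambda r: order.get(r, 0))


@dataclass
class AIRegister:
    entries: List[AIRegisterEntry] = field(default_factory=list)

    def high_risk_entries(self) -> List[AIRegisterEntry]:
        """The entries a board or executive team would want surfaced for oversight."""
        return [e for e in self.entries if e.overall_risk() == "high"]
```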
Summary
AI’s rapid transformation of business demands proactive board oversight to harness opportunities in efficiency and innovation while addressing risk.
While there is good guidance available to organisations on the key principles and approaches to implementing AI, there is a gap in understanding AI risk based on how the AI is used and on its algorithmic underpinnings.
Even AI risk frameworks such as NIST AI RMF 1.0 and ISO/IEC 23894:2023 focus on the output risks of AI rather than the input risks discussed here. The best guidance I have seen relating to these input risks is the OECD Framework for the Classification of AI systems.
Key recommendations for boards:
- Build AI literacy and ensure AI strategy is aligned with business goals;
- Mandate relevant guidelines and robust risk frameworks for both the procurement and operation of AI;
- Implement pragmatic AI classification schemes and registers for AI usage and ML risks, starting with procurement policies led by tech experts.
This approach enriches AI governance beyond principles and operational guidelines, ensuring risks are fully understood and managed within the organisation’s risk appetite.
Dr Malcolm Thatcher is a digital executive, author and advisor. He is the former chief technology officer of the Australian Digital Health Agency. He is the founder of the Strategance Group.
This article was first published on Dr Thatcher’s LinkedIn profile. Read the original here.



