Governing AI – From Compliance to Competence

By Sarah Jenkins, Menzies Leadership Foundation

AI will not replace leaders — but leaders who cannot govern AI responsibly will be replaced by those who can. 

Artificial intelligence is reshaping societies, economies and institutions faster than most leaders can absorb. From automated decision-making to personalised learning systems, AI is now embedded in everyday life. Yet as its influence grows, so does public concern about transparency, fairness and accountability. 

In 2025, the real question is no longer whether organisations should use AI — but whether they can govern it well. 

Australia is entering a new chapter of AI maturity. With the Government’s Safe and Responsible AI framework, national governance standards emerging through CSIRO, and global regulation accelerating, organisations must move beyond early experimentation into a state of competence. 

Compliance is important. But competence — ethical literacy, oversight, transparency and stewardship — is what will determine trust. 

Australia Moves From Guidance to Guardrails 

In 2024–2025, Australia made its most significant shift in AI policy to date. The Federal Government released Safe and Responsible AI in Australia, setting out a roadmap for emerging guardrails, strengthened governance expectations, and potential mandatory requirements for high-risk AI systems. 
🔗 https://www.industry.gov.au/publications/safe-and-responsible-ai 

The policy signals several priorities: 
  • Protecting Australians from harmful AI outcomes 
  • Increasing transparency for automated decisions 
  • Encouraging sector-specific standards 
  • Building national capability in AI assurance 
  • Aligning with global regulatory movements 

Alongside this, the CSIRO National AI Centre published a practical AI Governance Playbook and maturity framework, offering concrete tools for boards and executives. 

From a rights perspective, the Australian Human Rights Commission’s Human Rights & Technology Report remains foundational, clarifying the ethical risks of AI and advocating for safeguards against discrimination. 

The Office of the Australian Information Commissioner (OAIC) has also expanded its guidance on automated decision-making and privacy-protective AI design. 

Although Australia has not yet introduced an AI Act equivalent to the EU’s, the global context matters. With the EU AI Act now in force, and ISO/IEC 42001 establishing the world’s first AI management system standard, Australia is rapidly aligning with international expectations. 

The message is clear: 
AI governance is no longer optional — it is a leadership responsibility. 

Why AI Governance Matters 

AI systems can scale impact quickly — both positive and negative. Without governance, biased or incomplete training data can lead to unfair decisions in hiring, lending, healthcare and public services. 

Governance matters because: 

  • AI decisions affect real people. 
  • Opaque systems erode trust. 
  • Unregulated deployment carries ethical, legal and reputational risk. 
  • Communities expect transparency, appeal pathways and accountability. 

Leaders must ensure AI enhances — not harms — human dignity. 

Leadership Across Levels 

At the individual level 

Leaders must cultivate AI literacy. This includes understanding: 

  • What training data and assumptions underpin a model 
  • How algorithmic bias emerges 
  • When human-in-the-loop oversight is required 
  • The ethical implications of automation 

Literacy is not about coding — it’s about judgment, questioning and responsibility. 

At the organisational level 

Australian organisations must embed governance structures aligned with the national direction and global standards. This includes: 

  • Establishing AI governance committees or ethics boards 
  • Conducting algorithmic impact and transparency assessments 
  • Creating documentation and audit trails 
  • Ensuring meaningful human oversight of high-risk systems 
  • Publishing clear explanations for automated decisions 

These practices strengthen trust with employees, customers and communities. 

At the system and community level 

Governments, regulators, civic groups and industry must collaborate on shared guardrails. This includes public registers for high-risk AI, community consultation on major deployments, and investments in digital literacy. 

AI governance is not just a technical challenge — it is a social contract. 

Governance in Motion 
  • Australian public sector 
    Agencies trialling the Algorithmic Transparency Standard are building trust by explaining how automated decisions are made and providing citizens with appeal mechanisms. 
  • Local banks and insurers 
    Financial institutions are exploring AI ethics boards and independent audits to ensure fairness in credit scoring and risk assessment. 
  • Health and social services 
    Hospitals adopting AI-enabled diagnostics are embedding clinical oversight to prevent automation bias. 
  • Education and training providers 
    Institutions are developing internal guidelines to ensure AI tutors and predictive systems support — rather than replace — human educators. 

These examples show governance isn’t bureaucracy; it’s modern risk management.  

What Leaders Can Do 

Build competence, not just compliance 

Leaders must understand AI’s risks and potential — not just rely on technical teams. 

Create accountability loops 

Define clear roles for monitoring, escalation and audit. 

Embed ethics in design 

Use diverse teams, lived-experience input, and scenario testing to identify harms. 

Communicate transparently 

Explain what the AI does, why it is being used, and what its limits are. 

Invest in national capability 

Participate in industry working groups, contribute to standards development, and support workforce training in AI literacy. 

Risks of Ignoring AI Governance 

When organisations adopt AI without governance, they risk: 

  • Biased outcomes that harm vulnerable communities 
  • Erosion of public trust 
  • Damage to organisational legitimacy 
  • Legal exposure under privacy or discrimination law 
  • Strategic misalignment through overreliance on flawed automation 

AI without governance is not innovation — it is vulnerability. 

Synthesis: Competence Is the New Currency of Trust 

Australia stands at a pivotal moment. As global standards evolve and domestic guardrails strengthen, leaders must step beyond experimentation into responsible stewardship. 

Competence in AI governance — not hype, not speed — will determine which organisations earn trust and which lose it. 

This is not a technical challenge. It is a leadership challenge. 

Series Overview 

This article is part of Leadership in 2025 – A Shared Responsibility, a thought-leadership series authored by Sarah Jenkins at the Menzies Leadership Foundation. Drawing on global research and local insights, the series explores how leadership is evolving across individuals, organisations, communities, and systems. From trust and grievance to AI governance, human sustainability, and the future of work, each piece unpacks the challenges and opportunities shaping leadership in an age of complexity.

Australia’s move from guidance to guardrails marks a pivotal shift in what responsible leadership now requires. As AI systems shape decisions that affect people’s lives, leaders must move beyond experimentation and into stewardship — building capability, embedding oversight, and ensuring transparency becomes a norm, not an afterthought.

The leaders who will earn trust in this next chapter are those who approach AI with clarity, curiosity and accountability. Those who recognise that governance is not bureaucracy, but protection — of dignity, of fairness, of institutional legitimacy.

If you are committed to leading with this standard of care and competence, we invite you to stay connected with the Menzies Leadership Foundation. Join a community shaping the guardrails that will define Australia’s future.
