For Australian CEOs and directors, artificial intelligence and the law in Australia is no longer a theoretical discussion. It now sits on the board agenda as both a risk and a growth issue, and it demands informed action. The federal government has signalled mandatory guardrails for higher-risk AI, privacy reform is advancing, sector regulators are tightening expectations, and international rules are pulling local firms into their orbit. The companies that move early on governance, transparency and assurance will unlock value faster and reduce regulatory, reputational and litigation exposure.
The headline: no single AI Act yet, but a fast-evolving patchwork
Australia does not yet have a comprehensive AI Act. Instead, AI use is governed by existing laws alongside new targeted proposals. At the core is the Privacy Act 1988 (Cth), which requires lawful, fair and transparent handling of personal information under the Australian Privacy Principles. Proposed reforms, following a multi‑year review, would strengthen consent, introduce a fairness requirement, expand individual rights and heighten accountability for high-risk automated decision-making. Executives should plan for more explicit notices, rights to object or review certain automated decisions, and greater scrutiny from the Office of the Australian Information Commissioner.
Copyright remains a live risk. Australia has not created a general text-and-data mining exception, so training on copyrighted works can raise infringement and moral rights issues. The government has been consulting on AI and copyright to clarify training and output liabilities, but until reform lands, leaders should assume licences or defensible exemptions will be required and expect provenance and attribution expectations to rise.
Consumer and competition laws also bite. Under the Australian Consumer Law, the prohibition on misleading or deceptive conduct can capture overstatements about AI capabilities or undisclosed “hallucination” risks, while the unfair contract terms regime applies to standard-form contracts for AI-enabled services. Since November 2023, unfair contract terms have attracted civil penalties, increasing legal exposure for vague AI disclaimers and one‑sided model-use terms. Defamation and content laws apply to AI-generated outputs; the eSafety Commissioner can act rapidly on image-based abuse, including intimate deepfakes, and online safety removal notices can apply to generative content circulating on platforms.
Sector rules add further obligations. The Therapeutic Goods Administration regulates qualifying AI as software as a medical device, imposing pre‑market evidence, post‑market vigilance and change management protocols for adaptive models. In financial services, ASIC expects firms to ensure AI-driven advice, marketing and risk models comply with design and distribution obligations and do not mislead; AI does not dilute responsible lending, market integrity or breach reporting duties. APRA’s CPS 234 information security standard is already in force, and CPS 230 on operational risk management takes effect from 1 July 2025, requiring robust control testing, incident response, third‑party risk management and business continuity frameworks that explicitly cover AI systems and critical data dependencies.
For critical infrastructure operators subject to the Security of Critical Infrastructure regime, AI-related cyber, supply chain and operational technology risks must be addressed in the risk management program and reflected in incident reporting thresholds. In employment, the use of AI monitoring tools intersects with state workplace surveillance laws and federal anti‑discrimination statutes; undisclosed or biased automated screening can create significant legal and reputational consequences.
What is coming next: guardrails, safety and stronger privacy
Canberra’s direction of travel is clear. The government’s response to its “Safe and Responsible AI in Australia” consultation indicates support for mandatory guardrails for higher‑risk AI, watermarking and provenance for deceptive synthetic media, stronger enforcement capability, and clearer accountability across the AI supply chain. The 2024–25 Budget earmarked tens of millions of dollars over five years to build an AI safety function within government and to develop standards, guidance and testing capability in partnership with industry. Executives should expect obligations to crystallise first around transparency, risk classification, incident reporting and record‑keeping for high‑risk use cases.
Privacy reform is also advancing: the government has agreed, in full or in principle, to most of the Privacy Act Review recommendations. While exact drafting will matter, boards should plan for higher penalties, a direct right of action, more prescriptive consent and notice requirements, and obligations to explain, justify and, in some cases, allow review of consequential automated decisions. Internationally, the EU AI Act’s extraterritorial reach means Australian providers and deployers serving EU markets will face risk categorisation, conformity assessment, logging, human oversight and post‑market monitoring obligations, making alignment with global best practice prudent even before domestic law catches up.
Case law and precedents to note
Courts have already shaped boundaries. In Commissioner of Patents v Thaler, the Full Federal Court confirmed on appeal that an inventor must be a natural person, closing the door on AI being listed as an inventor under current law. Copyright jurisprudence continues to emphasise human authorship, underscoring the value of retaining meaningful human involvement in creative outputs where protection is a strategic goal. Privacy and consumer regulators have taken action where automated systems create misleading impressions or insufficiently protect personal information, signalling a low tolerance for “black box” excuses when harms occur.
From policy to practice: how leaders should act now
Boards should treat AI as a cross‑cutting operational, legal and reputational risk with material upside for those who govern it well. Begin with a complete inventory of AI systems, from vendor models and APIs to in‑house tools and shadow AI. Classify use cases by risk based on potential for harm, legal impact and sector rules, and map them to applicable obligations across privacy, copyright, consumer law, safety, and industry standards. Establish an AI governance framework aligned to the Australian AI Ethics Principles and recognised standards such as ISO/IEC 42001 for AI management systems and the NIST AI Risk Management Framework. Assign executive ownership, define approval gates for higher‑risk use, and require model cards, data sheets and change logs to support accountability and auditability.
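To make risk classification concrete, here is a minimal sketch in Python of what a risk-tiered inventory entry could look like. The three tiers, the escalation rule and the `AIUseCase` fields are illustrative assumptions rather than a prescribed taxonomy; organisations should map them to their own framework and legal advice.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative three-tier classification; align with your own framework."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    """One entry in the organisation-wide AI inventory."""
    name: str
    owner: str                    # accountable executive
    vendor_or_inhouse: str
    handles_personal_info: bool
    consequential_decision: bool  # e.g. credit, employment, healthcare
    applicable_laws: list[str] = field(default_factory=list)

    def risk_tier(self) -> RiskTier:
        # Consequential automated decisions default to the highest tier,
        # triggering approval gates, model cards and change logs.
        if self.consequential_decision:
            return RiskTier.HIGH
        if self.handles_personal_info:
            return RiskTier.MEDIUM
        return RiskTier.LOW


screening_tool = AIUseCase(
    name="CV screening assistant",
    owner="Chief People Officer",
    vendor_or_inhouse="vendor",
    handles_personal_info=True,
    consequential_decision=True,
    applicable_laws=["Privacy Act 1988 (Cth)", "anti-discrimination statutes"],
)
assert screening_tool.risk_tier() is RiskTier.HIGH
```

Even a simple rule like this one gives the board a defensible, auditable answer to “which of our AI systems are high risk, and who owns them”.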
Data governance is the backbone. Ensure lawful basis and provenance for training and fine‑tuning data, implement de‑identification that is robust against re‑identification, and maintain data minimisation by design. For generative systems, implement content provenance and watermark detection where practical, and adopt red‑teaming and adversarial testing regimes to surface bias, toxicity and safety failures before deployment. Human oversight should be real, not nominal; define when a person must verify or can override an AI output, especially in consequential contexts such as credit, employment, healthcare and safety‑critical operations.
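As a hedged illustration of oversight that is real rather than nominal, the sketch below routes any consequential or low-confidence output to a human reviewer before it takes effect. The confidence floor and function names are assumptions made for the example, not a standard.

```python
from typing import Callable

CONFIDENCE_FLOOR = 0.85  # assumed threshold; calibrate per use case and risk tier


def finalise_output(ai_output: str,
                    confidence: float,
                    consequential: bool,
                    human_review: Callable[[str], str]) -> str:
    """Return the final decision, escalating to a person where oversight applies.

    Consequential contexts (credit, employment, healthcare, safety-critical
    operations) always receive human verification; elsewhere, low model
    confidence triggers review rather than silent automation.
    """
    if consequential or confidence < CONFIDENCE_FLOOR:
        return human_review(ai_output)  # the reviewer can confirm or override
    return ai_output


# Example: a credit decision is always checked by a person, however
# confident the model is.
final = finalise_output("decline", confidence=0.97, consequential=True,
                        human_review=lambda out: out)  # stand-in reviewer
```

The design point is that the escalation rule is explicit and testable, so auditors and regulators can verify when a person was actually in the loop.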
Third‑party risk deserves equal attention. Contractual terms with AI vendors should cover data rights and retention, model update cadence and change impact notifications, transparency into training sources and evaluation metrics, security controls, uptime and incident SLAs, and cooperation with audits. Where outputs may affect regulated decisions, require explanation artefacts and enable contestability. Incident response plans should include AI‑specific triggers—prompt injection, model drift, output misuse, data leakage—and clear pathways for regulator notification and customer remediation.
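One way to give “model drift” a concrete incident trigger is a distribution check on live outputs against a reference window, opening an incident when divergence crosses a threshold. The population stability index used below, and the 0.2 escalation point, are common industry rules of thumb assumed for illustration, not a regulatory requirement.

```python
import math
from collections import Counter


def population_stability_index(reference: list[str], live: list[str]) -> float:
    """PSI over categorical outputs; > 0.2 is a common 'significant drift' flag."""
    categories = set(reference) | set(live)
    ref_counts, live_counts = Counter(reference), Counter(live)
    psi = 0.0
    for cat in categories:
        # A small floor avoids division by zero for unseen categories.
        p_ref = max(ref_counts[cat] / len(reference), 1e-6)
        p_live = max(live_counts[cat] / len(live), 1e-6)
        psi += (p_live - p_ref) * math.log(p_live / p_ref)
    return psi


DRIFT_THRESHOLD = 0.2  # assumed escalation point


def check_model_drift(reference: list[str], live: list[str]) -> None:
    psi = population_stability_index(reference, live)
    if psi > DRIFT_THRESHOLD:
        # In production this would open an incident ticket and start the
        # regulator-notification and customer-remediation assessment.
        raise RuntimeError(f"Model drift incident: PSI={psi:.3f}")
```

Similar measurable triggers can be defined for prompt injection attempts, output misuse and data leakage, so the incident plan fires on evidence rather than anecdote.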
The executive takeaway
Artificial intelligence and the law in Australia is moving from guidance to guardrails. Leaders who invest now in governance, transparency, testing and trustworthy data foundations will be ready for incoming privacy obligations, sector rules and safety expectations, while building the organisational confidence to scale AI responsibly. Kodora partners with Australian enterprises to design and deploy AI that meets today’s legal standards and tomorrow’s regulatory reality, so you can accelerate innovation without compromising trust.