False starts and stalled reforms: Australia’s tech regulation landscape
Lizzie O’Shea examines Australia’s uneven approach to tech regulation, highlighting the gap between public support for stronger safeguards and a political system swayed by industry influence and the global power of Big Tech.

By Lizzie O’Shea
Amid the chaos and cruelty of the second Donald Trump administration in the US, the interests of Big Tech have aligned with state power in ominous ways. It’s a situation that’s forced many digital rights activists to rewrite their strategies. Any potential regulation of technology comes with an appreciable risk of generating a diplomatic confrontation with the US (to the extent that a tariff war can be described as diplomacy). Many countries — especially those that have been allies of the US since the end of the Second World War — are contending with the meaning of sovereignty in a world that’s technologically connected, with giant US companies operating at all levels of the stack. For reasons both understandable and unforgivable, many regulators are opting for stasis over leadership.
In recent years, Australia has developed a reputation for regulating Big Tech in ways that are being emulated in other parts of the world. Australia was the first country in the world to establish an agency committed to keeping its citizens safer online. It was the place that created the news media bargaining code, facilitating payments from digital platforms to media organisations — a model later also adopted in Canada and New Zealand. Most recently, Australia introduced a social media ban, described as another world first, for children under sixteen years of age.
They may look like bold moves, but these regulatory initiatives are not without their failings. The eSafety Commissioner is overseeing the development of online codes of practice that have encouraged increased dependence on automated decision-making for content moderation, age assurance technology and privacy-invasive design. The news media bargaining code has been criticised for failing to secure funding for public interest journalism, with the real winners being large media companies like Rupert Murdoch’s News Ltd. (In Canada, the equivalent initiative has been labelled a fiasco.) Meanwhile, the social media ban is facing genuine — and anticipated — operational challenges. The proposal has been controversial for a variety of reasons, even attracting criticism from those administering it.
There’s something commendable about Australia’s regulatory zeal, but it’s telling how poorly it is often directed. Privacy reform, which the current government has accepted as a necessity, remains stalled. It’s increasingly clear that the tech industry, sensing an opportunity, is leaning hard into productivity arguments against privacy reform — mainly in service of allowing AI to develop unimpeded. In a recent submission, Meta said it was “concerned that recent developments are moving Australia’s privacy regime to be out of step with international norms … and [will] disincentivise industry investment in AI in Australia or in pro-consumer outcomes.” Strong privacy reform has the potential to improve the online experiences of people of all ages, but powerful and well-resourced opponents are successfully diverting the conversation and delaying regulatory action.
This stands in stark contrast to public opinion. Australians have consistently polled as strongly supportive of improved privacy protections. Almost all Australians think they should have additional rights under the Australian Privacy Act. Nine in ten Australians want the government to provide more legislation that promotes and protects the privacy of individuals. The Australian government’s neglect of this issue betrays a reform agenda that’s shaped less by bravery and more by the banality of politics and vested interests.
The public also has strong views on AI, the other field of tech regulation where progress has lagged. The Australian government has introduced a voluntary safety standard but no enforceable regulation. Australia ranks among the lowest globally on acceptance, excitement and optimism about AI — only 30% of Australians believe the benefits of AI outweigh the risks. This has largely been understood as a problem of attitude rather than an issue of power or politics. But Australians are more sensible than that: 77% agree regulation is necessary and 83% say they would be more willing to trust AI systems when assurances are in place. Rather than inhibiting the embrace and adoption of AI technology, strong regulation would do the opposite.
In some ways, therefore, the job of digital rights activists is straightforward: remind elected leaders who they work for. We must speak for the majority of everyday people who want strong privacy reform and AI regulation. We need to ask questions about who benefits from failing to act on these issues, or from acting in ways that are ineffective and inept. During this Trump administration, representative democracy is facing increased pressure and fault lines are emerging. Tech policy is one such example, and Australia’s experience demonstrates that — if not properly consultative or thoughtful — both action and inaction in the face of growing tech power can lead to a government failing its citizens.
Lizzie O'Shea is a founder and the chair of Digital Rights Watch, which advocates for freedom, fairness and fundamental rights in the digital age. She also sits on the board of Blueprint for Free Speech. She speaks regularly about law, technology, and human rights, and her writing has appeared in the New York Times, Guardian, and Sydney Morning Herald, among others.