Australia’s latest consultation
Australia is once more considering how best to regulate AI. Building on 2023’s ‘Safe and responsible AI in Australia’ consultation,1 the Government is proposing mandatory guardrails for AI in high-risk settings.
The Proposals Paper focuses on AI harms and how to mitigate them. These are important issues. But if Australia also wants AI to increase competition, innovation, productivity, and growth, then the Government needs, in parallel, to consider the extent to which regulating to reduce AI harms could raise barriers to entry and expansion in the AI sector.
The risk is not merely theoretical. As Mario Draghi recently noted in his report for the European Commission, EU regulatory barriers to scaling up are now ‘particularly onerous’ in the tech sector and ‘the EU’s regulatory stance towards tech companies hampers innovation’.2
The risk of raising barriers to entry and expansion is not just relevant to Australian-made AI. Such barriers also need to be considered from the point of view of foreign companies that may want to make AI models and products available to Australian businesses (including SMEs) and consumers, and from the point of view of Australian companies that may want to build on top of foreign models. Potential knock-on impacts on Australia’s ability to attract and retain talent should also be borne in mind.
Before determining the path forward, Australia would benefit from a deeper consideration of the potential tradeoffs between regulation and competition. This goes to multiple issues in the consultation, including: how high the bar should be set for ‘high risk’, how onerous the obligations should be once the ‘high-risk’ test is satisfied, and whether there could be advantages in an incremental approach to introducing mandatory guardrails.
Against this backdrop, two further points are worth noting.
I. Should Australia align with the EU?
The consultation seems to be proceeding on the basis that Australia is best served by trying to align its regulatory framework with the EU and, to a lesser extent, Canada.3
It may be worth unpacking this, and testing whether the underlying assumptions are sound:
- How useful would such alignment be for Australia in practice? Will the best and/or most innovative international models and products come from companies based in the EU (or Canada)? Or is it more likely that they will continue to come from the US, the current leader in AI?
- If the assumption is instead that the EU’s AI Act will in time become the global standard, is that assumption sound? If the EU turns out to be a regulatory outlier on AI, or if the US takes a different course, will it be optimal for Australia to have focused on EU alignment? It is not clear that US companies will always invest the resources necessary to bring their models and products to the EU. Apple is delaying the EU launch of Apple Intelligence, and, as the Draghi report noted, ‘Young innovative tech companies may choose not to operate in the EU at all.’4 If US companies do not invest in launching in the EU, and Australia is aligned with the EU, Australia may not get the benefit of these models and products either.
II. Regulating open-source models
There are likely to be particularly important tradeoffs when considering how to regulate open-source models. The Proposals Paper touches on such models, but does not ask about their potential importance for competition and innovation, or whether regulating them may pose particular risks to these two policy aims. Again, the EU may be illustrative. Meta is not launching its open multimodal Llama model in the EU given the ‘unpredictable’ nature of its regulatory environment, and Mark Zuckerberg, together with Spotify founder Daniel Ek, has argued that ‘pre-emptive [EU] regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation’. In terms of how mandatory guardrails might apply across the AI supply chain:
- The tradeoffs are likely to be most relevant in regard to obligations on ‘developers’ (in effect, companies that build or work with AI models and applications). Before imposing such obligations, thought should be given to how an open-source model would or could meet them in practice, and whether the harms that may flow from such models could be mitigated in other ways, such as by imposing specific obligations on ‘deployers’ (i.e., companies that supply or use AI to provide a product or service to end users) instead.
- To the extent that general obligations are proposed for developers that would likely be especially onerous in the context of open-source approaches, the Government should weigh the expected harm-prevention benefits of such obligations in an open-source context against expected competition and innovation downsides.
A version of this blogpost was submitted to the Australian Government consultation.
1. Considered in this previous blogpost, ‘Regulating AI: trust and adoption’.
2. Mario Draghi, 9 September 2024, ‘The future of European competitiveness – A competitiveness strategy for Europe’, at page 26. See also page 79 of the accompanying report, ‘The future of European competitiveness – In-depth analysis and recommendations’.
3. See e.g. page 31 of the Proposals Paper.
4. ‘A competitiveness strategy for Europe’, at page 26.