Regulating AI: trust and adoption

Posted August 10, 2023
by Ben Hooper

Trust and takeup

Australia is consulting on the regulation and governance of AI. The government’s Discussion Paper, and many of the consultation questions, appear to be based on an assumption that interventions to increase public trust in AI will increase the takeup of AI.

If true, this assumption makes policymaking easier: actions can be taken to mitigate AI’s risks without worrying about whether they will simultaneously dampen AI’s takeup. But if this assumption is flawed, Australia could end up intervening in ways that increase trust but reduce takeup. Stumbling in this way could have significant implications for Australia’s economy and productivity growth.

So a lot may ride on this assumption. Yet the issue is not much discussed in the Discussion Paper, and the evidence base for the assumption is lacking. Further analysis is needed to identify what drives adoption - or undermines it - and whether tradeoffs may need to be made between de-risking AI and supporting its takeup.

How might intervention reduce takeup?

Increased regulation may raise barriers to entry.[1] So a more interventionist regime could make it harder for Australian firms to launch AI products, in turn reducing competition and choice - even as public trust increases. With less competition and choice, the ‘market’ for AI might grow more slowly.

In an echo of the Discussion Paper’s approach, Europe’s GDPR has traditionally been justified in part as a means to increase public trust and so enable the growth of the digital economy.[2] Yet in practice the GDPR appears to have done the opposite: it has reduced the revenues and profits of affected businesses, with small companies bearing the greatest impact.

For cutting-edge AI, the availability within Australia of products built elsewhere may be the bigger issue. The delays that the EU is experiencing in gaining access to AI products illustrate the risk.[3] An important factor is likely to be the extent to which Australia’s AI regime aligns with that of the exporting jurisdiction.

For the foreseeable future, the US is likely to remain the clear leader in AI. So an Australian regulatory regime that significantly overshoots the US’s approach risks US firms delaying (or forgoing) product launches in Australia. And the same point could apply to other jurisdictions - such as, potentially, the UK - where AI-based entrepreneurship may end up flourishing. The result might be that even as Australians become more willing to trust AI, they lack access to the best and most advanced foreign-made AI products, leading to slower uptake overall.

What evidence is there that interventions to increase public trust in AI will increase its takeup?

The Discussion Paper touches on the relationship between mitigating technological risks and fostering innovation and adoption, before stating that ‘these are not mutually exclusive’. It then states that ‘proportionate’ and timely governance responses will build the trust needed to reap AI’s ‘full benefits’ - without further discussion of how proportionate responses are to be identified.

A footnote refers to a statement by Australia’s Productivity Commission that ‘trust is a central driver for widespread acceptance of AI’. The relevant Productivity Commission report in turn relies on a 2020 study, ‘Trust in Artificial Intelligence: Australian Insights’. (The UK’s recent policy paper, ‘A pro-innovation approach to AI regulation’, similarly relied on a broader version of the study that also covered the UK.)

The 2020 study states that trust is ‘the central driver’ of AI acceptance. But the study doesn’t exclude other drivers. Nor does it appear to explore whether the various drivers of AI acceptance could ever conflict.

In addition, the 2020 study is based on survey evidence. For instance, members of the public were asked whether they would be more willing to use an AI system in various scenarios. But adoption is a function of revealed rather than stated preference (in other words, what people actually do rather than what people say they will do). In the digital context especially, stated and revealed preference may diverge.

The so-called ‘privacy paradox’ is a well-known example (in practice, consumer behaviour does not appear to match stated preferences regarding privacy). More relevantly for the current debate, ChatGPT broke the then-record for consumer app adoption by gaining 100 million users within two months of launch. Yet it did so against the backdrop of existing regulatory approaches to AI - suggesting that, whatever people may say, additional regulation may not be needed to support consumer takeup of compelling AI products.

ChatGPT’s impressive growth can be contrasted with the slower progress of Australia’s data portability regime, the Consumer Data Right (CDR). The CDR’s rollout has been accompanied by a sustained regulatory focus on building public trust. But, as the Assistant Treasurer and Minister for Financial Services, the Hon Stephen Jones MP, recently observed, consumer takeup of the CDR has in practice proved to be a major problem. UK Open Banking has fared much better, and this seems to be due more to the inclusion of payment initiation from the outset - which enabled more compelling products to be built - than to greater regulatory efforts to build trust.

The 2020 ‘Trust in Artificial Intelligence’ study (like the broader version relied on by the UK Government) provides valuable information on public attitudes towards AI. But it does not support a blanket assumption that interventions to increase public trust in AI will increase the takeup of AI.

Conclusion

Measures to increase public trust can be critical enablers of technological adoption. The history of GM food in the UK is a stark example of how misjudging the level of public concern can lead to a backlash that sets adoption back by decades.[4] But this does not mean that all interventions to mitigate risks and grow public trust will also encourage takeup. Europe’s GDPR may ultimately have hampered digital growth.

Determining which policy levers to pull to de-risk AI but also support its takeup requires further empirical analysis. It may be that all contemplated interventions that aim to grow public trust will be net positive in terms of adoption, but Australia should not simply assume this before proceeding further.[5]

  1. See for example the UK Competition and Markets Authority’s ‘Regulation and Competition: a review of the evidence’.
  2. See for example the European Commission’s original Proposal for what became the GDPR, COM(2012) 11 final.
  3. For example, Anthropic’s latest chatbot, Claude 2, has launched in the US and UK but not the EU.
  4. See the UK Parliamentary Office of Science and Technology’s “The ‘Great GM Food Debate’ - a survey of media coverage in the first half of 1999”.
  5. A version of this blogpost was submitted to the Australian Government consultation.