Users are struggling to understand the multitude of risks and responsibilities of AI usage

25 January 2024 | Regulation and policy

David Abecassis

The speed of change in AI brings uncertainty. End users are not (yet) subject to AI-specific laws, but existing legislation already imposes responsibilities and liabilities on them. The challenge for individuals and companies lies in capturing the evolving benefits that AI offers without breaching rules on fairness, data protection and copyright, among others. Managing these risks requires policies, tools and know-how within each organisation, which will be especially important for operators of critical infrastructure.

Transcript

My name is David Abecassis, I'm a Partner at Analysys Mason and I specialise in digital markets, across strategy, regulation and policy.

What should players in the TMT industry be aware of in terms of AI regulation?

I think at this point, what characterises AI is the constant change and the uncertainties that flow from that. From a regulatory perspective, that creates specific challenges. What policymakers are focusing on today is AI models and the suppliers of AI models: your OpenAI, your Inflection and so forth. The other part of the market is AI users. For those users, their use of AI is still governed by the laws that are currently in place; there is nothing AI-specific that they have to comply with. So understanding what they need to be aware of, what the risks are, and how they can comply with their contracts, copyright laws and data protection laws while still benefiting from AI is really the challenge that we want to help them grapple with today.

What steps should players in the TMT industry take to manage the regulatory risk of AI?

When our clients think about the risks and, in particular, the regulatory risks associated with their use of AI, I think there are three approaches that they need to take into account. The first question that they need to answer is: what tools are they going to use, and for what purpose? Each AI tool has a slightly different purpose and a slightly different set of benefits and risks, so understanding that really well is important. The second point is to understand how the use of AI interacts with their existing responsibilities, liabilities and risks. That's particularly important in the context of data protection and data security. The third point is: how do you manage those risks? That's something that is coming to the fore in some of the policy discussions around AI regulatory regimes, but really understanding how you put in place policies and infrastructure, including people who are able to understand, deploy and manage the risks associated with AI tools, is essential.

What are the risks of AI regulation for the telecoms and technology industry?

So for telecoms and technology providers looking at the risks associated with the regulation of AI products, I think there are two dimensions that are important today. The first relates to critical infrastructure resilience and security. Telecoms operators have been subject to some form of critical infrastructure regulation for a long time. But as they look at ways to use cloud services, and AI in particular, to manage more and more functions of their operations, they have to be extremely mindful and extremely careful about how that interplays with their responsibilities and liabilities as critical infrastructure providers. The second dimension relates to data protection. That's particularly topical for, again, telecoms operators, but it really applies to every company in Europe and in many other countries. Data protection regulation is very strict, and it's now being applied very consistently and pretty aggressively by some data protection authorities, so making sure that your use of AI doesn't infringe your data protection obligations is also extremely important.

What are the risks of AI regulation for the media industry?

In the media industry, there are two areas that are of interest to regulators and policymakers and where media companies need to be aware of the risks. The first relates to creation. Generative AI in particular is already being used for content creation, and that raises three particular issues: one is around copyright, the second is around the risk of deepfakes, and the third is around the fairness of content recommendations that are augmented by AI. The second area is an opportunity for media players, and in particular for video platforms, which now have an obligation to ensure that certain types of content are not shown to end users: using AI for content moderation. There's a consensus emerging that AI is really necessary to do this at scale, and we look forward to exploring this with our clients.

What is your one key message to the TMT industry right now?

The one constant that AI is bringing to the fore is change, and the uncertainty associated with the really fast pace of change that we're seeing in AI models and in the application of AI. So having an approach that's mindful of both risks and opportunities, and a framework in which you can work that accommodates that change, is, I think, very important.

Author

David Abecassis

Partner, expert in strategy, regulation and policy