GUIDES published on 13 Sep 2024

AI regulation: Are public service media’s needs being met?

The European Commission has signed the Council of Europe Framework Convention on Artificial Intelligence (AI Convention) on behalf of the EU. This marks the end of a years-long process during which the EU and the Council of Europe worked in parallel to ensure that their respective flagship legal instruments on artificial intelligence complement each other – a good moment to take stock of their combined impact on public service media’s interests.

The European Union’s Artificial Intelligence Act (AI Act) is one of the first comprehensive AI regulations in the world. It imposes concrete obligations on AI providers and deployers for specific uses of AI and will apply directly in the domestic law of EU Member States. As a product safety regulation, it aims to promote the safety and reliability of the riskiest AI systems within the EU internal market: it bans certain AI systems considered to pose unacceptable risk, imposes risk management requirements on high-risk AI systems, and imposes disclosure requirements on AI systems considered to pose limited risk. However, certain loopholes and a potentially weak enforcement structure could limit the AI Act’s overall effectiveness in practice.

The Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (AI Convention) takes a human rights-based approach. It requires signatory States to ensure that AI uses adhere to international human rights standards and do not undermine democracy or the rule of law. As the first international AI treaty, it provides a framework against which to assess legislation such as the EU’s AI Act, but also other domestic laws; accordingly, it has the potential to fill regulatory gaps. But the instrument does not create any new AI-specific obligations and, like the AI Act, contains some loopholes. Its main achievement lies in creating a regular forum for State Parties – the so-called “Conference of the Parties” – to exchange views on AI policies and developments and to supervise and facilitate the implementation of AI regulation in line with existing international standards.

Fighting disinformation and preserving information integrity online

Both instruments recognise disinformation as one of the main threats that AI systems pose to our societies, but offer little by way of remedy. The AI Act introduces disclosure requirements to identify AI systems interacting with individuals and to mark and label AI-generated content. These rules may help individuals better assess certain information, but the actors behind malicious disinformation campaigns are unlikely to abide by them.
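
To make the marking obligation concrete, the sketch below embeds a simple provenance tag in an image’s metadata. It is a deliberately simplified, hypothetical example in Python using the Pillow library; the tag names (“ai_generated”, “generator”) are our own invention, not a standard, and real marking schemes under discussion – such as cryptographically signed content credentials or watermarks – are designed to be harder to remove, precisely because a plain metadata tag can be stripped in seconds.

```python
# A hypothetical, deliberately simplified sketch of machine-readable
# marking: embed a provenance tag as a PNG text chunk with Pillow.
# The tag names are illustrative, not part of any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image.
img = Image.new("RGB", (64, 64), "white")

# Attach a simple machine-readable label.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical identifier
img.save("labelled.png", pnginfo=meta)

# Any downstream tool can read the label back...
print(Image.open("labelled.png").text.get("ai_generated"))  # -> true

# ...but a bad actor can remove it just as easily, which is why
# labels alone do little against deliberate disinformation:
stripped = Image.open("labelled.png").copy()
stripped.save("stripped.png")  # re-saved without the metadata
print(Image.open("stripped.png").text.get("ai_generated"))  # -> None
```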

Importantly, genuine transparency requires more than just labels, as the recent EBU News Report on Trusted Journalism in the Age of Generative AI shows. Mere labels can even be counterproductive: they undermine trust in the content without meaningfully informing the public about how AI contributed to the journalistic process. The exception allowing media outlets not to label AI-generated texts therefore provides welcome flexibility for media services to develop and adjust their transparency policies to their audiences’ needs. On the other hand, the same exception insulates many “fake news” outlets from having to disclose their use of generative AI.

Overall, the AI Act may do little to curb disinformation and promote a healthy media ecosystem. One of the main challenges of the coming years will be to develop effective measures against the harmful effects of mis- and disinformation that still preserve a vibrant free speech environment. The Conference of the Parties established by the AI Convention could provide a new forum to facilitate the development and implementation of such solutions with global reach. In doing so, it could build on the Council of Europe’s extensive standard-setting work and expertise on free speech and media freedom. For instance, a Committee of Experts was formed this year to provide guidance on the impact of generative AI on freedom of expression.

Protecting journalistic sources from AI-enhanced surveillance

In recent years, journalistic sources have been increasingly threatened by technological advances in spyware and by expanded surveillance in the name of national security [1]. In contrast to comparable EU laws, the AI Act provides a blanket exemption for activities related to national security [2]. Although it limits the ability of law enforcement to use profiling and biometric identification, these limitations could be circumvented where uses are justified on national security grounds. While national security remains a competence of the Member States, such a blanket exemption might enable them to deploy AI systems in the national security context without any scrutiny from EU institutions.

By contrast, it remains unclear whether the use of AI systems for national security purposes is covered by the AI Convention. Although parties are not required to apply the entire Convention to activities related to national security, Article 3(2) records “an understanding” that national security activities must respect international law. In a press statement, the Council of Europe suggested that this wording implies an obligation on State Parties “to ensure that these activities respect international law and democratic institutions and processes.” It remains to be seen whether the provision will also be interpreted, for instance, as allowing the Conference of the Parties to assess potential violations in cases of reasonable suspicion.

Human rights law fully applies in the national security context, but supervision and enforcement at the national level are notoriously weak. Additional scrutiny and avenues of redress at EU and Council of Europe level may be desirable in the future to prevent and remedy Member States’ abuse of AI-enhanced surveillance tools. For the media, stronger protection of journalistic sources is needed to counter the heightened chilling effect that AI’s increased surveillance power will likely cause.

Checking big tech dominance and promoting a pluralistic digital public sphere

Taking a product safety and a human rights approach, respectively, both instruments merely provide basic guardrails for AI uses. As a result, they do little to address the broader upheaval that AI will likely cause in the media sector.

Rapid AI developments risk sharply increasing both the proliferation of synthetic content and big tech’s control over the public’s access to information. Further-reaching regulatory intervention will be necessary to ensure that public service media can continue to fulfil their democratic mission by reaching their audiences with pluralistic, high-quality content. Solutions might be found – to name only a few – in competition law remedies or ex ante regulation, in rules ensuring the prominence of general interest content, and in providing public service organisations with the resources and computing power needed to champion AI innovation in the public interest.

One area where significant regulatory developments could somewhat rebalance the relationship between big tech and public service media is copyright. The AI Act introduces obligations for AI providers to make publicly available a sufficiently detailed summary of the content used to train their models and to adopt internal policies that respect copyright law, including rightsholders’ decision to reserve their rights (the so-called “opt-out”). Rightsholders and policymakers have discussed at length how to implement the opt-out in practice. The most common mechanism today is to block web crawlers through machine-readable instructions published on one’s website, as sketched below. Unfortunately, this method has several downsides for rightsholders: blocking crawlers risks compromising the website’s indexing on search engine platforms, and the instructions merely express the opt-out – they cannot actually prevent AI systems from scraping the content.

Media organisations and other rightsholders are working on alternative solutions, but for now neither copyright law nor the AI Act provides a vehicle for enforcing them. Ensuring effective respect for copyright is also a human rights issue. The AI Convention could therefore again provide a useful forum for States to develop and promote a common global approach that strikes a fair balance between the interests of AI providers and the rights of copyright owners like PSM.
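
The sketch below makes the declarative nature of that opt-out concrete. It is a minimal example in Python using only the standard library; the disallowed user-agent tokens (GPTBot, Google-Extended, CCBot) are real, publicly documented AI crawler identifiers, but the rule set and URL are illustrative placeholders rather than a recommended configuration.

```python
# A minimal sketch of the robots.txt-based opt-out, using only the
# Python standard library. The user-agent tokens are real, publicly
# documented AI crawlers; the rules and URL are illustrative.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant AI crawler checks the rules before fetching a page:
print(parser.can_fetch("GPTBot", "https://example.org/article"))     # False
# Ordinary search indexing remains allowed for everyone else:
print(parser.can_fetch("Googlebot", "https://example.org/article"))  # True

# The catch: robots.txt only declares the opt-out. Nothing here
# technically prevents a non-compliant crawler from fetching anyway.
```

Note the trade-off mentioned above: a publisher that disallowed Googlebot itself, rather than the AI-training token Google-Extended, would also disappear from search results – and even a carefully scoped file binds only those crawlers that choose to honour it.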

For additional background on the emerging AI regulatory landscape and public service media’s concerns, see our overview and our four principles on AI and copyright.

Contact Details

Sophia Wistehube
Legal Counsel
Legal & Policy
wistehube@ebu.ch

Endnotes

[1] The protection of journalistic sources, as upheld by the European Court of Human Rights (Goodwin v. the United Kingdom [GC], no. 17488/90, 27 March 1996; Big Brother Watch and Others v. the United Kingdom [GC], nos. 58170/13, 62322/14 and 24960/15, 25 May 2021), is increasingly under attack in many Council of Europe member States as well as in the European Union; see, e.g., Pascal Hansens, Harald Schumann and Ariane Lavrilleux, “Hardline EU governments in late push to legitimise surveillance of journalists” (12 December 2023).

[2] For instance, Article 23 of the General Data Protection Regulation (GDPR) limits States’ ability to restrict the obligations and rights provided in the GDPR for national security purposes. By contrast, Article 2(3) of the AI Act does not impose any conditions on AI uses for national security purposes and even exempts private actors carrying out national security activities. Relatedly, in Privacy International and La Quadrature du Net and Others, the Grand Chamber of the CJEU ruled that the e-Privacy Directive precludes the bulk retention and transmission of traffic and location data unless Member States can demonstrate a serious threat to national security, thereby asserting the CJEU’s authority in national security matters.