
"You can’t look at GenAI too fearfully. You have to find as many opportunities as you can."

07 August 2024
Manuela Kasper-Claridge, Editor in Chief, Deutsche Welle

We talk to Manuela Kasper-Claridge, Editor-in-Chief of Deutsche Welle, in our series of interviews with leading industry experts who have contributed to the EBU News Report – Trusted Journalism in the Age of Generative AI.

The lead author and interviewer is Dr Alexandra Borchardt.
 
The EBU News Report 2024 is available to download now.
 

In which ways is generative AI a game-changer for journalism?

As with previous disruptive technologies – the internet, social media, and smartphones – we are expecting generative AI (GenAI) to change people’s media use, and that will bring new opportunities and challenges. We’re aiming to have AI support our work, to automate a number of routine tasks and leave our journalists more time to focus on storytelling and creative work. In some cases, GenAI can help with that creative work too. For example, you can use it as a tool to help you craft interesting, SEO-friendly teasers or as an extra sparring partner to develop story ideas targeted towards specific audiences. But we’re also already seeing a rise in the amount and the quality of misinformation.

Are you delighted or worried about GenAI for your company and in general? 

I would say the word delighted is going too far. I think we have a broadly positive attitude towards GenAI, while at the same time we are considering the limits we need to impose on its use and analysing the risks it poses to journalism. Those limits include never publishing anything generated with AI without it being checked by a journalist, and not publishing photorealistic AI-generated images. However, you have to find as many opportunities as you can. You can’t look at the topic too fearfully.

What kind of mindset and behaviour do you encourage in the newsroom?

We are striving to find the right balance between freeing ourselves to test as much as possible and making sure we use the tools responsibly. Colleagues are always free to make suggestions. However, we have committed ourselves to always having a human being in control of every piece of journalism we produce. We are also paying very close attention to data privacy. Data protection is a very big topic in Germany, and we have to make sure everyone who works with AI tools has undertaken the relevant online training. Our Legal Department has published guidance on using AI chatbots. 

Is this more of a top-down or a bottom-up endeavour?

I want our journalists to try things out. I want people to discover things and tell us what they think works. Our teams that have tested chatbots and AI tools so far have collected a significant amount of information demonstrating what works and where they see the opportunity to have AI support their work. That feedback is so valuable. We have multidisciplinary teams working on and giving feedback on projects. You need people with different backgrounds and different experiences working on projects and their prioritization together. You also need to share expertise. We have a DW-wide AI Circle that meets once every two weeks and brings colleagues from multiple departments into project groups. My Editor-in-Chief’s Council is also following the subject closely. 

What’s your favourite GenAI product/use case – in your company or beyond?

For an international broadcaster that publishes in 32 languages, the rapid development in AI-supported translation and voicing is very exciting. It has the potential to save us a lot of time translating and revoicing our journalism from one language into another. Still, these translations and voiceovers would need to be checked by an editor. We developed an AI-powered content adaptation platform, plain X, which helps with this. It is integrated into our editorial systems, bundles various tools in one interface, and offers many options for transcription and subtitling of videos, as well as other AI-based services.

The potential to use AI to make more of our content completely barrier-free is also exciting. More subtitling is the obvious way to go, but having quality AI sign language could be very useful in the future.

Many worry about bias, particularly in AI-generated images. Do you see a danger in further scaling stereotypes or an opportunity to fight bias with AI?

We need more data on that. In our test cases we see a huge bias in AI-generated illustrations. For example, if the topic is domestic violence in the Arab world, women are always pictured with a hijab.

What is the biggest challenge in managing AI in your organization? 

It’s a massive topic with a lot going on all at once. People get information about developments from different sources, so it's very difficult to keep everyone at the same level of understanding. Communication between people and with the wider organization is vital to let people know what we are working on.

Have you made mistakes with AI strategy?

We say that we want to work quickly and flexibly, but we have over 3,000 employees and, as is the nature of large organizations, we sometimes aren’t able to start projects or react to developments with the agility that we would like. Communication can be difficult. Obviously not everyone is working on AI projects, and some people hear about what is going on through the grapevine. This can be unsettling for colleagues who have heard about how AI might be coming for everyone’s jobs – which I do not think is the case.

What about talent? Some expect that journalists will research the facts while AI does the storytelling, personalized for different audiences. Do we need different types of journalists in an AI-supported media world?

What we still need is journalists on the ground who talk to real people and deliver stories about humans. These are the kind of stories that AI cannot deliver, and they are classic skills of journalism we cannot lose. Journalists will have to learn about AI on top of that. They will need to write prompts, identify sources, understand AI, and identify what is real and what has been generated or faked. It is very important that we train our journalists in this. Young colleagues will grow into this world naturally. I’m a mother of three children; they all know how to write prompts.

Do you have AI guidelines – and what’s special about them?

We have strategic guidelines that were issued by our Business Management, and then my Council and I released editorial AI guidelines. They outline our position on GenAI and explain our rules. For example, we state clearly in the introduction that human beings will always be in control of our journalism, we outline exactly what kind of information may and may not be used in prompts, and we link people to the necessary training. We also outline what will guide our future approach – transparency, control, and data security. As with all our editorial guidelines, it is a 'living document' that can be updated at any time.

Do you think journalism will develop from being a push activity where news is directed to the audience by the media to a pull activity where people choose customized news and formats to fit their needs?

As and when chatbots become the main way that people find their information, their relationship with news will change. It’s likely they will be able to ask questions about news events and stories much more easily, and more context will be at everyone's fingertips. 

Many people are worried about misinformation. Are those fears justified or overblown?

I think those fears are very real. It’s clear that the quality of fake news and deep fakes will only get better, and they will become easier to produce. It will take effort to counter false narratives as they spread. It will likely take a combination of good journalistic training and helpful technology. We will also need to reassure audiences about what is real and how they can trust our information.

Do you think GenAI will impact audiences’ trust in journalism?

I think that in the age of chatbots, being able to show we have reporters on the ground, correspondents around the world, talking to people and telling human stories, will be extremely important for maintaining audiences' trust in quality journalism. 

Deutsche Welle is operating globally. Do you see differences in the acceptance and uptake of AI around the world?

The internet is not as fast or as affordable in every region we cover. The biggest divides are age and wealth. If you are younger, you are more open to new technologies; if you are wealthier, you have better access. In Africa, for example, people living in cities have good access to the internet, but it is mostly expensive.

Some of the dynamics are beyond the influence of the media industry. In which ways do you think AI should be regulated?

Transparency is very important, as is human oversight. Ideally, these would become a standard for the future of AI, especially for news and media. We want transparency about where AI has been used to produce content, and we want chatbots to be able to reliably link to information sources. The EU's AI Act envisions some of this for high-risk AI systems, but we need as many actors as possible to be obliged to uphold transparency.

What is missing from conversations in the current hype?

I think the constructive approach to GenAI is missing too much. The companies developing the large language models are obviously focusing on the positives and the opportunities it offers. At the same time, there are many people who are focusing entirely on the negatives, from the amount of misinformation that may be created to the possibility that AI systems could turn against humanity. We need to have more balanced conversations about it.


 

Contact

John O'Callaghan
Head of Content Communications
ocallaghan@ebu.ch