A recent worldwide study revealed that half of newsrooms use artificial intelligence tools, while only 20% of them have approved regulations for their use. Artificial intelligence (AI), a field still largely unexplored by the media in the Republic of Moldova, is gradually becoming part of the toolkit of some local journalists. Caught between enthusiasm for innovation and fear of breaching journalistic ethics, publishers feel increasingly pressed to develop strategies for using tools like ChatGPT without lowering the quality of the information they provide. Media Azi spoke with representatives of several media outlets to find out whether and how they use generative AI tools in their work, or how reluctant they are to do so.
For a year now, the Observatorul de Nord newsroom has been using artificial intelligence software to supply the host’s image and voice for its reports and news. Elena Cobăsneanu, director of the regional publication, attributes this choice to the shortage of professionally trained journalists and the limited finances of the Soroca-based newsroom.
“The Observatorul de Nord team is very passionate about AI tools and innovation in general. We came across this software, called Synthesia.io, by accident; it is still an experimental program. We made an agreement with the production company to use Synthesia as an experiment: it is applied mainly for training sessions in large companies, which, partly for economic reasons, produce courses and present them to users with the help of such software. First, we informed the audience about our intention and about what the platform itself was like. As a newsroom, we kept asking ourselves from the very start what the readers’ reaction would be. Opinions in the audience differ; some insist that people are irreplaceable, no matter what. Still, we believe that Synthesia is an interesting innovation that deserves to be used for journalistic purposes as well,” Elena Cobăsneanu explains.
“We have been using it for a year and are still testing its capacities. We prepare the texts thoroughly, according to all professional requirements, and the software is used to present them. Synthesia can be used in many ways; we apply it for three purposes: creating templates for various columns, presenting news, and producing video reports, which enjoy the greatest popularity. We still have to keep refining the formula for presenting news by means of AI in order to attract more views,” the journalist adds.
When asked about possible errors, the Observatorul de Nord director says that journalists still use the software in a “testing mode,” because it can, for instance, mispronounce the names of localities or stress the wrong syllable in a word. “It’s a robot, after all,” she emphasizes.
SPEED VS. QUALITY
Part of the Agora portal team also uses AI tools in their work. They prefer ChatGPT for summarizing information and for generating headlines, hashtags, and keywords for social media. According to Irina Ghelbur, executive director of the media outlet, as soon as AI tools became widely available, Agora organized a training session to lay down some rules for their use. “Initially, after a two-hour training session, everyone was enthusiastic and trying to use ChatGPT to make their work more efficient. Over time, enthusiasm waned, largely because the transition from human to artificial content generation is still met with reluctance by many journalists, not only at Agora. Some fear they could lose their jobs and, to prevent that, avoid using the tools entirely,” Alex Gurdila, general producer at Agora, adds.
Alexandru Eftode, director of Radio Europa Liberă (REL), emphasizes that, though speed may be an advantage, it must be weighed carefully against the need to ensure that the information published or broadcast is accurate, objective, well-balanced, and meets high-quality journalism standards. “We keep watching it closely and with curiosity, as we believe that artificial intelligence is developing and improving, and we try to integrate it into the newsroom’s activity. We have not decided definitively how to do it yet, but we know for sure that we do not use these tools for writing the texts we publish. Everything produced by our newsroom is the result of human work, not of artificial intelligence, including headlines, titles, and social media posts. The rule also applies to multimedia content, which is prepared entirely by the journalists in our newsroom,” the journalist says.
According to him, his colleagues can use AI platforms for documentation or translation, but every generated response is verified. “We also use Dataminer – a useful tool for monitoring news appearing online in a particular region of the world, on specific topics of interest. We use Dataminer to track the topics the world is discussing, but only as a starting point: after that, the information passes through the human filter,” the REL manager specifies.
“WE ARE AFRAID OF ERRORS”
Representatives of some media outlets are even more reluctant. Ruslan Grabari, TV8 news department director, says that the newsroom does not currently see ChatGPT, for instance, as a reliable tool or a partner. “We are afraid of errors. Besides, the information it provides cannot be verified, whereas the rules of journalism require us to quote living sources, not robots. We do test it periodically, but we do not think it is useful at this point. As the head of the news department, I fear that artificial intelligence, in the medium term, could accelerate the erosion of reporters’ journalistic skills,” he believes. Similarly, Victor Moșneag, editor-in-chief of Ziarul de Gardă, stresses that their newsroom does not currently use artificial intelligence, though that does not rule out individual ZdG journalists using it on their own. “We know that some newsrooms use artificial intelligence tools even for writing the news. We are still exploring the possibilities of using them on a wider scale in our journalistic activity,” the journalist says.
The journalists of the Diez portal share this caution; they say that, in April, they held a team meeting dedicated exclusively to the topic of artificial intelligence. “For now, we have decided not to use ChatGPT for preparing journalistic materials. It is still not entirely stable, and we cannot fully rely on the information it provides. The reason is that the platform compresses information gathered from multiple sources and, with the help of algorithms, delivers a reformulated answer. I expect that, in the future, this platform will keep improving and offer new possibilities, and it could eventually make journalists’ work easier. Yet certain limits must not be crossed: journalists’ work should be eased, not replaced. That principle defines how we talk about applying artificial intelligence in journalism,” Petru Beșleaga, editor-in-chief of the news portal, concludes.
Globally, terms such as “AI media” and “AI journalists” have already emerged, and, according to the London School of Economics, media organizations around the world have begun publishing their principles for using artificial intelligence in their work. For instance, after the Heidi newsroom issued its ethics manifesto on using AI, similar rules appeared at Wired, USA Today, Axios, The Jordan Times and, recently, The Guardian and Politifact. Executives of newsrooms such as The Washington Post, The New York Times, Reuters, and the Financial Times have also published articles on the AI phenomenon and on how contemporary journalism should respond to the challenges these tools bring. A study by the World Association of News Publishers suggests that half of newsrooms use generative AI tools, yet only 20% of them have formulated ethical boundaries for their use.