Image: Tara Winstead via Pexels – The current state of generative AI lacks critical abilities that would allow it to play a more significant part in journalism, despite its potential usefulness in areas such as information synthesis, editing, and informing reporting, says AI editor Madhumita Murgia.
By Dominic Naidoo
The emergence of ChatGPT, OpenAI's chatbot built on its Generative Pre-trained Transformer (GPT) family of artificial intelligence models, has immensely changed the technological working world, for better or for worse, depending on who you ask.
Yes, AI has been around in many forms for decades. Google’s auto-fill search feature is AI, the autocorrect on your smartphone is AI, and most of the filters on Instagram, TikTok or Snapchat are forms of AI. So why has ChatGPT become such a prominent feature of life today?
Only two months after its launch in late November, the chatbot had 100 million monthly active users in January, according to data from Similarweb.
A study from Swiss bank UBS noted that “in 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app”.
OpenAI, which owns and hosts ChatGPT, recently became one of the 50 most visited websites in the world, according to Digital-adoption.com.
For the Reuters Institute, Francesco Marconi, Madhumita Murgia and Charlie Beckett discussed the impact of generative AI on the news industry.
Since ChatGPT was launched, journalists around the world have been discussing its potential impact on the news industry and on the writing profession in general.
Some of the questions asked on social media pages and discussion forums include: How many journalists will be replaced by the rise of generative artificial intelligence? How fast will this process take place?
Which journalists will be most vulnerable to this kind of disruption? And should we see ChatGPT as a challenge or as an opportunity to solve some of the problems the news industry faces?
Francesco Marconi is the co-founder of the real-time information startup AppliedXL and works as a computational journalist. He has held positions as the co-lead for AI and news automation at the Associated Press and the R&D Chief at The Wall Street Journal.
In 2020, Marconi wrote Newsmakers: Artificial Intelligence and the Future of Journalism, a book addressing AI and journalism.
Madhumita Murgia has taken on the role of artificial intelligence editor at the Financial Times. She most recently served as the FT’s European technology correspondent.
JournalismAI is led by Professor Charlie Beckett of the London School of Economics (LSE) and its journalism think tank, Polis. The project not only publishes a report on the state of AI in the news industry, but also hosts a fellowship programme for journalists and engineers, provides training for small newsrooms, and collects and organises instances of AI’s use in the news industry.
Many publications already make minimal use of AI to improve their processes. Others are envisioning wholly novel frameworks enabled by the technology.
Since last November, several creative uses for this technology have been proposed, with journalists frequently testing the writing and editing abilities of chatbots.
Murgia speculates that the widespread interest in ChatGPT and similar tools stems in part from their user-friendliness and their ability to operate entirely in natural language.
Even if it is only a highly sophisticated form of predictive technology, “it feels like there is an intelligence there,” she explains.
Since the language models these tools use require human guidance, the content they produce is derivative rather than original. The model learns by exposure to a certain set of inputs before producing novel results based on those inputs.
Murgia argues that the current state of generative AI lacks critical abilities that would allow it to play a more significant part in journalism, despite its potential usefulness in areas such as information synthesis, editing, and informing reporting.
“Given its current state, it is not a completely novel concept. It makes use of previously collected data,” Murgia contends. “It lacks both the analytic capacity and the voice.”
She says that this means generative AI cannot keep up with reader demand for deeper dives into topics at publications like the Financial Times.
Both Google’s and Microsoft’s new AI-powered tools have made factual errors in public demonstrations, which is another barrier to generative AI playing a larger role in journalism. ChatGPT, for example, may point readers to a nonexistent reference.
“These models often have difficulty generating accurate and factual information regarding current events or real-time data,” remarked Marconi.
The current state of artificial intelligence technologies makes it clear that they are not appropriate for breaking news reporting, which is a time-sensitive, resource-intensive endeavour that needs meticulous fact-checking and cross-referencing.
Professor Beckett warns journalists against using the new tools without human oversight. Artificial intelligence, he explains, is not meant to replace humans entirely, but rather to help them do their jobs more efficiently so that they can focus on what they do best.
“We reduce the dangers associated with human journalism by editing it. It is the same with AI. Make sure you are familiar with the tools at your disposal and the dangers they pose. Do not put too much stock in the technology.”
Marconi adds that the media should adapt to the technology by recognising and working around its existing flaws. “Journalists should direct their attention to creating event detection systems that can record and process real-time information because of the limits of large language models like GPT,” Marconi said.
He claims that combining event detection technologies with large language models will make a new kind of journalism possible. Marconi points to his own firm, AppliedXL, as an example of an event detection system, calling it “an event detection company where journalistically minded people work together to anticipate the news”.
By combining machine intelligence with investigative journalism techniques, he hopes to help firms avoid embarrassing public disclosures of issues in clinical studies. Both Murgia and Marconi highlighted the importance of journalists in information synthesis, contextualisation, and narrative identification. For Marconi, that task is only getting harder.
Thanks to the internet, sensors, mobile devices, and satellites, there is now an overwhelming amount of data available. We are creating more data than ever before, he argues, and it is harder than ever to sift through it all and find what is useful.
According to Marconi, this is an area of journalism where AI may play a significant role in reducing human labour demands. “AI should not only be seen as a tool to generate more content but also to help us filter it,” he argues. According to some estimates, the majority of web material may be created automatically by machines by 2026.
As a result, we have reached a tipping point where our efforts must be directed towards developing systems that can automatically sort through irrelevant data, separate fact from fiction, and emphasise what really matters.
Marconi thinks that journalists should help create AI technologies, for instance by applying journalistic values to the emerging technology and building editorial algorithms. The media, he argues, “must take part in the AI revolution”. Since they already have text data for training models and ethical guidelines for constructing dependable and trustworthy systems, media businesses have a leg up on the competition and can become big players in the sector.
Closer to home, AI software has already begun replacing humans in certain professions such as copywriting and graphic design.
A quick poll on the Facebook group Freelance South Africa found that 10 freelancers had lost work to AI, 16 had begun leveraging AI in their own work, and 24 had not yet been affected by the technology.
Andrea Altgayer, a South Africa-based creative writer and translator, experimented with AI tools and found that their output can be repetitive and formulaic. “When it is over-utilised, readers can pick up on it. Everything begins to look and sound the same,” Altgayer said.
While Altgayer currently avoids using AI professionally, she admits that it can be helpful with basic grunt work when deadlines are looming. “I only use it as a last resort, but I would still need to edit when necessary.”
Media specialist and writer Siphiliselwe Makhanya commented on Facebook that ChatGPT is, in her opinion, “like a more sophisticated version of Google”, but she is concerned about the level of bias found in the tool.
“At this point, I think you have to actually know more than it to be able to use it effectively. For example, I gave ChatGPT a list of words to alphabetise and it made three simple mistakes.”
“It also claimed that it couldn’t give me a list of one cultural item from each African country, but proceeded to give me two items each for European countries when all I did was change the word ‘Africa’ to ‘Europe’ after copying and pasting my first request,” Makhanya said.
With the unprecedented pace and intensity of advancements in AI, it will not take long before such bugs are fixed. For now, though, the technology cannot replace the feel and colour of “human-generated” journalism.
Dominic Naidoo is an environment activist and writer