With a democracy-defining election on the horizon, it has never been more important to be able to trust the news.
Republican presidential candidate Donald Trump clashed with Democratic Vice President Kamala Harris in a feisty September 10 debate on ABC News, one that saw the former president's false claim that Haitian immigrants in Springfield, Ohio, were eating city residents' pets fact-checked in real time.
“Now he wants to do transgender operations on illegal aliens who are in prison,” the 78-year-old also said, referring to Harris, who had written in response to a 2019 ACLU questionnaire that “transgender individuals who depend on the state for treatment [should] receive the treatment they need.”
The Republican has continually doubled down on his claim that he won the 2020 election against Joe Biden, despite election officials across the country confirming that nothing was rigged. ABC News moderators David Muir and Linsey Davis debunked several of Trump's claims throughout the lengthy debate, including his assertion that some American states allow the “execution” of children after birth. Davis quickly told Trump, “There is no state in this country where it is legal to kill a child after birth.”
Verification on social media, however, has proven neither as rigorous nor as instantaneous. Trump, for example, has also been criticized in recent weeks for sharing fake AI-generated images of Taylor Swift endorsing him in the 2024 election (the pop star later declared her intention to vote for Harris on her official Instagram account). That didn't stop the images from circulating for weeks on Elon Musk's X and Mark Zuckerberg's Facebook.
A World Economic Forum survey called disinformation the biggest global risk of the next two years, ahead of climate change and war. And this year it's not just Americans who are voting for their leader: nearly half of the world's population is voting in elections across more than 50 countries. Deepfakes, fake news and propaganda now pose unprecedented content-verification challenges for newsrooms and trusted media brands.
This puts pressure on some of the world's largest news organizations to get authentic, truthful journalism in front of mass online audiences. According to the Pew Research Center, just over half of U.S. adults (54%) get at least some of their news from social media. In a world where even a former president and current presidential candidate can repeatedly lie on the debate stage, amplifying the truth remains a priority.
Now several major global media organizations are embarking on a cross-industry initiative, led by the BBC and CBS News, called “Design Your Weapons in the Fight Against Disinformation.” Run under the IBC Accelerator Media Innovation program, its goal is to help news organizations confront misinformation in the media by joining forces, for the first time ever, to tackle the problem.
It's not just the BBC and CBS News taking part. The initiative also includes the Associated Press (AP), Paramount, Reuters, ITN, Globo, Al Jazeera and Channel 4. “Look, disinformation is nothing new,” Tim Forrest, ITN content editor, tells The Hollywood Reporter. “But what we have seen in recent years is a growth in falsehood. Journalists and news organizations have always asked the question: Is this true? That remains the case today, but more and more often, going forward, we will also have to ask ourselves: Is this real? That's a change all of us need to take into account.”
Nadia Ahmed, AP's social media and user-generated content editor, adds: “How can we help viewers understand what is fact and what isn't, and what is real and what isn't?” Simply put, they are working to make sure that falsehoods are filtered out before they reach you, their audience.
Mark Smith, head of the IBC Accelerator program, and Judy Parnall of the BBC. (Photo: IBC)
The initiative is three-fold, starting with provenance: where a claim, image or video originated. “Provenance is similar to a food nutrition label,” explains Judy Parnall, the BBC's head of standards and industry. “We're used to knowing what's in our food, but what's actually in your media? Understanding where it came from, who is behind it and how it was made. So you're not just blindly receiving a video and saying, ‘Well, this must be real, because it's a video I received.'”
Parnall says the BBC has also seen a rise in “shallowfakes,” which require far less editing than a deepfake but can still completely change a story. To address this, the news organizations want to establish which tools are available to investigate a story's provenance. “What are the different approaches? Do you need to use blockchain? How complex is it to use?” she says.
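To make the “nutrition label” idea concrete, here is a minimal sketch, in Python, of how a verification tool might summarize provenance metadata attached to a media file. The `Credentials` structure and its field names are hypothetical stand-ins for this illustration; real provenance systems such as C2PA content credentials carry cryptographically signed manifests that are far richer than this.

```python
# Illustrative sketch only: summarizing provenance metadata as a
# "nutrition label." The Credentials fields are invented for this
# example; real C2PA-style manifests are signed and far more detailed.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Credentials:
    producer: Optional[str]          # who created or published the media
    captured_with: Optional[str]     # device or software that made it
    edits: list = field(default_factory=list)   # recorded edit operations
    signature_valid: bool = False    # does the cryptographic signature hold?

def summarize_provenance(creds: Optional[Credentials]) -> str:
    """Turn raw provenance metadata into a short, human-readable label."""
    if creds is None:
        return "No provenance data: treat the origin as unknown."
    if not creds.signature_valid:
        return "Provenance present but signature invalid: possible tampering."
    lines = [f"Produced by: {creds.producer or 'unknown'}",
             f"Captured with: {creds.captured_with or 'unknown'}"]
    if creds.edits:
        lines.append("Recorded edits: " + ", ".join(creds.edits))
    return "\n".join(lines)

# Example: a clip with valid credentials and one recorded edit.
clip = Credentials("Example Newsroom", "Phone camera app",
                   ["color correction"], signature_valid=True)
print(summarize_provenance(clip))
```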
If provenance information isn't available, Ahmed says, detection becomes vital: detection tools have to step in to decipher what is real or true. “It's a very, very new technology,” Ahmed says. “Because the creation side is so new, detection tools are, to be honest, struggling to keep up. In most cases there is not enough money behind them. So what we have been doing through the Accelerator initiative is analyzing the wide variety of tools available on the market.”
She continues: “What often turns out to be a good approach is to use a combination of these tools to get as much contextual information as possible, alongside a manual verification process, so you don't have to rely solely on a machine to tell you that this is true or this is false.”
New AI-powered technology can now be used to quickly determine whether an image has been altered or tampered with.
Ahmed says: “You're looking at the results the machine gives you with a percentage confidence rate [how confident the machine is in its findings]. We can say that there are some issues and artifacts around the face in this photo, or the voice in this video, that appear to be generated by artificial intelligence. And then journalists, or hopefully in the future any user, will be able to go in, look at that information and make a confident editorial decision.”
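As a rough illustration of the workflow Ahmed describes, the sketch below combines hypothetical confidence scores from several detectors and routes borderline cases to manual review rather than trusting any single machine verdict. The detector names and thresholds are invented for the example and do not reflect any specific tool the initiative uses.

```python
# Illustrative sketch only: triaging media based on the confidence
# scores of several AI-detection tools. Names and thresholds are
# invented; a real desk would tune these against known test material.

def triage(scores: dict, flag_at: float = 0.5, alert_at: float = 0.8) -> str:
    """scores maps detector name -> probability the media is AI-generated."""
    if not scores:
        return "no signal: manual verification required"
    top = max(scores.values())                # most suspicious detector
    avg = sum(scores.values()) / len(scores)  # overall picture
    if top >= alert_at and avg >= flag_at:
        return "likely synthetic: escalate to the verification desk"
    if top >= flag_at:
        return "inconclusive: send to manual review with detector notes"
    return "no strong AI artifacts found: continue standard checks"

# Example: the face-artifact model is fairly confident, the voice model is not.
print(triage({"face_artifacts": 0.82, "voice_clone": 0.35}))
# -> "likely synthetic: escalate to the verification desk"
```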
But what makes this initiative revolutionary is the collaboration: the outlets hope to find a way forward together and, above all, to share their resources. “Very soon we will be in a position where the human eye will not be able to tell the difference [between real and fake],” says Forrest. “No one organization can solve this problem. The scale we're talking about here means that, even as national broadcasters, we're often up against platforms the size of a nation, with the budgets to match, and unless we collaborate, we can't really hope to get a handle on this thing.”
Ahmed adds that the only way forward is transparency: identifying falsehoods and being honest with your readers is what matters most. The threat posed by misinformation is palpable: “If I'm being honest, at this point specific instances of misinformation are changing public opinion and influencing elections everywhere. It's a huge problem.”
Zuckerberg's Meta and Musk's X could also help. “One of the things I hear a lot is that the social media platforms have been absent from our group,” says Mark Smith, president of the IBC Council and head of the IBC Accelerator program. “Much more dialogue is needed there, because they are an important part of this process.”
Just last year, the EU issued a warning to Musk to comply with new laws on fake news and Russian propaganda, after X, formerly Twitter, was found to have the highest rate of disinformation posts among all large social media platforms.
Forrest says: “As news providers, we have a heritage that has built trust over decades, and I think we can bring that experience to this industry. That's what we're trying to do here.” Parnall adds: “Everyone wins if the truth prevails, and that means not just broadcasters and traditional news providers but social media platforms too. Ultimately, if people see that they can't rely on social media platforms, they will start to turn away from them. A rising tide lifts all boats.”
A December 2020 survey reported by Redline found that 38.2% of U.S. news consumers admitted to having unknowingly shared fake news or misinformation on social media. With the rapid development of artificial intelligence over just the last two to three years, that percentage is likely higher in 2024.
IBC2024, the annual media trade fair in Amsterdam, recently announced that 45,085 visitors from 170 countries attended the September 13-16 conference, which brought together the global media, entertainment and technology community to showcase innovations and address pressing industry challenges, including misinformation.
“As media organizations we don't necessarily want to position ourselves as arbiters of truth,” Ahmed says. “Our job is to surface the facts as we find them, and the facts have and should always have power.”