
The Damaging Deluge of Disinformation: What Researchers are Learning from Africa’s Digital Boom 

Media, South Africa


Since Oxford Dictionaries selected “post-truth” as the 2016 word of the year, the world has witnessed a proliferation of all forms of false information. These inaccuracies – spread either intentionally or inadvertently – have real and injurious impacts on democratic institutions, public health and safety, and our perceptions of others. Especially alarming is that just one-quarter of Americans feel confident in their ability to detect misleading material.

To help us unpack the deluge of disinformation, we spoke with Mark Duerksen, PhD, a researcher from the Africa Center for Strategic Studies. Dr. Duerksen examines Africa’s unparalleled urbanization and the growing impact of disinformation campaigns on the continent’s rapidly changing information systems. 

Q: Although disinformation isn’t new, it feels like we’ve reached a fever pitch in its ubiquity. Why do you think we’re more susceptible to inaccurate news than ever before? 

While the political ploy of intentionally misleading people is age old, what has shifted are the mediums and the speed at which we consume and share information. This revolution has been fast-paced and quietly seismic in places like the African continent, where I focus. African countries largely leapfrogged the pervasive use of personal computers (though internet cafés had their moment) and embraced cellphones and the social media apps they now carry in their pockets. In fact, there are now over 400 million African social media users – up from 100 million in 2016.


This has meant tremendous opportunities for creativity, commerce, and connectivity, but has also thrown what were often fragile ecologies of trusted information — print, radio and television, for example — into flux. In places like Kenya and Nigeria, people now get most of their news from social media feeds. That’s a fundamental change from even five years ago.  

Anytime societies develop and adopt new information mediums, whether it’s the printing press, radio, or television, there’s an adjustment period that’s often accompanied by some bumps in the road. Unfortunately, bad actors realize this and seek to make these bumps big enough to do real damage – even forcing us off the road by devising disinformation campaigns that relentlessly attack and distort our new information spaces. These campaigns have found ingenious ways to spew false stories and to weaponize social media algorithms to work these narratives into our feeds. 

As a colleague from Burkina Faso recently described it, many countries are going through a moment in which the spread of social media has made more information available than ever before, but it has also multiplied falsehoods. What is missing in many places are watchdogs and hubs of credibility within this proliferation. Sadly, this trend has been accelerated by a rise in media restrictions and attacks against journalists in much of the world, further limiting access to trusted and reliable information when it’s needed more than ever. In fact, between 2022 and 2023, Reporters Without Borders documented a jump from 33 percent to 40 percent in the share of African countries falling into its lowest rating of conditions for journalism.


Q: You’ve witnessed some very real consequences of disinformation. Can you describe what that looks like on the ground? 

When I’m working in places like Kenya, Senegal, Nigeria or Ghana, people are concerned about the stresses they’re seeing on their democracies and the instability they’re witnessing in neighboring countries. Researchers in these regions speak of “information disorder,” in which the circuits of societal communication have become so clogged with intentionally false stories that young voters are disengaging. Kenya and Nigeria saw their lowest voter turnouts in decades in their recent elections, a startling trend that raises big questions about how democracies can engage citizens in the digital era when so much disinformation is designed to drive distrust and discredit democratic institutions.

In the Democratic Republic of the Congo, the Central African Republic and Mali, similar disinformation tactics and narratives are directed at peacekeeping missions. These campaigns, which originate from foreign actors and local politicians, channel local grievances toward UN missions by amplifying conspiratorial claims and exploiting confusion around the missions’ mandates. They’ve used tactics like setting up dozens of coordinated fake social media profiles and cloning copies of UN pages to spread these narratives. In eastern DRC, these conspiracies – including claims that the UN was supporting and selling weapons to armed rebel groups – were maliciously injected en masse into local social media circles in the summer of 2022. This sparked violent protests that resulted in the deaths of five peacekeepers and over 30 protesters who clashed outside a MONUSCO base in Goma, as well as in Beni, Butembo and Kasindi. It’s a wake-up call that disinformation can quickly escalate and incite deadly violence.

Q: What lessons should we take from these situations? 

It’s clear that we need a far better understanding of our digital information spaces — what’s circulating in them and how the content is manipulated by sophisticated disinformation actors. We also need this information faster than it’s currently available. People I’ve spoken to at all levels of the UN are thinking about ways to work closely with local actors and to adopt new technologies that monitor and flag disinformation attacks in real time.  


Reimagining what proactive strategic communication looks like in the information age is another part of the puzzle. Identifying and working with trusted voices at the community level, and finding ways to reach young people with fact-based narratives that resonate with their values and priorities, will take a concerted effort; it cannot be an afterthought. Fighting disinformation with accurate information is inherently asymmetrical, so it will take a diversity of perspectives and skills – from journalists to researchers to fact-checkers to digital storytellers – to defend our digital ecosystem.

Q: It’s probably fair to say that most of us have been duped by clickbait and false narratives. How are regular people and organizations getting smart and fighting back? 

The burden of policing fake news can’t fall solely on individuals. Tactics used by disinformation actors exploit the fact that we’re more likely to believe untruths that align with our preexisting beliefs and assumptions – a phenomenon known as confirmation bias. And the design of social media platforms, which feed us material that matches our attitudes, only reinforces this trend. Fortunately, concerned organizations are demanding more transparency in our feeds so we can know whether material has been planted and amplified by known disinformation sponsors through manipulative tactics. To get smart and fight back, organizations need access to this data, along with open-source, interoperable tools to untangle it together. If we can scale up both access and the number of organizations with these tools, the resulting mapping and analysis will help counter disinformation and coordinate efforts to build digital awareness, monitor disinformation campaigns and inform regulations that protect us from malicious clickbait and false narratives.


This work is starting to come together, and analysts are getting good at collaboratively sifting through data to detect and document disinformation. Counter-disinformation groups are also starting to use a common language and shared standards to identify and discuss the problem of digital information manipulation and interference. This is progress.

Q: Any big, bold or unusual ideas you’re seeing to combat the spread of disinformation? 

A lot of the current thinking around the disinformation problem is too small and only secondary to other priorities. The tools and frameworks I’ve described need to be massively scaled up and baked into nearly all development, security, and commercial projects and proposals. Unfortunately, funding to counter disinformation continues to lag the growing awareness of the seriousness of the problem. So much attention has been paid to issues like climate change and violent extremism in African contexts in recent years – rightfully so – but disinformation, which is often an afterthought, helps drive and derail solutions to both these challenges.  

I think today’s fight against false information is where cybersecurity was several decades ago. In 2023, an organization wouldn’t set up a server or a website without first considering cybersecurity precautions. There’s not yet this level of foresight with disinformation. Instead, it’s only when organizations find themselves in its crosshairs that they scramble to react. We need to work toward a model with the same level of due diligence and vigilance as there is in the cybersecurity community. The strongest cyber defenses involve multistakeholder models and open-source communities of concerned watchdogs. Some leading organizations argue that we have an opportunity for a moonshot that can skip past much of the trial and error that cybersecurity went through to arrive at something workable and scalable. Again, however, this requires buy-in and support that recognizes how destabilizing disinformation has become to peaceful societies and democratic processes.

Mark Duerksen, PhD, is a Research Associate at the Africa Center for Strategic Studies.