State of the land: Misinformation and its effects on global catastrophic risks
Throughout the pandemic, we’ve experienced firsthand how misinformation and disinformation can prevent an effective response to a global public health issue. We are lucky that the current coronavirus pandemic does not threaten the future of humanity, but, given our experience with COVID-19, it’s difficult to imagine that things would work out much better if the world faced a global catastrophic risk. There are many technological, logistical, and regulatory issues in the pandemic response, but even if these were all solved, it is extremely difficult to effectively coordinate any response to catastrophe[1] when a significant proportion of the global population lacks confidence and trust in government and public health institutions[2].
While most people see misinformation as a problem in local politics and their personal lives, I suspect that many effective altruists are hesitant to work on the issue because it does not seem tractable. Determining how to spread good information effectively and countering misinformation or actively propagated disinformation are both complex and challenging problems. However, bad actors—who seek to increase divisiveness to slow progress in a rival country, to increase their own political power, or simply to make a profit—have a huge incentive to hone and grow their operations in this space, making the problem worse and increasingly difficult to reverse in the absence of a coordinated counter.
For EAs who prefer to focus on alleviating suffering now (rather than saving future lives by preventing GCBRs), it is important to note that misinformation and disinformation already contribute to modern genocide and dictatorships.
Countering misinformation/disinformation is neglected from a funding standpoint
It’s difficult to know exactly how much money is spent on disinformation, except for public spending by the governments of some countries. In some cases, money spent on disinformation may be offset by profits.
Russia’s Internet Research Agency famously spent 25 million dollars paying Russian employees to pretend to be Americans on social media in a complex cross-platform operation (Facebook, Twitter, Instagram, and YouTube). Investing in propagating misinformation or disinformation is inexpensive compared to other country-to-country attacks, but it requires long-term thinking, as it can take time for efforts to bear fruit. Russia engages in a lot of disinformation because its ruling party expects to be in power for a long time, so these investments make sense. Russia has been creating misinformation rumors since the 80s; the rumor that AIDS was made by the US military to kill Black people and homosexuals was manufactured by Russians.[3] Domestic investment in countermeasures, by contrast, is difficult to justify and maintain in the context of America’s short election cycles.
Misinformation and disinformation are a national security problem, but they aren’t funded as one because the issue has become so partisan, even though increasing partisanship is a common misinformation campaign strategy (examples include Russian vaccine disinformation[4] and misinformation during the US Ebola communication crisis[5]). I would bet money that this issue is so neglected from a funding standpoint that more dollars are spent globally on propagating misinformation and disinformation than on countermeasures.
Anti-misinformation communication strategies
Good examples of countering misinformation with good information are hard to find amidst all the writing about failed efforts[6], and there isn’t clear academic consensus on the best way to do it. Strategies include:
- Individual media literacy. Before the internet was accessible to everyone, reading material was limited, and schools taught students how to read critically before deciding what to believe. Now there is so much reading material available that it’s possible to spend all of one’s time reading only one side of a story, so deciding what to read matters more than reading critically. Studies show that lateral reading (opening a new tab and googling the subject or author of what you are reading) is much more effective than critical reading at preventing false beliefs.
- Pre-bunking, also known as inoculation. Not biosecurity-related, but this recent Twitter thread about Ukraine’s fight against Russian disinformation cites some real-world examples.
- Humor and memes. Taiwan’s Humor Over Rumor strategy is a great example: a government department is responsible for making memes to counter new misinformation rumors.
- Humanization with stories. Two examples I like are Roald Dahl writing about his daughter to encourage people to get the measles vaccine, and this story the CDC has about the vitamin K shot for babies.
- Corrections or informational warnings overlaid on content. This can be hit or miss, and implementation details are important.[7]
The strategies above (memes, stories, etc.) can be used to spread both good information and misinformation. A great example is how Facebook itself uses similar tactics to defend its reputation when challenged by whistleblowers. Misinformation can’t be countered by content creation alone, which leads to the next section.
Policy recommendations
This section is focused on the USA, though supporting good misinformation and disinformation policy in the USA would have global impacts, as American social media companies have a global audience. I think that the best report on this topic is from the Aspen Institute, and this section is mostly a summary of their report.[8]
A few points of context:
“In a free society there are no ‘arbiters of truth.’” The goal is to mitigate misinformation’s worst harms, not eradicate it.
“Certain communities bear [the harm] burden in greater proportion”: not only marginalized people, but also susceptible groups like the elderly.
A caveat not covered in depth in the Aspen report is that policy needs to be nuanced. For example, some of the anti-misinformation legislation going through Congress doesn’t define a process for determining which content counts as misinformation beyond government mandate. I’m not sure what the best system is, but I think it’s pretty clear that the responsibility for content evaluation shouldn’t rest solely with the government, nor solely with social media companies. The process needs to be transparent and apolitical.
Policy recommendations aim to increase transparency, coordinate a response, and create accountability.
Increase transparency
Give authorized academics access to data from social media companies. Because social media companies have all the data and academics don’t, academia lags far behind in understanding how social media amplifies content and how it affects different demographics. It’s difficult to make public health improvements in social media when there’s no outside visibility into how the algorithms work and whom they are targeting.
Regularly disclose content that has high reach. Instead of public interest groups trying to track misinformation from the outside, social media companies should report what content is reaching the most eyeballs, so that public health officials can respond if necessary.
Provide transparency on the content moderation process so that it can be audited by authorized external researchers. As with content amplification, there is currently no visibility into content moderation beyond single case studies, so researchers can’t measure how fair and effective the processes are.
Mandated ad transparency. “Under this effort, ad platforms will be required to comply by disclosing more information about which communities are being targeted, by whom, and with which content.”
Coordinate a response
Coordinated response at a national level. State- and local-level public health officials don’t have the time or expertise to deal with misinformation.
Create accountability
Accountability for misinformation superspreaders. Online platforms should use clear, transparent, and consistently applied policies that enable quicker, more decisive actions and penalties, commensurate with their impacts—regardless of location, political views, or role in society.
Withdrawing platform immunity from responsibility for user-generated content when the content is paid or is actively promoted by recommendation engines.
Open Problems
More sources of good information
I think it’s worth investing in more organizations that can serve as sources of information by creating highly engaging yet trustworthy and healthy content for different political demographics.[9]
More foundational research
More foundational scientific research on the metrics of misinformation and disinformation: What harms and effects can be concretely attributed to misinformation? Which parts of misinformation are more powerful than others? How can pre-bunking and de-bunking penetrate communities that believe misinformation and disinformation? The Kaiser Family Foundation polling project has gotten a great start on foundational research into how people think about different aspects of public health, and it would be great to have even more granular knowledge of how different groups get their information, to inform the creation of good information sources.
Lack of trust stemming from malicious acts
Some lack of trust is caused by malicious acts. For example, there are plenty of examples of communities that have been intentionally harmed by medical science, like colonialists treating sleeping sickness—sometimes forcing treatment at gunpoint—with medicine that caused blindness in 20% of people. A more modern example is increased vaccine hesitancy in Pakistan, after Osama bin Laden’s location was discovered using a fake vaccination program.[10]
Workforce diversity in social media and news companies
From the Aspen report, “It is critical that those in control of decisions regarding content moderation and amplification are representative of the cultural terrain of marginalized communities impacted by disinformation.”
What people are doing
Here are some examples of people I’m following who are working on this issue. Some have agreed to keep an eye on this post. If you have questions about working on or funding misinformation countermeasures or other biosecurity topics, you can start a discussion here.
Tara Kirk Sell studies misinformation and effective public health communication. Sophie Rose doesn’t look at misinformation directly, but studies factors that affect pandemic performance in different countries, and was the co-founder of 1 Day Sooner, an advocacy organization promoting human challenge trials to speed up testing of COVID-19 vaccines. Nikita Salovich studies how people come to believe misinformation as they read it, and what interventions might help.
As for myself, I am slowly completing a Master of Public Health, one course at a time, with the goal of working out how to make good information at least as viral as misinformation, and how to develop policies that social media companies can reasonably enforce.
How you can help
Continue the conversation
How do you see health-related misinformation and disinformation as they relate to GCBRs? Have you personally encountered issues with misinformation and disinformation during COVID-19? What are some untried or emerging ways this problem could be dealt with?
Contribute through your career
Work affecting misinformation can be roughly categorized into academia, industry, or policy. While there is already a strong connection between industry and policymakers due to economic incentives, there aren’t enough people with expertise bridging academia with industry, or academia with policy. If you have expertise in one of these areas, you can help by building expertise in a second area and becoming that bridge.
Contribute through money
If you’re rich, you can contribute to the cause by funding efforts that involve bridging academia with policy or industry (the Aspen report was funded by Craig Newmark, of craigslist fame), or investing in trustworthy local media, especially for underserved communities. From the Aspen report, “A California study found that ‘when there are fewer reporters covering an area, fewer people run for mayor, and fewer people vote.’[11]”
Further Reading
Related organizations
The State of the Net conference for internet policy makers
Acknowledgements
Thanks to Tara, Divya, Sophie, and Nikita for reading over drafts and providing feedback. All mistakes are my own.
1. In comparing how countries did with COVID, “Measures of trust in the government and interpersonal trust, as well as less government corruption, had larger, statistically significant associations with lower standardised infection rates.” Bollyky, T. J., Hulland, E. N., Barber, R. M., Collins, J. K., Kiernan, S., Moses, M., … & Dieleman, J. L. (2022). Pandemic preparedness and COVID-19: an exploratory analysis of infection and fatality rates, and contextual factors associated with preparedness in 177 countries, from Jan 1, 2020, to Sept 30, 2021. The Lancet. https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(22)00172-6/fulltext
2. “reduced excess mortality associated with quality of and trust in governance” Rose, S. M., Paterra, M., Isaac, C., Bell, J., Stucke, A., Hagens, A., … & Nuzzo, J. B. (2021). Analysing COVID-19 outcomes in the context of the 2019 Global Health Security (GHS) Index. BMJ Global Health, 6(12), e007581. https://gh.bmj.com/content/6/12/e007581
3. Detail on how Russians spread the AIDS rumor, from NPR: https://www.npr.org/2018/11/15/668209008/inside-the-russian-disinformation-playbook-exploit-tension-sow-chaos
4. Russian disinformation on Twitter was more likely to contain politically divisive messages when sowing vaccine hesitancy. Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., … & Dredze, M. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378-1384. https://ajph.aphapublications.org/doi/pdfplus/10.2105/AJPH.2018.304567
5. Misinformation related to the US Ebola communication crisis was more likely to contain political content: Sell, T. K., Hosangadi, D., & Trotochaud, M. (2020). Misinformation and the US Ebola communication crisis: analyzing the veracity and content of social media messages related to a fear-inducing infectious disease outbreak. BMC Public Health, 20(1), 1-10. https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-020-08697-3
6. Analysis of failed misinformation taskforce efforts in Europe: https://www.researchgate.net/profile/Raffael-Heiss/publication/355370678_How_have_governments_and_public_health_agencies_responded_to_misinformation_during_the_COVID-19_pandemic_in_Europe/links/616d6267b90c512662618a1d/How-have-governments-and-public-health-agencies-responded-to-misinformation-during-the-COVID-19-pandemic-in-Europe.pdf
7. A study found that on Twitter, an interstitial that users had to click through, warning about misleading content before the content was shown, helped reduce users’ belief in the content, but content warnings shown beside the content didn’t help. Sharevski, F., Alsaadi, R., Jachim, P., & Pieroni, E. (2022). Misinformation warnings: Twitter’s soft moderation effects on COVID-19 vaccine belief echoes. Computers & Security, 114, 102577. https://www.sciencedirect.com/science/article/pii/S0167404821004016#bib0043 Facebook found that showing fact-checked related articles worked better than the “disputed” flag it had added to articles marked as incorrect by two fact checkers. https://medium.com/designatmeta/designing-against-misinformation-e5846b3aa1e2
8. The Surgeon General also published an advisory with recommendations in a similar spirit to the Aspen Institute report: https://www.hhs.gov/sites/default/files/surgeon-general-misinformation-advisory.pdf
9. Paper arguing that many resources are spent on preventing the spread of misinformation, but not enough on increasing good sources of information and trust in them: Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information? https://bura.brunel.ac.uk/bitstream/2438/23968/1/FullText.pdf
10. The CIA’s hunt for Osama bin Laden fuelled vaccine hesitancy in Pakistan: https://www.newscientist.com/article/2277145-cias-hunt-for-osama-bin-laden-fuelled-vaccine-hesitancy-in-pakistan/
11. Holder, Sarah. “As Local Newspapers Shrink, So Do Voters’ Choices.” CityLab, Bloomberg News, 11 Apr. 2019.
Great post! Thanks for sharing, and for the great overview of all the resources out there.
I think part of the fundamental challenge is that trust and good information are expensive, because relationships are expensive. And oftentimes the truth (think nuances, caveats, further considerations) is… psychologically unsatisfying.
The problem of setting up responsible institutions is itself a difficult issue, but no matter what, even the most responsible institutions will fail if they aren’t good at building relationships with people at large. This latter work is expensive and tricky and hard to scale, which may be part of why it’s under-addressed by people in the EA community?
A second approach is to focus on psychological understanding and interventions, and on cheap, easy ways to counter innate human biases. I think the lateral reading example you gave is a prime example of this, and Humor Over Rumor is similar. Reason does not seem to be the most effective or efficient way to fight misinformation… and that may be another sticking point that makes it hard to address.
Thanks for posting this, Ruth! If anyone wants to chat about this topic and what we are doing at the Johns Hopkins Center for Health Security, please reach out.
I don’t know much about this topic, and funding in this area is most likely neglected, but I’m unsure how to think about the scale of this issue and how to get a sense of whether it’s getting better, worse, or staying roughly the same year to year.
I think some of the concerns I have are highlighted by this post from Matthew Yglesias suggesting that the danger of misinformation is overblown.
The Wellcome Trust also runs a regular survey of global trust in science and institutions, and found a ~10% increase in the public’s trust in science and scientists between 2018 and 2020.
It might be that misinformation is a problem but isn’t the biggest driver of vaccine hesitancy, and that more general improvements in communication strategies, governance, and economic growth could be more important.
Yes, I agree! I think “too much information, and people having a difficult time telling what to trust” is a more accurate and nuanced descriptor than “misinformation”, and your point about building relationships would go a really long way.
I wonder whether, if people more generally felt they had a say and a stake in the way the country is run, to the point where a regular person could advocate for improvements for themselves and their communities, there would be more understanding of and trust in government and public health institutions. I suspect that when people feel like they’re screwed, the situation is more ripe for misinformation to affect them. Here’s a paper that discusses how sharers of misinformation are more likely to express existentially based needs (e.g. fear of death or other threats): https://arxiv.org/pdf/2203.10560.pdf