Michael Bluhm: On a lot of indicators, democracy’s been in decline across the economically developed world for more than a decade and a half—longer than social media’s been around. How do you see the specific challenge it’s meant for democratic life?
Tobias Rose-Stockwell: There are certainly a lot of challenges in the mix, most of which, it’s true, go back further than social media. But social media has become a huge force in the contemporary world. And in the forms it currently takes, I think, it’s incompatible with democratic norms.
It’s a complex question, but it comes down to people’s ability to inhabit a shared world. Democracies require all kinds of diversity, including diversity in interpretations of current events. But they also require a certain shared understanding of them—a shared understanding of what is and isn’t happening. And they require a certain level of trust—trust in the democratic project, trust in the fundamental integrity of voting systems, trust in the fundamental integrity of information systems, and trust in each other. Social media has been toxic to shared understanding and trust.
Of course, the ability to manipulate shared understandings of current events has long been one of the most effective tools of authoritarians—and people with authoritarian dispositions or aspirations. In advanced democracies, journalists have traditionally limited that ability. They’ve traditionally served as independent arbiters of the facts and called out political propaganda. Social media has given radically updated tools to rulers in authoritarian countries, and it’s fundamentally disrupted the work of journalists in democratic countries.
Digital media generally, and social media in particular, has shown great potential—to broaden public conversation in democracies, to bring greater diversity of ideas and opinions to them, to be a real force of dynamism. But social media has also shown that it can be extremely effective at fragmenting shared understanding and trust—to the point where it’s now extremely difficult for people to recognize for themselves when democratic norms are being violated, to recognize together when their democratic system is breaking down, and to organize together to fix it.
Bluhm: How’s social media done this?
Rose-Stockwell: Misinformation and disinformation play a big role. Misinformation just means distributing false information. Disinformation is different: the strategic creation and structured deployment of misinformation to advance an agenda. Misinformation and disinformation are now serious problems in democratic societies.
Now, it may seem contradictory, but democratic societies actually need a certain degree of misinformation or disinformation; they need people to be able to get things wrong, so the society as a whole can develop and maintain the ability to sort out true from false, right from wrong. The danger of misinformation or disinformation isn’t in their mere presence; it’s in the ratio of bad information to good information increasing to the point where it’s overwhelming.
And social media doesn’t just expand the presence of misinformation and disinformation. It also rapidly fragments people’s interpretations of what’s happening in the world around them. The problem is subtle—as Mark Zuckerberg and his team at Facebook acknowledged back in 2018: The engagement patterns on their platform essentially create disproportionate visibility for content that approaches the edge of reasonable, fact-based discourse but doesn’t cross over entirely. And they do this in every direction, ideologically.
Bluhm: A central theme of your book is the role of outrage in the problems social media has introduced. The book itself is called Outrage Machine. How does outrage figure in the problems you’re describing?
Rose-Stockwell: Outrage as such isn’t the problem. In fact, the existence of outrage is at least as important to democratic societies as certain measures of bad information are. Outrage is a human experience that democracy is designed to be responsive to. As a functioning system, democracy channels outrage into policy solutions: We can see a problem; we can feel passionately about the problem; we can organize around a solution to the problem; and we can vote people into power who will advance that solution.
The trouble isn’t people experiencing outrage on account of how they understand and feel about an issue; it’s people understanding and feeling about the issue in a way that’s fundamentally conditioned by outrage. So the question is, how can we ensure that the outrages we experience are real, proportionate, and important to address?
The speed at which outrage spreads and gains intensity is crucial to this question. Functioning democracies need ways to let passions cool and thoughts focus as outrage prompts public debate. James Madison and Alexander Hamilton—two of the more prominent founders of the United States—believed that, even if every citizen had a brilliant intellect and the best interests of their country at heart, democratic society would still largely be a mob if the right structures weren’t in place to ensure that people could channel their anger constructively toward solutions. As they put it in The Federalist Papers, “passion never fails to wrest the scepter from reason.”
Channeling outrage constructively is a critical function that Madison, Hamilton, and others were trying to design American democracy for—to allow passions to cool before they shape public policy. If passions don’t cool, and people pressure legislators to start passing laws rashly, they’ll often get the problem wrong, create new problems in the process, and distract themselves from other potentially serious problems. In this sense, bad responses to outrage tend to make everything worse over time, leaving people with an appropriate sense that their democratic system isn’t working—even leading them to become skeptical of, or angry at, the democratic project as a whole.
Bluhm: What do social-media algorithms have to do with outrage?
Rose-Stockwell: Outrage tends to have an inherent viral advantage on social-media platforms—meaning, content expressing outrage will tend to spread faster and further. There’s good research that helps explain this, showing that certain kinds of language are disproportionately effective at capturing people’s attention and activating them.
For instance, if you’re on Twitter—or X, as it’s now called—every morally or emotionally loaded word you might use in a tweet increases the odds that your tweet will go viral by an average of 17 percent. If you use a word like disgusting, shame, punishment, diabolical, or evil, that word will boost your tweet’s viral potential in the system.
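A toy calculation makes the compounding effect concrete. Only the roughly 17-percent-per-word figure comes from the research described above; the word list and the function itself are purely illustrative, not anything the platforms actually run:

```python
# Toy illustration of the compounding viral boost described above.
# Only the ~1.17 multiplier per moral-emotional word comes from the
# research cited in the interview; the word list and the function are
# hypothetical, for illustration only.
MORAL_EMOTIONAL_WORDS = {"disgusting", "shame", "punishment", "diabolical", "evil"}

def viral_odds_multiplier(tweet: str) -> float:
    """Each moral-emotional word multiplies the odds of going viral by ~1.17."""
    hits = sum(1 for w in tweet.lower().split()
               if w.strip(".,!?") in MORAL_EMOTIONAL_WORDS)
    return 1.17 ** hits

# Two loaded words compound: 1.17 * 1.17, roughly a 37 percent boost.
boost = viral_odds_multiplier("That ruling is disgusting and evil.")
```

The point of the compounding is that outrage-laden language doesn’t just add to a post’s reach; each additional loaded word multiplies it.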
A broader effect, in turn, is that these kinds of language, and the emotions and perspectives associated with them, increasingly dominate public discourse as a whole. In other words, content that’s optimized for social-media platforms—content that tends to be attention-grabbing and sticky with people by advancing ideas that provoke anger and outrage—increasingly conditions public conversation in general.
This dynamic doesn’t come entirely from the algorithm, though; it works because of the way the human brain works. The algorithm is highly responsive, for example, to the way we process social information. We tend to be innately interested in information that has to do with status or gossip. If someone is getting called out online, say, that information is like sugar for people’s brains. It’s like sugar, it seems, because that kind of social information was important to survival in our evolution. So the algorithm manipulates people by targeting their brains in very precise ways.
The news media has actually been skilled at this for a long time, in its way. Previously, it engaged audiences without social media. Now, it depends heavily on social media, and social media gives it new tools for engaging audiences by triggering outrage on the basis of those audiences’ moral and emotional profiles.
Bluhm: A lot of journalists are on Twitter, but not a lot of people in general are. A lot of people in general are on Facebook, Instagram, and other platforms. What platforms would you say are the most significant here—and why?
Rose-Stockwell: Facebook has been enormously significant, particularly given its scale. But, I think, to get at this question, it’s important to understand how journalism itself has changed with social media. Social media now has a mediating function for society as a whole, not least because journalists use it so heavily—especially Twitter—and journalists are still the people most responsible for informing us. This is how the dynamics of social media seep into traditional news coverage. So, even if you’re not on Twitter yourself, you are on it, in a sense, if you consume traditional news media. If you’re reading, watching, or listening to the news, that information has usually been filtered through social media.
Which hasn’t just changed the way journalists select stories or report and write them; it’s changed where they might look for stories in the first place. If something goes viral on social media, many journalists will now cover that as a story in itself—because they know there’s such a high likelihood of audience “traffic” to the story as a result of its original viral popularity. Any journalist, even with little skill or effort, can take whatever might be trending or causing outrage online and turn that into content.
The result, when they do, is that the attention of a larger audience is now focused on a point of discontent that was originally marginal and significant to just a tiny fringe of people online. In this way, Twitter—despite its relatively small user base, as you say—has become one of the most troubling sources in the information ecosystem.
Bluhm: Social-media platforms’ algorithms have changed a lot over the years. Do you see these changes as significant in the way the platforms have affected democratic society?
Rose-Stockwell: There’ve been a number of significant changes. The most significant were three features the major platforms launched between 2009 and 2012, which dramatically shifted people’s perceptions of information and the way that they exchange it online.
The first was the one-click share, which began on Twitter. It created the ability to share content frictionlessly with your entire audience. Twitter’s version was the Retweet; then Facebook launched its Share button. As it turns out, the speed at which information moves is directly related to the quantity of misinformation and disinformation that users are exposed to. Fast information tends to be emotive, reactive, and impulsive—and it has, in some cases, incited violence, as it did in Burma, for example, and in the United States on January 6, 2021.
The second feature is the Like button, along with the ability for users to see how many followers people have or how many likes a post gets. It’s easy and free to give likes, but putting a numerical value on content has changed the type of content people share. Above all, it’s trained people to try to maximize the number of likes that their content gets on a regular basis. The rewards associated with sharing certain content and getting likes have created a status game—and one that’s very disruptive to how we perceive and make sense of the world. Visible metrics have trained us to try to find the pattern that gets us these hits of dopamine from sharing content other users “like.”
The third feature is the algorithmic feed you refer to, which is how all the major platforms now serve content online. Effectively, they rank content based on predicted user engagement. Things that tend to make us angry will be served to us first. The basic ranking algorithm will often serve us the most sensational and engaging content, which tends to be moral, emotional—and outrageous. Whatever changes the ranking algorithms have gone through along the way, that remains their fundamental feature today.
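The core of the ranking mechanism described here can be sketched in a few lines. Everything in this sketch, the posts, the scores, the names, is hypothetical; real platforms predict engagement with large models, but the ordering principle (most engaging first, regardless of accuracy or importance) is the same:

```python
# Minimal sketch of an engagement-ranked feed, as described above.
# All posts, scores, and names are hypothetical; real platforms predict
# engagement with large models, but the ordering principle is the same:
# most engaging first, regardless of accuracy or importance.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # the platform's guess at reactions, shares, etc.

def rank_feed(posts: list[Post]) -> list[Post]:
    """Serve the content predicted to be most engaging first."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local library extends weekend hours", 0.12),
    Post("You won't BELIEVE what they just did", 0.87),
    Post("City council passes annual budget", 0.08),
])
# The outrage-bait item lands at the top of the feed.
```

Nothing in the ranking step consults truthfulness or civic importance, which is exactly the structural problem the interview describes.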
Bluhm: Do you see social-media platforms as having any positive effects in the world?
Rose-Stockwell: There’s tremendous value and potential value in these tools. On balance, social media would be a very good thing for the world—if we could solve this set of problems the big platforms have represented for democracy. Social media can give people the ability to share previously hidden problems. It can give marginalized people a way to connect and influence. It can help give everyone a much wider perspective on their world and the issues that need to be resolved in their society. Social media has been fantastic for humanitarian causes—for exposure, for fundraising, and for charitable action. It’s been very positive for activists addressing all kinds of ultimately important social issues. There’s just an untenable ratio, now, of misinformation, disinformation, and myth that undermines these benefits.
Bluhm: What would be the main practical ways you could see the big social-media platforms transforming and supporting democratic society?
Rose-Stockwell: I think it’s important to look at that question in a broad historical context. In the history of information systems, every new major technology has caused an explosion of unintended consequences, often including confusion and sometimes even viral misinformation. In the 1500s, the invention of the printing press led to a detonation of misinformation that caused a complete reformatting of European society. It took a while for people collectively to figure out how to mitigate some of the worst effects of the printing press.
The key, at historical moments like this, is for a society to figure out how to put the structures and regulations in place that can help turn the new explosion of information from a viral network into the dynamic resource of a functional knowledge network. By that, I mean a system that can effectively help us make sense of what’s good information and what’s bad—what’s real and what’s not. Ultimately, social media needs to operate less like a viral network and more like a knowledge network. If it does, it could be a fantastic resource for accurate information. But that will require better design structures and systems.
Wikipedia is already an example of a social knowledge network working fairly well. You might not expect that Wikipedia would have well-referenced, accurate, historical information about the world, because only volunteers write and edit it. But there’s a design structure and a system that allows people to source effectively and challenge each other’s work in good faith.
There are other examples. X now has a feature called Community Notes, which allows users to weigh in and verify truth claims in posts on the site. In that sense, X is starting, in part, to resemble a knowledge network. Even its owner, Elon Musk, gets called out on Community Notes for saying things that are false, partly false, or lacking in context. The fact that he and so many others will keep saying such things, and create waves of viral content as they do, is a measure of the challenge ahead. But the direction these initiatives represent for social media is promising.