How Online Mobs Act Like Flocks Of Birds
A growing body of research suggests human behavior on social media is strikingly similar to collective behavior in nature.
By Renée DiResta, Noema, 03/11/2022
Renée DiResta is the technical research manager at Stanford Internet Observatory.
You’ve probably seen it: a flock of starlings pulsing in the evening sky, swirling this way and that, feinting right, veering left. The flock gets denser, then sparser; it moves faster, then slower; it flies in a beautiful, chaotic concert, as if guided by a secret rhythm.
Biology has a word for this undulating dance: “murmuration.” In a murmuration, each bird sees, on average, the seven birds nearest it and adjusts its own behavior in response. If its nearest neighbors move left, the bird usually moves left. If they move right, the bird usually moves right. The bird does not know the flock’s ultimate destination and can make no radical change to the whole. But each of these birds’ small alterations, occurring in rapid sequence, shifts the course of the whole, creating mesmerizing patterns. We cannot quite understand it, but we are awed by it. It is a logic that emerges from — is an embodiment of — the network. The behavior is determined by the structure of the network, which shapes the behavior of the network, which shapes the structure, and so on. The stimulus — or information — passes from one organism to the next through this chain of connections.
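The rule is simple enough to simulate. Below is a minimal sketch, in Python, of the topological-neighbor alignment described above: each simulated bird steers toward the average heading of its seven nearest neighbors, plus a little noise. The flock size, speed and noise level are illustrative assumptions, and real murmuration models add attraction and repulsion terms that are omitted here.

```python
import numpy as np

# Minimal sketch of topological-neighbor alignment: each "bird" matches the
# average heading of its k nearest neighbors. Real murmuration models also
# include attraction and repulsion; this keeps only the alignment rule.

K_NEIGHBORS = 7       # the "seven birds" each starling reacts to
N_BIRDS = 200         # illustrative flock size
STEPS = 100
SPEED = 0.05

rng = np.random.default_rng(0)
positions = rng.uniform(0, 1, size=(N_BIRDS, 2))
angles = rng.uniform(0, 2 * np.pi, size=N_BIRDS)

for _ in range(STEPS):
    # Pairwise distances determine each bird's k nearest neighbors.
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbors = np.argsort(dists, axis=1)[:, :K_NEIGHBORS]

    # Each bird adopts roughly the mean heading of its neighbors, plus noise;
    # no bird knows the flock's destination.
    mean_sin = np.sin(angles[neighbors]).mean(axis=1)
    mean_cos = np.cos(angles[neighbors]).mean(axis=1)
    angles = np.arctan2(mean_sin, mean_cos) + rng.normal(0, 0.1, size=N_BIRDS)

    # Move forward and wrap around the unit square.
    positions += SPEED * np.column_stack([np.cos(angles), np.sin(angles)])
    positions %= 1.0
```

Run long enough, local alignment alone produces coherent, shifting waves of motion: global order with no leader.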
While much is still mysterious and debated about the workings of murmurations, computational biologists and computer scientists who study them describe what is happening as “the rapid transmission of local behavioral response to neighbors.” Each animal is a node in a system of influence, with the capacity to affect the behavior of its neighbors. Scientists call this process, in which groups of disparate organisms move as a cohesive unit, collective behavior. The behavior is derived from the relationship of individual entities to each other, yet only by widening the aperture beyond individuals do we see the entirety of the dynamic.
Online Murmurations
A growing body of research suggests that human behavior on social media — coordinated activism, information cascades, harassment mobs — bears striking similarity to this kind of so-called “emergent behavior” in nature: occasions when organisms like birds or fish or ants act as a cohesive unit, without hierarchical direction from a designated leader. How that local response is transmitted — how one bird follows another, how I retweet you and you retweet me — is also determined by the structure of the network. For birds, signals along the network are passed from eyes or ears to brains pre-wired at birth with the accumulated wisdom of the millennia. For humans, signals are passed from screen to screen, news feed to news feed, along an artificial superstructure designed by humans but increasingly mediated by at-times-unpredictable algorithms. It is curation algorithms, for example, that choose what content or users appear in your feed; the algorithm determines the seven birds, and you react.
Our social media flocks first formed in the mid-’00s, as the internet provided a new topology of human connection. At first, we ported our real, geographically constrained social graphs to nascent online social networks. Dunbar’s Number held — we had maybe 150 friends, probably fewer, and we saw and commented on their posts. However, it quickly became a point of pride to have thousands of friends, then thousands of followers (a term that conveys directional influence in its very tone). The friend or follower count was prominently displayed on a user’s profile, and a high number became a heuristic for assessing popularity or importance. “Friend” became a verb; we friended not only our friends, but our acquaintances, their friends, their friends’ acquaintances.
The virtual world was unconstrained by the limits of physical space or human cognition, but it was anchored to commercial incentives. Once people had exhaustively connected with their real-world friend networks, the platforms were financially incentivized to help them find whole new flocks in order to maximize the time they spent engaged on site. Time on site meant a user was available to be served more ads; activity on site enabled the gathering of more data, the better to infer a user’s preferences in order to serve them just the right content — and the right ads. People You May Know recommendation algorithms nudged us into particular social structures, doing what MIT network researcher Sinan Aral calls the “closing of triangles”: suggesting that two people who share a mutual friend should themselves be connected.
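As a toy illustration of that structural nudge, here is a minimal “triangle-closing” sketch: given an existing friendship graph, it suggests connecting any two people who share a mutual friend, ranked by how many friends they have in common. The names and graph are hypothetical, and production People You May Know systems weigh many additional signals.

```python
from collections import defaultdict
from itertools import combinations

# Sketch of "closing triangles": recommend pairs of users who share a mutual
# friend but are not yet connected, ranked by how many friends they share.
# (Real systems weigh many more signals; this shows only the structural nudge.)

friendships = {
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "dave"), ("carol", "dave"), ("dave", "erin"),
}

adjacency = defaultdict(set)
for a, b in friendships:
    adjacency[a].add(b)
    adjacency[b].add(a)

suggestions = defaultdict(int)
for person, friends in adjacency.items():
    # Every pair of this person's friends forms an open triangle to close.
    for x, y in combinations(sorted(friends), 2):
        if x not in adjacency[y]:
            suggestions[(x, y)] += 1   # one more mutual friend in common

for (x, y), mutuals in sorted(suggestions.items(), key=lambda kv: -kv[1]):
    print(f"suggest {x} <-> {y} ({mutuals} mutual friend(s))")
```

The logic is indifferent to who the people are or what they believe; it simply densifies the graph.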
Eventually, even this friend-of-friending was tapped out, and the platforms began to create friendships for us out of whole cloth, based on a combination of avowed, and then inferred, interests. They created and aggressively promoted Groups, algorithmically recommending that users join particular online communities based on a perception of statistical similarity to other users already active within them.
This practice, called collaborative filtering, combined with the increasing algorithmic curation of our ranked feeds to usher in a new era. Similarity to other users became a key determinant in positioning each of us within networks that ultimately determined what we saw and who we spoke to. These foundational nudges, born of commercial incentives, had significant unintended consequences at the margins that increasingly appear to contribute to perennial social upheaval.
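A bare-bones sketch of user-based collaborative filtering gives a sense of the mechanism: represent each user as a vector of group memberships, find the most similar users by cosine similarity, and suggest the groups they have joined. The toy data and function names below are hypothetical, and actual platform recommenders are vastly more elaborate, but the structural effect is the same: similarity decides which flock you are nudged toward.

```python
import numpy as np

# Sketch of user-based collaborative filtering for group recommendations.
# Toy data only: each user is a 0/1 vector over the same list of groups.

groups = ["anti_vax", "chemtrails", "flat_earth", "gardening"]
memberships = {                      # hypothetical users and memberships
    "user_a": [1, 1, 0, 0],
    "user_b": [1, 0, 1, 0],
    "user_c": [0, 0, 0, 1],
}

def cosine(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(target, k=1):
    """Suggest groups the target hasn't joined, drawn from the k most similar users."""
    others = [(cosine(memberships[target], vec), name)
              for name, vec in memberships.items() if name != target]
    suggestions = set()
    for _, name in sorted(others, reverse=True)[:k]:
        for group, joined_target, joined_other in zip(
                groups, memberships[target], memberships[name]):
            if joined_other and not joined_target:
                suggestions.add(group)
    return suggestions

print(recommend("user_a"))   # user_b is most similar -> suggests "flat_earth"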
One notable example in the United States is the rise of the QAnon movement over the past few years. In 2015, recommendation engines had already begun to connect people interested in just about any conspiracy theory — anti-vaccine interests, chemtrails, flat earth — to each other, creating a sort of inadvertent conspiracy correlation matrix that cross-pollinated members of distinct alternate universes. A new conspiracy theory, Pizzagate, emerged during the 2016 presidential campaign, as online sleuths combed through a GRU hack of the Clinton campaign’s emails and decided that a Satanic pedophile cabal was holding children in the basement of a DC pizza parlor.
At the time, I was doing research into the anti-vaccine movement and received several algorithmic recommendations to join Pizzagate groups. Subsequently, as QAnon replaced Pizzagate, the highly active “Q research” groups were, in turn, recommended to believers in the prior pantheon of conspiracy theories. QAnon became an omni-conspiracy, an amoeba that welcomed believers and “researchers” of other movements and aggregated their esoteric concerns into a Grand Unified Theory.
After the nudges to assemble into flocks come the nudges to engage — “bait,” as the Extremely Online call it. Twitter’s Trending Topics, for example, will show a nascent “trend” to someone inclined to be interested, sometimes even if the purported trend is, at the time, more of a trickle — fewer than, say, 2,000 tweets. But that act, pushing something into the user’s field of view, has consequences: the Trending Topics feature not only surfaces trends, it shapes them. The provocation goes out to a small subset of people inclined to participate. The user who receives the nudge clicks in, perhaps posts their own take — increasing the post count, signaling to the algorithm that the bait was taken and raising the topic’s profile for their followers. Their post is now curated into their friends’ feeds; they are one of the seven birds their followers see. Recurring frenzies take shape among particular flocks, driving the participants mad with rage even as very few people outside of the community have any idea that anything has happened. Marx is trending for you, #ReopenSchools for me, #transwomenaremen for the Libs Of TikTok set. The provocation is delivered, a few more birds react to what’s suddenly in their field of view, and the flock follows, day in and day out.
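To see how a trickle can clear the bar, consider a naive velocity-based trend detector, sketched below. This is emphatically not Twitter’s actual algorithm; it is an assumption-laden illustration of how a topic with modest absolute volume can be surfaced simply because it grew quickly against a small baseline.

```python
from collections import Counter

# Naive sketch of trend surfacing: flag topics whose recent volume jumped
# relative to a baseline window, even when the absolute numbers are small.
# This is not Twitter's algorithm; it only illustrates how a "trickle" of a
# few thousand posts can clear a velocity-based threshold and get surfaced.

def trending(previous_hour: Counter, current_hour: Counter,
             min_posts: int = 500, min_growth: float = 3.0) -> list[str]:
    """Return topics whose volume grew by min_growth x over a small baseline."""
    surfaced = []
    for topic, count in current_hour.items():
        baseline = previous_hour.get(topic, 1)   # avoid division by zero
        if count >= min_posts and count / baseline >= min_growth:
            surfaced.append(topic)
    return sorted(surfaced, key=current_hour.get, reverse=True)

# A topic with only ~1,800 posts can still "trend" if it spiked from a low base.
print(trending(Counter({"#ReopenSchools": 300, "weather": 800}),
               Counter({"#ReopenSchools": 1800, "weather": 900})))
```

Once surfaced, the topic is pushed into more fields of view, which generates more posts, which keeps it surfaced: the feedback loop described above.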
Eventually, perhaps, an armed man decides to “liberate” a DC pizza parlor, or a violent mob storms a nation’s capitol. Although mainstream tech platforms now act to disrupt the groups most inclined to harassment and violence — as they did by taking down QAnon groups and shutting down tens of thousands of accounts after the January 6th insurrection — the networks they nudged into existence have by this point solidified into online friendships and comradeships spanning several years. The birds scatter when moderation is applied, but quickly re-congregate elsewhere, as flocks do.
Powerful economic incentives determined the current state of affairs. And yet, the individual user is not wholly passive — we have agency and can decide not to take the bait. We often deploy the phrase “it went viral” to describe our online murmurations. It’s a deceptive phrase that eliminates the how and thus absolves the participants of all responsibility. A rumor does not simply spread — it spreads because we spread it, even if the system is designed to facilitate capturing attention and to encourage that spread.
Old Phenomenon, New Consequences
We tend to think of what we see cascading across the network — the substance, the specific claims — as the problem. Much of it consists of old phenomena manifesting in new ways: rumors, harassment mobs, disinformation, propaganda. But it carries new consequences, in large part because of the size and speed of the networks across which it moves. In the 1910s, a rumor might have stayed confined to a village or town. In the 1960s, it might have percolated across television programs, if it could get past powerful gatekeepers. Now, in the 2020s, it moves through a murmuration of millions, trends on Twitter and is picked up by 24/7 mass media.
“We shape our tools, and thereafter they shape us,” argued Father John Culkin, a contemporary and friend of media theorist Marshall McLuhan. Theorists like Culkin and McLuhan — working in the 1960s, when television had seemingly upended the societal order — operated on the premise that a given technological system engendered norms. The system, the infrastructure itself, shaped society, which shaped behavior, which shaped society. The programming — the substance, the content — was somewhat secondary.
This thinking progressed, spanning disciplines, with a sharpening focus on curation’s role in an information system then composed of print, radio and the newest entrant, television. In a 1971 talk, Herbert Simon, a professor of computer science and organizational psychology, attempted to reckon with what the information glut of broadcast media had created: attention scarcity. His paper is perhaps most famous for this passage:
In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.
Most of the cost of information is not incurred by the producers, Simon argues, but by the recipients. The solution? Content curation — a system that, as he put it, “listens and thinks more than it speaks,” and that treats curation as a matter of withholding useless bait so that a recipient’s attention is not wasted flitting from one silly provocation to another.
I dug up the conference proceedings where Simon presented this argument. They include a discussion of the paper in which Simon’s colleagues responded to his theory, making arguments nearly identical to those of today. Karl Deutsch, then a professor of government at Harvard, expressed apprehension about curation, or “filtering,” as a solution to information glut — it might neglect to surface “uncongenial information,” in favor of showing the recipient only things they would receive favorably, leading to bad policy creation or suboptimal organizational behavior. Martin Shubik, then a professor of economics at Yale, tried to differentiate between data and information — is what we are seeing of value? From what was then the nascent ability of computers to play chess, he extrapolated the idea that information processing systems might eventually facilitate democracy. “Within a few years it may be possible to have a virtually instant referendum on many political issues,” he said. “This could represent a technical triumph — and a social disaster if instability resulted from instantaneous public reaction to incompletely understood affairs magnified by quick feedback.”
Though spoken half a century ago, the phrase encapsulates the dynamics of where we find ourselves today: “a technical triumph, and a social disaster.”
Simon, Deutsch and Shubik were discussing one of social media’s biggest fixations more than a decade before Mark Zuckerberg was even born. Content curation — deciding what information reaches whom — is complicated, yet critical. In the age of social media, however, conversations about this challenge have largely devolved into controversies about a particular form of reactive curation: content moderation, which attempts to sift the “good” from the “bad.” Today, the distributed character of the information ecosystem ensures that so-called “bad” content can emerge from anywhere and “go viral” at any time, with each individual participating user shouldering only a faint sliver of responsibility. A single retweet or share or like is individually inconsequential, but the murmuration may be collectively disastrous as it shapes the behavior of the network, which shapes the structure of the network, which shapes the behavior.
Substance As The Red Herring
In truth, the overwhelming majority of platform content moderation is dedicated to unobjectionable things like protecting children from porn or eliminating fraud and spam. However, since curation organizes and then directs the attention of the flock, the debate over what is curated and moderated is of great political importance because of its potential downstream impact on real-world power. And so, we have reached a point in which the conversation about what to do about disinformation, rumors, hate speech and harassment mobs is, itself, intractably polarized.
But the daily aggrievement cycles about individual pieces of content being moderated or not are a red herring. We are treating the worst dynamics of today’s online ecosystem as problems of speech in the new technological environment, rather than as challenges of curation and network organization.
This overfocus on the substance — misinformation, disinformation, propaganda — and the fight over content moderation (and regulatory remedies like revising Section 230) make us miss opportunities to examine the structure — and, in turn, to address the polarization, factional behavior and harmful dynamics that it sows.
So what would a structural reworking entail? How many birds should we see? Which birds? When?
First, it entails diverging from The Discourse of the past several years. Significant and sustained attention to the downsides of social media, including from Congressional leaders, began in 2017, but the idea that “it’s the design, stupid” never gained much currency in the public conversation. Some academic researchers and activist groups, such as the Center for Humane Technology, argued that recommender systems, nudges and attention traps seemed to be leading to Bad Things, but they had little in the way of evidence. We have more of that now, including from whistleblowers, independent researchers and journalists. At the time, though, the immediacy of some of the harms, from election interference to growing evidence of a genocide in Myanmar, suggested a need for quick solutions, not system-wide interrogations.
There was only minimal access to data for platform outsiders. Calls to reform the platforms turned primarily to arguments for either dismantling them (antitrust) or creating accountability via a stronger content moderation regime (the myriad disjointed calls to reform Section 230 from both Republicans and Democrats). Since 2017, however, Congressional lawmakers have introduced a few bills but accomplished very little. Hyperpartisans now fundraise off of public outrage; some have made being “tough on Big Tech” a key plank of their platform for years now, while delivering little beyond soundbites that can themselves be digested on Twitter Trending Topics.
Tech reformation conversations today remain heavily focused on content moderation of the substance, now framed as “free speech vs. censorship” — a simplistic debate that goes nowhere, while driving daily murmurations of outrage. Trying to litigate rumors and fact-check conspiracy theories is a game of whack-a-mole that itself has negative political consequences. It attempts to address bad viral content — the end state — while leaving in place the network structures and nudges that facilitate its reach.
More promising ideas are emerging. On the regulatory front, there are bills that mandate transparency, like the Platform Accountability and Transparency Act (PATA), in order to grant visibility into what is actually happening at the network level and better differentiate between real harm and moral panic. At present, data access into these critical systems of social connection and communication is granted entirely at the discretion of the owner, and owners may change. More visibility into the ways in which the networks are brought together, and the ways in which their attention is steered, could give rise to far more substantive debates about what categories of online behavior we seek to promote or prevent. For example, transparency into how QAnon communities formed might have allowed us to better understand the phenomenon — perhaps in time to mitigate some of its destructive effects on its adherents, or to prevent offline violence.
But achieving real, enforceable transparency laws will be challenging. Understandably, social media companies are loath to permit outside scrutiny of their network structures. In part, platforms avoid transparency because it offers few immediately tangible benefits and several potential drawbacks, including negative press coverage or criticism in academic research. In part, this is because of the foundational business incentive that keeps the flocks in motion: if my system produces more engagement than yours, I make more money. And, on the regulatory front, there is the simple reality that tough-on-tech language about revoking legal protections or breaking up businesses grabs attention; far fewer people get amped up over transparency.
Second, we must move beyond thinking of platform content moderation policies as “the solution” and prioritize rethinking design. Policy establishes guardrails and provides justification to disrupt certain information cascades, but does so reactively and, presently, based on the message substance. Although policy shapes propagation, it does so by serving as a limiter on certain topics or types of rhetoric. Design, by contrast, has the potential to shape propagation through curation, nudges or friction.
For example, Twitter might choose to eliminate its Trending feature entirely, or disable it in certain geographies during sensitive moments like elections; at a minimum, it might limit nudges to surfacing actual large-scale or regional trends, not simply small-scale ragebait. Instagram might enact a maximum follower count. Facebook might introduce more friction into its Groups, allowing only a certain number of users to join a specific Group within a given timeframe. These interventions are substance-agnostic rather than reactive.
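As an illustration of that last idea, here is a minimal sketch of a substance-agnostic friction mechanism: a rolling-window cap on how many users may join a given group, regardless of what the group is about. The cap and window values below are arbitrary placeholders, not anything a platform actually uses.

```python
import time
from collections import defaultdict, deque

# Sketch of substance-agnostic friction: cap how many users may join a given
# group within a rolling time window. The numbers are hypothetical placeholders.

MAX_JOINS = 100          # hypothetical cap per group
WINDOW_SECONDS = 3600    # rolling one-hour window

_join_log = defaultdict(deque)   # group_id -> timestamps of recent joins

def try_join(group_id: str, now: float | None = None) -> bool:
    """Return True if the join is allowed, False if the group is rate-limited."""
    now = time.time() if now is None else now
    log = _join_log[group_id]
    # Drop joins that have aged out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= MAX_JOINS:
        return False          # friction: ask the user to try again later
    log.append(now)
    return True
```

The rule never inspects what the group discusses; it only slows the speed at which any flock can assemble.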
In the short term, design interventions might be a self-regulatory endeavor — something platforms enact in good faith or to fend off looming, more draconian legislation. Here, too, however, we are confronted by the incentives: the design shapes the system and begets the behavior, but if the resulting behavior includes less time on site, less active flocks, less monetization, well…the incentives that run counter to that have won out for years now.
To complement policy and design, to reconcile these questions, we need an ambitious, dedicated field of study focused on the emergence and influence of collective beliefs that traces threads between areas like disinformation, extremism, and propaganda studies, and across disciplines including communication, information science, psychology, and sociology. We presently don’t know enough about how people believe and act together as groups, or how beliefs can be incepted, influenced or managed by other people, groups or information systems.
Studies of emergent behavior among animals show that there are certain networks that are simply sub-optimal in their construction — networks that lead schools/hives/flocks to collapse, starve or die. Consider the ant mill, or “death spiral,” in which army ants lose the pheromone track by which they navigate and begin to follow each other in an endless spiral, walking in circles until they eventually die of exhaustion. While dubbing our current system of communications infrastructure and nudges a “death spiral” may seem theatrical, there are deep, systemic and dangerous flaws embedded in the structure’s DNA.
Indeed, we are presently paying down the debt of bad design decisions made in the past. The networks designed years ago — when amoral recommendation engines suggested, for example, that anti-vaccine activists might like to join QAnon communities — created real ties. They made suggestions and changed how we interact; the flocks surrounding us became established. Even as we rethink and rework recommendations and nudges, repositioning the specific seven birds in the field of view, the flocks from which we can choose are already formed — and some are toxic. We may, at this point, be better served as a society by starting from scratch and making a mass exodus from the present ecosystem into something entirely new. Web3 or the metaverse, perhaps, if it materializes; new apps, if all of that turns out to be vaporware.
But if starting from scratch isn’t an option, we might draw on work from computational biology and complex systems in order to re-envision our social media experience in a more productive, content-agnostic way. We might re-evaluate how platforms connect their users, and re-examine the factors that determine what platform recommenders and curation algorithms push into the field of view, considering the combination of structure (the network), substance (rhetoric, or emotional connotation) and incentives that shapes information cascades. This could have a far greater impact than battling over content moderation as a path toward constructing a healthier information ecosystem. Our online murmurations can be more like the starlings’ — chaotic, yes, but also elegant — and less like the toxic emergent behaviors we have fallen into today.