You can tell a lot about a society by what it makes effortless.
We made it effortless to publish.
Not write. Not edit. Not stand behind what you said the next morning.
Publish.
A share button is a printing press you can operate half-asleep. No apprenticeship. No accountability loop. Just a clean little arrow that says: move this.
That arrow sits under journalism, under propaganda, under satire, under someone’s uncle doing “research,” under a screenshot of a screenshot of a screenshot. It doesn’t discriminate. It can’t. The machine knows one verb: distribute.
Distribution is not a neutral act. The World Economic Forum lists misinformation and disinformation among the top short-term global risks, not because lies are new, but because the cost of moving them dropped to zero. 9
Still, none of this starts with bad actors. It starts with a moment most of us recognize.
You’re on your phone. A headline does its little magic trick. You feel a jolt, recognition, anger, relief. You want to mark yourself as the kind of person who sees this.
So you share it.
Maybe you open the link. Maybe you don’t. Either way, the post leaves your hands and enters the weather system.
Here’s the sentence to keep in your pocket:
I didn’t read it. I just moved it.
First loop: what you think you’re doing
We tell ourselves a cozy story about sharing.
We’re “spreading awareness.” We’re “starting a conversation.” We’re “keeping people informed.” Sometimes we are. Often we’re doing something simpler: signaling.
Sharing is a social gesture with a news-shaped wrapper. It’s a nod to your group, a way to say look at this without spending the time to turn “this” into your own words. That’s normal human behavior. The weird part is that we built global infrastructure that treats this gesture like a reliable channel for facts.
The data on “read it first” is brutal.
A 2025 paper in Nature Human Behaviour analyzed over 35 million public Facebook posts and found that around 75% of shared links were passed along without the sharer ever clicking through. 3 That doesn’t mean 75% of people are lazy. It means the interface made it easy to behave like a person in a hurry, and people took the deal.
Two caveats. First, “click” is not “read.” A click can be a bounce, a mis-tap, a skim-and-close. The “actually read carefully” number is almost certainly smaller. Second, “no click” doesn’t mean “no knowledge”; some people read the piece elsewhere. But the system doesn’t know the difference. It treats all shares as the same endorsement.
That’s the design bug: the system collapses different intentions into the same high-powered action.
A headline is not a receipt.
Yet we let it function like one.
I didn’t read it. I just moved it.
Second loop: what a million taps do to the network
When you scale an action, you change its meaning.
One person forwarding a rumor is annoying. A network optimized for forwarding rumors is a governance problem.
Misinformation thrives here for a reason that’s almost boring: it fits the container.
Fast feeds reward novelty, emotional punch, and social utility. Truth is often slower. It’s full of qualifiers. It arrives with context, uncertainty, “we don’t know yet.” That’s how reality talks when it’s being honest.
In 2018, Vosoughi, Roy, and Aral analyzed rumor cascades on Twitter and found that false news diffused farther, faster, deeper, and more broadly than true news. 4 The architecture of the platform, combined with human behavior, produced a measurable advantage for falsehood.
You can blame bots, foreign actors, algorithms. They exist. They matter. But they don’t get us off the hook.
The same research emphasizes that humans are key drivers of diffusion. The lure is not evil; it’s novelty and social reinforcement. 4 We share what feels surprising, what marks us as insiders, what confirms the shape of the world we already believe in.
This is where “frictionless” stops being a UX preference and becomes an epistemic policy.
If the default mode of participation is reflexive broadcasting, the network will privilege content that works at reflex speed. That content tends to be emotionally clean (anger, fear, triumph), cognitively cheap (a simple villain, a simple fix), and socially useful (easy to signal with, easy to recruit with). It also tends to be resistant to correction.
Corrections are heavy. They don’t travel far in a system that rewards light objects moving fast.
Stuart Hall made the point decades ago: meaning isn’t transmitted like a file; it’s interpreted through culture and identity. bell hooks pushed the same theme with more heat: representation shapes who gets seen as credible, human, or disposable.
Translate that into platform dynamics and you get a sharper warning: misinformation is not evenly distributed harm. It lands differently depending on who you are. It is often targeted. 9
The usual response is to treat this as a content problem. Delete the post. Label the tweet. Demote the link.
Sometimes necessary. But it’s downstream.
Upstream is the share button.
A printing press that requires no pause will eventually print things nobody can unprint.
I didn’t read it. I just moved it.
The pivot: attention, not intelligence
Here’s where the debate usually goes wrong. It starts insulting people.
The common narrative: people share misinformation because they’re gullible.
That’s not what the evidence suggests.
A big chunk of misinformation sharing looks like an attention problem, not a credulity problem. Pennycook and colleagues argue that people often share low-quality content not because they prefer falsehood, but because their attention is focused on other goals at the moment of sharing: social signaling, group identity, the small dopamine hit of participation. 5 Shift attention toward accuracy and misinformation sharing drops. A meta-analysis of 20 experiments confirms the effect is replicable. 6
This reframe matters.
We’re not trying to fix the user. We’re trying to stop designing interfaces that keep accuracy out of view at the exact moment it matters.
A feed can make you faster than you are wise. That’s not a personal failing. That’s a systems property.
The person sharing isn’t stupid. The person sharing is busy, and the platform is happy to convert that busyness into distribution.
I didn’t read it. I just moved it.
Third loop: the intervention trap
Now the part where people get twitchy.
Suggest friction and someone will accuse you of wanting to nanny users, censor speech, or install a moral pop-up asking Are you sure you want to be a bad person today?
That fear is reasonable. The internet is full of fake concern and safety theater.
But “friction” is not a single move. It’s a design family.
Some friction is disrespectful; it treats the user like a lab rat and the designer like a parent. People learn to click through it. It becomes noise.
Other friction is a well-placed handrail. It doesn’t force a decision. It makes the edge of the cliff easier to see.
Twitter’s “read before retweet” prompt is a rare case where a small intervention produced measurable change without pretending to solve epistemology. In 2020, Twitter tested a prompt that appeared when someone tried to retweet an article they hadn’t opened. People opened articles 40% more often after seeing the prompt. The share pattern “opened the article and then retweeted” increased by 33%. 1
Not magic. Not huge. Still meaningful.
The prompt asked one question in UI form: Do you want to amplify something you didn’t even open?
Some people clicked through. Some chose not to retweet after reading. That’s not failure. That’s the point. Some posts are best left unshared.
WhatsApp tackled a different failure mode: not “did you read,” but “can one message go exponential.” They tightened forwarding rules for highly forwarded messages, one chat at a time. The result: a 70% reduction in viral forwards globally. 7
In systems terms, this isn’t a morality prompt. It’s a rate limiter. You add rate limiters because one unbounded behavior can dominate the whole network.
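To make the mechanism concrete, here is a minimal sketch of that kind of limiter. It is not WhatsApp’s actual code; the threshold, the chat limits, and the function names are stand-ins for whatever values a real platform would tune.

```python
# A sketch of the rate-limiter idea behind a forwarding rule like WhatsApp's.
# Thresholds and names are illustrative assumptions, not the real system.

HIGHLY_FORWARDED = 5      # hops after which a message counts as "highly forwarded"
NORMAL_LIMIT = 5          # ordinary message: forward to up to 5 chats per action
VIRAL_LIMIT = 1           # highly forwarded message: one chat at a time

def max_forward_targets(forward_count: int) -> int:
    """How many chats can this message be forwarded to in a single action?"""
    return VIRAL_LIMIT if forward_count >= HIGHLY_FORWARDED else NORMAL_LIMIT

def try_forward(forward_count: int, requested_chats: list[str]) -> bool:
    """Allow the forward only if it stays under the per-action limit."""
    limit = max_forward_targets(forward_count)
    if len(requested_chats) > limit:
        print(f"Blocked: this message can reach at most {limit} chat(s) per forward.")
        return False
    return True

# A message that has already hopped through many chats can now only
# move one chat at a time -- the growth curve flattens.
try_forward(forward_count=12, requested_chats=["family", "work", "neighbors"])
```

Nothing in that sketch judges the content. It only bounds how fast one object can move.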
Two interventions, two failure modes:
Twitter nudges attention at the moment of amplification. WhatsApp throttles the growth curve of already-viral content.
Neither solves misinformation. Both change the physics of spread.
That’s the right level of ambition for UI interventions.
The trap within the trap
Product teams see results like “40% more opens” and conclude: prompts work. Add prompts everywhere.
That’s how you get cookie banners and safety modals that teach people to click “continue” faster.
Interventions that hold up in the real world need three properties.
Proportional. A person sharing a link to one friend is having a conversation. A person broadcasting to 50,000 followers is publishing. These are not the same action, even if the UI uses the same button. Friction should scale with reach. Low reach, low friction. High reach, higher intentionality. Not as punishment, as alignment.
Legible. The best friction feels like a clear road sign, not a surprise barricade. Twitter’s prompt worked because it was tied to a verifiable fact: you have not opened this link. It wasn’t “be more responsible.” It was “you haven’t even looked.” 1
Fair. Here’s the equity snag. If you tell someone “read before you share” and the link is behind a paywall, you’ve created a class gate. If the site is slow, bloated, and covered in traps, you’ve punished people for not having time or bandwidth.
That’s how “responsible sharing” turns into “sharing is for people with subscriptions and fiber internet.”
Good friction often requires supporting infrastructure: fast previews, clear source context, frictionless access to reading, not frictionless access to amplification.
Safiya Noble and Ruha Benjamin have made the point well: design choices that look neutral can reproduce power and bias. A friction that’s easy for some and burdensome for others isn’t responsibility. It’s uneven governance.
So yes, you can be sharp about this:
Platforms made sharing effortless because it was profitable. Then they acted surprised when cheap distribution produced expensive social outcomes.
The fix isn’t to moralize users. It’s to stop subsidizing reflexive amplification.
What this looks like, concretely
If you want a public sphere that can carry information without constantly flooding, treat sharing as a high-consequence action where it actually is one.
Not everywhere. Precisely where it matters.
Treat “share” like “publish” when reach is large. If the system knows you’re about to broadcast widely, require more than a tap. Show the headline plus a one-sentence excerpt. Show the publisher and date. Show “you have not opened this link” if true. Offer “share with note” as the default. The point isn’t to force commentary. It’s to slow the autopilot.
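As a sketch of what “treat share like publish” could mean in code: the follower threshold, field names, and prompt copy below are invented for illustration, not any platform’s real API.

```python
# A sketch of reach-proportional share friction. Thresholds, fields, and
# prompt text are illustrative assumptions.

from dataclasses import dataclass

BROADCAST_THRESHOLD = 10_000   # above this, "share" behaves more like "publish"

@dataclass
class ShareAttempt:
    follower_count: int
    opened_link: bool
    url: str

def friction_steps(attempt: ShareAttempt) -> list[str]:
    """Decide which extra steps to show before the share goes out."""
    steps = []
    if not attempt.opened_link:
        steps.append("Show: 'You have not opened this link.'")
    if attempt.follower_count >= BROADCAST_THRESHOLD:
        steps.append("Show headline, one-sentence excerpt, publisher, and date.")
        steps.append("Default to 'share with note' instead of a bare reshare.")
    return steps   # an empty list means the one-tap share goes through unchanged

# A small account sharing a link it opened sees nothing new;
# a large account resharing an unopened link gets the full pause.
print(friction_steps(ShareAttempt(follower_count=120, opened_link=True, url="https://example.org")))
print(friction_steps(ShareAttempt(follower_count=80_000, opened_link=False, url="https://example.org")))
```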
Make reading easier than reacting. This is the unglamorous part. If you add read-before-share nudges but keep the reading experience hostile, you’ve built a checkpoint with no road. Platforms can do basics: fast-rendered previews, consistent source cards, clear indicators for updates and corrections.
Put rate limits where exponential spread begins. WhatsApp proved that throttling virality changes behavior at scale. 7 Public platforms can do a version: when a link is already going viral, slow the repost velocity. Add a cooldown. Reduce the algorithmic boost until the system has more signals. It won’t feel great. Safety rarely does.
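Here is one way that throttle could look, sketched with made-up thresholds and an invented repost-velocity signal rather than any platform’s real ranking pipeline.

```python
# A sketch of a link-level cooldown: once a URL's repost velocity crosses
# a threshold, damp its ranking boost until more signals arrive.
# The window, threshold, and decay factor are illustrative assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600          # measure reposts over the last 10 minutes
VELOCITY_THRESHOLD = 500      # reposts per window that count as "going viral"
COOLDOWN_BOOST = 0.3          # viral links get 30% of their normal ranking boost

repost_times: dict[str, deque] = defaultdict(deque)

def record_repost(url: str) -> None:
    """Log a repost and drop events that fall outside the measurement window."""
    now = time.time()
    q = repost_times[url]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()

def ranking_boost(url: str) -> float:
    """Return the multiplier applied to this link's reach in the feed."""
    if len(repost_times[url]) >= VELOCITY_THRESHOLD:
        return COOLDOWN_BOOST   # slow the curve while the system gathers signals
    return 1.0
```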
Use accuracy cues sparingly. Accuracy prompts work. 5 They also become noise if overdeployed. Tie them to moments where autopilot sharing is most likely: sharing without opening, resharing screenshots, forwarding messages that have already been forwarded many times.
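A sketch of that targeting logic, using the trigger conditions named above and assumed field names:

```python
# A sketch of "use accuracy cues sparingly": prompt only when autopilot
# sharing is most likely. The forward-count threshold is an assumption.

def should_show_accuracy_prompt(opened_link: bool,
                                is_screenshot: bool,
                                forward_count: int) -> bool:
    """Fire the prompt only on the highest-risk share patterns."""
    sharing_unread = not opened_link
    resharing_screenshot = is_screenshot
    heavily_forwarded = forward_count >= 5
    return sharing_unread or resharing_screenshot or heavily_forwarded

# Everyday shares stay prompt-free, so the cue keeps its meaning.
should_show_accuracy_prompt(opened_link=True, is_screenshot=False, forward_count=0)   # False
should_show_accuracy_prompt(opened_link=False, is_screenshot=False, forward_count=0)  # True
```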
The goal is to make accuracy available to attention. Not to moralize.
Back to the beginning
You’re on your phone. A headline grabs you.
You are not dumb. You are busy. You are human.
The platform is happy to convert your feeling into distribution.
This is the spiral lesson: the same tiny action means different things at different scales.
At the personal scale, sharing is a gesture. At the network scale, sharing is an accelerant. At the civic scale, sharing is infrastructure.
We built the infrastructure to optimize for velocity. Then we asked it to deliver trust. That’s like paving a highway through a neighborhood and expecting it to feel like a park.
If you want something that functions like a public square, you have to design it like one. You don’t need dramatic censorship. You need better traffic engineering.
Which returns us to the sentence that should make you slightly uncomfortable, because it’s true often enough to matter:
I didn’t read it. I just moved it.
The good news is that this is not fate. It’s a button.
Buttons can be redesigned.
The question is whether anyone with the power to redesign them has any incentive to try.
And you just moved this.
Sources
- [1] Laura Hazard Owen, “Following successful experiments, Twitter will prompt all users to read the articles they’re about to retweet,” Nieman Lab, 2020.
- [2] Sarah Perez, “Twitter plans to bring prompts to ‘read before you retweet’ to all users,” TechCrunch, 2020.
- [3] S. Shyam Sundar et al., “Sharing without clicking on news in social media,” Nature Human Behaviour, 2025 (abstract via PubMed).
- [4] Soroush Vosoughi, Deb Roy, and Sinan Aral, “The spread of true and false news online,” Science, 2018.
- [5] Gordon Pennycook et al., “Shifting attention to accuracy can reduce misinformation online,” Nature, 2021.
- [6] Gordon Pennycook and David G. Rand, “Accuracy prompts are a replicable and generalizable approach for reducing the spread of misinformation,” Nature Communications, 2022.
- [7] Manish Singh, “WhatsApp’s new limit cuts virality of ‘highly forwarded’ messages by 70%,” TechCrunch, 2020.
- [8] Alex Hern, “WhatsApp to impose new limit on forwarding to fight fake news,” The Guardian, 2020.
- [9] World Economic Forum, “Global Risks Report 2025” press release, 2025.
