Screen Time Was Built to Feel Like Your Fault

Who owns your time when your time is the product?

Free is a price, not an ethic.

The attention economy didn’t start as a grand conspiracy. It started as a scarcity problem. Herbert Simon spelled it out in 1971: in an information-rich world, the wealth of information creates a poverty of attention. [1] Information consumes attention, so attention becomes the resource that needs allocating.

That line is a blueprint. If attention is scarce, somebody will build an allocation system. If an allocation system exists, somebody will monetize it. If monetization works, the system will optimize for what pays.

This is not sinister. It’s just physics, applied to markets and dashboards. The ethical problem arrives when allocation systems become very good at capturing attention, and much less good at respecting it.

The business model that measures minutes

Tim Wu says the quiet part out loud: attention merchants don’t mainly sell content. They sell access to you. [2] They capture attention and resell it, usually through advertising and behavioral targeting.

Once you see that, product decisions stop looking like “features” and start looking like revenue plumbing. Infinite scroll removes stopping points. Autoplay keeps the flow going after conscious choice ends. Notifications pull you back when you wander off. Personalization reduces the odds you get bored and exit.

These are not inherently immoral. They become morally suspect when the system’s success metric is “more minutes” and your success metric is “useful, meaningful, enough.”

Global reports put adults’ average time online at roughly six and a half hours per day. [3] When a system gets that much of your waking life, “it’s just an app” stops being a serious ethical defense. You don’t build a city-wide transit system and then shrug when it changes how people move. You also don’t build a consciousness-scale persuasion system and pretend your only responsibility is uptime.

Why the utilitarian math keeps failing

A utilitarian would weigh benefits against harms. The benefit column is real: free services, broad access, useful discovery, connection for isolated people, entertainment that makes life lighter. The harm column is also real: sleep erosion, fragmented focus, anxiety loops, political polarization, reduced capacity for sustained thought, a steady feeling of being late to your own life.

The problem is not that benefits don’t exist. The problem is that platforms are instrumented for engagement. They A/B test for clicks, retention, shares. They don’t test for “did the user feel proud of this hour later?” because that’s not what the business model pays for.

So the utilitarian spreadsheet is biased at the data-collection layer. We score the world using sensors built to maximize revenue.
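To make that bias concrete, here is a minimal, hypothetical Python sketch of the kind of session record an engagement dashboard actually keeps. The schema, names, and weights (SessionEvent, engagement_score) are illustrative assumptions, not any real platform’s telemetry; the point is that the sensor has no field for the harm column.

```python
# Hypothetical sketch: what an engagement-optimized event stream records.
# Schema and weights are illustrative, not taken from any real platform.
from dataclasses import dataclass


@dataclass
class SessionEvent:
    user_id: str
    seconds_on_screen: float
    clicks: int
    shares: int
    returned_within_24h: bool
    # Conspicuously absent: "was this hour worth it to you later?"
    # Nothing in the stream can answer that, so no A/B test ever weighs it.


def engagement_score(e: SessionEvent) -> float:
    """The quantity the experiment optimizes: minutes, clicks, shares, retention."""
    return (
        e.seconds_on_screen / 60
        + 2 * e.clicks
        + 5 * e.shares
        + (10 if e.returned_within_24h else 0)
    )
```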

There’s also a distribution issue. Even if the average outcome looks fine, harms land disproportionately on minors, compulsive users, and people in vulnerable life phases. Karolinska Institutet followed more than 8,000 children from ages 10 to 14 and found heavier social media use linked to declining concentration. [11] Not a moral panic. A signal that certain designs have predictable cognitive costs, especially for developing brains.

Utilitarianism doesn’t condemn attention-based businesses automatically. What it condemns is a system that systematically undercounts harms because harms are somebody else’s problem.

When “choice” stops being choice

Kantian ethics cares less about totals and more about whether people are treated with dignity, including the ability to make free and informed decisions. Here’s the core issue for what I’ll call attention autonomy: persuasive systems can win without persuasion. They can win by bypassing deliberation, by exhausting users into compliance, by arranging interfaces so that “choice” becomes a predictable outcome of friction, asymmetry, and confusion.

Onora O’Neill has spent years pointing out a modern mistake: we treat autonomy as a thin notion of individual choice backed by checkboxes. [6] Her critique, developed in bioethics but portable to product design, is that this “autonomy” can be ethically hollow.

Translated into app reality: a consent banner is not consent if it’s engineered for surrender. A settings page is not respect if it’s designed as a maze. “You agreed to the terms” is not a moral trump card if the terms are unreadable and the alternative is social exclusion.

Europe’s Digital Services Act makes this concern explicit. Article 25 prohibits interfaces that deceive, manipulate, or materially impair users’ ability to make free decisions. [5] You can read that as regulation. You can also read it as an ethical diagnosis: some interface designs function like a rigged intersection. Traffic still flows, but not because drivers freely chose the route.

If autonomy matters, attention capture becomes morally charged the moment the system shapes behavior in ways the user did not meaningfully authorize.

What habits are we training

Virtue ethics asks a different question: what kind of character is this environment cultivating?

You don’t become patient, attentive, or wise by subscribing to those traits. You become them through repeated practice in environments that reward them. Shannon Vallor treats “technomoral virtues” as real skills we need to cultivate in a technologically saturated world. [7] Virtue ethics is not nostalgia. It’s a practical recognition that environments train people. So do tools. So do metrics.

Now consider the default training regimen: fast rewards, social feedback as primary reinforcer, low-cost switching between stimuli, few stopping points, a constant whisper to compare, react, refresh.

This doesn’t make anyone evil. It makes certain virtues harder to practice and certain vices easier to fall into.

If the system trains impulsivity, it gets impulsive users. If it trains outrage, it gets outraged users. If it trains shallow browsing, it gets people who can browse anything and remember nothing.

That’s not a personal failing. That’s how training works.

Matthew Crawford’s “attentional commons” idea becomes hard to ignore here. [9] If attention is shaped by public spaces, noise, and commercial claims on perception, then attention isn’t only private property. It’s a shared environment we all breathe. Pollute the commons and you don’t only harm individuals who walk through it. You change what the place becomes.

Now we’re back to systems. Not “people are weak,” but “the terrain is engineered.”

Technology participates

At this point a common defense shows up: “We’re just giving people what they want.”

That claim assumes technology is a neutral pipe carrying preferences. Post-phenomenology challenges this. Peter-Paul Verbeek argues that technologies mediate our relationship with the world; they shape perception, action, and even moral agency. [8]

This breaks the simplistic division of labor where users are responsible for choices, platforms are responsible for options, and the rest is just preference. Verbeek’s point is that design does more than provide options. It configures the choice architecture itself. It changes what feels salient, easy, normal, “next.”

The system participates.

Once you accept that, the ethics changes shape. It’s no longer enough for a company to say “we didn’t force anyone.” The moral question becomes: what kinds of agency did your system make likely? What kinds did it make costly?

A platform optimizing for engagement has a built-in temptation to make entering frictionless and exiting weird. Pausing feels like falling behind. That’s not an accident. It’s an optimizer doing what it was paid to do.

The Fogg Behavior Model is a clean description of why this works: behavior occurs when motivation, ability, and a prompt converge. [10] Increase prompts, reduce friction, keep motivation simmering, and you don’t need mind control. You get predictable behavior.
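A minimal sketch of that convergence, assuming a simple threshold reading of B = MAP; the multiplication and the threshold value are my illustrative simplification, not Fogg’s own formalism.

```python
# Illustrative threshold reading of B = MAP: a prompted behavior fires when
# motivation and ability together clear an activation threshold.
def behavior_occurs(motivation: float, ability: float, prompt: bool,
                    threshold: float = 1.0) -> bool:
    """True when a prompted user is motivated and able enough to act."""
    if not prompt:
        return False  # no prompt, no behavior, however motivated the user is
    return motivation * ability >= threshold


# An optimizer doesn't need to raise motivation. It multiplies prompts
# (notifications) and raises ability (one-tap autoplay, infinite scroll),
# pushing ordinary motivation over the threshold again and again.
print(behavior_occurs(motivation=0.4, ability=3.0, prompt=True))  # True: low intent, frictionless path
print(behavior_occurs(motivation=0.4, ability=0.5, prompt=True))  # False: same intent, more friction
```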

Ethics enters when those levers serve the platform’s ends at the expense of yours, especially when you can’t see the levers being pulled.

So who owns your time

Nobody owns your time like property. Time isn’t a car you can lock. It’s the medium you’re made of.

But you can talk about control rights. You can talk about whether a system respects your ability to direct attention toward what you judge valuable.

That’s why attention autonomy is the real stake. Not productivity. Not screen time. Not a purity contest about devices.

Attention autonomy means you can enter an experience knowingly, stay for reasons you endorse, leave without a wrestling match, and return to your life with your agency intact.

In a fair system, your goals and the system’s goals align often enough. In an extractive system, alignment is incidental. The system doesn’t need you satisfied. It needs you there.

James Williams, writing from inside ad tech before stepping into philosophy, frames the attention economy as a threat to freedom precisely because it aims at what we attend to and therefore at what we become able to choose. [4] That is the cleanest ethical diagnosis I know: when you can steer attention, you can steer choice. If you can steer choice, you are no longer selling a service. You are renting influence over a person.

A note for designers

The ethical target is not “add more pop-ups that scold people for scrolling.” That just adds noise and trains dismissal.

The target is a fair attention contract: clear stopping points, exits as legible as entrances, controls that don’t punish you for using them, defaults that don’t treat attention as captive inventory. You can instrument for regret, not just engagement. Ask users what they came for and whether they got it. Treat “no” as a signal, not as churn to be patched.
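As one sketch of what instrumenting for regret could look like, under the assumption of a simple post-session check-in; SessionCheckin and regret_rate are hypothetical names, not an existing API.

```python
# Hypothetical post-session check-in: measure whether the visit matched intent,
# and treat a rising regret rate as a failing test, not as churn to be patched.
from dataclasses import dataclass


@dataclass
class SessionCheckin:
    stated_intent: str          # what the user said they came for, asked on entry
    got_it: bool                # did they get it? asked on exit
    time_felt_well_spent: bool  # would they endorse this hour in hindsight?


def regret_rate(checkins: list[SessionCheckin]) -> float:
    """Share of sessions the user would not endorse after the fact."""
    if not checkins:
        return 0.0
    regretted = sum(1 for c in checkins if not (c.got_it and c.time_felt_well_spent))
    return regretted / len(checkins)
```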

This isn’t about making products boring. It’s about making agency non-optional.

The question to test

Back on the train, you look up and catch a stranger doing the same. A half-second of recognition. Then both of you drop back into your feeds like commuters re-entering tunnels.

Attention is where life actually occurs. We can outsource a lot, but we can’t outsource living.

So here’s a practical test you can run today, on any product:

When you try to stop, does the system help you stop? When you try to aim your mind, does the system help you aim? When you try to choose, does the system help you choose?

If the system makes those actions harder, it’s not competing for your attention. It’s claiming it.

And claims should be examined.

  • [1] Herbert A. Simon (1971). Designing Organizations for an Information-Rich World.
  • [2] Tim Wu (2016). The Attention Merchants: The Epic Scramble to Get Inside Our Heads.
  • [3] DataReportal (2025). Digital 2025: Global Overview Report.
  • [4] James Williams (2018). Stand Out of Our Light: Freedom and Resistance in the Attention Economy.
  • [5] European Union (2022). Regulation (EU) 2022/2065 (Digital Services Act), Article 25.
  • [6] Onora O’Neill (2002). Autonomy and Trust in Bioethics.
  • [7] Shannon Vallor (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting.
  • [8] Peter-Paul Verbeek (2005). What Things Do: Philosophical Reflections on Technology, Agency, and Design.
  • [9] Matthew B. Crawford (2015). The World Beyond Your Head: On Becoming an Individual in an Age of Distraction.
  • [10] BJ Fogg (n.d.). Fogg Behavior Model (B=MAP).
  • [11] Karolinska Institutet (2025). Using social media may impair children’s attention.
