Max Kless

September 10, 2025

Calm Design Dies in Engagement Metrics

Why calm interfaces fail inside engagement machines


A copilot that won’t stop talking

You open your editor to fix one small thing. The copilot is already there, finishing your sentence in pale grey ink. You didn’t ask. You did not even warm up your own thought. The system isn’t rude. It is efficient. It is also steering the work.

The quiet shift is this: you stop authoring and start choosing. You become the reviewer of a stream that never needed to exist.

Sometimes that trade is worth it. Sometimes it is exactly what you want.

The problem is the default. A helpful option becomes an ambient presence. The tool that “augments” starts to set the pace and the tone.

The design question isn’t whether copilots belong in our tools. That ship left the harbor. The question is what kind of copilot earns a place in someone’s attention without squatting there.

Two older frameworks do better work here than most modern “AI UX” decks: Calm Technology and Time Well Spent. [1][3] But the lesson isn’t what you expect. Put them together and they don’t add; they correct. Calm design patterns are necessary. They are also not enough. You can’t whisper inside a machine built to shout.

The attention budget is real

Attention has a center and a periphery. The center is what you actively focus on. The periphery is the wider field that keeps you oriented, the engine noise you hear without staring at the dashboard. Weiser and Brown made that distinction structural. [1]

Most software acts like the center is free real estate. Every feature wants a seat in the spotlight. Copilots intensify this because conversation is sticky. Humans are trained to respond. If the system speaks first, the social contract does the rest.

Here’s where calm stops being a mood and becomes a constraint.

If your copilot lives in the center by default, you are building a second job on top of the user’s job: managing the copilot. If it lives in the periphery until invited, you build something closer to good infrastructure: present, legible, quiet.

The trio that matters: stay peripheral, move to center only when summoned, preserve the user’s sense of orientation. Weiser and Brown called this third property “locatedness.” When your periphery works, you know what just happened, what is happening, and what is likely to happen next. You aren’t surprised in the bad way. [1]

This is a practical spec for copilot behavior. Ghost text that becomes real only on acceptance. An escalation ladder from subtle indicator to one-line suggestion to richer explanation to blocking confirmation, each rung user-triggered or strongly justified by context. Assumptions surfaced early. The most humane sentence a copilot can produce is often boring: “I’m interpreting your intent as X. Is that right?”
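The escalation ladder can be sketched as a tiny state machine. This is an illustrative sketch only, not any real copilot API; `EscalationLevel`, `EscalationRequest`, and `nextLevel` are hypothetical names. The one invariant it encodes is the essay’s rule: the copilot climbs at most one rung at a time, and only when the user asks or context justifies it.

```typescript
// Hypothetical escalation ladder for copilot output: subtle indicator ->
// one-line suggestion -> richer explanation -> blocking confirmation.
type EscalationLevel = "indicator" | "suggestion" | "explanation" | "confirmation";

const LADDER: EscalationLevel[] = ["indicator", "suggestion", "explanation", "confirmation"];

interface EscalationRequest {
  current: EscalationLevel;
  userTriggered: boolean;   // did the user summon the next rung?
  justification?: string;   // context strong enough to warrant escalating
}

// Returns the next allowed level: one rung up when triggered or justified,
// otherwise the current level. The copilot never jumps rungs on its own.
function nextLevel(req: EscalationRequest): EscalationLevel {
  const i = LADDER.indexOf(req.current);
  if (i === LADDER.length - 1) return req.current; // already at the top
  if (req.userTriggered || req.justification) return LADDER[i + 1];
  return req.current; // stay peripheral by default
}
```

The point of the shape is that “stay put” is the default return value; escalation is the exception that must be argued for.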

At this point it is tempting to think: keep things peripheral and we are done.

Here’s where the second framework walks in with a clipboard.

The interface can’t be calm if the incentives are not

Calm Technology tells you how to shape attention. Time Well Spent tells you why your organization will be tempted to do the opposite.

The Center for Humane Technology describes “attention-harvesting design” as a pattern tied to incentives that reward constant engagement. [3] Red notifications, algorithmic curation, intermittent reinforcement, infinite scroll. These aren’t random quirks. They are engagement tactics. And we are now adding copilots everywhere.

If the scoreboard rewards “more time in product,” your copilot will learn to keep the conversation going. It will become proactive. It will become needy. It will sound helpful while quietly extending the session.

A calm UI will not survive long inside an engagement-maximizing machine.

This is the correction the two frameworks make together. Calm Technology gives you the interaction patterns. Time Well Spent shows you the structural layer that will corrupt those patterns if left unaddressed. You can design the gentlest interface in the world; if the metrics underneath still optimize for minutes, the system will route around your good intentions.

Donella Meadows’ leverage points are useful precisely because they are boring. [5] In complex systems, the highest-leverage interventions sit upstream: goals, metrics, information flows. People push at the obvious places and the system snaps back.

So you need to change what success means at the level of instrumentation, not in a mission statement.

Instead of optimizing for minutes, optimize for completion. For reduced backtracking. For fewer “undo” moments. For users reaching their own stopping points. For sessions that end because the job is done.
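What that means at the instrumentation layer can be made concrete. A minimal sketch, assuming a hypothetical telemetry schema (`Session`, `sessionScore`, and all field names are invented for illustration): the score rewards completion and penalizes backtracking, and minutes never appear in the formula.

```typescript
// Hypothetical session telemetry. Note what is NOT here as an input
// to the score: time spent.
interface Session {
  minutes: number;            // recorded, but deliberately ignored below
  suggestionsAccepted: number;
  acceptsUndone: number;      // accepted, then reverted by the user
  completedGoal: boolean;     // the user reached their own stopping point
}

// Higher is better. Completion earns a point; every undone acceptance
// eats into it. A long session with no completion scores nothing.
function sessionScore(s: Session): number {
  const undoRate = s.suggestionsAccepted === 0
    ? 0
    : s.acceptsUndone / s.suggestionsAccepted;
  return (s.completedGoal ? 1 : 0) - undoRate;
}
```

The design choice is the omission: as long as `minutes` feeds the scoreboard, everything downstream of the scoreboard will learn to stretch the session.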

The problem isn’t the popup. It is the scoreboard.

Patterns that survive the incentive layer

The frameworks get you oriented. The patterns change what ships. But these patterns only hold if the incentive layer lets them.

Default to pull. Let the user summon the copilot more often than the copilot summons the user. If you want to show suggested prompts, do it like a trailhead map: visible, skimmable, easy to ignore. The moment suggested prompts become a drip-feed of “next best actions,” you have recreated infinite scroll in a suit.

Make acceptance reversible, and make reversibility visible. A lot of copilot harm isn’t malicious. It is just speed. The model is faster than comprehension. Treat acceptance as provisional. Show the diff before you apply. Show the preview before you send. This isn’t a popup tax. It is how good tools work.

Ask for intent, then show work at the right altitude. A copilot that jumps straight to output is like a GPS that refuses to show the route. You can’t correct what you can’t see. Confirm intent in plain language. Propose a draft. Expose key assumptions compactly. Offer deeper rationale only when requested. Sometimes you want the view from 30,000 feet. Sometimes you want street level. Let the user choose.

Put “done” back into the interface. Attention-harvesting systems remove stopping cues. Infinite scroll is the canonical example. Copilots import the same pattern into work: a sidebar that always has one more suggestion, a chat that never ends, a stream of follow-ups that turns a simple task into a conversation you can’t leave without feeling rude. Humane copilots create stopping cues. They mark completion. They summarize what changed. Then they go quiet.

A small, optional wrap-up is more respectful than five enthusiastic follow-up questions.

The last mile

If you want a single test for humane copilot design:

After a month of use, is the user more capable, or merely more dependent?

AI copilots sit right next to thought, next to the line you are about to write, the decision you are about to make. That proximity isn’t a reason to panic. It is a reason to be precise.

A good backcountry guide doesn’t shout directions every ten meters. They point when the trail forks. They make sure you still know where you are. Then they let you walk.

But here is the part the backcountry metaphor misses: the guide’s employer isn’t paid by how long you stay on the trail. If it were, the guide would find reasons to keep walking. The trail would never end.

Calm design principles tell you how the guide behaves. Incentive alignment decides whether the guide is allowed to behave that way.

You need both. The pattern and the structure. The interface and the scoreboard.

A calm surface means nothing if the engine underneath is still revving.

  • [1] Mark Weiser & John Seely Brown, “The Coming Age of Calm Technology” (1996).
  • [2] Mark Weiser & John Seely Brown, “Designing Calm Technology” (1995).
  • [3] Center for Humane Technology, “Impact and Story” (accessed 2025).
  • [4] Center for Humane Technology, “The CHT Perspective” (accessed 2025).
  • [5] Donella H. Meadows, “Leverage Points: Places to Intervene in a System” (1999).

