Max Kless

September 3, 2025

The Two-Second Betrayal: When Speed Trains Vigilance

How speed gains backfire and why AI makes it personal.


Speed used to feel like a feature. Now it’s the baseline. When an app loads instantly, nobody thinks “how elegant”. They think nothing. Smoothness becomes invisible. The only thing that registers is when something doesn’t, and that lands like a small betrayal.

But here’s what I’ve started noticing: it’s not impatience. It’s vigilance.

A loading spinner used to mean “please hold.” Now it means “start diagnosing.” Is it frozen? Should I refresh? Did I lose work? Three seconds of silence and the mind is already running failure scenarios. We didn’t get less patient. We lost faith in the pause.


Edward Tenner’s The Efficiency Paradox is, among other things, a book about what happens when that faith erodes everywhere at once. Not because he argues for slowness, or for some sepia-toned return to card catalogs. He argues for something more useful and more annoying: efficiency isn’t a virtue. It’s a setting. Crank it without asking what it’s connected to, and you get side effects. Some are funny. Some are expensive. Some quietly change how people think and relate, and take away things they didn’t know they were holding. 2

Tenner’s book is nominally about big data and platforms. The better way to read it, especially now, with AI assistants finishing our sentences, is as a field guide to a deeper habit: treating “faster” as a synonym for “better.”

It isn’t. But it’s easy to forget why.

Efficiency as temperature

Heat isn’t morally good. It’s a control variable. You turn it up to cook. You turn it down to preserve. Turn it up everywhere, all the time, and you don’t get a better kitchen. You get smoke.

Efficiency is like that. It’s a knob. We keep wiring it to virtue.

Tenner’s core claim, stated plainly: efficiency is often local. You optimize a slice of the system and get a gain you can measure. Then the system adapts. New behaviors appear. Dependencies deepen. Skills atrophy. Edge cases sharpen. The total outcome drifts in the wrong direction even while the local metrics look great. 2

This isn’t an argument against optimization. It’s an argument for asking a better question: What exactly are we speeding up, and what will people stop doing once it’s fast?

The map that knows too much

Tenner has a short essay on GPS and wayfinding called “Let’s get lost.” 3 It starts with a simple observation: travel has become wildly efficient. Strip maps from an auto club have been replaced by instant routing, live traffic, rerouting on the fly. You can preview the parking situation before you leave your couch.

Then he asks the question that matters: what does “efficient travel” actually mean?

His answer has stayed with me. Wayfinding, he says, has a complement: way-losing. Productive disorientation. The detours and dead ends that teach you the shape of a place, not just how to pass through it.

When a system routes you perfectly every time, you stop building your own internal model. You arrive, but you don’t necessarily know where you are. The moment you need to improvise, you have less to work with.

This isn’t nostalgia for paper maps. (Though I do miss the aggressive refolding.) It’s an observation about learning loops. Perfect guidance can quietly amputate the skill it replaces.

Now hold that thought. We’re about to need it.

From streets to screens

Translate the GPS problem to digital life and the pattern scales.

A recommendation system routes you to content you’ll probably like. An AI assistant proposes the next sentence you’ll probably write. A “smart” interface narrows options to the predicted best action.

All of this can be helpful. It can also delete the weird side roads where you notice things you didn’t know you cared about.

Efficiency deletes randomness first, because randomness looks like waste. But randomness is where a lot of understanding comes from.

So far, so abstract. Let’s get concrete.

When “efficient” mobility makes everyone slower

Tenner uses ride-hailing as a modern efficiency story: instant matching of supply and demand, friction removed, wait times reduced. If you only measure the local gain, it looks like progress.

Zoom out, and the picture gets uncomfortable.

San Francisco’s County Transportation Authority studied ride-hailing services and found they accounted for roughly half the rise in congestion between 2010 and 2016, measured in vehicle hours of delay, miles traveled, and average speeds. 10 New York’s Taxi & Limousine Commission found a similar dynamic: apps saturated the streets with vehicles to keep wait times low, with drivers spending over 40% of work time cruising empty. 11

Efficient matching at the app layer. Systemic gridlock on the street.

This is the efficiency paradox in traffic-cone orange. Make the service smoother and you often get more of it. More trips. More cruising. More induced demand. The metric you were celebrating in the product meeting doesn’t contain the traffic jam you helped create.

Most digital teams, if we’re honest, don’t have the instrumentation to notice when they’ve made the system slower. They have a conversion funnel. A city has a street.

But here’s the thing: even this isn’t the most interesting application of Tenner’s framework. That comes when we move from systems to selves.

The real frontier: tiny automations that change you

Not co-authoring a book. Not “vibe coding” an app.

The small stuff.

The micro-decisions you used to make without noticing. The little bits of interpretation, tone, and intention that accumulate into a relationship, a work culture, a self.

Start with the most innocuous example: suggested replies.

Google’s Smart Reply system, as described by Kannan et al., generates short one-tap responses. At the time the paper was written, the system assisted with about 10% of all mobile responses in Inbox by Gmail. 4 That’s not a rounding error. That’s a structural change in how language moves through a platform.

Here’s the trick: these systems don’t need to be wrong to change things. They only need to become the default.

A CHI paper by Robertson et al. makes this concrete. 5 People judge messages on content and structural features: greetings, closings, the small ceremonies of acknowledgment. Reply suggestions often strip these by design. The researchers found that nudging people toward abbreviated replies, especially without awareness of social context, affects how messages land. Participants worried about sounding canned. About losing texture. About leaking something they couldn’t name.

Wenker’s study frames it as a transfer of agency. 6 The claim isn’t that developers “intended” anything. The mere presence of AI suggestions shifts what people author, and the behaviors that follow.

A 2024 paper by Falip and Gauducheau finds a pattern that matches intuition once you see it: people are more willing to use smart replies for simple acknowledgments, and more reluctant when tone matters. 7 In other words, people still sense that these micro-decisions are doing social work. They just don’t always have the time to do that work themselves.

This is the new efficiency frontier: outsourcing micro-intentions.

You’re not delegating a task. You’re delegating small acts of authorship. And authorship isn’t just output. It’s how you stay in contact with what you mean.

Default factories

Now look at your keyboard suggestion bar.

It is a default factory.

It offers three paths that cost one tap. The path that costs more is the one where you actually think. No villain required. Just design economics.

Behavioral research has a dry term for this: the default effect. Johnson and Goldstein showed that even small amounts of effort increase acceptance of the default option. 9 Defaults feel like the path of least resistance because they are.

Lyell and Coiera call the downstream effect “automation bias,” overreliance on decision support that reduces vigilance. 8 People use automated cues as shortcuts when verification is cognitively expensive. And verification is always expensive when there’s no ground truth. You don’t know if your text message had the right tone until later, if you find out at all.

So people accept the cue.

Not because they’re lazy. Because the system made the cue cheap and made the alternative feel like a detour.

This is how “two seconds is a betrayal” becomes a worldview.

What efficiency costs when you stop noticing

We’ve been talking about GPS, gridlock, and keyboards. But the stakes are larger than UX.

Here’s what I think Tenner’s framework illuminates when you push it far enough:

Efficiency, applied without asking what it’s connected to, doesn’t just speed things up. It installs a lens. A way of seeing. Gradually, anything that takes time starts to look like friction. Anything that resists measurement starts to look like waste. Anything that can’t be optimized starts to feel optional.

At first, the lens is invisible. You’re just saving time. Reducing steps. Removing obstacles.

Then one day you notice the texture is gone.

Nuance in communication gets sanded down. Relationships become transactional. The small acts of interpretation that used to signal care (choosing a word, pausing before sending, rewriting because the first version felt off) become inefficiencies. They get optimized away.

What’s left is fast. Frictionless. Empty.

Not because anyone planned it. Because the defaults accumulated.

This is the real efficiency paradox. It doesn’t just change what we do. It changes what we notice. And what we stop noticing, we stop protecting.

A more useful question than “should we optimize?”

So what do we do with Tenner’s lens, as practitioners, without turning this into hand-wringing?

First, stop treating efficiency as a moral category.

If you design for healthcare access, emergency alerts, fraud prevention, or disability accommodations, speed and automation aren’t indulgences. They’re necessities. Removing unnecessary effort is basic respect.

The question isn’t “should we optimize?” The question is:

What are we optimizing, and what are we training people to stop doing?

Tenner’s book pushes you toward second-order questions. That’s where good design reviews live, and where most don’t go.

One thing to try

If you want to feel the argument instead of just reading it, try this:

Turn off one suggestion layer for a week. Smart replies, predictive text, auto-complete: pick whichever you lean on most. Don’t write longer messages. Just notice where you hesitate.

That hesitation isn’t a failure. It’s contact with your own intention.

If you find yourself wanting to adjust what you wrote (catching a tone, adding warmth, removing a phrase that sounded canned), that’s not inefficiency. That’s authorship.

It’s what the system was quietly replacing.

Back to the two-second betrayal

Tenner grinding coffee by hand isn’t a call to romanticize effort. It’s a reminder that some resistance is how we locate ourselves inside an action.

The bigger risk in the current AI wave isn’t that we’ll stop writing. It’s that we’ll outsource small acts of intention, thousands of times a week, until our outputs are efficient and our relationships feel thinner, and nobody can point to the exact moment it happened.

That’s how systems change. Quietly. By defaults.

And once you’ve trained a culture to feel betrayed by two seconds of waiting, you’ve built something very fast, and not particularly human.

The antidote isn’t slowness for its own sake. It’s asking, each time we shave off a second, what that second was holding.

Sometimes it was just friction.

Sometimes it was us.

  1. Jeff Link (2022). “Frictionless UX Isn’t Always Better.” Built In.
  2. Edward Tenner (2018). The Efficiency Paradox: What Big Data Can’t Do.
  3. Edward Tenner (2018). “Let’s get lost.” University of Chicago Magazine, Spring 2018.
  4. Anjuli Kannan et al. (2016). “Smart Reply: Automated Response Suggestion for Email.” arXiv preprint.
  5. Ronald E. Robertson et al. (2021). “‘I Can’t Reply with That’: Characterizing Problematic Email Reply Suggestions.” CHI 2021.
  6. Kilian Wenker (2022). “Who Wrote this? How Smart Replies Impact Language and Agency in the Workplace.” arXiv preprint.
  7. Joris Falip and Nadia Gauducheau (2024). “Do we really want AI answering on our behalf? A study of smart replies usage.” ECSCW exploratory paper.
  8. David Lyell and Enrico Coiera (2017). “Automation bias and verification complexity: a systematic review.” JAMIA.
  9. Eric J. Johnson and Daniel G. Goldstein (2004). “Defaults and Donation Decisions.” Transplantation.
  10. San Francisco County Transportation Authority (2018). TNCs & Congestion. Report.
  11. NYC Taxi & Limousine Commission and NYC Department of Transportation (2019). FHV Congestion Study Report.
