Every AI assistant has the same tell.
It says “Got it” while you are still deciding what it is.
That moment should bother you more than it does. Not because the model is malicious. Not because it’s “too powerful.” Because it’s doing what polite software has done for decades: remove effort from the interaction. And in the real world, a lot of effort is not waste. It is orientation. It is the price you pay to know where you are before you start walking.
So here’s the provocation: maybe the best AI is a lazy AI.
Not lazy like a teenager on a Sunday. Lazy like a mountain guide who won’t hike into fog without asking which ridge you meant. Lazy like a pilot who runs the checklist while everyone sighs. Lazy like a colleague who says, “Before I send that, what do you want them to do when they read it?”
A lazy assistant doesn’t try to be psychic. It asks. It pauses. It makes you choose. It slows down right before anything irreversible.
This sounds like a downgrade until you notice what the eager version has been quietly doing: training you to stop steering.
Fluency is a con
LLMs are fluent. Fluency is a social cue. We treat fluent speech as competence, competence as confidence, confidence as correctness. The model doesn’t need to brag. It just needs to keep talking.
When the request is clear (“Rewrite this paragraph,” “Summarize this meeting,” “Convert to SQL”), the assistant moves fast and everyone goes home. But a lot of work is not like that. A lot of work is ambiguous on purpose.
“Draft the policy.” “Improve the onboarding.” “Handle the parent email.” “What should we do?”
Those are not queries. Those are fog banks.
An assistant that never asks questions isn’t being decisive. It’s selecting an interpretation on your behalf and hoping you don’t notice. That’s what “Got it” often means: I picked a meaning. Let’s see if you complain.
The out-of-the-loop problem
Human factors researchers identified the failure mode decades ago. When you automate a task, humans don’t become better supervisors. They become worse pilots.
Endsley and Kiris called it the “out-of-the-loop performance problem”: when automation pulls a person out of active control, situation awareness drops and the ability to intervene during failures degrades. [1] You can’t steer well if you haven’t been steering. Parasuraman and Riley sharpened the point: humans don’t just use automation; they misuse it, disuse it, abuse it. Trust behaves strangely when the tool is usually right, occasionally wrong, and always confident. [3]
In healthcare, the pattern has a clinical name: automation bias. Goddard and colleagues define it as the tendency to over-accept automated advice, including errors of commission (following bad advice) and omission (failing to act because the system didn’t prompt you). Their language is blunt: the system output becomes a shortcut, a replacement for “vigilant information seeking and processing.” [2]
That phrase lands like an insult if you read it slowly. Vigilant information seeking and processing is what most professionals believe they are paid for. Automation bias is what happens when the tool quietly reassigns that job to itself, and you let it.
Education, medicine, law: same cliff, different views
The mechanism generalizes. It shows up wherever the assistant has a voice and the human has a deadline.
In education, a student asks an eager AI to explain photosynthesis. It answers. Then the student asks it to write the essay. It answers again. Words appear. Learning does not. Learning is the acquisition of internal models, the slow wiring of cause and effect. An eager AI optimizes for completion. A lazy AI optimizes for comprehension, asking what you’ve tried, giving a hint, then waiting.
There’s a reason this feels different. In education, the user isn’t the only stakeholder. The future version of the user is also in the room. That person would like to have skills.
In medicine and law, the harm is immediate and expensive. A lazy clinical assistant doesn’t block clinicians with pop-ups. It makes uncertainty legible. It asks for the variable that determines the decision boundary. It refuses to present a single recommendation when multiple paths exist. It treats “recommend” differently than “do.”
Legal work is full of hidden constraints: jurisdiction, venue, contractual definitions, risk tolerance, time. A lot of legal judgment is knowing which question matters before you draft the clause. An eager legal assistant will produce language that reads like law and behaves like fiction. Not because it wants to deceive. Because you asked for “a clause” and it doesn’t feel the weight of the signature at the bottom.
A lazy legal assistant asks: Which jurisdiction? What’s the risk tolerance? Are we optimizing for speed to signature, or for surviving a dispute?
That may feel slow. It is fast compared to litigating a misunderstanding you shipped at 4:58 p.m.
We already know how to build this
The research exists. Information retrieval researchers have studied clarifying questions for years; Zamani and colleagues showed that asking them reveals intent and improves retrieval. [6] In LLM work, Kuhn, Gal, and Farquhar demonstrated that models often answer ambiguous questions without seeking clarification, and that selectively asking for clarification improves accuracy. [4] Zhang and colleagues pushed further: train with “double-turn preferences” so models learn to ask judiciously, not reflexively. [5]
The industry taught models that silence is failure and guessing is politeness. That’s a training artifact, not a law of nature.
NIST’s AI Risk Management Framework makes this thinking explicit: trustworthy AI isn’t just about model performance; it’s about managing risks across context, governance, and human use. [9] The EU AI Act includes human oversight requirements for high-risk systems: the ability to monitor, interpret, and intervene. [8] In product language, both documents say the same thing: do not build a system that cannot be stopped, questioned, or overridden.
The philosophical thread
This is where the argument stops being operational.
We have a habit of treating autonomy as “getting what you want without interference.” One-click checkout. Autocomplete everything. The individual as a preference bundle with a fast lane.
Kant would be unimpressed. For him, autonomy means acting on reasons you can own, not impulses, not pressures, not defaults. An assistant that silently interprets your goal and executes is not helping you author reasons. It is outsourcing the authorship. Aristotle’s phronesis (practical wisdom) is the skill of asking the right question when there is no rule. A lazy AI refuses to collapse ambiguity too early. It keeps the space open long enough for judgment to show up.
Process philosophy adds a twist: you are not a static thing with stable preferences. You are becoming. An assistant that predicts what you want is freezing you in an old frame. A clarifying question is a small act of respect for the fact that you’re still in motion.
Relational ontology keeps us honest about something tech culture forgets: agency is not just individual. What I do with an assistant affects colleagues, students, patients, clients. A tool that makes it easy to ship unowned decisions isn’t a productivity boost. It’s a social multiplier.
A lazy assistant is not a moral actor. But it can be a moral interface. It can decide whether the human is treated as a decision-maker or as a rubber stamp.
What lazy looks like when you build it
The danger is obvious: “lazy AI” becomes an excuse for annoying friction, with endless clarifying questions, defensive refusals, a product that behaves like a junior lawyer who can’t answer anything without a meeting.
That’s not the goal. A lazy assistant should be quiet most of the time and stubborn at the right moments.
Earn confidence. If the request is ambiguous, ask the smallest question that changes the outcome. One question because it matters, not five because you can.
Separate drafts from decisions. Let the system generate options; require human selection when stakes are high. Automation bias thrives when suggestions look like answers.
Surface assumptions as objects. Not walls of explanation, but editable constraints: jurisdiction, audience, tone, risk tolerance. When assumptions are visible, you can steer. When they’re hidden, you’re just approving.
Treat irreversibility as a UI state. If the action can’t be undone or will be hard to audit, the assistant slows down. Oversight is not a vibe. It’s a control surface.
Build for mental models, not just outputs. Out-of-the-loop problems emerge when humans lose situation awareness through passive monitoring. The fix isn’t “make the human do everything.” It’s keeping the human engaged in the parts that maintain orientation.
None of this requires making the product slower. It requires making the product more honest about what it knows, what it assumes, and what it’s asking you to approve.
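To make that concrete, here is a minimal sketch, in Python, of what the interaction loop might look like. Everything in it is hypothetical: the `Assumption` and `Plan` dataclasses, the `ambiguity` and `stakes` scores, and the thresholds are placeholders for whatever your product actually measures. The point is structural: clarification, assumption surfacing, and irreversibility gating live in the loop around the model, not inside it.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: every name, score, and threshold below is an
# illustrative placeholder, not a real assistant API.

@dataclass
class Assumption:
    name: str            # e.g. "jurisdiction", "audience", "risk tolerance"
    value: str           # the assistant's current guess, visible and editable

@dataclass
class Plan:
    drafts: list[str]              # candidate outputs, not decisions
    assumptions: list[Assumption]  # surfaced constraints the user can steer
    irreversible: bool = False     # e.g. "send", "file", "sign"

def smallest_clarifying_question(request: str, ambiguity: float) -> Optional[str]:
    """Ask at most one question, and only when it would change the outcome."""
    if ambiguity < 0.5:  # confident enough: stay quiet
        return None
    # A real system would pick the single unknown with the highest impact.
    return "What do you want the reader to do after reading this?"

def run_lazy_assistant(
    request: str,
    ambiguity: float,
    stakes: float,
    draft_fn: Callable[[str, list[Assumption]], list[str]],
    ask_user: Callable[[str], str],
) -> Optional[str]:
    # 1. Earn confidence: one small question, only because it matters.
    question = smallest_clarifying_question(request, ambiguity)
    if question is not None:
        request = f"{request}\n(clarified: {ask_user(question)})"

    # 2. Surface assumptions as objects, not as hidden guesses.
    assumptions = [Assumption("audience", "internal team"), Assumption("tone", "neutral")]
    for a in assumptions:
        answer = ask_user(f"Assuming {a.name} = {a.value!r}. Change it? (enter to keep) ")
        if answer:
            a.value = answer

    # 3. Separate drafts from decisions: generate options, don't pick one.
    plan = Plan(
        drafts=draft_fn(request, assumptions),
        assumptions=assumptions,
        irreversible=stakes > 0.8,
    )
    if stakes > 0.5:
        menu = "\n".join(f"  [{i}] {d[:60]}" for i, d in enumerate(plan.drafts))
        selected = plan.drafts[int(ask_user(f"Options:\n{menu}\nWhich one? "))]
    else:
        selected = plan.drafts[0]

    # 4. Treat irreversibility as a distinct state: slow down before anything final.
    if plan.irreversible and ask_user("This can't be undone. Type 'yes' to proceed: ") != "yes":
        return None  # refuse to ship an unowned decision

    return selected
```

The thresholds are arbitrary; the structure is not. Confidence is earned before drafting, assumptions are objects the user can edit, selection is a human act when stakes are high, and irreversible steps are a separate state rather than a faster path. Calling something like `run_lazy_assistant("Handle the parent email", ambiguity=0.7, stakes=0.9, draft_fn=..., ask_user=input)` would ask one question, show its assumptions, offer drafts, and stop before anything final.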
A test you can run now
Next time your assistant says “Got it,” notice your body’s reaction. Relief? Unease? Nothing at all?
Then ask a rude question: What did you assume?
If the assistant can’t answer, you’re looking at a system optimized for fluency, not truth. Guessing in a nice suit.
A lazy AI doesn’t eliminate your effort. It protects your agency. It knows the difference between effort that wastes time and effort that keeps you awake.
And it knows one thing we keep forgetting in product meetings: speed serves you only when direction is already clear. Before that, speed is just confident drift.
The eager assistant asks: How can I help?
The lazy one asks something harder: What are you actually trying to do?
One makes you a customer. The other keeps you a pilot.
Sources
- [1] Mica R. Endsley & Esin O. Kiris (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors.
- [2] K. Goddard, A. Roudsari, & J. Wyatt (2011). Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association.
- [3] Raja Parasuraman & Victor Riley (1997). Humans and automation: use, misuse, disuse, abuse. Human Factors.
- [4] Lorenz Kuhn, Yarin Gal, & Sebastian Farquhar (2022). CLAM: Selective clarification for ambiguous questions with generative language models. arXiv.
- [5] M. J. Q. Zhang et al. (2025). Modeling future conversation turns to teach LLMs to ask clarifying questions. ICLR.
- [6] Hamed Zamani et al. (2020). Generating clarifying questions for information retrieval. The Web Conference.
- [8] European Commission. AI Act, Article 14: Human oversight.
- [9] NIST (2023). AI Risk Management Framework (AI RMF 1.0).
