Do Machines Have Agency?

Autopilot made agency cheap. Accountability stayed expensive.

At cruise altitude, the cockpit feels like a dark office with better windows. The autopilot is flying. The pilots are monitoring. Everyone is calm, because the whole point of modern aviation is that nothing dramatic happens.

Then the point arrives.

A chime. A light. A terse message that reads like a passive-aggressive teammate: your turn. The autopilot disengages right when the machine has decided it is no longer confident.

If you build AI systems, you have seen this movie. Just with different scenery.

Your phone doesn’t beep “AUTOPILOT OFF,” but it does something similar. It holds course on your attention for hours, makes a thousand micro-selections on your behalf, and then returns control at 1:13 a.m. when you finally look up and ask, honestly, “Was that what I meant to do tonight?”

This essay is about that handoff. Not because machines are evil, or because humans are weak. Because delegation is never neutral. It changes who acts, who notices, and who answers.

Here’s the claim up front: machines can have agency, but accountability does not come for free. Agency is cheap. Accountability is scarce.

Two threads, one knot

Thread one is the cockpit.

Aviation automation is an old, serious, expensive attempt to let machines do what they’re good at: steady control, endless attention to small signals, quick corrections. We automated flight not because we hate pilots, but because humans get tired, bored, distracted, and occasionally unlucky.

Thread two is everywhere else.

AI systems now recommend what we watch, route our commutes, flag our transactions, shape hiring funnels, nudge medical workflows, manage workers by the minute. The “autopilot” metaphor is not poetic. It’s functional. These systems take a goal, select actions, adjust based on feedback. They act.

The knot is the same in both threads: when a system acts for you, the system also changes your capacity to act. And when something goes wrong, the system rarely shows up to explain itself.

The imprint problem

If we keep putting more of life on autopilot, do we imprint the way we thought at the moment of automation into perpetuity?

Yes. Unless we design against it.

Autopilot encodes a policy. It takes a messy domain of human judgment and turns it into rules: what to optimize, what counts as noise, what’s safe, when to disengage. Aviation does this with care, testing, redundancy, and training. Physics is not forgiving.

AI systems also encode policies. A recommender optimizes engagement, watch time, conversion, whatever the dashboard rewards. A fraud model optimizes loss reduction and false-positive tolerances. A hiring system optimizes some proxy for success that someone chose, under deadline, with imperfect data.
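
To make "encode a policy" concrete, here is a minimal, entirely hypothetical sketch of a recommender's scoring rule in Python. None of these names or weights come from a real system; the point is that the objective ends up as a handful of constants someone chose, under deadline, with imperfect data.

```python
# Hypothetical sketch: what "encoding a policy" looks like in practice.
# Every name and weight here is invented for illustration.

from dataclasses import dataclass


@dataclass
class Candidate:
    """One item the recommender could show."""
    predicted_watch_seconds: float
    predicted_click_prob: float
    predicted_report_prob: float


# The "objective": chosen by a person, then frozen into the environment
# everyone downstream lives inside.
WATCH_WEIGHT = 1.0
CLICK_WEIGHT = 0.3
REPORT_PENALTY = 5.0


def score(c: Candidate) -> float:
    """Rank candidates by the dashboard's definition of 'good'."""
    return (
        WATCH_WEIGHT * c.predicted_watch_seconds
        + CLICK_WEIGHT * c.predicted_click_prob
        - REPORT_PENALTY * c.predicted_report_prob
    )


def rank(candidates: list[Candidate]) -> list[Candidate]:
    """The feed: highest score first. The user never sees the weights."""
    return sorted(candidates, key=score, reverse=True)
```

Once this ships, the weights are the policy. Nobody downstream negotiates with them; they just live inside the ranking the weights produce.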

The imprint happens when the objective hardens into the environment, and the environment starts training the humans who live inside it.

Aviation is explicit about this loop. Pilots train for automation, train for failures, train for takeover. The system assumes humans are fallible and makes training part of the system boundary.

Most consumer AI does the opposite. It removes the training, hides the objective, and then acts surprised when users become over-reliant or aggressively distrustful.

“Algorithm aversion” is one expression of that distrust. 9 After people see an algorithm make mistakes, they often avoid it, even when it outperforms humans. The pattern is not “humans hate math.” It’s “humans hate being trapped in someone else’s mistakes.”

Aviation has a mature response: give the operator models, procedures, and clear authority lines. Nobody says “trust the autopilot” as a worldview. They say: here’s when it’s reliable, here’s when it’s not, here’s how you verify.

The oldest lesson of automation is also the newest

Lisanne Bainbridge called it the "ironies of automation." 5 Automate the routine, and you leave the human with the rare. That sounds efficient, until you notice what the human is doing the rest of the time: waiting. Monitoring. Staying vigilant in a situation designed to be boring.

Bainbridge’s point is not that automation is bad. It’s that automation rearranges work in ways that quietly set people up for failure, especially at takeover.

The research makes this concrete. When operators are not actively engaged, situation awareness degrades. Ability to take over drops. 6 This is not a character flaw. It’s a cognitive constraint. Vigilance is expensive. Passive monitoring is brittle.

The irony shows up again in “automation bias”: people treat automated aids as shortcuts for thinking. 8 In simulated flight tasks, participants with a decision aid made errors of commission, doing what the aid recommended even when it contradicted valid indicators, and errors of omission, missing events when not prompted. The system becomes a magnet for attention. Even when it is wrong.

Now switch threads.

You can feel the same dynamic in recommendation systems. A feed makes the easy selections. You consume. Your capacity for deliberate selection atrophies, not because you are lazy, but because the environment stops requiring the skill.

Then the handoff arrives. It might be subtle: the hollow feeling after an hour of scrolling. Or it might be sharp: a public mistake triggered by autofill, an AI-written message, a routing decision, a default you never noticed.

The system does not apologize. It does not even admit it acted.

Agency is easy to spot. Autonomy is where the arguments start.

Philosophers use “agency” plainly: an agent has the capacity to act, and agency is the exercise of that capacity. 1 Under that definition, plenty of machines qualify. They choose. They trigger. They steer.

“Autonomy” is thicker. In moral and political philosophy, autonomy means self-governance, living according to reasons and motives you can treat as your own, not the product of manipulative external forces. 2 Autonomy is tied to accountability: if you are not calling the shots, why should anyone hold you responsible?

So: machines can have agency without having autonomy. They can act without being authors.

That distinction sounds fine in a seminar. Put it back in the cockpit.

A pilot is still legally and morally responsible for the flight, even when the machine is flying most of it. A user is “responsible” for their choices online even when those choices have been pre-shaped by ranking and recommendation. Responsibility sticks to the human because we don’t know where else to put it.

That’s the first reason accountability is scarce. We keep assigning it to whoever is still made of meat.

Why we keep calling machines "agents"

Someone objects: “We should stop anthropomorphizing. It’s just code.”

Fair. Also incomplete.

We talk about machines as if they have beliefs and goals because it’s often the best available compression. Daniel Dennett calls this the intentional stance: a predictive strategy where you treat an entity as rational-ish and explain its behavior in terms of beliefs, desires, intentions. 3 It’s a modeling choice. You don’t need to believe the machine has a soul. You just need the stance to produce correct predictions.

This is why we say the autopilot “wants” to maintain altitude, the recommender “tries” to keep you watching, the spam filter “thinks” this is suspicious.

The stance is useful. The danger is that it drags a second idea in with it: moral responsibility.

We start with “it behaves like an agent,” and we slide into “it deserves blame like an agent.” That slide is smooth, fast, and almost always wrong.

Agency is not the same thing as accountability. 1 One is about action. The other is about being answerable inside a moral community, where reasons matter, apologies matter, repair matters.

Machines can act. Machines do not participate in moral repair. Not yet. Not on this contract.

So we do what humans always do when we need a story. We point at the nearest intelligible character. Sometimes that’s “the algorithm.” Sometimes that’s “the user.” The result is blame that feels satisfying and explains nothing.

Autonomy was never purely internal

It’s tempting to frame autonomy as a private property, like a battery you either have or don’t have. Western individualism loves that picture.

But autonomy has always been cultivated in a world that either supports reflection or hijacks it, that either teaches skill or removes the need for it, that either makes reasons visible or turns life into stimulus-response loops.

AI changes those conditions.

That’s why it feels hollow to say “people should just choose better.” Choice is not an on-off switch. Choice is a skill in a setting.

Autonomy is not eliminated by influence. Autonomy is eliminated when the environment systematically routes around your capacity to notice and revise.

Accountability is the missing instrument panel

In the cockpit, when something goes wrong, investigators have data. Logs. Procedures. Responsibilities defined before the failure.

In many AI systems, accountability is mostly vibes.

Who tuned the objective? Who approved deployment? Who monitored drift? Who decided the acceptable error rate, and for whom? What happens when the system fails in ways nobody predicted? What can the user override, and what is “just how it works”?

Can’t answer those questions? You built agency without governance. An autopilot with no checklists and no black box.

The EU’s ALTAI framework is one attempt to make this less hand-wavy, explicitly naming “human agency and oversight” as a requirement. 16 You can disagree with the EU’s framing and still recognize the underlying point: oversight is not a moral afterthought. It’s an engineering constraint.

A useful rule: every delegated action needs a paired accountability surface.

Not an inspirational paragraph. A surface. Something a real person can use.

What counts depends on the domain, but the families are recognizable; a rough sketch of how they might combine in code follows the list:

Traceability. If a system decides, someone must be able to ask why, at that moment, with those inputs, under that model version, under that policy. 16

Contestability. If a system affects outcomes that matter, there needs to be a path for challenge that doesn’t require a PhD and a week off work. “You can’t appeal to the algorithm” is not a neutral design choice. It’s a power choice.

Handoff design. If the system disengages, it must not dump the user cold. Autopilot handoff is hard because the human has to rebuild situational awareness fast. 6 That’s a design problem, not a motivational problem.

Skill maintenance. If automation removes practice, you have to replace it somehow. Aviation does this in simulators. If your AI system deskills users, you need an equivalent, or you should admit you’re trading autonomy for throughput. 5
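
To make "surface" less abstract, here is a minimal sketch, assuming a Python service that acts on a user's behalf. Every field name and URL is hypothetical and not drawn from ALTAI or any real system; the shape is the point: each automated action leaves a trace, exposes a challenge path, and packages context for the handoff.

```python
# Hypothetical sketch of a "paired accountability surface": every automated
# decision emits a record a real person can use to ask why, contest the
# outcome, or take over with context. All names are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """Traceability: why, at that moment, with those inputs, under that policy."""
    decision_id: str
    model_version: str
    policy_version: str
    inputs_summary: dict      # what the system actually saw
    action_taken: str         # what it did on the user's behalf
    confidence: float         # how sure it claimed to be
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    # Contestability: a path for challenge that doesn't require a PhD.
    def contest_url(self) -> str:
        return f"https://example.invalid/appeal/{self.decision_id}"

    # Handoff design: don't dump the human cold; hand over the context too.
    def handoff_packet(self) -> dict:
        return {
            "what_i_did": self.action_taken,
            "what_i_saw": self.inputs_summary,
            "how_sure_i_was": self.confidence,
            "how_to_challenge_me": self.contest_url(),
        }
```

Nothing in it is clever. What matters is that the record exists before the failure, the way aviation's logs and checklists exist before the incident.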

None of this requires arrogance. It requires the opposite: the humility to assume that rare handoffs will be messy. The humility to assume users will misunderstand you. The humility to treat misunderstanding as your problem.

So, do machines have agency?

Yes, in the basic sense. They act. 1 They do not merely advise. They move the world.

Do they have autonomy?

Not in the sense that matters for accountability. Autonomy is tied to authorship, to reasons that can be owned, defended, revised, repaired. 2 17 Today’s systems do not participate in that practice. They execute policies.

The question worth keeping is not “is the machine an agent?” The question is “what did we do to the human when we handed the machine the controls?”

Autopilot did not remove pilots. It changed them. It turned flying into systems management and made takeover a core skill, not a side skill. 5 AI is doing something similar to everyday autonomy. It turns choosing into supervising, and then asks you to intervene at exactly the moment your competence is least rehearsed.

Agency is cheap. Accountability is scarce.

If you ship systems that can act, you do not get to ship mystery along with them. You owe the people downstream a way to understand, contest, and reclaim the wheel, not in theory, but in practice.

Otherwise you’re not building autonomy. You’re just building faster defaults.

  • Agency Markus Schlosser (2015) Stanford Encyclopedia of Philosophy. 1
  • Personal autonomy Sarah Buss (2002) Stanford Encyclopedia of Philosophy. 2
  • The intentional stance Daniel C. Dennett (1987) MIT Press book page. 3
  • Autonomy in moral and political philosophy John Christman (2003) Stanford Encyclopedia of Philosophy. 4
  • Ironies of automation Lisanne Bainbridge (1983) Automatica (DOI). 5
  • The out-of-the-loop performance problem and level of control in automation Mica R. Endsley & Esin O. Kiris (1995) Human Factors (DOI). 6
  • Humans and automation: Use, misuse, disuse, abuse Raja Parasuraman & Victor Riley (1997) Human Factors (DOI). 7
  • Does automation bias decision-making? Linda J. Skitka, Kathleen L. Mosier, & Mark Burdick (1999) International Journal of Human-Computer Studies (DOI). 8
  • Algorithm aversion: People erroneously avoid algorithms after seeing them err Berkeley J. Dietvorst, Joseph P. Simmons, & Cade Massey (2015) Journal of Experimental Psychology: General (DOI). 9
  • Assessment list for trustworthy artificial intelligence (ALTAI) European Commission (2020) Digital Strategy portal. 16
  • Moral responsibility Matthew Talbert (2019, rev. 2024) Stanford Encyclopedia of Philosophy. 17
