Humanity in the Loop

When the system says no, a person needs more than a notification.

At 6:42 a.m., the pharmacy is open, the lights are harsh, the line is short. Good. You’re here to pick up a prescription, then drive to a meeting where you’ll ask for a loan extension because the last few months have been expensive in the boring, adult way.

You tap your card.

Declined.

You try again; that’s what humans do when reality gives a strange answer.

Declined.

The app loads. The message was clearly written by someone who has never stood at a pharmacy counter needing medication.

“For your security, we have temporarily restricted your account. We’re unable to provide more details at this time.”

No details. No timeline. No name. No path. Just a locked door wearing a smiley face.


If you work in fintech or healthcare, you know this genre. Sometimes it’s an account freeze. Sometimes it’s a loan denial. Sometimes it’s triage. The system makes a call. The human becomes a rounding error.

We keep telling ourselves a comforting phrase: human in the loop. It sounds like a safety harness. In practice, it often functions as a liability shield.

So here is the question, and we will return to it at higher elevations:

Where, exactly, is the human in this loop?

Not “is there a human somewhere in the org chart.” Not “can support open a ticket.” In lending and health triage, “no” is not a UX moment. It is a life moment.

The governance gap

The first answer most teams give is organizational. We have a human reviewer. We have a risk committee. We follow a framework.

Frameworks help. NIST’s AI Risk Management Framework treats AI risk as a lifecycle problem [1]. OECD principles emphasize transparency and accountability [2]. UNESCO centers human dignity [3]. The EU AI Act hardens oversight requirements into law [4].

All of this is useful. None of it guarantees that the person at the pharmacy counter can get their account unfrozen in time to buy medication.

Governance can exist while the lived experience is still a dead end.

Most “human in the loop” designs fail in one of two modes. They treat the human as a ceremonial stamp at the end of a pipeline. Or they treat the human as cleanup crew after harm has landed.

You can comply with a framework and still build a system that is, in the moment, non-negotiable. A map with no “you are here.” No detours. No way back.

The design question that matters: Where is the human in the loop when the system is wrong, uncertain, or simply not built for this case?

The shadow of no

In lending, “no” casts a long shadow. A denial shapes housing, education, family stability, health. In healthcare, triage decides who gets attention first. Who waits.

In both domains, a dignified “no” requires something systems do not like to provide: reasons that can be acted on.

This is not just philosophy. The CFPB has warned lenders that complex models do not excuse them from giving specific, accurate reasons for adverse actions [5]. GDPR Article 22 addresses decisions based solely on automated processing that produce legal or similarly significant effects [6]. The EU AI Act requires human oversight in high-risk contexts [4].

Different legal regimes, same moral point: if a decision can reshape a life, the person deserves more than “computer says no.”

Consider health triage. A symptom checker routing someone away from care, a population health model deciding who gets extra support: these present as neutral. But neutrality is often just opacity with better posture. Obermeyer and colleagues showed that a widely used healthcare algorithm exhibited significant racial bias because it used costs as a proxy for need [7]. A technical choice became a moral outcome.

Here is the design lesson that stings: your proxy variables are your values, whether you admit it or not.
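
A toy sketch, with every number invented for illustration, makes the point concrete: two groups with identical distributions of need, but one group’s need shows up as lower recorded cost because of unequal access to care. Rank by the cost proxy and you quietly rank by access.

```python
import random

random.seed(7)

# Invented illustration: both groups have the same distribution of clinical need,
# but group B's need turns into less recorded spending (less access to care),
# so the "cost" column systematically understates their need.
def make_patient(group):
    need = random.gauss(50, 15)                           # true need (not in the warehouse)
    access = 1.0 if group == "A" else 0.6                 # unequal access to care
    cost = max(0.0, need * access + random.gauss(0, 5))   # what the data actually records
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(500)] + [make_patient("B") for _ in range(500)]

def group_b_share_of_top(score, k=200):
    top = sorted(patients, key=score, reverse=True)[:k]
    return sum(p["group"] == "B" for p in top) / k

print("Group B share of top 200, ranked by cost proxy:", group_b_share_of_top(lambda p: p["cost"]))
print("Group B share of top 200, ranked by true need: ", group_b_share_of_top(lambda p: p["need"]))
```

Ranking by cost never looks at group membership; the proxy does the looking for it.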

If a system denies a loan, can the person contest it without begging? If triage downgrades a patient, can someone intervene without fighting the software? If an account is frozen, can the person reach a decision-maker, or only a script?

A loop that ends in a notification is not a loop. It is a chute.

The person, not the case

This is where old philosophy stops being a museum and starts being a tool.

Kant’s formulation, stripped of period language, is rude in the right way: do not treat people merely as means [9]. The point is not “never use anyone for anything.” The point is that a person must not be reduced to an instrument for someone else’s goal.

In AI product terms, “merely as a means” looks like this: The user exists to feed your model. The denial exists to protect your loss rate. The explanation exists to protect your liability. The human exists to take blame when the model makes a mess.

You can feel the moral geometry. The system faces inward, toward the institution. The person faces a wall.

Buber sharpens this with his distinction between I-It and I-Thou relations [10]. Most automated decisioning is structurally I-It. It has to be, to scale. The ethical failure is not that the system models people. The failure is that the experience never gives people a way to show up as more than a model.

In practical terms, an I-Thou move looks like this: The system admits fallibility instead of pretending to be fate. The person can correct the record in a way that changes outcomes. The person can be an exception without becoming a scandal.

Teams confuse “we have an explanation” with “we have a relationship.” An explanation is a label. A relationship has handles.

Onora O’Neill pushes back on lazy autonomy [14]. “Autonomy” gets invoked as a magic word to justify paperwork rituals, especially consent forms. O’Neill argues that thin autonomy can undermine trust. What matters is not choice in the abstract but how systems enable trustworthy relations.

That maps onto AI systems with uncomfortable precision:

A checkbox is not autonomy. A settings page nobody understands is not autonomy. A disclosure that says “we use AI” is not autonomy.

Autonomy, in lived terms, is the ability to author your path through a system. To know what is happening well enough to respond. To refuse without being punished. To appeal without being humiliated.

Where is the human in the loop as an agent with a say, not a dossier with a score?

The off-screen cost

Even with decent recourse for the primary user, harm can still accumulate in the margins.

This is Levinas’s territory. Ethics begins with the demand that comes from the Other, prior to our categories and convenience [11]. He warns: your categories will erase someone. Your efficiency will make that erasure feel normal.

In product work, the “Other” is often off-screen. The patient never prioritized because the model learned a proxy for compliance. The borrower routed into worse terms because the system “helped” by narrowing options. The caregiver doing invisible labor that your funnel does not count as income. The community absorbing the downstream effects of systematic denial.

Ubuntu and related communitarian ethics sharpen the point: personhood is relational. A person is a person through other persons [12]. The design implication is hard to dodge: the unit of impact is often a network, not an individual.

Lending decisions ripple through families. Triage decisions ripple through workplaces, schools, neighborhoods. When you treat the decision as a private interaction between user and system, you hide the real surface area of responsibility.

This is also why bias conversations get stuck. Teams want bias to be a property of the model. Bias is a property of the whole pipeline: data, proxies, incentives, deployment context, the social world the system lands in. The algorithm Obermeyer and colleagues studied did not need to “see race” to reproduce racialized outcomes [7]. It only needed a proxy that reflected unequal access.

The loop cannot be only a UX affordance. It must be a governance affordance:

Who is allowed to question the proxy choices? Who audits outcomes across groups? Who can stop the system, not next quarter, but now?

Where is the human in the loop when the harmed person is not your user, not your customer, and not in your metrics?

The inverted loop

There is one more altitude change, because it explains a lot of quiet failures.

Post-phenomenology argues that technologies do not just help us do things. They shape how we perceive, decide, and act. Verbeek describes technologies as mediating relations between humans and world [13]. In lending and health triage, AI becomes a mediation layer between professional judgment and reality. The risk score stands between a loan officer and a borrower. The triage recommendation stands between a nurse and a patient. The fraud flag stands between a support rep and the person at the pharmacy counter.

This does something subtle: it changes what responsibility feels like.

If the model says “high risk,” the human operator often experiences that as instruction, not input. The system becomes authority. The human becomes messenger.

Parasuraman and Riley warned about this decades ago: automation misuse, disuse, and abuse, including cases where people over-trust automated aids and stop monitoring [8]. You can see it in modern support scripts that read like moral surrender:

“I understand, but the system won’t let me.”

At that point, the loop is not just missing. It has inverted. The human is inside the tool’s loop.

“Human in the loop” cannot mean a human touched the process. It must mean, concretely (see the sketch after this list):

  • The human can interpret the system’s output.
  • The human can challenge it.
  • The human can override it.
  • And doing so is normal, not heroic.
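
A minimal sketch of what that looks like when it reaches code, with hypothetical names and no real library behind it: the model’s output is one field on a decision record, the operator’s override is another, and overriding costs one function call that leaves an audit trail instead of triggering an incident.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    case_id: str
    model_label: str                      # e.g. "high_risk" -- an input, not a verdict
    model_score: float
    operator_id: Optional[str] = None
    operator_label: Optional[str] = None
    operator_reason: Optional[str] = None
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def final_label(self) -> str:
        # The human's call wins when present; the model advises.
        return self.operator_label or self.model_label

    @property
    def overridden(self) -> bool:
        return self.operator_label is not None and self.operator_label != self.model_label

def record_override(decision: Decision, operator_id: str, label: str, reason: str) -> Decision:
    # Overrides require a reason and get logged like any other event. Normal, not heroic.
    decision.operator_id = operator_id
    decision.operator_label = label
    decision.operator_reason = reason
    return decision
```

The structure matters more than the field names: the override path exists, it is cheap to use, and it produces data someone can later review without blame.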

If you fear human override because it might increase errors, you are claiming the model is safer than the people operating it. If you fear override because it might increase losses, you are claiming the institution’s metrics outrank the person’s agency.

Neither claim is crazy. Both have moral cost. You do not get to avoid that cost by hiding it under “AI did it.”

Where is the human in the loop when the system is shaping what humans can see, say, and do?

Doors, not dashboards

If you want this to land in shipping code and policy language, aim for one thing: build doors.

Not content. Not disclaimers. Doors.

A door is a path a person can actually take when the system is wrong, or when the system is right but the outcome is still unacceptable.

A door of reasons. The system must give reasons specific enough to respond to. In lending, this aligns with adverse action requirements [5]. In healthcare, the triage recommendation should expose key drivers and uncertainty, not just the outcome.
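
One sketch of what a reason-bearing denial could carry, with every field name and number invented for illustration:

```python
# Hypothetical response shape: reasons specific enough to respond to,
# plus uncertainty and a path forward. All values are made up.
denial = {
    "decision": "denied",
    "confidence": 0.71,  # expose uncertainty, not just the verdict
    "reasons": [
        {
            "code": "DTI_TOO_HIGH",
            "detail": "Debt-to-income ratio of 47% exceeds the 43% policy threshold.",
            "what_would_change_it": "Monthly debt below $1,850 at the stated income.",
        },
        {
            "code": "SHORT_ACCOUNT_HISTORY",
            "detail": "Oldest account is 14 months old; policy looks for 24 or more.",
            "what_would_change_it": "Reapply after the history threshold, or add a co-signer.",
        },
    ],
    "appeal": {"channel": "human_review", "sla_hours": 48},
}
```

The test for each reason is whether it pairs with something the person could actually do.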

A door of correction. People need a way to fix the record. Not “submit feedback.” A correction that changes the model’s future behavior, or at least the case outcome. If a false fraud flag froze an account, the system should learn from the release. It should not repeat the same mistake next month.
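
Sketched the same way, with a hypothetical case shape and invented field names: the correction changes the outcome now and becomes signal later.

```python
def resolve_dispute(case, resolution):
    """Hypothetical correction path for a false fraud flag that froze an account."""
    if resolution == "false_positive":
        case["account"]["frozen"] = False                     # the outcome changes immediately
        case["labels"].append({"event": case["trigger"],      # the confirmed mistake becomes
                               "label": "not_fraud"})         # evaluation and training signal
        case["suppressions"].append(case["trigger"])          # and the same pattern won't re-fire next month
    return case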

A door of refusal. The person must be able to say “no, use a different path” without punishment. If opting out of automated decisioning makes the service unusable, the autonomy talk is decoration.
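
As a sketch, again with invented names: refusal is a routing decision, not a penalty.

```python
def route_application(application):
    # Hypothetical opt-out handling: declining automated decisioning changes the
    # path, not the person's standing. No fee, no punitive queue.
    if application.get("declines_automated_decisioning"):
        return {"route": "manual_underwriting", "penalty": None}
    return {"route": "automated_decisioning", "penalty": None}
```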

A door of escalation. A human with authority, reachable at human speed, in cases where stakes are high or harm is likely. Not all cases. The ones that matter.
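
A sketch of the gate, with thresholds and category names invented: stakes and uncertainty decide who sees the case, and how fast.

```python
HIGH_STAKES_DOMAINS = {"medication_access", "housing", "income", "urgent_care"}

def escalation_route(case):
    # Hypothetical routing: high stakes, or an uncertain adverse call, goes to a
    # person with authority, on a deadline measured in hours.
    high_stakes = case["domain"] in HIGH_STAKES_DOMAINS
    uncertain_adverse = case["model_confidence"] < 0.8 and case["impact"] == "adverse"
    if high_stakes or uncertain_adverse:
        return {"queue": "senior_reviewer", "sla_hours": 4, "can_override": True}
    return {"queue": "standard_review", "sla_hours": 72, "can_override": True}
```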

A door of audit. Outcomes monitored across groups and contexts, not just overall accuracy. Obermeyer’s study shows why: if your proxy is wrong, the system can be accurate and still be unjust [7].
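
A sketch of the minimum viable audit, assuming decision records that carry a group and an outcome field:

```python
from collections import defaultdict

def adverse_rate_by_group(decisions):
    """Outcome monitoring per group, not just overall accuracy.
    `decisions` is an iterable of dicts with 'group' and 'outcome' keys (assumed shape)."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        adverse[d["group"]] += d["outcome"] in {"denied", "frozen", "deprioritized"}
    return {g: adverse[g] / totals[g] for g in totals}
```

A system can be accurate in aggregate while these rates diverge; the audit door exists so that someone is obliged to look, and empowered to act when they do.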

Notice what is missing: a demand for perfect explainability, perfect fairness, perfect anything. Those are fantasies. What you can build is answerability.

A quick test for design review:

When the system says no, can a person reach yes through a legitimate path, without begging and without performing?

If the only available action is to accept, the loop is fake. If the only way out is social media outrage, the loop is broken. If the operator cannot override because “the tool won’t let me,” the loop has eaten the human.

Back at the pharmacy counter, nobody needs a lecture on Kant. They need a door. They need to be a person in a system that currently treats them like an anomaly.

“Human in the loop” is a comfortable phrase because it pretends the problem is where we put the human.

The real problem is what we let the loop do to them.

  1. Artificial Intelligence Risk Management Framework (AI RMF 1.0), National Institute of Standards and Technology (2023).
  2. Recommendation of the Council on Artificial Intelligence, OECD (2019).
  3. Recommendation on the Ethics of Artificial Intelligence, UNESCO (2021).
  4. Regulation (EU) 2024/1689 (Artificial Intelligence Act), European Union, EUR-Lex (2024).
  5. CFPB issues guidance on credit denials by lenders using artificial intelligence, Consumer Financial Protection Bureau (2023).
  6. Article 22 GDPR: Automated individual decision-making, including profiling, GDPR (consolidated text).
  7. Dissecting racial bias in an algorithm used to manage the health of populations, Obermeyer et al., Science (2019).
  8. Humans and Automation: Use, Misuse, Disuse, Abuse, Parasuraman & Riley, Human Factors (1997).
  9. Treating Persons as Means, Kerstein, Stanford Encyclopedia of Philosophy (2019).
  10. Martin Buber, Zank, Stanford Encyclopedia of Philosophy (2004).
  11. Emmanuel Levinas, Bergo, Stanford Encyclopedia of Philosophy (2006).
  12. Hunhu/Ubuntu in the Traditional Thought of Southern Africa, Internet Encyclopedia of Philosophy.
  13. Mediation theory, Peter-Paul Verbeek (web resource).
  14. Autonomy and Trust in Bioethics, Onora O’Neill, Cambridge University Press.
