Personalization is not the problem. Ownership is.
Personalization is not going away. If anything, AI makes it unavoidable.
The real question is simpler and more uncomfortable: who owns the filter that shapes what you see, what you trust, and what you come to believe?
Right now, most people rent that filter from whoever runs the platform. It comes bundled with an ad model, a growth target, and a set of incentives that rarely align with depth. The interface calls it “your feed.” The system treats it like inventory.
If we want a more human digital environment, we do not need less personalization. We need personalization that is legible, adjustable, and user-owned.
AI raises the stakes because persuasion gets cheap
Shallowfication has always been about incentives plus interfaces. AI adds a third factor: scale.
When content can be generated instantly, tailored to your profile, and continuously refined, the limiting factor stops being production. It becomes selection.
What you see first matters more than what exists. What gets repeated matters more than what is true. What feels emotionally right can outrun what is carefully reasoned.
This is not dystopia. It is just engineering. If you can measure response and optimize output, systems will optimize output. The question is whether they optimize for the user’s goals or for someone else’s.
If a pilot cannot trust the instrument panel, it does not matter how advanced the aircraft is. You get a lot of motion and very little control.
The design direction: open personalization
“Open personalization” does not mean everyone must reveal proprietary ranking code. It means platforms stop treating personalization as a private steering mechanism and start treating it as a user-facing capability.
In practice, that means two shifts.
First, users should be able to bring preferences into a system, not just react to what the system serves them. Most products currently offer reactive controls. Mute this. Hide that. Don’t recommend this. Useful, but late.
Second, users should be able to choose or swap the logic that shapes their experience. Not as a hidden setting for power users. As a first-class design decision.
This is what turns the series from diagnosis into direction: user sovereignty over filters.
Open personalization, in practice
- Let users declare preferences (values, boundaries, novelty appetite) instead of only inferring them.
- Make filters swappable and inspectable—choose lenses, not just mute.
- Keep the profile portable and user-held so control survives across surfaces.
A preference profile is the missing artifact
We have profiles today, but they are mostly extracted. Behavioral exhaust turned into predictions.
A preference profile, as I mean it here, is different. It is declared. It represents a user’s intentions, values, boundaries, and curiosity in a form systems can respect.
Think of it as a compact, portable set of parameters that answers questions like:
- What kind of information diet do you want right now?
- What do you want less of, even if it “performs” well?
- How much novelty versus continuity do you want?
- Do you want quick updates, or do you want the long version?
- Do you want agreement, or do you want challenge?
- What are your red lines? What are your priorities?
The profile does not need to be perfect. It can be messy and evolving, like real humans are. The point is that it is yours. You can revise it, fork it, and carry it between systems.
And importantly, it can be privately held. Stored locally. Encrypted. Shared selectively. Enough for systems to serve you, not enough for systems to own you.
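To make this concrete, a declared profile could be a small, human-readable document rather than a black-box prediction. The sketch below is only an illustration of the questions above; the field names (noveltyAppetite, redLines, and so on) are hypothetical, not a proposed standard.

```typescript
// Hypothetical shape of a declared, user-held preference profile.
// Field names are illustrative assumptions, not an existing schema.
interface PreferenceProfile {
  version: string;                 // profiles evolve; keep them forkable
  informationDiet: "headlines" | "context" | "deep-dives";
  noveltyAppetite: number;         // 0 = continuity, 1 = maximum novelty
  challengeLevel: number;          // 0 = agreement, 1 = active challenge
  lessOf: string[];                // things to downrank even if they "perform" well
  redLines: string[];              // hard boundaries the system must respect
  priorities: string[];            // what should win when signals conflict
}

// A profile the user holds locally and shares selectively.
const myProfile: PreferenceProfile = {
  version: "2024-01",
  informationDiet: "context",
  noveltyAppetite: 0.4,
  challengeLevel: 0.7,
  lessOf: ["breaking-news churn", "ragebait"],
  redLines: ["no engagement-optimized notifications"],
  priorities: ["follow-ups over first reports", "friends over high-performing posts"],
};
```

Because the artifact is small and declarative, it can live on the user's device and be revised, forked, or shared in pieces.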
Marketplaces of preferences
Once preference profiles exist as portable artifacts, a marketplace becomes obvious.
Not a marketplace in the “growth funnel” sense. A marketplace in the civic sense: a place where people can exchange tools for seeing, prioritizing, and making meaning.
In a marketplace of preferences, individuals could subscribe to filter sets the way they subscribe to newsletters, playlists, or software libraries, except with stronger transparency and control.
A few examples, just to make it concrete:
- Slow news: downrank breaking churn; boost follow-ups, corrections, and long-form context so pacing matches understanding.
- Craft mode: fewer sources, deeper sequences; tuned for learning a topic instead of grazing headlines.
- Diversity by design: structured variation that widens perspective without random chaos; surfaces credible counter-lenses.
- Low outrage: treat ragebait like spam; filter for substance over heat to reduce reflexive escalation.
- Family and friends: prioritize actual relationships and quieter updates over high-performing posts.
These filters could be created by communities, journalists, educators, researchers, or simply obsessive individuals with good taste and patience. People could inspect them, modify them, and share them.
The point is not to create perfect neutrality. That does not exist. The point is to make values explicit and configurable instead of smuggled in through engagement metrics.
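"Explicit and configurable" can be quite literal. A filter set could travel as a small artifact that bundles a plain-language goal with the scoring logic it applies, so anyone can read it, fork it, or combine it with others. The sketch below is an assumption about what such an artifact might look like; the names (FilterSet, outrageScore, and the scoring weights) are illustrative, not an existing format.

```typescript
// A filter set as a shareable, inspectable artifact rather than hidden ranking code.
// All names and weights here are illustrative assumptions.
interface Item {
  ageHours: number;
  isFollowUp: boolean;     // follow-up, correction, or long-form context
  outrageScore: number;    // 0..1, however a community chooses to estimate it
}

interface FilterSet {
  name: string;
  statedGoal: string;                // plain-language declaration of intent
  score: (item: Item) => number;     // higher = shown earlier
}

const slowNews: FilterSet = {
  name: "Slow news",
  statedGoal: "Downrank breaking churn; boost follow-ups, corrections, and long-form context.",
  score: (item) =>
    (item.isFollowUp ? 2 : 0)          // reward context over first reports
    + Math.min(item.ageHours / 24, 1)  // let stories age before they surface
    - item.outrageScore,               // treat heat as a cost, not a signal
};
```

Because the stated goal and the scoring logic travel together, the values are on the surface where they can be argued about, not buried in an engagement metric.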
This is not censorship. It is accountability.
The predictable objection is that user-owned filters will create bubbles.
We already have bubbles. They are just outsourced.
Right now, most filtering happens invisibly through ranking systems optimized for engagement. That is still filtering. It is just filtering with incentives you did not choose.
User-owned filtering does not eliminate the risk of narrowing. It makes narrowing a choice that can be examined, discussed, and corrected.
A healthy system can even build guardrails without becoming paternalistic. For example:
- make filters declare their goals in plain language
- show “why this showed up” in a way a normal person can understand
- allow multiple filters to be combined, compared, and toggled
- create a visible record of what the system downranked and why, so users can audit the shape of their attention over time
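That visible record does not need to be complicated. As a rough sketch, each ranking decision could be logged in a form a normal person can read; the names below (RankingDecision, attentionLog) are hypothetical, chosen only to show that legibility is a data-modeling choice, not a research problem.

```typescript
// Hypothetical audit record for "why this showed up" and what was downranked.
// Names are illustrative; the point is legibility, not a specific API.
interface RankingDecision {
  itemId: string;
  filterName: string;        // which lens made the call
  action: "boosted" | "downranked" | "hidden";
  reason: string;            // plain-language explanation the user can read
  timestamp: string;
}

// A user-visible log for auditing the shape of one's attention over time.
const attentionLog: RankingDecision[] = [
  {
    itemId: "post-1842",
    filterName: "Low outrage",
    action: "downranked",
    reason: "High outrage score with little new information.",
    timestamp: "2024-05-01T09:12:00Z",
  },
];
```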
When filters become legible, disagreement becomes more productive. You can argue about parameters instead of accusing each other of being irrational.
That is a small step toward a more adult digital culture.
The honest future of AI is “bring your own filter”
AI can help people think. It can summarize, compare, explain, translate, and tutor. It can help you explore unfamiliar terrain without drowning.
But AI also makes it easier to flood the zone with persuasive content, synthetic social proof, and tailored narratives. If the filter remains controlled by institutions whose incentives run against depth, then “AI assistance” becomes another layer of steering.
So the core proposal is this:
Let users bring their own privately held filters and preference profiles into the systems that mediate their lives.
When personalization becomes user-held, AI becomes less of a manipulator and more of an instrument. You can use it to explore without being quietly herded. You can build a feed that reflects what you are trying to become, not just what you are easiest to trigger.
That is what a more human environment looks like. Not friction everywhere. Not nostalgia. Not a ban on fun. Just a clear shift in control.
The future of digital life will involve filters. The only question left is whether they belong to the user or to the business model.